
robertogreco : google   546

401(k)s, abortion, youth football: 15 things we do now that will be considered unthinkable in 50 years - Vox
[via: https://kottke.org/19/04/what-do-we-do-now-that-will-be-unthinkable-in-50-years ]

"Youth tackle football
Bosses
Eating meat
Conspicuous consumption
The drug war
The way we die
Banning sex work
401(k)s
Ending the draft
Facebook and Google
Abortion
Self-driving cars
Our obsession with rationality
Abandoning public education
The idea of a “wrong side of history”"



"Some 50 years ago, in 1964, 42 percent of Americans smoked cigarettes. Smoking in bars and offices was normal and cigarettes were given to soldiers as part of military rations. Half of American physicians smoked. Ads for cigarettes bombarded the American public. That year, the surgeon general released a report outlining the health risks of smoking. Two years later, only 40 percent of Americans said that they believed smoking was a major cause of cancer.

Today, we know that smoking is bad for our health. We’ve banned smoking in most indoor public spaces. We stopped allowing tobacco companies to advertise and forced them to put warning labels on cigarette boxes. By 2001, 71 percent of the country said they recognized smoking was a major cause of cancer, and by 2017, the rate of smokers dropped to 14 percent. The habit is now looked at as a relic of the past, something we’ve come to accept as unquestionably harmful.

When we think about what common habits, social norms, or laws are widely considered unthinkable in today’s world, a variety of past atrocities come to mind. We could point to bloodletting, Jim Crow-era segregation, and drinking and driving as being on the “wrong side” of history.

But what modern practices will we one day think of as barbaric? It’s a framework invoked frequently in political or scientific beliefs: Actor Harrison Ford recently said leaders who deny climate change are on the “wrong side of history.” President Barack Obama said Russia’s military intervention in Ukraine was on the “wrong side of history.” Filmmaker Spike Lee said that President Donald Trump himself is on the “wrong side of history.”

So what, by 2070 — some 50 years in the future — will join this group? We asked 15 thinkers, writers, and advocates to take their best guess.

Bioethicist Peter Singer says people will stop the habit of conspicuous consumption. “The ostentatious display of wealth, in a world that still has many people in need, is not in good taste. Within 50 years, we’ll wonder how people did not see that,” he writes.

Historian Jennifer Mittelstadt predicts that our volunteer army will be widely considered a mistake: “Fifty years from now Americans will observe with shock the damage to both foreign policy and domestic institutions wrought by our acceptance of an increasingly privatized, socially isolated, and politically powerful US military.”

For philosopher Jacob T. Levy, the very idea of there being a “wrong side of history” is wrong itself.

Other answers range from kids playing tackle football to expecting workers to invest in 401(k)s."
us  future  obsolescence  barbarity  draft  cars  self-drivingcars  retirement  saving  drugwar  football  americanfootball  conspicuousconsumption  capitalism  consumption  rationality  scientism  publiceducations  publicschools  schools  schooling  education  facebook  google  abortion  war  military  sexwork  death  dying  meat  food  howwelive  predictions  history  petersinger  kristatippett  jaboblevy  jennifermittelstadt  haiderwarraich  kathleenfrydl  meredithbroussard  chrisnowinski  adiaharveywingfield  bhaskarsunkara  horizontality  hierarchy  inequality  jacobhacker  economics  society  transportation 
4 weeks ago by robertogreco
San Francisco; or, How to Destroy a City | Public Books
"As New York City and Greater Washington, DC, prepared for the arrival of Amazon’s new secondary headquarters, Torontonians opened a section of their waterfront to Alphabet’s Sidewalk Labs, which plans to prototype a new neighborhood “from the internet up.” Fervent resistance arose in all three locations, particularly as citizens and even some elected officials discovered that many of the terms of these public-private partnerships were hashed out in closed-door deals, secreted by nondisclosure agreements. Critics raised questions about the generous tax incentives and other subsidies granted to these multibillion-dollar corporations, their plans for data privacy and digital governance, what kind of jobs they’d create and housing they’d provide, and how their arrival could impact local infrastructures, economies, and cultures. While such questioning led Amazon to cancel their plans for Long Island City in mid-February, other initiatives press forward. What does it mean when Silicon Valley—a geographic region that’s become shorthand for an integrated ideology and management style usually equated with libertarian techno-utopianism—serves as landlord, utility provider, urban developer, (unelected) city official, and employer, all rolled into one?1

We can look to Alphabet’s and Amazon’s home cities for clues. Both the San Francisco Bay Area and Seattle have been dramatically remade by their local tech powerhouses: Amazon and Microsoft in Seattle; and Google, Facebook, and Apple (along with countless other firms) around the Bay. As Jennifer Light, Louise Mozingo, Margaret O’Mara, and Fred Turner have demonstrated, technology companies have been reprogramming urban and suburban landscapes for decades.2 And “company towns” have long sprung up around mills, mines, and factories.3 But over the past few years, as development has boomed and income inequality has dramatically increased in the Bay Area, we’ve witnessed the arrival of several new books reflecting on the region’s transformation.

These titles, while focusing on the Bay, offer lessons to New York, DC, Toronto, and the countless other cities around the globe hoping to spur growth and economic development by hosting and ingesting tech—by fostering the growth of technology companies, boosting STEM education, and integrating new sensors and screens into their streetscapes and city halls. For years, other municipalities, fashioning themselves as “the Silicon Valley of [elsewhere],” have sought to reverse-engineer the Bay’s blueprint for success. As we’ll see, that blueprint, drafted to optimize the habits and habitats of a privileged few, commonly elides the material needs of marginalized populations and fragile ecosystems. It prioritizes efficiency and growth over the maintenance of community and the messiness of public life. Yet perhaps we can still redraw those plans, modeling cities that aren’t only made by powerbrokers, and that thrive when they prioritize the stewardship of civic resources over the relentless pursuit of innovation and growth."



"We must also recognize the ferment and diversity inherent in Bay Area urban historiography, even in the chronicles of its large-scale development projects. Isenberg reminds us that even within the institutions and companies responsible for redevelopment, which are often vilified for exacerbating urban ills, we find pockets of heterogeneity and progressivism. Isenberg seeks to supplement the dominant East Coast narratives, which tend to frame urban renewal as a battle between development and preservation.

In surveying a variety of Bay Area projects, from Ghirardelli Square to The Sea Ranch to the Transamerica Pyramid, Isenberg shifts our attention from star architects and planners to less prominent, but no less important, contributors in allied design fields: architectural illustration, model-making, publicity, journalism, property management, retail planning, the arts, and activism. “People who are elsewhere peripheral and invisible in the history of urban design are,” in her book, “networked through the center”; they play critical roles in shaping not only the urban landscape, but also the discourses and processes through which that landscape takes shape.

For instance, debates over public art in Ghirardelli Square—particularly Ruth Asawa’s mermaid sculpture, which featured breastfeeding lesbian mermaids—“provoked debates about gender, sexuality, and the role of urban open space in San Francisco.” Property manager Caree Rose, who worked alongside her husband, Stuart, coordinated with designers to master-plan the Square, acknowledging that retail, restaurants, and parking are also vital ingredients of successful public space. Publicist Marion Conrad and graphic designer Bobbie Stauffacher were key members of many San Francisco design teams, including that for The Sea Ranch community, in Sonoma County. Illustrators and model-makers, many of them women, created objects that mediated design concepts for clients and typically sat at the center of public debates.

These creative collaborators “had the capacity to swing urban design decisions, structure competition for land, and generally set in motion the fate of neighborhoods.” We see the rhetorical power of diverse visualization strategies reflected across these four books, too: Solnit’s offers dozens of photographs, by Susan Schwartzenberg—of renovations, construction sites, protests, dot-com workplaces, SRO hotels, artists’ studios—while Walker’s dense text is supplemented with charts, graphs, and clinical maps. McClelland’s book, with its relatively large typeface and extra-wide leading, makes space for his interviewees’ words to resonate, while Isenberg generously illustrates her pages with archival photos, plans, and design renderings, many reproduced in evocative technicolor.

By decentering the star designer and master planner, Isenberg reframes urban (re)development as a collaborative enterprise involving participants with diverse identities, skills, and values. And in elevating the work of “allied” practitioners, Isenberg also aims to shift the focus from design to land: public awareness of land ownership and commitment to responsible public land stewardship. She introduces us to several mid-century alternative publications—weekly newspapers, Black periodicals, activists’ manuals, and books that never made it to the best-seller list … or never even made it to press—that advocated for a focus on land ownership and politics. Yet the discursive power of Jacobs and Caro, which framed the debate in terms of urban development vs. preservation, pushed these other texts off the shelf—and, along with them, the “moral questions of land stewardship” they highlighted.

These alternative tales and supporting casts serve as reminders that the modern city need not succumb to Haussmannization or Moses-ification or, now, Googlization. Mid-century urban development wasn’t necessarily the monolithic, patriarchal, hegemonic force we imagined it to be—a realization that should steel us to expect more and better of our contemporary city-building projects. Today, New York, Washington, DC, and Toronto—and other cities around the world—are being reshaped not only by architects, planners, and municipal administrators, but also by technologists, programmers, data scientists, “user experience” experts and logistics engineers. These are urbanism’s new “allied” professions, and their work deals not only with land and buildings, but also, increasingly, with data and algorithms.

Some critics have argued that the real reason behind Amazon’s nationwide HQ2 search was to gather data from hundreds of cities—both quantitative and qualitative data that “could guide it in its expansion of the physical footprint, in the kinds of services it rolls out next, and in future negotiations and lobbying with states and municipalities.”5 This “trove of information” could ultimately be much more valuable than all those tax incentives and grants. If this is the future of urban development, our city officials and citizens must attend to the ownership and stewardship not only of their public land, but also of their public data. The mismanagement of either could—to paraphrase our four books’ titles—elongate the dark shadows cast by growing inequality, abet the siege of exploitation and displacement, “hollow out” our already homogenizing neighborhoods, and expedite the departure of an already “gone” city.

As Beat poet Lawrence Ferlinghetti muses in his “Pictures of the Gone World 11,” which inspired Walker’s title: “The world is a beautiful place / to be born into / if you don’t mind some people dying / all the time / or maybe only starving / some of the time / which isn’t half so bad / if it isn’t you.” This is precisely the sort of solipsism and stratification that tech-libertarianism and capitalist development promotes—and that responsible planning, design, and public stewardship must prevent."
cities  shannonmattern  2019  sanfrancisco  siliconvalley  nyc  washingtondc  seattle  amazon  google  apple  facebook  technology  inequality  governance  libertarianism  urban  urbanism  microsoft  jenniferlight  louisemozingo  margareto'mara  fredturner  efficiency  growth  marginalization  publicgood  civics  innovation  rebeccasolnit  gentrification  privatization  homogenization  susanschwartzenberg  carymcclelland  economics  policy  politics  richardwalker  bayarea  lisonisenberg  janejacobs  robertmoses  diversity  society  inclusivity  inclusion  exclusion  counterculture  cybercultue  culture  progressive  progressivism  wealth  corporatism  labor  alexkaufman  imperialism  colonization  californianideology  california  neoliberalism  privacy  technosolutionism  urbanization  socialjustice  environment  history  historiography  redevelopment  urbanplanning  design  activism  landscape  ruthasawa  gender  sexuality  openspace  publicspace  searanch  toronto  larenceferlinghetti  susanschartzenberg  bobbiestauffacher  careerose  stuartrose  ghirardellisqure  marionconrad  illustration  a 
7 weeks ago by robertogreco
Goodbye Big Five
"Reporter Kashmir Hill spent six weeks blocking Amazon, Facebook, Google, Microsoft, and Apple from getting my money, data, and attention, using a custom-built VPN. Here’s what happened."
microsoft  google  facebook  amazon  apple  kashmirhill  technology  2019  internet  web  attention  online 
february 2019 by robertogreco
Bay Area Disrupted: Fred Turner on Vimeo
"Interview with Fred Turner in his office at Stanford University.

http://bayareadisrupted.com/

https://fredturner.stanford.edu

Graphics: Magda Tu
Editing: Michael Krömer
Concept: Andreas Bick"
fredturner  counterculture  california  opensource  bayarea  google  softare  web  internet  history  sanfrancisco  anarchism  siliconvalley  creativity  freedom  individualism  libertarianism  2014  social  sociability  governance  myth  government  infrastructure  research  online  burningman  culture  style  ideology  philosophy  apolitical  individuality  apple  facebook  startups  precarity  informal  bureaucracy  prejudice  1960s  1970s  bias  racism  classism  exclusion  inclusivity  inclusion  communes  hippies  charism  cultofpersonality  whiteness  youth  ageism  inequality  poverty  technology  sharingeconomy  gigeconomy  capitalism  economics  neoliberalism  henryford  ford  empowerment  virtue  us  labor  ork  disruption  responsibility  citizenship  purpose  extraction  egalitarianism  society  edtech  military  1940s  1950s  collaboration  sharedconsciousness  lsd  music  computers  computing  utopia  tools  techculture  location  stanford  sociology  manufacturing  values  socialchange  communalism  technosolutionism  business  entrepreneurship  open  liberalism  commons  peerproduction  product 
december 2018 by robertogreco
iPad Pro (2018) Review: Two weeks later! - YouTube
[at 7:40, Rene Ritchie lists the problems with iOS on the iPad Pro as-is that keep it from being a laptop replacement]

"1. Import/export more than just photo/video [using USB drive, hard drive, etc]

2. Navigate with the keyboard [or trackpad/mouse]

3. 'Desktop Sites' in Safari [Why not a desktop browser (maybe in addition to Safari, something like a "pro" Safari with developer tools and extensions)?]

4. Audio recording [system-wide like the screen recording for capturing conversations from Skype/Facetime/etc]

5. Develop for iPad on iPad

6. Multi-user for everyone [like on a Chromebook]"

[I'd be happy with just 1, 2, and 3. 6 would also be nice. 4 and 5 are not very important to me, but also make sense.]

[Some of my notes regarding the state of the tablet-as-laptop replacement in 2018, much overlap with what is above:

iOS tablets
no mouse/trackpad support, file system is still a work in progress, no desktop browser equivalents, Pro models are super expensive given these tradeoffs, especially with additional keyboard and pen costs

Microsoft Surface
tablet experience is lacking, Go (closest to meeting my needs and price) seems a little overpriced for the top model (entry model needs more RAM and faster storage), also given the extra cost of keyboard and pen

Android tablets
going nowhere, missing desktop browser

ChromeOS tablets
underpowered (Acer Chromebook Tab 10) or very expensive (Google Pixel Slate) or I don’t like it enough (mostly the imbalance between screen and keyboard, and the keyboard feel) for the cost (HP x2), but ChromeOS tablets seem as promising as iPads as laptop replacements at this point

ChromeOS convertibles
strange having the keyboard in the back while using as a tablet (Samsung Chromebook Plus/Pro, ASUS Chromebook Flip C302CA, Google Pixelbook (expensive)) -- I used a Chromebook Pro for a year (as work laptop) and generally it was a great experience, but they are ~1.5 years old now and haven’t been refreshed. Also, the Samsung Chromebook Plus (daughter has one of these, used it for school and was happy with it until new college provided a MacBook Pro) refresh seems like a step back because of the lesser screen, the increase in weight, and a few other things.

Additional note:
Interesting how Microsoft led the way in this regard (tablet as laptop replacement), but again didn't get it right enough and is now being passed by the others, at least around me]

[finally, some additional discussion and comparison:

The Verge: "Is this a computer?" (Apr 11, 2018)
https://www.youtube.com/watch?v=K7imG4DYXlM

Apple's "What's a Computer?" iPad ad (Jan 23, 2018, no longer available directly from Apple)
https://www.youtube.com/watch?v=llZys3xg6sU

Apple's "iPad Pro — 5 Reasons iPad Pro can be your next computer — Apple" (Nov 19, 2018)
https://www.youtube.com/watch?v=tUQK7DMys54

The Verge: "Google Pixel Slate Review: half-baked" (Nov 27, 2018)
https://www.youtube.com/watch?v=BOa6HU_he2A
https://www.theverge.com/2018/11/27/18113447/google-pixel-slate-review-tablet-chrome-os-android-chromebook-slapdash

Unbox Therapy: "Can The Google Pixel Slate Beat The iPad Pro?" (Nov 28, 2018)
https://www.youtube.com/watch?v=lccvHF4ODNY

The Verge: "Google keeps failing to understand tablets" (Nov 29, 2018)
https://www.theverge.com/2018/11/29/18117520/google-tablet-android-chrome-os-pixel-slate-failure

The Verge: "Chrome OS isn't ready for tablets yet" (Jul 18, 2018)
https://www.youtube.com/watch?v=Eu9JBj7HNmM

The Verge: "New iPad Pro review: can it replace your laptop?" (Nov 5, 2018)
https://www.youtube.com/watch?v=LykS0TRSHLY
https://www.theverge.com/2018/11/5/18062612/apple-ipad-pro-review-2018-screen-usb-c-pencil-price-features

Navneet Alang: "The misguided attempts to take down the iPad Pro" (Nov 9, 2018)
https://theweek.com/articles/806270/misguided-attempts-take-down-ipad-pro

Navneet Alang: "Apple is trying to kill the laptop" (Oct 31, 2018)
https://theweek.com/articles/804670/apple-trying-kill-laptop

The Verge: "Microsoft Surface Go review: surprisingly good" (Aug 7, 2018)
https://www.youtube.com/watch?v=N7N2xunvO68
https://www.theverge.com/2018/8/7/17657174/microsoft-surface-go-review-tablet-windows-10

The Verge: "The Surface Go Is Microsoft's Hybrid PC Dream Made Real: It’s time to think of Surface as Surface, and not an iPad competitor" (Aug 8, 2018)
https://www.theverge.com/2018/8/8/17663494/microsoft-surface-go-review-specs-performance

The Verge: "Microsoft Surface Go hands-on" (Aug 2, 2018)
https://www.youtube.com/watch?v=dmENZqKPfws

Navneet Alang: "Is Microsoft's Surface Go doomed to fail?" (Jul 12, 2018)
https://theweek.com/articles/784014/microsofts-surface-doomed-fail

Chrome Unboxed: "Google Pixel Slate: Impressions After A Week" (Nov 27, 2018)
https://www.youtube.com/watch?v=ZfriNj2Ek68
https://chromeunboxed.com/news/google-pixel-slate-first-impressions/

Unbox Therapy: "I'm Quitting Computers" (Nov 18, 2018)
https://www.youtube.com/watch?v=w3oRJeReP8g

Unbox Therapy: "The Truth About The iPad Pro..." (Dec 5, 2018)
https://www.youtube.com/watch?v=JXqou3SVbMw

The Verge: "Tablet vs laptop" (Mar 22, 2018)
https://www.youtube.com/watch?v=Rm_zQP9JIJI

Marques Brownlee: "iPad Pro Review: The Best Ever... Still an iPad!" (Nov 14, 2018)
https://www.youtube.com/watch?v=N1e_voQvHYk

Engadget: "iPad Pro 2018 Review: Almost a laptop replacement" (Nov 6, 2018)
https://www.youtube.com/watch?v=jZzmMpP2BNw

Matthew Moniz: "iPad Pro 2018 - Overpowered Netflix Machine or Laptop Replacement?" (Nov 8, 2018)
https://www.youtube.com/watch?v=P0ZFlFG67kY

WSJ: "Can the New iPad Pro Be Your Only Computer?" (Nov 16, 2018)
https://www.youtube.com/watch?v=kMCyI-ymKfo
https://www.wsj.com/articles/apples-new-ipad-pro-great-tablet-still-cant-replace-your-laptop-1541415600

Ali Abdaal: "iPad vs Macbook for Students (2018) - Can a tablet replace your laptop?" (Oct 10, 2018)
https://www.youtube.com/watch?v=xIx2OQ6E6Mc

Washington Post: "Nope, Apple’s new iPad Pro still isn’t a laptop" (Nov 5, 2018)
https://www.washingtonpost.com/technology/2018/11/05/nope-apples-new-ipad-pro-still-isnt-laptop/

Canoopsy: "iPad Pro 2018 Review - My Student Perspective" (Nov 19, 2018)
https://www.youtube.com/watch?v=q4dgHuWBv14

Greg' Gadgets: "The iPad Pro (2018) CAN Replace Your Laptop!" (Nov 24, 2018)
https://www.youtube.com/watch?v=Y3SyXd04Q1E

Apple World: "iPad Pro has REPLACED my MacBook (my experience)" (May 9, 2018)
https://www.youtube.com/watch?v=vEu9Zf6AENU

Dave Lee: "iPad Pro 2018 - SUPER Fast, But Why?" (Nov 11, 2018)
https://www.youtube.com/watch?v=Aj6vXhN-g6k

Shahazad Bagwan: "A Week With iPad Pro // Yes It Replaced A Laptop!" (Oct 20, 2017)
https://www.youtube.com/watch?v=jhHwv9QsoP0

Apple's "Homework (Full Version)" iPad ad (Mar 27, 2018)
https://www.youtube.com/watch?v=IprmiOa2zH8

The Verge: "Intel's future computers have two screens" (Oct 18, 2018)
https://www.youtube.com/watch?v=deymf9CoY_M

"The Surface Book 2 is everything the MacBook Pro should be" (Jun 26, 208)
https://char.gd/blog/2018/the-surface-book-2-is-everything-the-macbook-pro-should-be-and-then-some

"Surface Go: the future PC that the iPad Pro failed to deliver" (Aug 27, 2018)
https://char.gd/blog/2018/surface-go-a-better-future-pc-than-the-ipad-pro

"Microsoft now has the best device lineup in the industry" (Oct 3, 2018)
https://char.gd/blog/2018/microsoft-has-the-best-device-lineup-in-the-industry ]
ipadpro  ipad  ios  computing  reneritchie  2018  computers  laptops  chromebooks  pixelslate  surfacego  microsoft  google  apple  android  microoftsurface  surface 
november 2018 by robertogreco
Are.na / Blog – Alternate Digital Realities
"Writer David Zweig, who interviewed Grosser about the Demetricator for The New Yorker, describes a familiar sentiment when he writes, “I’ve evaluated people I don’t know on the basis of their follower counts, judged the merit of tweets according to how many likes and retweets they garnered, and felt the rush of being liked or retweeted by someone with a large following. These metrics, I know, are largely irrelevant; since when does popularity predict quality? Yet, almost against my will, they exert a pull on me.” Metrics can be a drug. They can also influence who we think deserves to be heard. By removing metrics entirely, Grosser’s extension allows us to focus on the content—to be free to write and post without worrying about what will get likes, and to decide for ourselves if someone is worth listening to. Additionally, it allows us to push back against a system designed not to cultivate a healthy relationship with social media but to prioritize user-engagement in order to sell ads."
digital  online  extensions  metrics  web  socialmedia  internet  omayeliarenyeka  2018  race  racism  activism  davidzeig  bejamingrosser  twitter  google  search  hangdothiduc  reginafloresmir  dexterthomas  whitesupremacy  tolulopeedionwe  patriarchy  daniellesucher  jennyldavis  mosaid  shannoncoulter  taeyoonchoi  rodrigotello  elishacohen  maxfowler  jamesbaldwin  algorithms  danielhowe  helennissenbaum  mushonzer-aviv  browsers  data  tracking  surveillance  ads  facebook  privacy  are.na 
april 2018 by robertogreco
Google Docs have quietly revolutionized document editing.
"But Google’s update is far more than just a ploy to lure Office users away from Microsoft’s apps. Google is eliminating the need for distinct file types, making it easier to sign or edit documents regardless of the applications you have downloaded on your phone or desktop. It’s a novel idea, really—just being able to open a file, work on it, and not think about “what” it is. While Microsoft, Apple, and others continue to work in walled gardens, Google is making interoperability one of its primary focuses. For consumers inundated with ever more work but no additional hours in the day, it’s the kind of time and stress savings that are exceedingly worthwhile."
google  googledocs  googledrive  files  2018  christinabonnington  fileformats  filetypes  interoperability  pdf 
february 2018 by robertogreco
When disability tech is just a marketing exercise | The Outline
"This cycle is a common one. Companies know that accessibility projects can garner great press. They also probably know that many journalists are unlikely to follow up and see whether the big promises are actually coming true. So they flaunt their minimal or nonexistent ties to accessibility, reap the glowing media coverage, and let the projects slip quietly into the night.

BMW got great press for making four special chairs for the Paralympics, but it seems to have stopped at those four. The Dot, a braille smartwatch, is a darling among journalists who call it the “first smartwatch for the blind,” but all it does is display some text from your phone in braille. Apple’s smartwatch is actually far more useful for blind users. Companies also advertise products as being accessible, but these claims are rarely put to the test.

Google is a repeat offender when it comes to claiming accessibility brownie points while failing to provide truly accessible tech, said Kit Englard, an assistive technology specialist. “If you read anything from Google it says: Google is accessible, it works with screen readers. Eh, it doesn’t really,” she says. Google Docs and Google Drive are both notoriously hard to use with a screen reader (a system, usually incorporating audio, that blind and low-vision people use to access visual content). “The way to force a screen reader to work with Google Docs, you have to go into your screen reader, turn it off in some ways, and then go back into Google Doc,” Englard said. “You have to memorize a whole series of commands that are completely different from any other commands you’d be used to.”

Vaporware — the term for products and features touted to the press that never materialize — is endemic in tech. When that non-existent product is a smartwatch or a sex robot, the harm is minimal. But when companies claim they are building products for people with disabilities and then don’t, Englard says that does real damage. More and more big companies are adopting systems like Google Drive, thinking that they are accessible, when in fact they’re not, which could lock disabled people out of jobs and promotions. “When they ask ‘is our equipment accessible to you?’ and the answer is no, that person can’t have that job. It’s not okay to lock people out of educational opportunities or social engagements or research,” Englard said. “Think of how many surveys are done on Google Docs these days.”"
disabilities  disability  edtech  marketing  google  googledocs  googledrive  2017  roseeveleth  wheelchairs  deankamen  segwy  ibot  toyota  bmw  vaporare 
december 2017 by robertogreco
André Staltz - The Web began dying in 2014, here's how
"The events and data above describe how three internet companies have acquired massive influence on the Web, but why does that imply the beginning of the Web’s death? To answer that, we need to reflect on what the Web is.

The original vision for the Web according to its creator, Tim Berners-Lee, was a space with multilateral publishing and consumption of information. It was a peer-to-peer vision with no dependency on a single party. Tim himself claims the Web is dying: the Web he wanted and the Web he got are no longer the same."



"GOOG, MSFT, FB, and AMZN are mimicking AAPL’s strategy of building brand loyalty around high-end devices. Through a process I call “Appleification”, they are (1) setting up walled gardens, (2) becoming hardware companies, and (3) marketing the design while designing for the market. It is a threat to AAPL itself, because they are behind the other giants when it comes to big data collection and its uses. While AAPL’s early and bold introduction of an App Store shook the Web as the dominant software distribution platform, it wasn’t enough to replace it. The next wave of walled gardens might look different: less noticeable, but nonetheless disruptive to the Web."



There is a tendency at GOOG-FB-AMZN to bypass the Web, motivated by user experience and efficient communication, not by an agenda to avoid browsers. In the knowledge internet and the commerce internet, being efficient at providing what users want is the goal. In the social internet, the goal is to provide an efficient channel for communication between people. This explains FB’s 10-year strategy with Augmented Reality (AR) and Virtual Reality (VR) as the next medium for social interactions through the internet. This strategy would also bypass the Web, proving how much more natural social AR would be than social real-time texting in browsers. Already today, most people on the internet communicate with other people via a mobile app, not via a browser.

The common pattern among these three internet giants is to grow beyond browsers, creating new virtual contexts where data is created and shared. The Web may die like most other technologies do: simply by becoming less attractive than newer technologies. And like most obsolete technologies, they don’t suddenly disappear, neither do they disappear completely. You can still buy a Walkman and listen to a tape with it, but the technology has nevertheless lost its collective relevance. The Web’s death will come as a gradual decay of its necessity, not as a dramatic loss.

The Trinet

The internet will survive longer than the Web will. GOOG-FB-AMZN will still depend on submarine internet cables (the “Backbone”), because it is a technical success. That said, many aspects of the internet will lose their relevance, and the underlying infrastructure could be optimized only for GOOG traffic, FB traffic, and AMZN traffic. It wouldn’t conceptually be anymore a “network of networks”, but just a “network of three networks”, the Trinet, if you will. The concept of workplace network which gave birth to the internet infrastructure would migrate to a more abstract level: Facebook Groups, Google Hangouts, G Suite, and other competing services which can be acquired by a tech giant. Workplace networks are already today emulated in software as a service, not as traditional Local Area Networks. To improve user experience, the Trinet would be a technical evolution of the internet. These efforts are already happening today, at GOOG. In the long-term, supporting routing for the old internet and the old Web would be an overhead, so it could be beneficial to cut support for the diverse internet on the protocol and hardware level. Access to the old internet could be emulated on GOOG’s cloud accessed through the Trinet, much like how Windows 95 can be today emulated in your browser. ISPs would recognize the obsolescence of the internet and support the Trinet only, driven by market demand for optimal user experience from GOOG-FB-AMZN.

Perhaps a future with great user experience in AR, VR, hands-free commerce and knowledge sharing could evoke an optimistic perspective for what these tech giants are building. But 25 years of the Web has gotten us used to foundational freedoms that we take for granted. We forget how useful it has been to remain anonymous and control what we share, or how easy it was to start an internet startup with its own independent servers operating with the same rights GOOG servers have. On the Trinet, if you are permanently banned from GOOG or FB, you would have no alternative. You could even be restricted from creating a new account. As private businesses, GOOG, FB, and AMZN don’t need to guarantee you access to their networks. You do not have a legal right to an account in their servers, and as societies we aren’t demanding for these rights as vehemently as we could, to counter the strategies that tech giants are putting forward.

The Web and the internet have represented freedom: efficient and unsupervised exchange of information between people of all nations. In the Trinet, we will have even more vivid exchange of information between people, but we will sacrifice freedom. Many of us will wake up to the tragedy of this tradeoff only once it is reality."
andréstaltz  amazon  facebook  google  internet  web  online  walledgardens  marketing  advertising  2014  2017  seo  publishing  amp  apple 
november 2017 by robertogreco
Zeynep Tufekci: We're building a dystopia just to make people click on ads | TED Talk | TED.com
"We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response."

[See also: "Machine intelligence makes human morals more important"
https://www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important

"Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics.""]
zeyneptufekci  machinelearning  ai  artificialintelligence  youtube  facebook  google  amazon  ethics  computing  advertising  politics  behavior  technology  web  online  internet  susceptibility  dystopia  sociology  donaldtrump 
october 2017 by robertogreco
The genius strategy behind Google's new Pixel 2 smartphones
"But a decade into the smartphone era, specifications aren't enough. When considering which phone to buy, consumers are no longer looking at a simple list of features; they're considering how all the parts work together to make a device — and the broader ecosystem — more compelling. Giving consumers the most effective experience requires vertically integrating hardware, software, and services so that the experience can be seamless. For example, technologies like Google Lens, which lets users identify objects using the camera, rely on a variety of things working perfectly — from the computing to the imaging tech to the machine learning. The Pixel 2's likely excellent camera also comes with free cloud storage from Google, making the device even more compelling. In mimicking this integrated Apple approach, Google can also leverage a key advantage over Apple: a head start in AI, as Apple has come to the field later and more clumsily than its competitors, while Google continues to be a pioneer.

The upsides of this holistic approach are clear: When a tech company controls each point in an ecosystem, it is better able to produce the very best experiences for users, and evade the pitfalls and lag of tech partnerships. It's something the entire industry is slowly recognizing. Google also announced the Google Pixelbook, a high-end Chromebook with a touchscreen and pen. It instantly evoked both the Microsoft Surface and the iPad Pro. The eerie similarity was symbolic: All three major computing companies are trying to achieve the same basic thing, locking consumers into an ecosystem.

There is a problem looming here for Google, though. In theory, the Pixel line is supposed to function like Microsoft's Surface in that it highlights the company's ecosystem at its very best, spurring on development from its broad range of partners. But inevitably, there are competing interests at work. Samsung is also recognizing that power lies in the stack; it developed its own voice assistant called Bixby rather than relying on Google's Assistant. You can, however, access both services on a new Samsung phone. It's redundant, and a little ridiculous, but perhaps demonstrative of the tension at work. Where once there was overlap and cross-pollination, things are tightening into vertical silos, and partnerships are a thing of the past. What remains to be seen is whether Google can keep on this path without alienating its partners — or, if push comes to shove, have Android continue to succeed on its own without them."
apple  google  samsung  android  ios  technology  navneetalang  iphone  pixel  hardware  2017 
october 2017 by robertogreco
How the Appetite for Emojis Complicates the Effort to Standardize the World’s Alphabets - The New York Times
"nshuman Pandey was intrigued. A graduate student in history at the University of Michigan, he was searching online for forgotten alphabets of South Asia when an image of a mysterious writing system popped up. In eight years of digging through British colonial archives both real and digital, he has found almost 200 alphabets across Asia that were previously undescribed in the West, but this one, which he came across in early 2011, stumped him. Its sinuous letters, connected to one another in cursive fashion and sometimes bearing dots and slashes above or below, resembled those of Arabic.

Pandey eventually identified the script as an alphabet for Rohingya, the language spoken by the stateless and persecuted Muslim people whose greatest numbers live in western Myanmar, where they’ve been the victims of brutal ethnic cleansing. Pandey wasn’t sure if the alphabet itself was in use anymore, until he lucked upon contemporary pictures of printed textbooks for children. That meant it wasn’t a historical footnote; it was alive.

An email query from Pandey bounced from expert to expert until it landed with Muhammad Noor, a Rohingya activist and television host who was living in Malaysia. He told Pandey the short history of this alphabet, which was developed in the 1980s by a group of scholars that included a man named Mohammed Hanif. It spread slowly through the 1990s in handwritten, photocopied books. After 2001, thanks to two computer fonts designed by Noor, it became possible to type the script in word-processing programs. But no email, text messages or (later) tweets could be sent or received in it, no Google searches conducted in it. The Rohingya had no digital alphabet of their own through which they could connect with one another.

Billions of people around the world no longer face this plight. Whether on computers or smartphones, they can write as they write, expressing themselves in their own linguistic culture. What makes this possible is a 26-year-old international industrial standard for text data called the Unicode standard, which prescribes the digital letters, numbers and punctuation marks of more than 100 different writing systems: Greek, Cherokee, Arabic, Latin, Devanagari — a world-spanning storehouse of languages. But the alphabet that Noor described wasn’t among them, and neither are more than 100 other scripts, just over half of them historical and the rest alphabets that could still be used by as many as 400 million people today.

Now a computational linguist and motivated by a desire to put his historical knowledge to use, Pandey knows how to get obscure alphabets into the Unicode standard. Since 2005, he has done so for 19 writing systems (and he’s currently working to add another eight). With Noor’s help, and some financial support from a research center at the University of California, Berkeley, he drew up the basic set of letters and defined how they combine, what rules govern punctuation and whether spaces exist between words, then submitted a proposal to the Unicode Consortium, the organization that maintains the standards for digital scripts. In 2018, seven years after Pandey’s discovery, what came to be called Hanifi Rohingya will be rolled out in Unicode’s 11th version. The Rohingya will be able to communicate online with one another, using their own alphabet."
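[A rough sketch of what encoding in Unicode means in practice, assuming the Hanifi Rohingya block added in Unicode 11.0 sits at U+10D00-U+10D3F and a Python build whose character tables include that release (3.7 or later):]

```python
import unicodedata

# Hanifi Rohingya was encoded in Unicode 11.0; the range below is an assumption
# based on that release (U+10D00-U+10D3F). Requires a Python whose unicodedata
# tables include Unicode 11 or later (Python 3.7+).
for codepoint in range(0x10D00, 0x10D08):
    char = chr(codepoint)
    name = unicodedata.name(char, "<unassigned in this Python build>")
    print(f"U+{codepoint:04X}  {char}  {name}")
```

[Once those assignments exist, fonts, keyboards, and messaging apps can all agree on what a given sequence of code points means, which is what the Rohingya alphabet lacked before the proposal was accepted.]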



"Unicode’s history is full of attacks by governments, activists and eccentrics. In the early 1990s, the Chinese government objected to the encoding of Tibetan. About five years ago, Hungarian nationalists tried to sabotage the encoding for Old Hungarian because they wanted it to be called “Szekley-Hungarian Rovas” instead. An encoding for an alphabet used to write Nepal Bhasa and Sanskrit was delayed a few years ago by ethnonationalists who mistrusted the proposal because they objected to the author’s surname. Over and over, the Unicode Consortium has protected its standard from such political attacks.

The standard’s effectiveness helped. “If standards work, they’re invisible and can be ignored by the public,” Busch says. Twenty years after its first version, Unicode had become the default text-data standard, adopted by device manufacturers and software companies all over the world. Each version of the standard ushered more users into a seamless digital world of text. “We used to ask ourselves, ‘How many years do you think the consortium will need to be in place before we can publish the last version?’ ” Whistler recalls. The end was finally in sight — at one point the consortium had barely more than 50 writing systems to add.

All that changed in October 2010, when that year’s version of the Unicode standard included its first set of emojis."



"Not everyone thinks that Unicode should be in the emoji business at all. I met several people at Emojicon promoting apps that treat emojis like pictures, not text, and I heard an idea floated for a separate standards body for emojis run by people with nontechnical backgrounds. “Normal people can have an opinion about why there isn’t a cupcake emoji,” said Jennifer 8. Lee, an entrepreneur and a film producer whose advocacy on behalf of a dumpling emoji inspired her to organize Emojicon. The issue isn’t space — Unicode has about 800,000 unused numerical identifiers — but about whose expertise and worldview shapes the standard and prioritizes its projects.

“Emoji has had a tendency to subtract attention from the other important things the consortium needs to be working on,” Ken Whistler says. He believes that Unicode was right to take responsibility for emoji, because it has the technical expertise to deal with character chaos (and has dealt with it before). But emoji is an unwanted distraction. “We can spend hours arguing for an emoji for chopsticks, and then have nobody in the room pay any attention to details for what’s required for Nepal, which the people in Nepal use to write their language. That’s my main concern: emoji eats the attention span both in the committee and for key people with other responsibilities.”

Emoji has nonetheless provided a boost to Unicode. Companies frequently used to implement partial versions of the standard, but the spread of emoji now forces them to adopt more complete versions of it. As a result, smartphones that can manage emoji will be more likely to have Hanifi Rohingya on them too. The stream of proposals also makes the standard seem alive, attracting new volunteers to Unicode’s mission. It’s not unusual for people who come to the organization through an interest in emoji to end up embracing its priorities. “Working on characters used in a small province of China, even if it’s 20,000 people who are going to use it, that’s a more important use of their time than deliberating over whether the hand of my yoga emoji is in the right position,” Mark Bramhill told me.

Since its creation was announced in 2015, the “Adopt a Character” program, through which individuals and organizations can sponsor any characters, including emojis, has raised more than $200,000. A percentage of the proceeds goes to support the Script Encoding Initiative, a research project based at Berkeley, which is headed by the linguistics researcher Deborah Anderson, who is devoted to making Unicode truly universal. One script the consortium recently accepted is called Nyiakeng Puachue Hmong, devised for the Hmong language by a minister in California whose parishioners have been using it for more than 25 years. Still in the proposal stage is Tigalari, once used to write Sanskrit and other Indian languages.

One way to read the story of Unicode in the time of emoji is to see a privileged generation of tech consumers confronting the fact that they can’t communicate in ways they want to on their devices: through emoji. They get involved in standards-making, which yields them some satisfaction but slows down the speed with which millions of others around the world get access to the most basic of online linguistic powers. “There are always winners and losers in standards,” Lawrence Busch says. “You might want to say, ultimately we’d like everyone to win and nobody to lose too much, but we’re stuck with the fact that we have to make decisions, and when we make them, those decisions are going to be less acceptable to some than to others.”"
unicode  language  languages  internet  international  standards  emoji  2017  priorities  web  online  anshumanpandey  rohingya  arabic  markbramhill  hmong  tigalari  nyiakengpuachuehmong  muhammadnoor  mohammedhanif  kenwhistler  history  1980  2011  1990s  1980s  mobile  phones  google  apple  ascii  facebook  emojicon  michaelaerard  technology  communication  tibet 
october 2017 by robertogreco
Ellen Ullman: Life in Code: "A Personal History of Technology" | Talks at Google - YouTube
"The last twenty years have brought us the rise of the internet, the development of artificial intelligence, the ubiquity of once unimaginably powerful computers, and the thorough transformation of our economy and society. Through it all, Ellen Ullman lived and worked inside that rising culture of technology, and in Life in Code she tells the continuing story of the changes it wrought with a unique, expert perspective.

When Ellen Ullman moved to San Francisco in the early 1970s and went on to become a computer programmer, she was joining a small, idealistic, and almost exclusively male cadre that aspired to genuinely change the world. In 1997 Ullman wrote Close to the Machine, the now classic and still definitive account of life as a coder at the birth of what would be a sweeping technological, cultural, and financial revolution.

Twenty years later, the story Ullman recounts is neither one of unbridled triumph nor a nostalgic denial of progress. It is necessarily the story of digital technology’s loss of innocence as it entered the cultural mainstream, and it is a personal reckoning with all that has changed, and so much that hasn’t. Life in Code is an essential text toward our understanding of the last twenty years—and the next twenty."
ellenullman  bias  algorithms  2017  technology  sexism  racism  age  ageism  society  exclusion  perspective  families  parenting  mothers  programming  coding  humans  humanism  google  larrypage  discrimination  self-drivingcars  machinelearning  ai  artificialintelligence  literacy  reading  howweread  humanities  education  publicschools  schools  publicgood  libertarianism  siliconvalley  generations  future  pessimism  optimism  hardfun  kevinkelly  computing 
october 2017 by robertogreco
Frontier notes on metaphors: the digital as landscape and playground - Long View on Education
"I am concerned with the broader class of metaphors that suggest the Internet is an inert and open place for us to roam. Scott McLeod often uses the metaphor of a ‘landscape’: “One of schools’ primary tasks is to help students master the dominant information landscape of their time.”

McLeod’s central metaphor – mastering the information landscape – fits into a larger historical narrative that depicts the Internet as a commons in the sense of “communally-held space, one which it is specifically inappropriate for any single individual or subset of the community (including governments) to own or control.” Adriane Lapointe continues, “The internet is compared to a landscape which can be used in various ways by a wide range of people for whatever purpose they please, so long as their actions do not interfere with the actions of others.”

I suspect that the landscape metaphor resonates with people because it captures how they feel the Internet should work. Sarah T. Roberts argues that we are tempted to imagine the digital as “valueless, politically neutral and as being without material consequences.” However, the digital information landscape is an artifact shaped by capitalism, the US military, and corporate power. It’s a landscape that actively tracks and targets us, buys and sells our information. And it’s mastered only by the corporations, CEOs and venture capitalists.

Be brave? I have no idea what it would mean to teach students how to ‘master’ the digital landscape. The idea of ‘mastering’ recalls the popular frontier and pioneer metaphors that have fallen out of fashion since the 1990s as the Internet became ubiquitous, as Jan Rune Holmevik notes. There is of course a longer history of the “frontiers of knowledge” metaphor going back to Francis Bacon and passing through Vannevar Bush, and thinking this way has become, according to Gregory Ulmer, “ubiquitous, a reflex, a habit of mind that shapes much of our thinking about inquiry” – and one that needs to be rethought if we take the postcolonial movement seriously.

While we might worry about being alert online, we aren’t exposed to enough stories about the physical and material implications of the digital. It’s far too easy to think that the online landscape exists only on our screens, never intersecting with the physical landscape in which we live. Yet, the Washington Post reports that in order to pave the way for new data centers, “the Prince William County neighborhood [in Virginia] of mostly elderly African American homeowners is being threatened by plans for a 38-acre computer data center that will be built nearby. The project requires the installation of 100-foot-high towers carrying 230,000-volt power lines through their land. The State Corporation Commission authorized Dominion Virginia Power in late June to seize land through eminent domain to make room for the towers.” In this case, the digital is transforming the physical landscape with hostile indifference to the people that live there.

Our students cannot be digitally literate citizens if they don’t know stories about the material implications about the digital. Cathy O’Neil has developed an apt metaphor for algorithms and data – Weapons of Math Destruction – which have the potential to destroy lives because they feed on systemic biases. In her book, O’Neil explains that while attorneys cannot cite the neighborhood people live in as a reason to deny prisoners parole, it is permissible to package that judgment into an algorithm that generates a prediction of recidivism."



When I talk to students about the implications of their searches being tracked, I have no easy answers for them. How can youth use the net for empowerment when there’s always the possibility that their queries will count against them? Yes, we can use Google to ask frank questions about our sexuality, diet, and body – or any of the other ways we worry about being ‘normal’ – but when we do so, we do not wander a non-invasive landscape. And there are few cues that we need to be alert or smart.

Our starting point should not be the guiding metaphors of the digital as a playground where we need to practice safety or a landscape that we can master, but Shoshana Zuboff’s analysis of surveillance capitalism: “The game is selling access to the real-time flow of your daily life –your reality—in order to directly influence and modify your behavior for profit. This is the gateway to a new universe of monetization opportunities: restaurants who want to be your destination. Service vendors who want to fix your brake pads. Shops who will lure you like the fabled Sirens.”



So what do we teach students? I think that Chris Gilliard provides the right pedagogical insight to end on:
Students are often surprised (and even angered) to learn the degree to which they are digitally redlined, surveilled, and profiled on the web and to find out that educational systems are looking to replicate many of those worst practices in the name of “efficiency,” “engagement,” or “improved outcomes.” Students don’t know any other web—or, for that matter, have any notion of a web that would be different from the one we have now. Many teachers have at least heard about a web that didn’t spy on users, a web that was (theoretically at least) about connecting not through platforms but through interfaces where individuals had a significant amount of choice in saying how the web looked and what was shared. A big part of the teaching that I do is to tell students: “It’s not supposed to be like this” or “It doesn’t have to be like this.”
"
banjamindoxtdator  2017  landscapes  playgrounds  georgelakoff  markjohnson  treborscolz  digitalcitizenship  internet  web  online  mckenziewark  privacy  security  labor  playbor  daphnedragona  gamification  uber  work  scottmcleod  adrianelapointe  sarahroberts  janruneholmevik  vannevabush  gregoryulmer  francisbacon  chrisgilliard  pedagogy  criticalthinking  shoshanazuboff  surveillance  surveillancecapitalism  safiyanoble  google  googleglass  cathyo'neil  algorithms  data  bigdata  redlining  postcolonialism  race  racism  criticaltheory  criticalpedagogy  bias 
july 2017 by robertogreco
The Ultimate Collection of Google Font Pairings (Displayed Beautifully with Classic Art) | Reliable
"How this post came to be

I have to be honest - I love the concept of Google fonts, but I find the execution to always be somewhat... lacking. I don't know. When compared to classics like Futura, Bodoni, Garamond - even Helvetica - they just fall short, and I rarely, if ever, end up using them.

Can you relate?

Again, I love the concept of Google font pairings: the fast download of cool fonts (and even cute fonts) from their high-speed library is great, and has brought far more unique, web-friendly fonts and font pairs to the internet than ever before. They sort of broke us out of the standard web-safe fonts we were all chained to a few years back: Arial, Verdana, even Times New Roman (remember those days? Can you believe they were just a few short years ago?).

But because of that feeling of something "lacking" - I've stayed away from Google fonts. Until now.

A while ago, my partner and co-founder of Reliable, David Tendrich, challenged me to do something about it.

"Make Google fonts work," he said.

And so that's how this post was born.

I wanted to create the best font pairings Google has to offer that even high-end agency designers would be tempted to use. I wanted to assemble Google font pairs that even I would have trouble turning down.

So I combed through Google's vast library and tested hundreds of font combinations, from their most famous fonts (Roboto, Raleway, Montserrat, Lato, Oswald, Lobster, and more) to more obscure, funky ones you may never have even seen before this post.
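[A quick, hedged sketch of how one of these pairings actually gets pulled from Google's library: the classic CSS endpoint returns @font-face rules for whatever families you request. The Oswald + Lato pairing and the weights below are my own assumptions, not a recommendation from the post:]

```python
import urllib.parse
import urllib.request

# Hypothetical pairing: Oswald for headings, Lato for body text (weights are assumptions).
params = urllib.parse.urlencode({"family": "Oswald:600|Lato:400,700"})
url = "https://fonts.googleapis.com/css?" + params

with urllib.request.urlopen(url) as response:
    css = response.read().decode("utf-8")

# The @font-face rules printed here are what a <link> tag in a page's <head> would load.
print(css)
```

[The same family string is what you would drop into a stylesheet link on a real page; fetching it here is just a way to see what Google actually serves.]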

The wonderful Rijks collection

It was also about this time that I came across the Rijksmuseum's online art collection. In short, it's a beautiful collection of both classical and modern art that is 100% royalty free and available for any use you'd like. (Can you say "aaaamazing?")

I took my favorite pieces from the Rijks collection and combined them with my Google font pairings to create a truly beautiful display of Google fonts that really work. We've also organized them by filters to help you find a font to fit that project you're working on right now. You'll find dozens of font pairings you can re-use time and time again for different clients and projects.

But that's not all!

I undertook one more challenge in this project: To express these font pairings through profound, time-tested quotes on design from world-renowned designers of all styles. So we have beauty in art, functionality in fonts, and wisdom in quotes.

If you too have had trouble finding great Google fonts and combinations, this might win you over to the Google Fonts Team like it won me over. Or maybe not! The beauty of design is that, at the end of the day, our own preferences and styles are what truly matter.

One last thing:

To help you find font pairings, we organized them in two ways: Style (Serif, Sans Serif, Both), and Mood (Any, Modern, Striking, Eccentric, Classic, Minimal, Neutral, Warm).

Here's a brief explanation of each of these moods:

Modern: Feels like it was made for the 21st century, and wouldn't make sense in any other period. Typically clean, more on the minimal side, and great for projects that require a more polished feel.

Striking: Impact. Boldness. Weight. These font pairs reach out and grab you and pull you into their message.

Eccentric: Quirky. Odd. Different. These fonts communicate uniqueness in various ways. Great for personal blogs, companies in a crowded marketplace that need to be set apart, and more.

Classic: These font combinations feel like they could have existed for generations. They're reminiscent of classic, time-tested and weathered fonts that last. Great for projects that need to project confidence, reliability, style.

Minimal: These minimal font pairings say so much, with a whisper. They almost try to blend into the background and get out of the way to help you more purely take in the message. Clean. Concise. Polished.

Neutral: Some brands are like the friendly local baker who greets everyone with a smile. Others are more professional, cerebral. These neutral fonts are more on the cerebral side - conveying professionalism and cleanliness above all else. Think Helvetica, but for Google fonts.

Warm: For brands who are the "friendly local baker," these fonts are for you. They convey heart, creativity, openness. They say, "Come talk to me, let's be friends." Great for brands that have that personal touch.

So there you have it!

Beautiful fonts and combinations from Google you can use to fuel your personal and client projects. They're completely web safe fonts, and due to their vast use worldwide, I think it's safe to say Google fonts are the new standard web fonts.

I hope displaying them on top of various colors, with various beautiful works of art behind them, helped you envision how they might work in your projects. That was one of my biggest goals in creating this post.

An important lesson

That's actually a lesson that was greatly reinforced in me throughout this Google font quest - that how fonts are used is just as important as, if not more important than, the fonts themselves.

I think often Google fonts are strewn across designs that lack the fundamentals of good design. They're the cool, hip thing to use - and as a result, so many people use them. But design is a spectrum ranging from bad to great, and as bell curves go, few designs are truly great.

By simple math, most designs using Google fonts need improvement. Perhaps that's where my initial bias against Google fonts came from. Design is something I take so seriously, and am so passionate about, that when I see bad or lazy design, it hurts. From seeing so much sub-par design riddled with Google fonts, I associated Google fonts with sub-par design.

A new perspective

But undertaking this challenge to create this collection forced me to see Google fonts from a new perspective. Namely, it forced me to throw away my previous conceptions and see them anew. When I did, I simply viewed them like I would anything else in a design - as an asset to be used and manipulated to achieve an end-goal.

When I had no choice but to make them work, I viewed them as something that actually "could" work. And that's where the creativity and magic began.

That leads me to another important lesson I became re-acquainted with in this process - that when we think something won't work, it won't work. And when we truly think it can, we really can make it work.

Strategies for choosing font pairs

I also wanted to talk about some of the strategies behind these Google font combinations to help you create even more of your own. Because while I have 50 here, I'm certain there are dozens more waiting to be made.

If you'll notice, there's a pattern to nearly every pair: The headline is very bold and impactful, and then the body font is very light and airy. This contrast creates a nice tension and context for the fonts. It makes it very interesting as you scroll. Our eyes and brains desire constant change and flux, and small contrasts like this deliver.

Another reason the body fonts are very light and airy is that they have to be palatable and legible to the eye over the course of a long piece of text. If I throw a bold, impactful font at you for more than 10 or so words - your eye will go crazy. It's like talking on the phone with someone who only screams.

When you go from a louder headline font to a body font, there's almost a feeling of relief. The headline was a nice, momentary burst of excitement - but then the eye is relieved to handle something easier and less demanding.

Serif & Sans

In addition, still in line with that concept of contrast, I often paired a serif headline with a sans serif body, or vice versa. Again, this just emphasizes contrast and keeps things interesting.

It also takes things a step further and shifts the feel. Serif fonts tend to feel more grounded, conservative and calm. Sans serif fonts tend to feel more modern, daring, progressive. By pairing the two together, you get a great balance that's interesting to the mind and the eye.
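To make that recipe concrete, here is a minimal sketch of how one such pairing might be wired up. It assumes Google Fonts' css2 stylesheet endpoint, and the families and weights (Playfair Display 700 for the headline, Lato 300 for the body; only Lato comes from the list above) are purely illustrative choices; swap in whichever pair you settle on.

from urllib.parse import urlencode

def google_fonts_url(pairs):
    """Build a stylesheet URL for a list of (family, weight) pairs."""
    params = [("family", f"{family}:wght@{weight}") for family, weight in pairs]
    params.append(("display", "swap"))
    # keep ':' and '@' literal so the weight axis survives URL encoding
    return "https://fonts.googleapis.com/css2?" + urlencode(params, safe=":@")

# Bold serif headline over a light, airy sans-serif body: the contrast described above.
pairing = [("Playfair Display", 700), ("Lato", 300)]

print(f'<link rel="stylesheet" href="{google_fonts_url(pairing)}">')
print("h1   { font-family: 'Playfair Display', serif; font-weight: 700; }")
print("body { font-family: 'Lato', sans-serif; font-weight: 300; }")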

Work with what you (don't) love

Finally, in line with the attitude shift I mentioned above, in going from "Google fonts don't work" to "Let's make them work" - I purposefully chose some fonts I simply thought I'd never like or want to use in any context. If I looked at a font and felt like it was a "heck no" - I felt compelled to give it a try.

This is so important for the creative process. Often, without even realizing it, we confine ourselves to our creative comfort zones, which slowly shrink over time. But when we step outside and try something we thought we'd never like - we often have our biggest breakthroughs."
font  typography  fonts  design  google  googlefonts  free  loulevit  2017  webdev  graphicdesign  via:lukeneff  webdesign 
july 2017 by robertogreco
The Rhythm of Food — by Google News Lab and Truth & Beauty
"How do we search for food? Google search interest can reveal key food trends over the years.

From the rise and fall of recipes over diets and drinks to cooking trends and regional cuisines."
classideas  food  visualization  dataviz  google  seasons  search  fruit 
july 2017 by robertogreco
What's Wrong With Letting Tech Run Our Schools - Bloomberg
"Silicon Valley tech moguls are conducting an enormous experiment on the nation’s children. We should not be so trusting that they’ll get it right.

Alphabet Inc. unit Google has taken a big role in public education, offering low-cost laptops and free apps. Mark Zuckerberg of Facebook Inc. is investing heavily in educational technology, largely through the Chan Zuckerberg Initiative. Netflix Inc. head Reed Hastings has been tinkering with expensive and algorithmic ed-tech tools.

Encouraging as all this may be, the technologists might be getting ahead of themselves, both politically and ethically. Also, there’s not a lot of evidence that what they’re doing works.

Like it or not, education is political. People on opposite sides of the spectrum read very different science books, and can’t seem to agree on fundamental principles. It stands to reason that what we choose to teach our children will vary, depending on our beliefs. That’s to acknowledge, not defend, anti-scientific curricula.

Zuckerberg and Bill Gates learned this the hard way last year when the Ugandan government ordered the closure of 60 schools -- part of a network providing highly scripted, low-cost education in Africa -- amid allegations that they had been “teaching pornography” and “conveying the gospel of homosexuality” in sex-ed classes. Let’s face it, something similar could easily happen here if tech initiatives expand beyond the apolitical math subjects on which they have so far focused.

Beyond that, there are legitimate reasons to be worried about letting tech companies wield so much influence in the classroom. They tend to offer “free services” in return for access to data, a deal that raises some serious privacy concerns -- particularly if you consider that it can involve tracking kids’ every click, keystroke and backspace from kindergarten on.

My oldest son is doing extremely well as a junior in school right now, but he was a late bloomer who didn’t learn to read until third grade. Should that be a part of his permanent record, data that future algorithms could potentially use to assess his suitability for credit or a job? Or what about a kid whose “persistence score” on dynamic, standardized tests waned in 10th grade? Should colleges have access to that information in making their admissions decisions?

These are not far-fetched scenarios. Consider the fate of nonprofit education venture InBloom, which sought to collect and integrate student records in a way that would allow lessons to be customized. The venture shut down a few years ago amid concerns about how sensitive information -- including tags identifying students as “tardy” or “autistic” -- would be protected from theft and shared with outside vendors.

Google and others are collecting similar data and using it internally to improve their software. Only after some prompting did Google agree to comply with the privacy law known as FERPA, which had been weakened for the purpose of third-party sharing. It’s not clear how the data will ultimately be used, how long the current crop of students will be tracked, or to what extent their futures will depend on their current performance.

Nobody really knows to what educational benefit we are bearing such uncertainties. What kinds of kids will the technological solutions reward? Will they be aimed toward producing future Facebook engineers? How will they serve children in poverty, with disabilities or with different learning styles? As far as I know, there’s no standard audit that would allow us to answer such questions. We do know, though, that the companies and foundations working on educational technology have a lot of control over the definition of success. That’s already too much power.

In short, blindly trusting the tech guys is no way to improve our educational system. Although they undoubtedly mean well, we should demand more accountability."
edtech  google  provatization  siliconvalley  technology  schools  politics  policy  2017  publicschools  education  inbloom  facebook  markzuckerberg  data  pivacy  accountability  via:audreyatters 
june 2017 by robertogreco
What's Wrong with Apple's New Headquarters | WIRED
"But … one more one more thing. You can’t understand a building without looking at what’s around it—its site, as the architects say. From that angle, Apple’s new HQ is a retrograde, literally inward-looking building with contempt for the city where it lives and cities in general. People rightly credit Apple for defining the look and feel of the future; its computers and phones seem like science fiction. But by building a mega-headquarters straight out of the middle of the last century, Apple has exacerbated the already serious problems endemic to 21st-century suburbs like Cupertino—transportation, housing, and economics. Apple Park is an anachronism wrapped in glass, tucked into a neighborhood."



"Apple Park isn’t the first high-end, suburban corporate headquarters. In fact, that used to be the norm. Look back at the 1950s and 1960s and, for example, the Connecticut General Life Insurance HQ in Hartford or John Deere’s headquarters in Moline, Illinois. “They were stunningly beautiful, high modernist buildings by quality architects using cutting-edge technology to create buildings sheathed in glass with a seamless relationship between inside and outside, dependent on the automobile to move employees to the site,” says Louise Mozingo, a landscape architect at UC Berkeley and author of Pastoral Capitalism: A History of Suburban Corporate Landscapes. “There was a kind of splendid isolation that was seen as productive, capturing the employees for an entire day and in the process reinforcing an insular corporate culture.”

By moving out of downtown skyscrapers and building in the suburbs, corporations were reflecting 1950s ideas about cities—they were dirty, crowded, and unpleasantly diverse. The suburbs, though, were exclusive, aspirational, and architectural blank slates. (Also, buildings there are easier to secure and workers don’t go out for lunch where they might hear about other, better jobs.) It was corporatized white flight. (Mozingo, I should add, speaks to this retrograde notion in Levy’s WIRED story.)

Silicon Valley, though, never really played by these rules. IBM built a couple of research sites modeled on its East Coast redoubts, but in general, “Silicon Valley has thrived on using rather interchangeable buildings for their workplaces,” Mozingo says. You start in a garage, take over half a floor in a crummy office park, then take over the full floor, then the building, then get some venture capital and move to a better office park. “Suddenly you’re Google, and you have this empire of office buildings along 101."

And then when a bust comes or your new widget won’t widge, you let some leases lapse or sell some real estate. More than half of the lot where Apple sited its new home used to be Hewlett Packard. The Googleplex used to be Silicon Graphics. It’s the circuit of life.

Except when you have a statement building like the Spaceship, the circuit can’t complete. If Apple ever goes out of business, what would happen to the building? The same thing that happened to Union Carbide’s. That’s why nobody builds these things anymore. Successful buildings engage with their surroundings—and to be clear, Apple isn’t in some suburban arcadia. It’s in a real live city, across the street from houses and retail, near two freeway onramps.

Except the Ring is mostly hidden behind artificial berms, like Space Mountain at Disneyland. “They’re all these white elephants. Nobody knows what the hell to do with them. They’re iconic, high-end buildings, and who cares?” Mozingo says. “You have a $5 billion office building, incredibly idiosyncratic, impossible to purpose for somebody else. Nobody’s going to move into Steve Jobs’ old building.”"



"The problems in the Bay Area (and Los Angeles and many other cities) are a lot more complicated than an Apple building, of course. Cities all have to balance how they feel about adding jobs, which can be an economic benefit, and adding housing, which also requires adding expensive services like schools and transit. Things are especially tough in California, where a 1978 law called Proposition 13 radically limits the amount that the state can raise property taxes yearly. Not only did its passage gut basic services the state used to excel at, like education, but it also turned real estate into the primary way Californians accrued and preserved personal wealth. If you bought a cheap house in the 1970s in the Bay Area, today it’s a gold mine—and you are disincentivized from doing anything that would reduce its value, like, say, allowing an apartment building to be built anywhere within view.

Meanwhile California cities also have to figure out how to pay for their past employees’ pensions, an ever-increasing percentage of city budgets. Since they can’t tax old homes and can’t build new ones, commercial real estate and tech booms look pretty good. “It’s a lot to ask a corporate campus to fix those problems,” Arieff says.

But that doesn’t mean that it shouldn’t try. Some companies are: The main building of the cloud storage company Box, for example, is across the street from the Redwood City CalTrain station, and the company lets people downtown park in its lot on weekends. “The architecture is neither here nor there, but it’s a billion times more effective than the Apple campus,” Arieff says. That’s a more contemporary approach than building behind hills, away from transit.

When those companies are transnational technology corporations, it’s even harder to make that case. “Tech tends to be remarkably detached from local conditions, primarily because they’re selling globally,” says Ed Glaeser, a Harvard economist who studies cities. “They’re not particularly tied to local suppliers or local customers.” So it’s hard to get them to help fix local problems. They have even less of an incentive to solve planning problems than California homeowners do. “Even if they see the problem and the solution, there’s not a way to sell that. This is why there are government services,” Arieff says. “You can’t solve a problem like CalTrain frequency or the jobs-to-housing ratio with a market-based solution.”

Cities are changing; a more contemporary approach to commercial architecture builds up instead of out, as the planning association’s report says. Apple’s ring sites 2.5 million square feet on 175 acres of rolling hills and trees meant to evoke the Stanford campus. The 60-story tall Salesforce Tower in San Francisco has 1.5 million square feet, takes up about an acre, has a direct connection to a major transit station—the new Transbay Terminal—and cost a fifth of the Apple ring. Stipulated, the door handles probably aren’t as nice, but the views are killer.

The Future

Cupertino is the kind of town that technology writers tend to describe as “once-sleepy” or even, and this should really set off your cliche alarm, “nondescript.” But Shrivastava had me meet her for coffee at Main Street Cupertino, a new development that—unlike the rotten strip malls along Stevens Creek Blvd—combines cute restaurants and shops with multi-story residential development and a few hundred square feet of grass that almost nearly sort of works as a town square.

Across the actual street from Main Street, the old Vallco Mall—one of those medieval fortress-like shopping centers with a Christmas-sized parking lot for a moat—has become now Cupertino’s most hotly debated site for new development. (The company that built Main Street owns it.) Like all the other once-sleepy, nondescript towns in Silicon Valley, Cupertino knows it has to change. Shrivastava knows that change takes time.

It takes even longer, though, if businesses are reluctant partners. In the early 20th century, when industrial capitalists were first starting to get really, really rich, they noticed that publicly financed infrastructure would help them get richer. If you own land that you want to develop into real estate, you want a train that gets there and trolleys that connect it to a downtown and water and power for the houses you’re going to build. Maybe you want libraries and schools to induce families to live there. So you team up with government. “In most parts of the US, you open a tap and drink the water and it won’t kill you. There was a moment when this was a goal of both government and capital,” Mozingo says. “Early air pollution and water pollution regulations were an agreement between capitalism and government.”

Again, in the 1930s and 1940s, burgeoning California Bay Area businesses realized they’d need a regional transit network. They worked for 30 years alongside communities and planners to build what became BART, still today a strange hybrid between regional connector and urban subway.

Tech companies are taking baby steps in this same direction. Google added housing to the package deal surrounding the construction of its new HQ in the North Bayshore area—nearly 10,000 apartments. (That HQ is a collection of fancy pavilion-like structures from famed architect Bjarke Ingels.) Facebook’s new headquarters (from famed architect Frank Gehry) is supposed to be more open to the community, maybe even with a farmers’ market. Amazon’s new headquarters in downtown Seattle, some of 10 million square feet of office space the company has there, comes with terrarium-like domes that look like a good version of Passengers.

So what could Apple have built? Something taller, with mixed-use development around it? Cupertino would never have allowed it. But putting form factor aside, the best, smartest designers and architects in the world could have tried something new. Instead it produced a building roughly the shape of a navel, and then gazed into it.

Steven Levy wrote that the headquarters was Steve Jobs’ last great project, an expression of the way he saw his domain. It may look like a circle, but it’s actually a pyramid—a monument… [more]
apple  urbanism  cities  architects  architecture  adamrogers  2017  applecampus  cupertino  suburbia  cars  civics  howbuildingslearn  stevejobs  design  housing  publictransit  civicresponsibility  corporations  proposition13  bart  allisonarieff  bayarea  1030s  1940s  1950s  facebook  google  amazon  seattle  siliconvalley  isolationism  caltrain  government  capitalism  publicgood  louisemozingo  unioncarbide  ibm  history  future  landscape  context  inequality 
june 2017 by robertogreco
David Byrne | Journal | ELIMINATING THE HUMAN
My dad was an electrical engineer—I love the engineer's way of looking at the world. I myself applied to both art school AND to engineering school (my frustration was that there was little or no cross-pollination. I was told at the time that taking classes in both disciplines would be VERY difficult). I am familiar with and enjoy both the engineer's mindset and the arty mindset (and I’ve heard that now mixing one’s studies is not as hard as it used to be).

The point is not that making a world to accommodate oneself is bad, but that when one has as much power over the rest of the world as the tech sector does, over folks who don’t naturally share its worldview, then there is a risk of a strange imbalance. The tech world is predominantly male—very much so. Testosterone combined with a drive to eliminate as much interaction with real humans as possible—do the math, and there’s the future.

We’ve gotten used to service personnel and staff who have no interest or participation in the businesses where they work. They have no incentive to make the products or the services better. This is a long legacy of the assembly line, standardising, franchising and other practices that increase efficiency and lower costs. It’s a small step then from a worker that doesn’t care to a robot. To consumers, it doesn’t seem like a big loss.

Those who oversee the AI and robots will, not coincidentally, make a lot of money as this trend towards less human interaction continues and accelerates—as many of the products produced above are hugely and addictively convenient. Google, Facebook and other companies are powerful and yes, innovative, but the innovation curiously seems to have had an invisible trajectory. Our imaginations are constrained by who and what we are. We are biased in our drives, which in some ways is good, but maybe some diversity in what influences the world might be reasonable and may be beneficial to all.

To repeat what I wrote above—humans are capricious, erratic, emotional, irrational and biased in what sometimes seem like counterproductive ways. I’d argue that though those might seem like liabilities, many of those attributes actually work in our favor. Many of our emotional responses have evolved over millennia, and they are based on the probability that our responses, often prodded by an emotion, will more likely than not offer the best way to deal with a situation.

Neuroscientist Antonio Damasio wrote about a patient he called Elliot, who had damage to his frontal lobe that made him unemotional. In all other respects he was fine—intelligent, healthy—but emotionally he was Spock. Elliot couldn’t make decisions. He’d waffle endlessly over details. Damasio concluded that though we think decision-making is rational and machinelike, it’s our emotions that enable us to actually decide.

With humans being somewhat unpredictable (well, until an algorithm completely removes that illusion), we get the benefit of surprises, happy accidents and unexpected connections and intuitions. Interaction, cooperation and collaboration with others multiplies those opportunities.

We’re a social species—we benefit from passing discoveries on, and we benefit from our tendency to cooperate to achieve what we cannot alone. In his book, Sapiens, Yuval Harari claims this is what allowed us to be so successful. He also claims that this cooperation was often facilitated by a possibility to believe in “fictions” such as nations, money, religions and legal institutions. Machines don’t believe in fictions, or not yet anyway. That’s not to say they won’t surpass us, but if machines are designed to be mainly self-interested, they may hit a roadblock. If less human interaction enables us to forget how to cooperate, then we lose our advantage.

Our random accidents and odd behaviors are fun—they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. Remove humans from the equation and we are less complete as people or as a society. “We” do not exist as isolated individuals—we as individuals are inhabitants of networks, we are relationships. That is how we prosper and thrive."
davidbyrne  2017  automation  ai  business  culture  technology  dehumanization  humanism  humanity  gigeconomy  labor  work  robots  moocs  socialmedia  google  facebook  amazon  yuvalharari  social  productivity  economics  society  vr  ebay  retail  virtualreality 
june 2017 by robertogreco
Eyes Without a Face — Real Life
"The American painter and sculptor Ellsworth Kelly — remembered mainly for his contributions to minimalism, Color Field, and Hard-edge painting — was also a prodigious birdwatcher. “I’ve always been a colorist, I think,” he said in 2013. “I started when I was very young, being a birdwatcher, fascinated by the bird colors.” In the introduction to his monograph, published by Phaidon shortly before his death in 2015, he writes, “I remember vividly the first time I saw a Redstart, a small black bird with a few very bright red marks … I believe my early interest in nature taught me how to ‘see.’”

Vladimir Nabokov, the world’s most famous lepidopterist, classified, described, and named multiple butterfly species, reproducing their anatomy and characteristics in thousands of drawings and letters. “Few things have I known in the way of emotion or appetite, ambition or achievement, that could surpass in richness and strength the excitement of entomological exploration,” he wrote. Tom Bradley suggests that Nabokov suffered from the same “referential mania” as the afflicted son in his story “Signs and Symbols,” imagining that “everything happening around him is a veiled reference to his personality and existence” (as evidenced by Nabokov’s own “entomological erudition” and the influence of a most major input: “After reading Gogol,” he once wrote, “one’s eyes become Gogolized. One is apt to see bits of his world in the most unexpected places”).

For me, a kind of referential mania of things unnamed began with fabric swatches culled from Alibaba and fine suiting websites, with their wonderfully zoomed images that give you a sense of a particular material’s grain or flow. The sumptuous decadence of velvets and velours that suggest the gloved armatures of state power, and their botanical analogue, mosses and plant lichens. Industrial materials too: the seductive artifice of Gore-Tex and other thermo-regulating meshes, weather-palimpsested blue tarpaulins and piney green garden netting (winningly known as “shade cloth”). What began as an urge to collect colors and textures, to collect moods, quickly expanded into the delicious world of carnivorous plants and bugs — mantises exhibit a particularly pleasing biomimicry — and deep-sea aphotic creatures, which rewardingly incorporate a further dimension of movement. Walls suggest piled textiles, and plastics the murky translucence of jellyfish, and in every bag of steaming city garbage I now smell a corpse flower.

“The most pleasurable thing in the world, for me,” wrote Kelly, “is to see something and then translate how I see it.” I feel the same way, dosed with a healthy fear of cliché or redundancy. Why would you describe a new executive order as violent when you could compare it to the callous brutality of the peacock shrimp obliterating a crab, or call a dress “blue” when it could be cobalt, indigo, cerulean? Or ivory, alabaster, mayonnaise?

We might call this impulse building visual acuity, or simply learning how to see, the seeing that John Berger describes as preceding even words, and then again as completely renewed after he underwent the “minor miracle” of cataract surgery: “Your eyes begin to re-remember first times,” he wrote in the illustrated Cataract, “…details — the exact gray of the sky in a certain direction, the way a knuckle creases when a hand is relaxed, the slope of a green field on the far side of a house, such details reassume a forgotten significance.” We might also consider it as training our own visual recognition algorithms and taking note of visual or affective relationships between images: building up our datasets. For myself, I forget people’s faces with ease but never seem to forget an image I have seen on the internet.

At some level, this training is no different from Facebook’s algorithm learning based on the images we upload. Unlike Google, which relies on humans solving CAPTCHAs to help train its AI, Facebook’s automatic generation of alt tags pays dividends in speed as well as privacy. Still, the accessibility context in which the tags are deployed limits what the machines currently tell us about what they see: Facebook’s researchers are trying to “understand and mitigate the cost of algorithmic failures,” according to the aforementioned white paper, as when, for example, humans were misidentified as gorillas and blind users were led to then comment inappropriately. “To address these issues,” the paper states, “we designed our system to show only object tags with very high confidence.” “People smiling” is less ambiguous and more anodyne than happy people, or people crying.
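The gating described in that white paper (surface a tag only when the model is very sure of it) is easy to picture in code. What follows is a toy sketch of confidence thresholding with made-up tags, scores, and cutoff, not Facebook's actual pipeline:

# Toy illustration: predicted tags are dropped from the alt text unless the
# classifier is very confident about them.
CONFIDENCE_THRESHOLD = 0.98  # stand-in for "very high confidence"

def build_alt_text(predictions):
    """predictions: list of (tag, confidence) pairs from an image classifier."""
    confident = [tag for tag, score in predictions if score >= CONFIDENCE_THRESHOLD]
    if not confident:
        return "Image may contain: no description available"
    return "Image may contain: " + ", ".join(confident)

predictions = [("outdoor", 0.995), ("sky", 0.99), ("people smiling", 0.985),
               ("people crying", 0.62)]  # the shakier guess never reaches the user
print(build_alt_text(predictions))  # Image may contain: outdoor, sky, people smiling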

So there is a gap between what the algorithm sees (analyzes) and says (populates an image’s alt text with). Even though it might only be authorized to tell us that a picture is taken outside, then, it’s fair to assume that computer vision is training itself to distinguish gesture, or the various colors and textures of the slope of a green field. A tag of “sky” today might be “cloudy with a threat of rain” by next year. But machine vision has the potential to do more than merely to confirm what humans see. It is learning to see something different that doesn’t reproduce human biases and uncover emotional timbres that are machinic. On Facebook’s platforms (including Instagram, Messenger, and WhatsApp) alone, over two billion images are shared every day: the monolith’s referential mania looks more like fact than delusion."
2017  rahelaima  algorithms  facebook  ai  artificialintelligence  machinelearning  tagging  machinevision  at  ellsworthkelly  color  tombrdley  google  captchas  matthewplummerfernandez  julesolitski  neuralnetworks  eliezeryudkowsky  seeing 
may 2017 by robertogreco
The Weird Thing About Today's Internet - The Atlantic
"O’Reilly’s lengthy description of the principles of Web 2.0 has become more fascinating through time. It seems to be describing a slightly parallel universe. “Hyperlinking is the foundation of the web,” O’Reilly wrote. “As users add new content, and new sites, it is bound into the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the web of connections grows organically as an output of the collective activity of all web users.”

Nowadays, (hyper)linking is an afterthought because most of the action occurs within platforms like Facebook, Twitter, Instagram, Snapchat, and messaging apps, which all have carved space out of the open web. And the idea of “harnessing collective intelligence” simply felt much more interesting and productive than it does now. The great cathedrals of that time, nearly impossible projects like Wikipedia that worked and worked well, have all stagnated. And the portrait of humanity that most people see filtering through the mechanics of Facebook or Twitter does not exactly inspire confidence in our social co-productions.

Outside of the open-source server hardware and software worlds, we see centralization. And with that centralization, five giant platforms have emerged as the five most valuable companies in the world: Apple, Google, Microsoft, Amazon, Facebook."



"All this to say: These companies are now dominant. And they are dominant in a way that almost no other company has been in another industry. They are the mutant giant creatures created by software eating the world.

It is worth reflecting on the strange fact that the five most valuable companies in the world are headquartered on the Pacific coast between Cupertino and Seattle. Has there ever been a more powerful region in the global economy? Living in the Bay, having spent my teenage years in Washington state, I’ve grown used to this state of affairs, but how strange this must seem from Rome or Accra or Manila.

Even for a local, there are things about the current domination of the technology industry that are startling. Take the San Francisco skyline. In 2007, the visual core of the city was north of Market Street, in the chunky buildings of the downtown financial district. The TransAmerica Pyramid was a regional icon and had been the tallest building in the city since construction was completed in 1972. Finance companies were housed there. Traditional industries and power still reigned. Until quite recently, San Francisco had primarily been a cultural reservoir for the technology industries in Silicon Valley to the south."

[See also:

"How the Internet has changed in the past 10 years"
http://kottke.org/17/05/how-the-internet-has-changed-in-the-past-10-years

"What no one saw back then, about a week after the release of the original iPhone, was how apps on smartphones would change everything. In a non-mobile world, these companies and services would still be formidable but if we were all still using laptops and desktops to access information instead of phones and tablets, I bet the open Web would have stood a better chance."

"‘The Internet Is Broken’: @ev Is Trying to Salvage It"
https://www.nytimes.com/2017/05/20/technology/evan-williams-medium-twitter-internet.html]

[Related:
"Tech’s Frightful Five: They’ve Got Us"
https://www.nytimes.com/2017/05/10/technology/techs-frightful-five-theyve-got-us.html

"Which Tech Giant Would You Drop?: The Big Five tech companies increasingly dominate our lives. Could you ditch them?"
https://www.nytimes.com/interactive/2017/05/10/technology/Ranking-Apple-Amazon-Facebook-Microsoft-Google.html

"Apple, Amazon, Facebook, Microsoft and Alphabet, the parent company of Google, are not just the largest technology companies in the world. As I’ve argued repeatedly in my column, they are also becoming the most powerful companies of any kind, essentially inescapable for any consumer or business that wants to participate in the modern world. But which of the Frightful Five is most unavoidable? I ponder the question in my column this week.

But what about you? If an evil monarch forced you to choose, in what order would you give up these inescapable giants of tech?"]
alexismadrigal  internet  2017  apple  facebook  google  amazon  microsoft  westcoast  bayarea  sanfrancisco  seattle  siliconvalley  twitter  salesforce  instagram  snapchat  timoreilly  2005  web  online  economics  centralization  2007  web2.0  whatsapp  evanwilliams  kottke  farhadmanjoo 
may 2017 by robertogreco
Your Camera Wants to Kill the Keyboard | WIRED
"SNAPCHAT KNEW IT from the start, but in recent months Google and Facebook have all but confirmed it: The keyboard, slowly but surely, is fading into obscurity.

Last week at Google’s annual developer conference, the company presented its vision for how it expects its users—more than a billion people—to interact with technology in the coming years. And for the most part, it didn’t involve typing into a search box. Instead, Google’s brass spent its time onstage touting the company’s speech recognition skills and showing off Google Lens, a new computer vision technology that essentially turns your phone’s camera into a search engine.

Technology has once again reached an inflection point. For years, smartphones relied on hardware keyboards, a holdover from the early days of cell phones. Then came multitouch. Spurred by the wonders of the first smartphone screens, people swiped, typed, and pinched. Now, the way we engage with our phones is changing once again thanks to AI. Snapping a photo works as well, if not better, than writing a descriptive sentence in a search box. Casually chatting with Google Assistant, the company’s omnipresent virtual helper, gets results as fast, if not faster, than opening Chrome and navigating from there. The upshot, as Google CEO Sundar Pichai explained, is that we’re increasingly interacting with our computers in more natural and emotive ways, which could mean using your keyboard a lot less.

Ask the people who build your technology, and they’ll tell you: The camera is the new keyboard. The catchy phrase is becoming something of an industry-wide mantra to describe the constant march toward more visual forms of communication. Just look at Snapchat. The company bet its business on the fact that people would rather trade pictures than strings of words. The idea proved so compelling that Facebook and Instagram unabashedly developed their own versions of the feature. “The camera has already become a pervasive form of communication,” says Roman Kalantari, the head creative technologist at the design studio Fjord. “But what’s the next step after that?”

For Facebook and Snapchat, it was fun-house mirror effects and goofy augmented reality overlays—ways of building on top of photos that you simply can’t with text. Meanwhile, Google took a decidedly more utilitarian approach with Lens, turning the camera into an input device much like the keyboard itself. Point your camera at a tree, and it’ll tell you the variety. Snap a pic of the new restaurant on your block, and it’ll pull up the menu and hours, even help you book a reservation. Perhaps the single most effective demonstration of the technology was also its dullest—focus the lens on a router’s SKU and password, and Google’s image recognition will scan the information, pass it along to your Android phone, and automatically log you into the network.

This simplicity is a big deal. No longer does finding information require typing into a search box. Suddenly the world, in all its complexity, can be understood just by aiming your camera at something. Google isn’t the only company buying into this vision of the future. Amazon’s Fire Phone from 2014 enabled image-based search, which meant you could point the camera at a book or a box of cereal and have the item shipped to you instantly via Amazon Prime. Earlier this year, Pinterest launched the beta version of Lens, a tool that allows users to take a photo of an object in the real world and surface related objects on the Pinterest platform. “We’re getting to the point where using your camera to discover new ideas is as fast and easy as typing,” says Albert Pereta, a creative lead at Pinterest, who led the development of Lens.

Translation: Words can be hard, and it often works better to show than to tell. It’s easier to find the mid-century modern chair with a mahogany leather seat you’re looking for when you can share what it looks like, rather than typing a string of precise keywords. “With a camera, you can complete the task by taking a photo or video of the thing,” explains Gierad Laput, who studies human computer interaction at Carnegie Mellon. “Whereas with a keyboard, you complete this task by typing a description of the thing. You have to come up with the right description and type them accordingly.”

The caveat, of course, is that the image recognition needs to be accurate in order to work. You have agency when you type something into a search box—you can delete, revise, retype. But with a camera, the devices decides what you’re looking at and, even more crucially, assumes what information you want to see in return. The good (or potentially creepy) news is that with every photo taken, search query typed, and command spoken, Google learns more about you, which means over time your results grow increasingly accurate. With its deep trove of knowledge in hand, Google seems determined to smooth out the remaining rough edges of technology. It’ll probably still be a while before the keyboard goes extinct, but with every shot you take on your camera, it’s getting one step closer."
interface  ai  google  communication  images  cameras  2017  snapchat  facebook  smartphones  lizstinson  imagerecognition  pinterest  keyboards  input  romankalantari  technology  amazon  sundarpichai  albertpereta  gieradlaput 
may 2017 by robertogreco
Redesigning Android Emoji – Google Design – Medium
"Yep, we did it. We said goodbye to the blobs. We moved away from the asymmetric and slightly dimensional shape of the container to an easily scannable squishy circle, relying on bold color, purposeful asymmetry — such as the new mind-blown emoji or the prop-wearing cowboy emoji — and loud facial features to convey emotion.

We also spent a long, long time making sure that we addressed cross-platform emotional consistency. Because one of our main goals with the redesign was to avoid confusion or miscommunication across platforms, we wanted to assure the user that when they sent an emoji to a friend, the message was clearly communicated regardless of whether they are on iOS, Windows, Samsung, or any other platform."
android  emotions  google  design  consistency  communication  2017  color  emoji 
may 2017 by robertogreco
How Google Took Over the Classroom - The New York Times
"The tech giant is transforming public education with low-cost laptops and
free apps. But schools may be giving Google more than they are getting."



"Mr. Casap, the Google education evangelist, likes to recount Google’s emergence as an education powerhouse as a story of lucky coincidences. The first occurred in 2006 when the company hired him to develop new business at its office on the campus of Arizona State University in Tempe.

Mr. Casap quickly persuaded university officials to scrap their costly internal email service (an unusual move at the time) and replace it with a free version of the Gmail-and-Docs package that Google had been selling to companies. In one semester, the vast majority of the university’s approximately 65,000 students signed up.

And a new Google business was born.

Mr. Casap then invited university officials on a road show to share their success story with other schools. “It caused a firestorm,” Mr. Casap said. Northwestern University, the University of Southern California and many others followed.

This became Google’s education marketing playbook: Woo school officials with easy-to-use, money-saving services. Then enlist schools to market to other schools, holding up early adopters as forward thinkers among their peers.

The strategy proved so successful in higher education that Mr. Casap decided to try it with public schools.

As it happened, officials at the Oregon Department of Education were looking to help local schools cut their email costs, said Steve Nelson, a former department official. In 2010, the state officially made Google's education apps available to its school districts.

“That caused the same kind of cascade,” Mr. Casap said. School districts around the country began contacting him, and he referred them to Mr. Nelson, who related Oregon’s experience with Google’s apps.

By then, Google was developing a growth strategy aimed at teachers — the gatekeepers to the classroom — who could influence the administrators who make technology decisions. “The driving force tends to be the pedagogical side,” Mr. Bout, the Google education executive, said. “That is something we really embraced.”

Google set up dozens of online communities, called Google Educator Groups, where teachers could swap ideas for using its tech. It started training programs with names like Certified Innovator to credential teachers who wanted to establish their expertise in Google’s tools or teach their peers to use them.

Soon, teachers began to talk up Google on social media and in sessions at education technology conferences. And Google became a more visible exhibitor and sponsor at such events. Google also encouraged school districts that had adopted its tools to hold “leadership symposiums” where administrators could share their experiences with neighboring districts.

Although business practices like encouraging educators to spread the word to their peers have become commonplace among education technology firms, Google has successfully deployed these techniques on such a large scale that some critics say the company has co-opted public school employees to gain market dominance.

“Companies are exploiting the education space for sales and public good will,” said Douglas A. Levin, the president of EdTech Strategies, a consulting firm. Parents and educators should be questioning Google’s pervasiveness in schools, he added, and examining “how those in the public sector are carrying the message of Google branding and marketing.”

Mr. Bout of Google disagreed, saying that the company’s outreach to educators was not a marketing exercise. Rather, he said, it was an effort to improve education by helping teachers learn directly from their peers how to most effectively use Google’s tools.

“We help to amplify the stories and voices of educators who have lessons learned,” he said, “because it can be challenging for educators to find ways to share with each other.”"
google  sfsh  education  apple  data  privacy  billfitzgerald  chicago  publicschools  technology  edtech  googleclassroom  googleapps  learning  schools  advertising  jaimecasap 
may 2017 by robertogreco
Torching the Modern-Day Library of Alexandria - The Atlantic
[See also: "Google Books was the company’s first moonshot. But 15 years later, the project is stuck in low-Earth orbit."
https://backchannel.com/how-google-book-search-got-lost-c2d2cf77121d ]
googlebooks  2017  jamessomers  digital  libraries  copyright  google  books 
april 2017 by robertogreco
Build a Better Monster: Morality, Machine Learning, and Mass Surveillance
"technology and ethics aren't so easy to separate, and that if you want to know how a system works, it helps to follow the money."



A question few are asking is whether the tools of mass surveillance and social control we spent the last decade building could have had anything to do with the debacle of the 2016 election, or whether destroying local journalism and making national journalism so dependent on our platforms was, in retrospect, a good idea.

We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we're good people. We like freedom. How could we have built tools that subvert it?"



"The economic basis of the Internet is surveillance. Every interaction with a computing device leaves a data trail, and whole industries exist to consume this data. Unlike dystopian visions from the past, this surveillance is not just being conducted by governments or faceless corporations. Instead, it’s the work of a small number of sympathetic tech companies with likeable founders, whose real dream is to build robots and Mars rockets and do cool things that make the world better. Surveillance just pays the bills."



"These companies exemplify the centralized, feudal Internet of 2017. While the protocols that comprise the Internet remain open and free, in practice a few large American companies dominate every aspect of online life. Google controls search and email, AWS controls cloud hosting, Apple and Google have a duopoly in mobile phone operating systems. Facebook is the one social network.

There is more competition and variety among telecommunications providers and gas stations than there is among the Internet giants."



"Build a Better Monster
Idle Words · by Maciej Cegłowski
I came to the United States as a six year old kid from Eastern Europe. One of my earliest memories of that time was the Safeway supermarket, an astonishing display of American abundance.

It was hard to understand how there could be so much wealth in the world.

There was an entire aisle devoted to breakfast cereals, a food that didn't exist in Poland. It was like walking through a canyon where the walls were cartoon characters telling me to eat sugar.

Every time we went to the supermarket, my mom would give me a quarter to play Pac Man. As a good socialist kid, I thought the goal of the game was to help Pac Man, who was stranded in a maze and needed to find his friends, who were looking for him.

My games didn't last very long.

The correct way to play Pac Man, of course, is to consume as much as possible while running from the ghosts that relentlessly pursue you. This was a valuable early lesson in what it means to be an American.

It also taught me that technology and ethics aren't so easy to separate, and that if you want to know how a system works, it helps to follow the money.

Today the technology that ran that arcade game permeates every aspect of our lives. We’re here at an emerging technology conference to celebrate it, and find out what exciting things will come next. But like the tail follows the dog, ethical concerns about how technology affects who we are as human beings, and how we live together in society, follow us into this golden future. No matter how fast we run, we can’t shake them.

This year especially there’s an uncomfortable feeling in the tech industry that we did something wrong, that in following our credo of “move fast and break things”, some of what we knocked down were the load-bearing walls of our democracy.

Worried CEOs are roving the landscape, peering into the churches and diners of red America. Steve Case, the AOL founder, roams the land trying to get people to found more startups. Mark Zuckerberg is traveling America having beautifully photographed conversations.

We’re all trying to understand why people can’t just get along. The emerging consensus in Silicon Valley is that polarization is a baffling phenomenon, but we can fight it with better fact-checking, with more empathy, and (at least in Facebook's case) with advanced algorithms to try and guide conversations between opposing camps in a more productive direction.

A question few are asking is whether the tools of mass surveillance and social control we spent the last decade building could have had anything to do with the debacle of the 2016 election, or whether destroying local journalism and making national journalism so dependent on our platforms was, in retrospect, a good idea.

We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we're good people. We like freedom. How could we have built tools that subvert it?

As Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

I contend that there are structural reasons to worry about the role of the tech industry in American political life, and that we have only a brief window of time in which to fix this.

Surveillance Capitalism

The economic basis of the Internet is surveillance. Every interaction with a computing device leaves a data trail, and whole industries exist to consume this data. Unlike dystopian visions from the past, this surveillance is not just being conducted by governments or faceless corporations. Instead, it’s the work of a small number of sympathetic tech companies with likeable founders, whose real dream is to build robots and Mars rockets and do cool things that make the world better. Surveillance just pays the bills.

It is a striking fact that mass surveillance has been driven almost entirely by private industry. While the Snowden revelations in 2013 made people anxious about government monitoring, that anxiety never seemed to carry over to the much more intrusive surveillance being conducted by the commercial Internet. Anyone who owns a smartphone carries a tracking device that knows (with great accuracy) where you’ve been and who you last spoke to and when, and that contains potentially decades-long archives of your private communications, a list of your closest contacts, your personal photos, and other very intimate information.

Internet providers collect (and can sell) your aggregated browsing data to anyone they want. A wave of connected devices for the home is competing to bring internet surveillance into the most private spaces. Enormous ingenuity goes into tracking people across multiple devices, and circumventing any attempts to hide from the tracking.

With the exception of China (which has its own ecology), the information these sites collect on users is stored permanently and with almost no legal controls by a small set of companies headquartered in the United States.

Two companies in particular dominate the world of online advertising and publishing, the economic engines of the surveillance economy.

Google, valued at $560 billion, is the world’s de facto email server, and occupies a dominant position in almost every area of online life. It’s unremarkable for a user to connect to the Internet on a Google phone using Google hardware, talking to Google servers via a Google browser, while blocking ads served over a Google ad network on sites that track visitors with Google analytics. This combination of search history, analytics and ad tracking gives the company unrivaled visibility into users’ browsing history. Through initiatives like AMP (Accelerated Mobile Pages), the company is attempting to extend its reach so that it becomes a proxy server for much of online publishing.

Facebook, valued at $400 billion, has close to two billion users and is aggressively seeking its next billion. It is the world’s largest photo storage service, and owns the world’s largest messaging service, WhatsApp. For many communities, Facebook is the tool of choice for political outreach and organizing, event planning, fundraising and communication. It is the primary source of news for a sizable fraction of Americans, and through its feed algorithm (which determines who sees what) has an unparalleled degree of editorial control over what that news looks like.

Together, these companies control some 65% of the online ad market, which in 2015 was estimated at $60B. Of that, half went to Google and $8B to Facebook. Facebook, the smaller player, is more aggressive in the move to new ad and content formats, particularly video and virtual reality.

These companies exemplify the centralized, feudal Internet of 2017. While the protocols that make up the Internet remain open and free, in practice a few large American companies dominate every aspect of online life: Google controls search and email, AWS controls cloud hosting, Apple and Google hold a duopoly in mobile phone operating systems, and Facebook is the one social network.

There is more competition and variety among telecommunications providers and gas stations than there is among the Internet giants.

Data Hunger

The one thing these companies share is an insatiable appetite for data. They want to know where their users are, what they’re viewing, where their eyes are on the page, who they’re with, what they’re discussing, their purchasing habits, major life events (like moving or pregnancy), and anything else they can discover.

There are two interlocking motives for this data hunger: to target online advertising, and to train machine learning algorithms.

Advertising

Everyone is familiar with online advertising. Ads are served indirectly, based on real-time auctions … [more]
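The excerpt cuts off here, but the mechanism it names, real-time auctions, is easy to illustrate. Below is a toy sketch of a generalized second-price auction in Python; the bidder names and bid amounts are hypothetical, and real ad exchanges layer far more (targeting data, quality scores, price floors) on top of this basic idea.

# Toy sketch of a second-price auction, the basic mechanism behind
# real-time ad bidding. Illustrative only: bidders and bids are made up,
# and real exchanges add targeting, quality scores, and price floors.
def run_auction(bids):
    """bids: dict mapping bidder name -> bid in dollars.
    Returns (winner, clearing_price)."""
    if len(bids) < 2:
        raise ValueError("a second-price auction needs at least two bids")
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1]  # the winner pays the second-highest bid
    return winner, clearing_price

if __name__ == "__main__":
    print(run_auction({"advertiser_a": 2.40, "advertiser_b": 1.95, "advertiser_c": 0.70}))
    # -> ('advertiser_a', 1.95)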
advertising  facebook  google  internet  politics  technology  apple  labor  work  machinelearning  security  democracy  california  taxes  engagement 
april 2017 by robertogreco
The Future Agency - The Verge
"“It's really easy to freak people out with science fiction. It's a heavy responsibility,” says Tellart co-founder Matt Cottam when I first meet him and Scappaticci at the company’s New York outpost, located in the corner of a Chelsea loft. He cites a maxim from the author and New School sociology instructor Barbara Adams: “Every act of future making is an act of future taking." Cottam continues, “While creating a high fidelity image of the future may broaden people's imagination for what's possible, it can also really narrow their perception of what's possible or what their options are.”"



"The agencies are paid to adapt unstable emerging technologies to marketing and branding efforts, and in the process normalize and commodify them for a mainstream audience. If you see facial recognition technology at the Museum of Future Government Services, for example, then you might not be so shocked when it actually shows up in airport security. The experiential fiction acclimatizes you to the future in advance."
sciencefiction  scifi  future  futurism  2017  kylechaykra  google  microsoft  googlecreativelab  microsoftresearch  tellart  museumofthefuture  design 
april 2017 by robertogreco
Uber’s ghost map and the meaning of greyballing | ROUGH TYPE
"The Uber map is a media production. It presents a little, animated entertainment in which you, the user, play the starring role. You are placed at the very center of things, wherever you happen to be, and you are surrounded by a pantomime of oversized automobiles poised to fulfill your desires, to respond immediately to your beckoning. It’s hard not to feel flattered by the illusion of power that the Uber map grants you. Every time you open the app, you become a miniature superhero on a city street. You send out a bat signal, and the batmobile speeds your way. By comparison, taking a bus or a subway, or just hoofing it, feels almost insulting.

In a similar way, a Google map also sets you in a fictionalized story about a place, whether you use the map for navigation or for searching. You are given a prominent position on the map, usually, again, at its very center, and around you a city personalized to your desires takes shape. Certain business establishments and landmarks are highlighted, while other ones are not. Certain blocks are highlighted as “areas of interest“; others are not. Sometimes the highlights are paid for, as advertising; other times they reflect Google’s assessment of you and your preferences. You’re not allowed to know precisely why your map looks the way it does. The script is written in secret.

It’s not only maps. The news and message feeds presented to you by Facebook, or Apple or Google or Twitter, are also stories about the world, fictional representations manufactured both to appeal to your desires and biases and to provide a compelling context for advertising. Mark Zuckerberg may wring his hands over “fake news,” but fake news is to the usual Facebook feed what the Greyball map is to the usual Uber map: an extreme example of the norm.

When I talk about “you,” I don’t really mean you. The “you” around which the map or the news feed or any other digitized representation of the world coalesces is itself a representation. As John Cheney-Lippold explains in his forthcoming book We Are Data, companies like Facebook and Google create digital versions of their users derived through an algorithmic analysis of the data they collect about their users. The companies rely on these necessarily fictionalized representations for both technical reasons (human beings can’t be computed; to be rendered computable, you have to be turned into a digital representation) and commercial reasons (a digital representation of a person can be bought and sold). The “you” on the Uber map or in the Facebook feed is a fake — a character in a story — but it’s a useful and a flattering fake, so you accept it as an accurate portrayal of yourself: an “I” for an I.

Greyballing is not an aberration of the virtual world. Greyballing is the essence of virtuality."

[via: https://tinyletter.com/audreywatters/letters/hewn-no-204 ]
mapping  maps  technology  self  simulacra  nicholascarr  via:audreywatters  greyballing  uber  ideology  fictions  data  algorithms  representation  news  facebooks  fakenews  cartography  business  capitalism  place  google 
march 2017 by robertogreco
Poe’s law explains why 2016 was so terrible.
"We will all remember 2016’s political theater for many reasons: for its exhausting, divisive election, for its memes both dank and dark, for the fact that the country’s first female major-party presidential nominee won the popular vote by a margin of 2.8 million and still lost the election to an actual reality show villain.

But 2016 was also marked—besieged, even—by Poe’s law, a decade-old internet adage articulated by Nathan Poe, a commentator on a creationism discussion thread. Building on the observation that “real” creationists posting to the forum were often difficult to parse from those posing as creationists, Poe’s law stipulates that online, sincere expressions of extremism are often indistinguishable from satirical expressions of extremism.

A prominent example of Poe’s law in action is the March 2016 contest to name a British research vessel that cost almost $300 million. Participants railed—perhaps earnestly, perhaps jokingly—against the Natural Environment Research Council’s decision to reject the public’s overwhelming support for the name “Boaty McBoatface.” So too is the April spread of the “Trump Effect” Mass Effect 2 remix video, which resulted in then-candidate Donald Trump retweeting a video that may or may not have been a satirical effort to frame him as a xenophobic, fascist villain. June’s popular Harambe meme, in which a gorilla shot dead at the Cincinnati Zoo was embraced in the service of animal rights advocacy alongside Dadaist absurdity and straight-up racism, is another. In each case, earnest participation bled into playful participation, making it difficult to know exactly what was happening. A ridiculous joke? A pointed attack? A deliberate argument? Maybe all of the above?

The rise of the so-called alt-right—a loose amalgamation of white nationalists, misogynists, anti-Semites, and Islamophobes—provides a more sobering example of Poe’s law. White nationalist sentiments have metastasized into unequivocal expressions of hate in the wake of Trump’s electoral victory, but in the early days of the group, it was harder to tell. Participants even provided Poe’s law justifications when describing their behavior. A March 2016 Breitbart piece claimed the racism espoused by the “young meme brigades” swarming 4chan, Reddit, and Twitter was ironic play, nothing more, deployed solely to shock the “older generations” that encountered it. According to Breitbart, those propagating hate were no more genuinely bigoted than 1980s heavy metal fans were genuine Satan worshippers. The implication: First of all, shut up, everyone is overreacting, and simultaneously, do keep talking about us, because overreaction is precisely what we’re going for.

Perhaps the best illustration of this tension is Pepe the Frog, the anti-Semitic cartoon mascot of “hipster Nazi” white nationalism. The meme was ostensibly harnessed in an effort to create “meme magic” through pro-Trump “shitposting” (that is, to ensure a Trump victory by dredging up as much chaos and confusion as possible). But it communicated a very clear white supremacist message. The entire point was for it to be taken seriously as a hate symbol, even if the posters were, as they insisted, “just trolling”—a distinction we argue is ultimately irrelevant, since regardless of motivations, such messages communicate, amplify, and normalize bigotry. And normalized bigotry emboldens further bigotry, as Trump’s electoral victory has made painfully clear.

Poe’s law also played a prominent role in Facebook’s fake news problem, particularly in the spread of articles written with the cynical intention of duping Trump supporters through fabrication and misinformation. Readers may have passed these articles along as gospel because they really did believe, for example, that an FBI agent investigating Hillary Clinton’s private email server died mysteriously. Or maybe they didn’t believe it but wanted to perpetuate the falsehood for a laugh, out of boredom, or simply to watch the world burn. Each motive equally possible, each equally unverifiable, and each normalizing and incentivizing the spread of outright lies.

Hence the year’s plethora of outrageous election conspiracy theories—including the very false claim that Clinton was running a child sex trafficking ring out of Comet Ping Pong, a Washington, D.C., pizza restaurant. Pizzagate, as the story came to be known, like so many of the stories animating this weirdest of all possible elections, has a direct link both to 4chan and r/The_Donald, another hotbed of highly ambivalent pro-Trump activity. It is therefore very likely that the conspiracy is yet another instance of pro-Trump shitposting. But even if some participants are “just trolling,” other participants may approach the story with deadly seriousness—seriousness that prompted one Pizzagate crusader to travel from his home in North Carolina to Comet Ping Pong with an assault rifle to conduct his own investigation, opening fire inside the restaurant.

And then there was Trump himself, whose incessant provocations, insults, self-congratulations and straight-up, demonstrable lies have brought Poe’s law to the highest office of the land.

Take, for example, Trump’s incensed reactions to the casts of Hamilton and Saturday Night Live, his baseless assertion of widespread voter fraud (in an election he won), and his unconstitutional claim that flag-burners should be denaturalized or imprisoned. Are these outbursts designed to distract the press from his almost incomprehensibly tangled economic conflicts of interest? Is he just using Twitter to yell at the TV? Is he simply that unfamiliar with well-established constitutional precedent? Is he, and we say this with contempt, “just trolling”?

The same kinds of questions apply to Trump’s entrée into foreign policy issues. Did he honestly think the call he took from the president of Taiwan was nothing more than pleasantries? (His advisers certainly didn’t think so.) Does he sincerely not remember all the times Russian hacking was discussed—all the times he himself discussed the hacks—before the election? Does he truly believe the Russian hacking story is little more than a pro-Clinton conspiracy?

It’s unclear what the most distressing answers to these questions might be.

Poe’s law helps explain why “fuck 2016” is, at least according to the A.V. Club, this year’s “definitive meme.” Content subsumed by Poe’s law is inherently disorienting, not unlike trying to have an intense emotional conversation with someone wearing dark sunglasses. Not knowing exactly what you’re looking at, and therefore what to look out for, obscures how best to respond in a given moment. More vexingly, it obscures what the implications of that response might be.

Take Pizzagate. If proponents of the theory genuinely believe that Clinton is running an underage sex ring out of a Washington, D.C., pizza shop, it makes absolute sense to debunk the rumor, as often and as loudly as possible. On the other hand, if the story is a shitpost joke, even to just some of those perpetuating it, then amplification might ultimately benefit the instigators and further harm those caught in their crosshairs (in this case both literally and figuratively).

Further complicating this picture, each new instance of amplification online, regardless of who is doing the sharing, and regardless of what posters’ motivations might be, risks attracting a new wave of participants to a given story. Each of these participants will, in turn, have similarly inscrutable motives and through commenting on, remixing, or simply repeating a story might continue its spread in who knows what directions, to who knows what consequences.

As the above examples illustrate, the things people say and do online have indelible, flesh-and-blood implications (looking at you, Paul Ryan). Heading into 2017, it is critical to strategize ways of navigating a Poe’s law–riddled internet—particularly as PEOTUS mutates into POTUS.

One approach available to everyone is to forcefully reject the “just trolling,” “just joking,” and “just saying words” excuses so endemic in 2016. In a given context, you may be “just trolling,” “just joking,” or “just saying whatever,” because you have the profound luxury of dismissing the embodied impact of your words. It may also be the case that the people in your immediate circle might get the troll, or joke, or words, because they share your sense of humor and overall worldview.

But even if you and your immediate circle can decode your comments, your troll or joke or words can be swept into the service of something else entirely, for audiences who know nothing of the context and who have exactly zero interest in both your sense of humor and overall worldview.

In short, regardless of anyone’s self-satisfied “don’t blame me, I was just X-ing,” all actions online have consequences—at least the potential for consequences, intended or otherwise. So for god’s sake, take your own words seriously."
whitnetphillips  ryanmilner  fakenews  media  facebooks  google  extremism  nathanpoe  poe'slaw  creationism  satire  sarcasm  internet  memes  shitpoting  pepethefrog  conspiracytheories  conspiracy  discourse  twitter  socialmedia  news  newscycle  donaldtrump  2016 
january 2017 by robertogreco
Google Noto Fonts
"When text is rendered by a computer, sometimes characters are displayed as “tofu”. They are little boxes to indicate your device doesn’t have a font to display the text.

Google has been developing a font family called Noto, which aims to support all languages with a harmonious look and feel. Noto is Google’s answer to tofu. The name Noto is meant to convey the idea that Google’s goal is to see “no more tofu”. Noto has multiple styles and weights, and is freely available to all. The comprehensive set of fonts and tools used in our development is available in our GitHub repositories."
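A quick way to see the "tofu" problem in practice is to check whether a given font file actually has a glyph for each character you want to render. This is a minimal sketch using the fontTools library; the font filename is an assumption, so point it at whichever Noto file you have downloaded from the GitHub repositories mentioned above.

# Minimal sketch: list the characters a font file cannot display.
# Assumes fontTools is installed (pip install fonttools) and that
# "NotoSans-Regular.ttf" is a font you have downloaded locally.
from fontTools.ttLib import TTFont

def missing_glyphs(font_path, text):
    """Return the characters in `text` the font has no glyph for,
    i.e. the ones that would render as tofu."""
    cmap = TTFont(font_path).getBestCmap()  # maps codepoint -> glyph name
    return sorted({ch for ch in text if ord(ch) not in cmap})

if __name__ == "__main__":
    print(missing_glyphs("NotoSans-Regular.ttf", "Hello, мир, 世界"))
    # Characters printed here would be tofu in this font alone; other Noto
    # families (e.g. Noto Sans CJK) are meant to cover the rest.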
fonts  noto  google  typography  free 
november 2016 by robertogreco
How Chromebooks Are About to Totally Transform Laptop Design | WIRED
"That’s not to say the Chrome OS crew are fortune-tellers, of course. They did miss out on one very important thing: smartphones. You may have heard of them. “Back when we were starting Chrome OS,” Lin says, “the web and mobile were in a dead heat. We were betting big on the web, and the Android team was betting big on mobile.” He doesn’t say the obvious next part, which is that mobile and Android won.

There are still times when you want a keyboard and trackpad, though, or a screen larger than the palm of your hand. And lucky for the Chrome team, Android’s also part of Google. So the two teams started talking about how to integrate. They had lots of concerns about performance, integration, and above all security. A couple of years ago, a Chrome engineer ran an experiment: He took containers, a way of separating parts of a system that’s common in data centers, and ran them on a local machine. Android in one, Chrome OS in another. “A few of us saw it,” Sengupta says, “and our eyes literally opened up.” That was the answer.
Android apps solve a couple of Chrome OS’s lingering problems. Most important, they bring all the software people are now accustomed to using onto a new platform. Remember when people used to complain about Chromebooks not having Word? There are now billions of people who reasonably expect their laptop to have Snapchat and Uber. Apps also offer offline support in a much more robust way, and they bring the kind of multi-window, desktop-app functionality that feels familiar to the old-school Windows users. Of course, they also require totally different things than traditional computer software. Most apps assume you’re using them on small, touch-enabled screens, running on devices with cellular connections and a bunch of sensors that you definitely don’t have in your laptop.

So, OK, new question: what does a laptop look like in the age of mobile?

New Puzzle Pieces

Imagine you want to build a Chromebook. Great idea! Before you can do anything, you have to deal with Alberto Martin Perez, a product manager on the Chrome OS team. Perez is the keeper of Google’s documentation, the huge set of requirements and standards given to all Chromebook makers. The documentation is an ever-changing organism, concerned with everything from how much RAM and battery life a Chromebook needs, to how hard you have to press the trackpad before it registers as a click. If your Chromebook takes more than ten seconds to boot, or the power button isn’t on the top right? Get on the plane back to China and try again. The long, complex document is written in engineer-speak and is remarkably detailed. It’s Google’s first line of defense against corner-cutting manufacturers.

When Google decided to integrate Android apps with Chrome OS, Perez and his team combed through the documentation. “We wanted to make sure we were ahead,” Perez says. “It’s really easy to change a web app, it’s really hard to change a laptop.” Google now strongly recommends—which is a lightly-veiled warning that it’ll be mandatory soon—that every Chromebook include GPS, NFC, compass, accelerometer, a fingerprint reader, and a barometer. Those are all smartphone parts that have made little sense in a laptop before. But Android apps are inspiring manufacturers to make devices that move, that adapt, that take on different forms in different contexts.

Computer industry execs believe Chrome OS has come into its own, that people will now choose it over Windows for reasons other than price. For many new customers, says Stacy Wolff, HP’s global head of design, “their first device was a smartphone. And they look for the cleanliness, the simplicity, the stability of what we see in those devices.” That’s the thinking behind the sharp and business-like HP Chromebook 13, the company’s new $500 laptop. Wolff sounds eager to continue down the fancy road, too: When I ask why the Chromebook 13’s not as nice as the Windows-powered Spectre 13, which is one of the best-looking and lightest laptops ever made, he pauses to make sure he’s not giving too much away. “I can’t talk about the future, but there’s nothing that stops us from continuing to go and revolutionize that space.” The $1,000 Chromebook used to be a silly sideshow, Google’s way of overshooting. Soon enough, it’ll be a totally viable purchase.

The next few months are shaping up to be the PC market’s most experimental phase in a long time. The addition of Android apps “begs for higher performance hardware and new form factors to support these new use cases,” says Gary Ridling, Samsung’s senior vice president of product marketing. Batteries are more important than ever, as are touch-friendly displays. Windows manufacturers have been experimenting with convertible and detachable devices for the last few years, but the combination of Android and Chrome will actually make them work.

The results are already starting to trickle out. Acer announced the Chromebook R13, which has a 1080p, 13-inch touchscreen that flips 360 degrees, along with 12 hours of battery, 4 gigs of RAM, and up to 64 gigs of storage. It’ll only get crazier from here: you’ll see laptops that are maybe more like tablets, a few that are maybe even a little bit like smartphones, and every imaginable combination of keyboard, trackpad, and touchscreen. Google and its partners all see this as the moment Chromebook goes from niche—for school, or travel, or your Luddite dad—to mainstream. “The ability to run their favorite apps from phones and tablets,” Ridling says, “without compromising speed, simplicity, or security, will dramatically expand the value of Chromebooks to consumers.”

When the legendary Walt Mossberg started his personal technology column at the Wall Street Journal in 1991, he opened with a now-classic line: “Personal computers are just too hard to use, and it isn’t your fault.” 25 long years later, that story’s finally changing. Chromebooks are exactly the computer the world needs now: simple, secure, usable. They just work. And starting this fall, they’ll work the way people do in 2016: online everywhere, all the time, in a thousand different ways. “Personal computing” left desks and monitors behind a long time ago, and personal computers are finally catching up."
chromebooks  laptops  2016  martiperez  android  chromeos  google  acer  srg 
september 2016 by robertogreco
Why Are America’s Most Innovative Companies Still Stuck in 1950s Suburbia? | Collectors Weekly
"When Apple finishes its new $5 billion headquarters in Cupertino, California, the technorati will ooh and ahh over its otherworldly architecture, patting themselves on the back for yet another example of “innovation.” Countless employees, tech bloggers, and design fanatics are already lauding the “futuristic” building and its many “groundbreaking” features. But few are aware that Apple’s monumental project is already outdated, mimicking a half-century of stagnant suburban corporate campuses that isolated themselves—by design—from the communities their products were supposed to impact.

In the 1940s and ’50s, when American corporations first flirted with a move to the ‘burbs, CEOs realized that horizontal architecture immersed in a park-like buffer lent big business a sheen of wholesome goodness. The exodus was triggered, in part, by inroads the labor movement was making among blue-collar employees in cities. At the same time, the increasing diversity of urban populations meant it was getting harder and harder to maintain an all-white workforce. One by one, major companies headed out of town for greener pastures, luring desired employees into their gilded cages with the types of office perks familiar to any Googler.

Though these sprawling developments were initially hailed as innovative, America’s experiment with suburban, car-centric lifestyles eventually proved problematic, both for its exclusiveness and environmental drawbacks: Such communities intentionally prevented certain ethnic groups and lower-income people from moving there, while enforcing zoning rules that maximized driving. Today’s tech campuses, which the New York Times describes as “the triumph of privatized commons, of a verdant natural world sheltered for the few,” are no better, having done nothing to disrupt the isolated, anti-urban landscape favored by mid-century corporations.

Louise Mozingo, the Chair of UC Berkeley’s Landscape Architecture and Environmental Planning Department, detailed the origins of these corporate environments in her 2011 book, Pastoral Capitalism: A History of Suburban Corporate Landscapes. From the 1930s designs for AT&T Bell Laboratories in New Jersey to Google’s Silicon Valley campus today, Mozingo traced the evolution of suburbia’s “separatist geography.” In contrast with the city, Mozingo writes, “the suburbs were predictable, spacious, segregated, specialized, quiet, new, and easily traversed—a much more promising state of affairs to corporations bent on expansion.” It also didn’t hurt that many top executives often already lived in the affluent, low-density areas near where they wanted their offices built.

Like the expansive headquarters of many companies who fled dense downtowns, Apple’s new office falls into the architectural vein Mozingo dubs “pastoral capitalism,” after a landscaping trend made popular more than a century ago. In the mid-19th century, prominent figures like Frederick Law Olmsted promoted a specific vision of the natural environment adapted to modern life, beginning with urban parks and university campuses and eventually encompassing suburban residential neighborhoods.

“There was this whole academic discussion around what defined the picturesque, the beautiful, and the sublime,” Mozingo told me when we spoke recently. “Landscape gardener Andrew Jackson Downing had written extensively about it in American publications, but Olmsted went beyond that, and called his ideal park landscape ‘pastoral.’ He was well-read enough to understand that this combined elements of wild nature with agricultural nature.”"



"But perhaps even more damaging was the way this architectural trend turned residents away from one another and reduced their engagement in the public sphere. From the 1950s onward, the vast majority of suburban office projects relied on a model Mozingo refers to as “separatist geography,” where people were isolated from their larger communities for the benefit of a single business entity.

Mozingo’s concept of a separatist landscape builds off the ideas of geographer Allan Pred, who describes how our daily path through the built environment is a major influence on our culture and values. “If you live in a typical suburban place,” Mozingo explains, “you get in your car and drive to work by yourself, then stay in your office for the entire day seeing only other colleagues, and then drive back home alone. You’re basically only interested in improving highways and your office building.” Even as big tech touts its green credentials, the offices for Apple, Facebook, Google, and their ilk are inundated with parking, discreetly hidden below ground like their savvy mid-century forebears, encouraging employees to continue their solo commutes.

Today, this segregation isn’t only aided by architecture—it’s also a function of the tech-enabled lifestyle, with its endless array of on-demand services and delivery apps that limit interactions with people of differing views and backgrounds (exposure that would likely serve to increase tolerance). A protective bubble of affluence also reduces the need for civic engagement: If you always rely on ride-hailing apps, why would you care if the sidewalk gets cleaned or repaired?"



"“There are a handful of companies who are finally doing interesting things in the suburbs,” she continues. “For instance, there’s a developer in Silicon Valley, Kilroy Realty, building a development called the Crossing/900, which is the new Box headquarters, and it’s going to be high-density and mixed-use near Caltrain, so everybody’s excited about that one.” Mozingo also sees potential in a future Facebook project, since they’ve purchased a large plot of land near a disused rail line. “It’s supposed to be mixed-use with explicit public space, and a farmer’s market, and there’s the potential to actually service this area with rail,” she says. “I’m skeptical but hopeful.”

Clearly these modern suburban offices can’t resolve all of a community’s planning issues on a single, isolated site. But even companies that do try to affect change on a larger municipal level are often turned off by the required public process, which Mozingo calls “long, arduous, boring, and annoying.” Despite these misgivings, Mozingo’s understanding of urban history gives her faith that suburban corporate architecture could remedy the problems it has wrought.

“One of the reasons cities function really well,” Mozingo says, “is that in the first few decades of the 20th century, after industry had its way, there was a coalition of progressives who said, ‘We want good lighting, good transportation, and clean water in our cities. We’re going to have sidewalks and streets with orderly traffic, and we’re going to do some zoning so you don’t have a tannery right next to an orphanage.’ They put in big public institutions like museums and theaters and squares with fancy fountains. It cost everybody money, but was agreed on by both the public and private sectors. This is the reason why we still love San Francisco and New York City. Even if we don’t live there, we like going there.

“Believe me, in 1890, cities in the United States were just dreadful–but by 1920, they were much better, and everybody could turn on the tap and drink some water. This was not a small victory,” Mozingo emphasizes. “Suburban corporations have to realize that they’re in the same situation: They have to build alliances with municipalities, counties, state agencies, and each other to come together and spend the next three decades figuring it out—and it is going to take decades.”"
suberbs  suburbia  apple  google  ibm  belllabs  isolation  2016  cities  urbanism  us  corporatecampuses  janejacobs  allanpred  publicspace  urbanplanning  segegation  whiteflight  history  class  race  racism  1970s  1980s  housing  jobs  economics  work  generalmotors  transportation  publictransit  normanfoster  architecture  louisemozingo 
august 2016 by robertogreco
a16z Podcast: The Meaning of Emoji 💚 🍴 🗿 – Andreessen Horowitz
"This podcast is all about emoji. But it’s really about how innovation really comes about — through the tension between standards vs. proprietary moves; the politics of time and place; and the economics of creativity, from making to funding … Beginning with a project on Kickstarter to crowd-translate Moby Dick entirely into emoji to getting dumplings into emoji form and ending with the Library of Congress and an “emoji-con”. So joining us for this conversation are former VP of Data at Kickstarter Fred Benenson (and the 👨 behind ‘Emoji Dick’) and former New York Times reporter and current Unicode emoji subcommittee member Jennifer 8. Lee (one of the 👩 behind the dumpling emoji).

So yes, this podcast is all about emoji. But it’s also about where emoji fits in the taxonomy of social communication — from emoticons to stickers — and why this matters, from making emotions machine-readable to being able to add “limbic” visual expression to our world of text. If emoji is a (very limited) language, what tradeoffs do we make for fewer degrees of freedom and greater ambiguity? How exactly does one then translate emoji (let alone translate something into emoji)? How do emoji work, both technically underneath the hood and in the (committee meeting) room where it happens? And finally, what happens as emoji becomes a means of personalized expression?

This a16z Podcast is all about emoji. We only wish it could be in emoji!"
emoji  open  openstandards  proprietarystandards  communication  translation  fredbenenson  jennifer8.lee  sonalchokshi  emopjidick  mobydick  unicode  apple  google  microsoft  android  twitter  meaning  standardization  technology  ambiguity  emoticons  text  reading  images  symbols  accessibility  selfies  stickers  chat  messaging  universality  uncannyvalley  snapchat  facebook  identity  race  moby-dick 
august 2016 by robertogreco
Transit Maps: Apple vs. Google vs. Us — Medium
"Transit maps are beautiful. You see them plastered on bus shelters and subway stops. Your parents kept one in their pockets. You might have one burned into your brain.

A transit map is much more than a list of stations. It’s the underlying anatomy of your city. It shows how people move, how neighbourhoods are connected, and how your craziest city adventures begin.

Of course, transit maps are also incredibly functional: they’re abstract diagrams that show you how your transit system works. They have rigid lines and fixed angles. While they’re not geographically accurate, they do a pretty good job of helping you figure out how to get from A to B. Every transit line has a different colour, and intersecting lines show you where to transfer.

You can ask any transit agency designer: creating a transit map is a painstaking process. Transit agencies put lots of thought into making diagrams that are equally beautiful and functional…

…although no two cities approach transit maps exactly the same way.
Which is great!

Unless you’re trying to design a transit map for every city in the world.

Imagine that: every transit line in every city, condensed into one, single, beautiful, curvy, map. Millions of stops, thousands of lines, hundreds of agencies.

Google Maps and Apple Maps have tried to do it, but we thought we could do better.

They have lots of resources. We don’t. But then again… we have Anton.

In this post, we’ll show you how Anton, our algorithm alchemist, took on both Apple and Google. He’ll be posting a technical follow up soon, so if you’re into that, we’ll let you know on Twitter. (If you want to take our word for it though, maybe just download our app? See our transit maps in all their titillating, unadulterated glory.)"
maps  mapping  application  ios  mobile  android  iphone  googlemaps  applemaps  apple  google  transit  transitapp  publictransit  2016  design 
july 2016 by robertogreco
After years of intensive analysis, Google discovers the key to good teamwork is being nice — Quartz
[via: https://workfutures.io/message-ansel-on-overwork-jenkin-on-the-workplace-cortese-on-stocksy-mohdin-on-project-3cb6502c79a8 ]

"Google’s data-driven approach ended up highlighting what leaders in the business world have known for a while: the best teams respect one another’s emotions and are mindful that all members should contribute to the conversation equally. It has less to do with who is in a team, and more with how a team’s members interact with one another.

The findings echo Stephen Covey’s influential 1989 book The 7 Habits of Highly Effective People: Members of productive teams take the effort to understand each other, find a way to relate to each other, and then try to make themselves understood."
2016  google  work  niceness  kindness  labor  teams  howwework  commonsense  understanding  administration  leadership  management  sfsh  conversation  productivity  projectaristotle 
july 2016 by robertogreco
TILT #1: librarians like to search, everyone else likes to find
"My father was a technologist and bullshitter. Not in that "doesn't tell the truth" way (though maybe some of that) but mostly in that "likes to shoot the shit with people" way. When he was being sociable he'd pass the time idly wondering about things. Some of these were innumeracy tests "How many of this thing do you think could fit inside this other thing?" or "How many of these things do you think there are in the world?" Others were more concrete "Can I figure out what percentage of the movies that have been released this year will wind up on Netflix in the next twelve months?" and then he'd like to talk about how you'd get the answer. I mostly just wanted to get the answer, why just speculate about something you could know?

He wasn't often feeling sociable so it was worth trying to engage with these questions to keep the conversation going. I'd try some searches, I'd poke around online, I'd ask some people, his attention would wane. Often the interactions would end abruptly with some variant of head-shaking and "Well I guess you can't know some things..." I feel like many, possibly most, things are knowable given enough time to do the research. Still do.

To impatient people many things are "unknowable". The same is true for users of Google. Google is powerful and fast, sure. But they've buried their advanced search deeper and deeper over time, continually try to coerce you to sign in and give them location data, and they save your search history unless you tell them not to. It's common knowledge that they're the largest media owner on the planet, more than Disney, more than Comcast. I use Google. I like Google. But even though they're better than most other search engines out there, that doesn't mean that searching, and finding, can't be a lot better. Getting a million results feels like some sort of accomplishment but it's not worth much if you don't have the result you want.

As filtering and curating are becoming more and more what the internet is about, having a powerful, flexible, and "thoughtful" search feature residing on top of these vast stores of poorly archived digital stuff becomes more critical. No one should settle for a search tool that is just trying to sell you something. Everyone should work on getting their librarian merit badges in order to learn to search, not just find."
jessamynwest  search  internet  google  libraries  2016  filtering  curating  web  online  archives  algorithms 
june 2016 by robertogreco
The Bot Power List 2016 — How We Get To Next
"Science fiction is full of bots that hurt people. HAL 9000 kills one astronaut and tries to kill another in 2001: A Space Odyssey; Ava in Ex Machina expertly manipulates the humans she meets to try and escape her cell; the T-800 is known as The Terminator for obvious reasons.

Even more common, though, are those bots clever and sentient enough to have real personality but undone through their naïveté — from Johnny Five in Short Circuit to the robotic cop in RoboCop, sci-fi is great at examining the dangers of greater intelligence when it’s open to manipulation or lacking concrete moral direction. A smarter bot, a more powerful bot, is also a bot that has more power to do evil things, and in the process expose human hubris.

These are all fictional examples, of course, but since we’re starting to see the tech industry shift its focus toward conversational bots as the future of, well, everything, maybe it offers us a useful way to define the power that a bot has. In this case, we’ll say that a bot is powerful if it could do powerfully evil things if it wanted to.

We’ve asked a number of experts to suggest what they think are the most powerful bots around today, in what is still an early stage for the industry. Together, those suggestions make up our first-ever Bot Power List."
bots  2016  googlenow  alexa  siri  ai  xiaoic  wordsmith  watson  hellobarbie  jillwatson  viv  cortana  amazon  apple  google  microsoft  facebook  eliza  luvo  lark  quartznwws  hala  cyberlover  murdock  bendixon  brucewilcox  neomy  deepdrumpf  rbs  josephweizenbaum  irenechang  ibm  mattel 
june 2016 by robertogreco
Official Google Blog: Meet Gboard: Search, GIFs, emojis & more. Right from your keyboard.
"iPhone users—this one’s for you. Meet Gboard, a new app for your iPhone that lets you search and send information, GIFs, emojis and more, right from your keyboard.

Say you’re texting with a friend about tomorrow’s lunch plans. They ask you for the address. Until now it’s worked like this: You leave your texting app. Open Search. Find the restaurant. Copy the address. Switch back to your texts. Paste the address into a message. And finally, hit send.
Searching and sending stuff on your phone shouldn’t be that difficult. With Gboard, you can search and send all kinds of things—restaurant info, flight times, news articles—right from your keyboard. Anything you’d search on Google, you can search with Gboard. Results appear as cards with the key information front and center, such as the phone number, ratings and hours. With one tap, you can send it to your friend and you keep the conversation going.


You can search for more than just Google search results. Instead of scrolling to find 💃 or 👯, search for “dancer” and find that emoji you were looking for instantly. Even better—you can search for the perfect GIF to show people how you’re really feeling. Finally, Gboard has Glide Typing, which lets you type words by sliding your finger from key to key instead of tapping—so everything you do is just a little bit faster.


Gboard works in any app—messaging, email, YouTube—so you can use it anywhere on your phone. Get it now in the App Store in English in the U.S., with more languages to come."

[See also: https://itunes.apple.com/app/gboard-search.-gifs.-emojis/id1091700242 ]
ios  google  search  gifs  emoji  gboard  via:ableparris  2016 
may 2016 by robertogreco
The Garden and the Stream: A Technopastoral | Hapgood
[Brought back to my attention thanks to Allen:
"@rogre Read this and thought of you and your bookmarks & tumblr:"
https://twitter.com/tealtan/status/720121133102710784 ]

[See also:
https://hapgood.us/2014/06/04/smallest-federated-wiki-as-an-alternate-vision-of-the-web/
https://hapgood.us/2014/11/06/federated-education-new-directions-in-digital-collaboration/
https://hapgood.us/2015/01/08/the-fedwiki-user-innovation-toolkit/
https://hapgood.us/2016/03/03/pre-stocking-the-library/
https://hapgood.us/2016/03/04/bring-your-bookmarks-into-the-hypertext-age/
https://hapgood.us/2016/03/26/intentionally-finding-knowledge-gaps/
https://hapgood.us/2016/04/09/answer-to-leigh-blackall/
http://rainystreets.wikity.cc/
https://www.youtube.com/watch?v=2Gi9SRsRrE4

https://github.com/federated-wiki
http://fed.wiki.org/
http://journal.hapgood.net/view/federated-wiki
http://wikity.net/
http://wikity.net/?p=link-word&s=journal.hapgood.net ]

"The Garden is an old metaphor associated with hypertext. Those familiar with the history will recognize this. The Garden of Forking Paths from the mid-20th century. The concept of the Wiki Gardener from the 1990s. Mark Bernstein’s 1998 essay Hypertext Gardens.

The Garden is the web as topology. The web as space. It’s the integrative web, the iterative web, the web as an arrangement and rearrangement of things to one another.

Things in the Garden don’t collapse to a single set of relations or canonical sequence, and that’s part of what we mean when we say “the web as topology” or the “web as space”. Every walk through the garden creates new paths, new meanings, and when we add things to the garden we add them in a way that allows many future, unpredicted relationships

We can see this here in this collage of photos of a bridge in Portland’s Japanese Garden. I don’t know if you can see this, but this is the same bridge from different views at different times of year.

The bridge is a bridge is a bridge — a defined thing with given boundaries and a stated purpose. But the multi-linear nature of the garden means that there is no one right view of the bridge, no one correct approach. The architect creates the bridge, but it is the visitors to the park which create the bridge’s meaning. A good bridge supports many approaches, many views, many seasons, maybe many uses, and the meaning of that bridge will even evolve for the architect over time.

In the Garden, to ask what happened first is trivial at best. The question “Did the bridge come after these trees” in a well-designed garden is meaningless historical trivia. The bridge doesn’t reply to the trees or the trees to the bridge. They are related to one another in a relatively timeless way.

This is true of everything in the garden. Each flower, tree, and vine is seen in relation to the whole by the gardener so that the visitors can have unique yet coherent experiences as they find their own paths through the garden. We create the garden as a sort of experience generator, capable of infinite expression and meaning.

The Garden is what I was doing in the wiki as I added the Gun Control articles, building out a network of often conflicting information into a web that can generate insights, iterating it, allowing that to grow into something bigger than a single event, a single narrative, or single meaning.

The Stream is a newer metaphor with old roots. We can think of the “event stream” of programming, the “lifestream” proposed by researchers in the 1990s. More recently, the term stream has been applied to the never-ending parade of Twitter, news alerts, and Facebook feeds.

In the stream metaphor you don’t experience the Stream by walking around it and looking at it, or following it to its end. You jump in and let it flow past. You feel the force of it hit you as things float by.

It’s not that you are passive in the Stream. You can be active. But your actions in there — your blog posts, @ mentions, forum comments — exist in a context that is collapsed down to a simple timeline of events that together form a narrative.

In other words, the Stream replaces topology with serialization. Rather than imagine a timeless world of connection and multiple paths, the Stream presents us with a single, time ordered path with our experience (and only our experience) at the center.

In many ways the Stream is best seen through the lens of Bakhtin’s idea of the utterance. Bakhtin saw the utterance, the conversational turn of speech, as inextricably tied to context. To understand a statement you must go back to things before, you must find out what it was replying to, you must know the person who wrote it and their speech context. To understand your statement I must reconstruct your entire stream.

And of course since I can’t do that for random utterances, I mostly just stay in the streams I know. If the Garden is exposition, the stream is conversation and rhetoric, for better and worse.

You see this most clearly in things like Facebook, Twitter, Instagram. But it’s also the notifications panel of your smartphone, it’s also email, it’s also to a large extent blogging. Frankly, it’s everything now.

Whereas the garden is integrative, the Stream is self-assertive. It’s persuasion, it’s argument, it’s advocacy. It’s personal and personalized and immediate. It’s invigorating. And as we may see in a minute it’s also profoundly unsuited to some of the uses we put it to.

The stream is what I do on Twitter and blogging platforms. I take a fact and project it out as another brick in an argument or narrative or persona that I build over time, and recapitulate instead of iterate."
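
To make the topology-versus-serialization contrast concrete, here is a small illustrative sketch (not from Caulfield's talk): the same handful of notes stored once as a garden, a graph of named pages linked to one another, and once as a stream, a single time-ordered list read most-recent-first. The page names and timestamps are made up.

# Illustrative sketch only: "garden" as a graph of linked pages versus
# "stream" as a chronological list. Names and timestamps are hypothetical.
garden = {
    "gun-control":      {"links": ["second-amendment", "australia-1996"]},
    "second-amendment": {"links": ["gun-control"]},
    "australia-1996":   {"links": ["gun-control"]},
}
# Many paths through the garden: start on any page, follow any link.

stream = [
    {"time": "2015-10-01T09:00", "post": "Thoughts on gun control"},
    {"time": "2015-10-02T13:30", "post": "Reply: what about Australia in 1996?"},
    {"time": "2015-10-03T08:15", "post": "Follow-up on the Second Amendment"},
]
# One path through the stream: newest first, each item read against what came before.
stream.sort(key=lambda item: item["time"], reverse=True)
for item in stream:
    print(item["time"], "-", item["post"])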



"So what’s the big picture here? Why am I so obsessed with the integrative garden over the personal and self-assertive stream? Blogs killed hypertext — but who cares, Mike?

I think we’ve been stuck in some unuseful binaries over the past years. Or perhaps binaries that have outlived their use.

So what I’m asking you all to do is put aside your favorite binaries for a moment and try out the garden vs. the stream. All binaries are fictions of course, but I think you’ll find the garden vs. the stream is a particularly useful fiction for our present moment.

OER

Let’s start with OER. I’ve been involved with Open Educational Resources many years, and I have to say that I’m shocked and amazed that we still struggle to find materials.

We announced an open textbook initiative at my school the other day, and one of the first people to email me said she taught State and Local Government and she’d love to ditch the textbook.

So I go look for a textbook on State and Local Government. Doesn’t exist. So I grab the syllabus and look at what sorts of things need explaining.

It’s stuff like influence of local subsidies on development. Now if you Google that term, how many sites in the top 50 will you find just offering a clear and balanced treatment of what it is, what the recent trends are with it, and what seems to be driving the trends?

The answer is none. The closest you’ll find is an article from something called the Encyclopedia of Earth which talks about the environmental economics of local energy subsidies.

Everything else is either journal articles or blog posts making an argument about local subsidies. Replying to someone. Building rapport with their audience. Making a specific point about a specific policy. Embedded in specific conversations, specific contexts.

Everybody wants to play in the Stream, but no one wants to build the Garden.

Our traditional binary here is “open vs. closed”. But honestly that’s not the most interesting question to me anymore. I know why textbook companies are closed. They want to make money.

What is harder to understand is how in nearly 25 years of the web, when people have told us what they THINK about local subsidies approximately one kajillion times we can’t find one — ONE! — syllabus-ready treatment of the issue.

You want ethics of networked knowledge? Think about that for a minute — how much time we’ve all spent arguing, promoting our ideas, and how little time we’ve spent contributing to the general pool of knowledge.

Why? Because we’re infatuated with the stream, infatuated with our own voice, with the argument we’re in, the point we’re trying to make, the people in our circle we’re talking to.

People say, well yes, but Wikipedia! Look at Wikipedia!

Yes, let’s talk about Wikipedia. There’s a billion people posting what they think about crap on Facebook.

There’s about 31,000 active wikipedians that hold English Wikipedia together. That’s about the population of Stanford University, students, faculty and staff combined, for the entire English speaking world.

We should be ashamed. We really should."



"And so we come to the question of whether we are at a turning point. Do we see a rebirth of garden technologies in the present day? That’s always a tough call, asking an activist like me to provide a forecast of the future. But let me respond while trying not to slip into wishful analysis.

I think maybe we’re starting to see a shift. In 2015, out of nowhere, we saw web annotation break into the mainstream. This is a garden technology that has risen and fallen so many times, and suddenly people just get it. Suddenly web annotation, which used to be hard to explain, makes sense to people. When that sort of thing happens culturally it’s worth looking closely at.

Github has taught a generation of programmers that copies are good, not bad, and as we noted, it’s copies that are essential to the Garden.

The Wikimedia Education project has been convincing teachers there’s a life beyond student blogging.

David Wiley has outlined a scheme whereby students could create the textbooks of the future, and you can imagine that rather than create discrete textbooks we could engage students in building a grand web of knowledge that could, like Bush’s trails, be reconfigured and duplicated to serve specific classes … [more]
mikecaufield  federatedwiki  web  hypertext  oer  education  edtech  technology  learning  vannevarbush  katebowles  davecormier  wikipedia  memex  dynabook  davidwiley  textbooks  streams  gardens  internet  cv  curation  online  open  dlrn2015  canon  wikis  markbernstein  networks  collaboration  narrative  serialization  context  tumblr  facebook  twitter  pinboard  instagram  blogs  blogging  networkedknowledge  google  search  github  wardcunningham  mikhailbakhtin  ethics  bookmarks  bookmarking 
april 2016 by robertogreco
Google Nik Collection
"Create stunning images faster
Add the power of the Nik Collection by Google to your workflow today.
Advanced editing, simplified

Easily create the photos you’ve imagined with six powerful plug-ins for Photoshop®, Lightroom®, or Aperture®.

Make precise edits quickly

Use U Point® technology to selectively edit just the parts of your photos that need touching up without losing time on complex masks and selections.

More affordable than ever

You don’t have to choose between plug-ins -- now you get the full set for one low price. You can take the whole collection for a spin with a 15-day free trial."
google  photography  photoshop  software  filters 
march 2016 by robertogreco
Why the Economic Fates of America’s Cities Diverged - The Atlantic
"What accounts for these anomalous and unpredicted trends? The first explanation many people cite is the decline of the Rust Belt, and certainly that played a role."



"Another conventional explanation is that the decline of Heartland cities reflects the growing importance of high-end services and rarified consumption."



"Another explanation for the increase in regional inequality is that it reflects the growing demand for “innovation.” A prominent example of this line of thinking comes from the Berkeley economist Enrico Moretti, whose 2012 book, The New Geography of Jobs, explains the increase in regional inequality as the result of two new supposed mega-trends: markets offering far higher rewards to “innovation,” and innovative people increasingly needing and preferring each other’s company."



"What, then, is the missing piece? A major factor that has not received sufficient attention is the role of public policy. Throughout most of the country’s history, American government at all levels has pursued policies designed to preserve local control of businesses and to check the tendency of a few dominant cities to monopolize power over the rest of the country. These efforts moved to the federal level beginning in the late 19th century and reached a climax of enforcement in the 1960s and ’70s. Yet starting shortly thereafter, each of these policy levers was flipped, one after the other, in the opposite direction, usually in the guise of “deregulation.” Understanding this history, largely forgotten today, is essential to turning the problem of inequality around.

Starting with the country’s founding, government policy worked to ensure that specific towns, cities, and regions would not gain an unwarranted competitive advantage. The very structure of the U.S. Senate reflects a compromise among the Founders meant to balance the power of densely and sparsely populated states. Similarly, the Founders, understanding that private enterprise would not by itself provide broadly distributed postal service (because of the high cost of delivering mail to smaller towns and far-flung cities), wrote into the Constitution that a government monopoly would take on the challenge of providing the necessary cross-subsidization.

Throughout most of the 19th century and much of the 20th, generations of Americans similarly struggled with how to keep railroads from engaging in price discrimination against specific areas or otherwise favoring one town or region over another. Many states set up their own bureaucracies to regulate railroad fares—“to the end,” as the head of the Texas Railroad Commission put it, “that our producers, manufacturers, and merchants may be placed on an equal footing with their rivals in other states.” In 1887, the federal government took over the task of regulating railroad rates with the creation of the Interstate Commerce Commission. Railroads came to be regulated much as telegraph, telephone, and power companies would be—as natural monopolies that were allowed to remain in private hands and earn a profit, but only if they did not engage in pricing or service patterns that would add significantly to the competitive advantage of some regions over others.

Passage of the Sherman Antitrust Act in 1890 was another watershed moment in the use of public policy to limit regional inequality. The antitrust movement that sprung up during the Populist and Progressive era was very much about checking regional concentrations of wealth and power. Across the Midwest, hard-pressed farmers formed the “Granger” movement and demanded protection from eastern monopolists controlling railroads, wholesale-grain distribution, and the country’s manufacturing base. The South in this era was also, in the words of the historian C. Vann Woodward, in a “revolt against the East” and its attempts to impose a “colonial economy.”"



"By the 1960s, antitrust enforcement grew to proportions never seen before, while at the same time the broad middle class grew and prospered, overall levels of inequality fell dramatically, and midsize metro areas across the South, the Midwest, and the West Coast achieved a standard of living that converged with that of America’s historically richest cities in the East. Of course, antitrust was not the only cause of the increase in regional equality, but it played a much larger role than most people realize today.

To get a flavor of how thoroughly the federal government managed competition throughout the economy in the 1960s, consider the case of Brown Shoe Co., Inc. v. United States, in which the Supreme Court blocked a merger that would have given a single distributor a mere 2 percent share of the national shoe market.

Writing for the majority, Supreme Court Chief Justice Earl Warren explained that the Court was following a clear and long-established desire by Congress to keep many forms of business small and local: “We cannot fail to recognize Congress’ desire to promote competition through the protection of viable, small, locally owned business. Congress appreciated that occasional higher costs and prices might result from the maintenance of fragmented industries and markets. It resolved these competing considerations in favor of decentralization. We must give effect to that decision.”

In 1964, the historian and public intellectual Richard Hofstadter would observe that an “antitrust movement” no longer existed, but only because regulators were managing competition with such effectiveness that monopoly no longer appeared to be a realistic threat. “Today, anybody who knows anything about the conduct of American business,” Hofstadter observed, “knows that the managers of the large corporations do their business with one eye constantly cast over their shoulders at the antitrust division.”

In 1966, the Supreme Court blocked a merger of two supermarket chains in Los Angeles that, had they been allowed to combine, would have controlled just 7.5 percent of the local market. (Today, by contrast, there are nearly 40 metro areas in the U.S. where Walmart controls half or more of all grocery sales.) Writing for the majority, Justice Hugo Black noted the long opposition of Congress and the Court to business combinations that restrained competition “by driving out of business the small dealers and worthy men.”

During this era, other policy levers, large and small, were also pulled in the same direction—bank regulation, for example. Since the Great Recession, America has relearned the history of how New Deal legislation such as the Glass-Steagall Act served to contain the risks of financial contagion. Less well remembered is how New Deal-era and subsequent banking regulation long served to contain the growth of banks that were “too big to fail” by pushing power in the banking system out to the hinterland. Into the early 1990s, federal laws severely restricted banks headquartered in one state from setting up branches in any other state. State and federal law fostered a dense web of small-scale community banks and locally operated thrifts and credit unions.

Meanwhile, bank mergers, along with mergers of all kinds, faced tough regulatory barriers that included close scrutiny of their effects on the social fabric and political economy of local communities. Lawmakers realized that levels of civic engagement and community trust tended to decline in towns that came under the control of outside ownership, and they resolved not to let that happen in their time.

In other realms, too, federal policy during the New Deal and for several decades afterward pushed strongly to spread regional equality. For example, New Deal programs such as the Tennessee Valley Authority, the Bonneville Power Administration, and the Rural Electrification Administration dramatically improved the infrastructure of the South and West. During and after World War II, federal spending on the military and the space program also tilted heavily in the Sunbelt’s favor.

The government’s role in regulating prices and levels of service in transportation was also a huge factor in promoting regional equality. In 1952, the Interstate Commerce Commission ordered a 10-percent reduction in railroad freight rates for southern shippers, a political decision that played a substantial role in enabling the South’s economic ascent after the war. The ICC and state governments also ordered railroads to run money-losing long-distance and commuter passenger trains to ensure that far-flung towns and villages remained connected to the national economy.

Into the 1970s, the ICC also closely regulated trucking routes and prices so they did not tilt in favor of any one region. Similarly, the Civil Aeronautics Board made sure that passengers flying to and from small and midsize cities paid roughly the same price per mile as those flying to and from the largest cities. It also required airlines to offer service to less populous areas even when such routes were unprofitable.

Meanwhile, massive public investments in the interstate-highway system and other arterial roads added enormously to regional equality. First, it vastly increased the connectivity of rural areas to major population centers. Second, it facilitated the growth of reasonably priced suburban housing around high-wage metro areas such as New York and Los Angeles, thus making it much more possible than it is now for working-class people to move to or remain in those areas.

Beginning in the late 1970s, however, nearly all the policy levers that had been used to push for greater regional income equality suddenly reversed direction. The first major changes came during Jimmy Carter’s administration. Fearful of inflation, and under the spell of policy entrepreneurs such as Alfred Kahn, Carter signed the Airline Deregulation Act in 1978. This abolished the Civil Aeronautics Board, which had worked to offer rough regional parity in airfares and levels of service since 1938… [more]
us  cities  policy  economics  history  inequality  via:robinsonmeyer  2016  philliplongman  regulation  deregulation  capitalism  trusts  antitrustlaw  mergers  competition  markets  banks  finance  ronaldreagan  corporatization  intellectualproperty  patents  law  legal  equality  politics  government  rentseeking  innovation  acquisitions  antitrustenforcement  income  detroit  nyc  siliconvalley  technology  banking  peterganong  danielshoag  1950s  1960s  1970s  1980s  1990s  greatdepression  horacegreely  chicago  denver  cleveland  seattle  atlanta  houston  saltlakecity  stlouis  enricomoretti  shermanantitrustact  1890  cvannwoodward  woodrowwilson  1912  claytonantitrustact  louisbrandeis  federalreserve  minneapolis  kansascity  robinson-patmanact  1920s  1930s  miller-tydingsact  fdr  celler-kefauveract  emanuelceller  huberhumphrey  earlwarren  richardhofstadter  harryblackmun  newdeal  interstatecommercecommission  jimmycarter  alfredkahn  airlinederegulationact  1978  memphis  cincinnati  losangeles  airlines  transportation  rail  railroads  1980  texas  florida  1976  amazon  walmart  r 
march 2016 by robertogreco
What Google Learned From Its Quest to Build the Perfect Team - The New York Times
"Project Aristotle’s researchers began by reviewing a half-century of academic studies looking at how teams worked. Were the best teams made up of people with similar interests? Or did it matter more whether everyone was motivated by the same kinds of rewards? Based on those studies, the researchers scrutinized the composition of groups inside Google: How often did teammates socialize outside the office? Did they have the same hobbies? Were their educational backgrounds similar? Was it better for all teammates to be outgoing or for all of them to be shy? They drew diagrams showing which teams had overlapping memberships and which groups had exceeded their departments’ goals. They studied how long teams stuck together and if gender balance seemed to have an impact on a team’s success.

No matter how researchers arranged the data, though, it was almost impossible to find patterns — or any evidence that the composition of a team made any difference. ‘‘We looked at 180 teams from all over the company,’’ Dubey said. ‘‘We had lots of data, but there was nothing showing that a mix of specific personality types or skills or backgrounds made any difference. The ‘who’ part of the equation didn’t seem to matter.’’

Some groups that were ranked among Google’s most effective teams, for instance, were composed of friends who socialized outside work. Others were made up of people who were basically strangers away from the conference room. Some groups sought strong managers. Others preferred a less hierarchical structure. Most confounding of all, two teams might have nearly identical makeups, with overlapping memberships, but radically different levels of effectiveness. ‘‘At Google, we’re good at finding patterns,’’ Dubey said. ‘‘There weren’t strong patterns here.’’

As they struggled to figure out what made a team successful, Rozovsky and her colleagues kept coming across research by psychologists and sociologists that focused on what are known as ‘‘group norms.’’ Norms are the traditions, behavioral standards and unwritten rules that govern how we function when we gather: One team may come to a consensus that avoiding disagreement is more valuable than debate; another team might develop a culture that encourages vigorous arguments and spurns groupthink. Norms can be unspoken or openly acknowledged, but their influence is often profound. Team members may behave in certain ways as individuals — they may chafe against authority or prefer working independently — but when they gather, the group’s norms typically override individual proclivities and encourage deference to the team.

Project Aristotle’s researchers began searching through the data they had collected, looking for norms. They looked for instances when team members described a particular behavior as an ‘‘unwritten rule’’ or when they explained certain things as part of the ‘‘team’s culture.’’ Some groups said that teammates interrupted one another constantly and that team leaders reinforced that behavior by interrupting others themselves. On other teams, leaders enforced conversational order, and when someone cut off a teammate, group members would politely ask everyone to wait his or her turn. Some teams celebrated birthdays and began each meeting with informal chitchat about weekend plans. Other groups got right to business and discouraged gossip. There were teams that contained outsize personalities who hewed to their group’s sedate norms, and others in which introverts came out of their shells as soon as meetings began.

After looking at over a hundred groups for more than a year, Project Aristotle researchers concluded that understanding and influencing group norms were the keys to improving Google’s teams. But Rozovsky, now a lead researcher, needed to figure out which norms mattered most. Google’s research had identified dozens of behaviors that seemed important, except that sometimes the norms of one effective team contrasted sharply with those of another equally successful group. Was it better to let everyone speak as much as they wanted, or should strong leaders end meandering debates? Was it more effective for people to openly disagree with one another, or should conflicts be played down? The data didn’t offer clear verdicts. In fact, the data sometimes pointed in opposite directions. The only thing worse than not finding a pattern is finding too many of them. Which norms, Rozovsky and her colleagues wondered, were the ones that successful teams shared?"



"As the researchers studied the groups, however, they noticed two behaviors that all the good teams generally shared. First, on the good teams, members spoke in roughly the same proportion, a phenomenon the researchers referred to as ‘‘equality in distribution of conversational turn-taking.’’ On some teams, everyone spoke during each task; on others, leadership shifted among teammates from assignment to assignment. But in each case, by the end of the day, everyone had spoken roughly the same amount. ‘‘As long as everyone got a chance to talk, the team did well,’’ Woolley said. ‘‘But if only one person or a small group spoke all the time, the collective intelligence declined.’’

Second, the good teams all had high ‘‘average social sensitivity’’ — a fancy way of saying they were skilled at intuiting how others felt based on their tone of voice, their expressions and other nonverbal cues. One of the easiest ways to gauge social sensitivity is to show someone photos of people’s eyes and ask him or her to describe what the people are thinking or feeling — an exam known as the Reading the Mind in the Eyes test. People on the more successful teams in Woolley’s experiment scored above average on the Reading the Mind in the Eyes test. They seemed to know when someone was feeling upset or left out. People on the ineffective teams, in contrast, scored below average. They seemed, as a group, to have less sensitivity toward their colleagues."



"When Rozovsky and her Google colleagues encountered the concept of psychological safety in academic papers, it was as if everything suddenly fell into place. One engineer, for instance, had told researchers that his team leader was ‘‘direct and straightforward, which creates a safe space for you to take risks.’’ That team, researchers estimated, was among Google’s accomplished groups. By contrast, another engineer had told the researchers that his ‘‘team leader has poor emotional control.’’ He added: ‘‘He panics over small issues and keeps trying to grab control. I would hate to be driving with him being in the passenger seat, because he would keep trying to grab the steering wheel and crash the car.’’ That team, researchers presumed, did not perform well.

Most of all, employees had talked about how various teams felt. ‘‘And that made a lot of sense to me, maybe because of my experiences at Yale,’’ Rozovsky said. ‘‘I’d been on some teams that left me feeling totally exhausted and others where I got so much energy from the group.’’ Rozovsky’s study group at Yale was draining because the norms — the fights over leadership, the tendency to critique — put her on guard. Whereas the norms of her case-competition team — enthusiasm for one another’s ideas, joking around and having fun — allowed everyone to feel relaxed and energized.

For Project Aristotle, research on psychological safety pointed to particular norms that are vital to success. There were other behaviors that seemed important as well — like making sure teams had clear goals and creating a culture of dependability. But Google’s data indicated that psychological safety, more than anything else, was critical to making a team work.

‘‘We had to get people to establish psychologically safe environments,’’ Rozovsky told me. But it wasn’t clear how to do that. ‘‘People here are really busy,’’ she said. ‘‘We needed clear guidelines.’’

However, establishing psychological safety is, by its very nature, somewhat messy and difficult to implement. You can tell people to take turns during a conversation and to listen to one another more. You can instruct employees to be sensitive to how their colleagues feel and to notice when someone seems upset. But the kinds of people who work at Google are often the ones who became software engineers because they wanted to avoid talking about feelings in the first place.

Rozovsky and her colleagues had figured out which norms were most critical. Now they had to find a way to make communication and empathy — the building blocks of forging real connections — into an algorithm they could easily scale."



"Project Aristotle is a reminder that when companies try to optimize everything, it’s sometimes easy to forget that success is often built on experiences — like emotional interactions and complicated conversations and discussions of who we want to be and how our teammates make us feel — that can’t really be optimized. Rozovsky herself was reminded of this midway through her work with the Project Aristotle team. ‘‘We were in a meeting where I made a mistake,’’ Rozovsky told me. She sent out a note afterward explaining how she was going to remedy the problem. ‘‘I got an email back from a team member that said, ‘Ouch,’ ’’ she recalled. ‘‘It was like a punch to the gut. I was already upset about making this mistake, and this note totally played on my insecurities.’’"
charlesduhigg  google  teams  teamwork  groups  groupdynamics  juliarozovsky  psychology  norms  groupnorms  communication  2016  siliconvalley  collaboration  projectaristotle  behavior  safety  emocions  socialemotional  empathy  psychologicalsafety  leadership  socialemotionallearning 
february 2016 by robertogreco
What World Are We Building? — Data & Society: Points — Medium
"It’s easy to love or hate technology, to blame it for social ills or to imagine that it will fix what people cannot. But technology is made by people. In a society. And it has a tendency to mirror and magnify the issues that affect everyday life. The good, bad, and ugly."



"1. Inequity All Over Again

While social media was being embraced, I was doing research, driving around the country talking with teenagers about how they understood technology in light of everything else taking place in their lives. I watched teens struggle to make sense of everyday life and their place in it. And I watched as privileged parents projected their anxieties onto the tools that were making visible the lives of less privileged youth.

As social media exploded, our country’s struggle with class and race got entwined with technology. I will never forget sitting in small town Massachusetts in 2007 with a 14-year-old white girl I call Kat. Kat was talking about her life when she made a passing reference to why her friends had all quickly abandoned MySpace and moved to Facebook: because it was safer, and MySpace was boring. Whatever look I gave her at that moment made her squirm. She looked down and said,
I’m not really into racism, but I think that MySpace now is more like ghetto or whatever, and…the people that have Facebook are more mature… The people who use MySpace — again, not in a racist way — but are usually more like [the] ghetto and hip-hop/rap lovers group.'


As we continued talking, Kat became more blunt and told me that black people use MySpace and white people use Facebook.

Fascinated by Kat’s explanation and discomfort, I went back to my field notes. Sure enough, numerous teens had made remarks that, with Kat’s story in mind, made it very clear that a social division had unfolded between teens using MySpace and Facebook during the 2006–2007 school year. I started asking teens about these issues and heard many more accounts of how race affected engagement."



"The techniques we use at Crisis Text Line are the exact same techniques that are used in marketing. Or personalized learning. Or predictive policing. Predictive policing, for example, involves taking prior information about police encounters and using that to make a statistical assessment about the likelihood of crime happening in a particular place or involving a particular person. In a very controversial move, Chicago has used such analytics to make a list of people most likely to be a victim of violence. In an effort to prevent crime, police officers approached those individuals and used this information in an effort to scare them to stay out of trouble. But surveillance by powerful actors doesn’t build trust; it erodes it. Imagine that same information being given to a social worker. Even better, to a community liaison. Sometimes, it’s not the data that’s disturbing, but how it’s used and by whom.

3. The World We’re Creating

Knowing how to use data isn’t easy. One of my colleagues at Microsoft Research — Eric Horvitz — can predict with startling accuracy whether someone will be hospitalized based on what they search for. What should he do with that information? Reach out to people? That’s pretty creepy. Do nothing? Is that ethical? No matter how good our predictions are, figuring out how to use them is a complex social and cultural issue that technology doesn’t solve for us. In fact, as it stands, technology is just making it harder for us to have a reasonable conversation about agency and dignity, responsibility and ethics.

Data is power. Increasingly we’re seeing data being used to assert power over people. It doesn’t have to be this way, but one of the things that I’ve learned is that, unchecked, new tools are almost always empowering to the privileged at the expense of those who are not.

For most media activists, unfettered Internet access is at the center of the conversation, and that is critically important. Today we’re standing on a new precipice, and we need to think a few steps ahead of the current fight.

We are moving into a world of prediction. A world where more people are going to be able to make judgments about others based on data. Data analysis that can mark the value of people as worthy workers, parents, borrowers, learners, and citizens. Data analysis that has been underway for decades but is increasingly salient in decision-making across numerous sectors. Data analysis that most people don’t understand.

Many activists will be looking to fight the ecosystem of prediction — and to regulate when and where prediction can be used. This is all fine and well when we’re talking about how these technologies are designed to do harm. But more often than not, these tools will be designed to be helpful, to increase efficiency, to identify people who need help. Their positive uses will exist alongside uses that are terrifying. What do we do?

One of the most obvious issues is the limited diversity of people who are building and using these tools to imagine our future. Statistical and technical literacy isn’t even part of the curriculum in most American schools. In our society where technology jobs are high-paying and technical literacy is needed for citizenry, less than 5% of high schools offer AP computer science courses. Needless to say, black and brown youth are much less likely to have access, let alone opportunities. If people don’t understand what these systems are doing, how do we expect people to challenge them?

We must learn how to ask hard questions of technology and of those making decisions based on data-driven tech. And opening the black box isn’t enough. Transparency of data, algorithms, and technology isn’t enough. We need to build assessment into any system that we roll out. You can’t just put millions of dollars of surveillance equipment into the hands of the police in the hope of creating police accountability, yet, with police body-worn cameras, that’s exactly what we’re doing. And we’re not even trying to assess the implications. This is probably the fastest roll-out of a technology out of hope, and it won’t be the last. How do we get people to look beyond their hopes and fears and actively interrogate the trade-offs?

Technology plays a central role — more and more — in every sector, every community, every interaction. It’s easy to screech in fear or dream of a world in which every problem magically gets solved. To make the world a better place, we need to start paying attention to the different tools that are emerging and learn to frame hard questions about how they should be put to use to improve the lives of everyday people.

We need those who are thinking about social justice to understand technology and those who understand technology to commit to social justice."
danahboyd  inequality  technology  2016  facebook  myspace  race  racism  prejudice  whiteflight  bigdata  indifference  google  web  online  internet  christinaxu  bias  diversity  socialjustice 
february 2016 by robertogreco
The Internet Isn't Available in Most Languages - The Atlantic
"Tweet, tuít, or giolc? These were the three iterations of a Gaelic version of the word “tweet” that Twitter’s Irish translators debated in 2012. The agonizing choice between an Anglicized spelling, a Gaelic spelling, or the use of the Gaelic word for “tweeting like a bird” stalled the project for an entire year. Finally, a small group of translators made an executive decision to use the Anglicized spelling of “tweet” with Irish grammar. As of April 2015, Gaelic Twitter is online.

Indigenous and under-resourced cultures face a number of obstacles when establishing their languages on the Internet. English, along with a few other languages like Spanish and French, dominates the web. People who speak these languages often take for granted access to social-media sites with agreed-upon vocabularies, built-in translation services, and basic grammar and spell-checkers.

For Gaelic, a minority language spoken by only two to three percent of the Irish population, it can be difficult to access these digital services. And even languages with millions of speakers can lack the resources needed to make the Internet relevant to daily life.

In September of this year, the Broadband Commission for Digital Development, an organization established five years ago to monitor the growth and use of the Internet around the world, released its 2015 report on the state of broadband. The report argues that representation of the world's languages online remains one of the major challenges in expanding the Internet to reach the four billion people who don’t yet have access.

At the moment, the Internet only has webpages in about five percent of the world's languages. Even national languages like Hindi and Swahili are used on only .01 percent of the 10 million most popular websites. The majority of the world’s languages lack an online presence that is actually useful.

Ethnologue, a directory of the world’s living languages, has determined that 1,519 out of the 7,100 languages spoken today are in danger of extinction. For these threatened languages, social-networking sites like Facebook, Twitter, and Instagram, which rely primarily on user-generated content, as well as other digital platforms like Google and Wikipedia, have a chance to contribute to their preservation. While the best way to keep a language alive is to speak it, using one’s native language online could help.

The computational linguistics professor Kevin Scannell devotes his time to developing the technical infrastructure—often using open-source software—that can work for multiple languages. He’s worked with more than 40 languages around the world, his efforts part of a larger struggle to promote under-resourced languages. “[The languages] are not part of the world of the Internet or computing,” he says. “We’re trying to change that mindset by providing the tools for people to use.”

One such under-resourced language is Chichewa, a Bantu language spoken by 12 million people, many of whom are in the country of Malawi. According to Edmond Kachale, a programmer who began developing a basic word processor for the language in 2005 and has been working on translating Google search into Chichewa for the last five years, his language doesn’t have sufficient content online. This makes it difficult for its speakers to compete in a digital, globalized world. “Unless a language improves its visibility in the digital world,” he says, “it is heading for extinction.”

In Malawi, over 60 percent of the population lacks Internet access; but Kachale says that “even if there would be free Internet nation-wide, chances are that [Chichewa speakers] may not use it at all because of the language barrier.” The 2015 Broadband Report bears Kachale’s point out. Using the benchmark of 100,000 Wikipedia pages in any given language, it found that only 53 percent of the world’s population has access to sufficient content in their native language to make use of the Internet relevant.

People who can’t use the Internet risk falling behind economically because they can’t take advantage of e-commerce. In Malawi, Facebook has become a key platform for Internet businesses, even though the site has not yet been translated into Chichewa. Instead, users tack on a work-around browser plug-in, a quick fix for languages that don’t have official translations for big social-media sites.

In 2014, Facebook added 20 new languages to its site and launched several more this year, bringing it to more than 80 languages. The site also opens up languages for community-based translation. This option is currently available for about 50 languages, including Aymara, an indigenous language spoken mainly in Bolivia, Peru, and Chile. Though it has approximately 2 million speakers, UNESCO has designated Aymara as “vulnerable.” Beginning in May of 2014, a group of 20 volunteer translators have been chipping away at the 25,000 words used on the site—and the project is on course to be finished by Christmas.

The project is important because it will encourage young people to use their native language. “We are sure when Aymara is available on Facebook as an official language, it will be a source of motivation for Aymara people,” says Elias Quisepe Chura, who manages the translation effort (it happens primarily online, unsurprisingly via a Facebook page).

Ruben Hilari, another member of the translation team, told the Spanish newspaper El Pais, “Aymara is alive. It does not need to be revitalized. It needs to be strengthened and that is exactly what we are doing. If we do not work for our language and culture today, it will be too late tomorrow to remember who we are, and we will always feel insecure about our identity.”

Despite its reputation as the so-called information superhighway, the Internet is only legible to speakers of a few languages; this limit to the web’s accessibility proves that it can be just as insular and discriminative as the modern world at large."
internet  languages  language  linguistics  2015  translation  insularity  web  online  gaelic  hindi  swahili  kevinscannell  via:unthinkingly  katherineschwab  edmondkachele  accessibility  enlgish  aymara  rubenhilari  eliasquisepechura  bolivia  perú  chile  indigenous  indigeneity  chichewa  bantu  google  kevinsannell  twitter  facebook  instagram  software  computation  computing  inclusivity 
january 2016 by robertogreco
Tracing You (2015) -- by Benjamin Grosser
"computational surveillance system

Tracing You presents a website’s best attempt to see the world from its visitors’ viewpoints. By cross referencing visitor IP addresses with available online data sources, the system traces each visitor back through the network to its possible origin. The end of that trace is the closest available image that potentially shows the visitor’s physical environment. Sometimes what this image shows is eerily accurate; other times it is wildly dislocated. What can a computational system know of our environment based on the traces we leave behind? Why might it want to see where we are? How accurate are the system’s data sources and when might they improve? Finally, what does this site’s attempt to trace its visitors reveal about who (or what) is reading the web? By showing how far it sees in real-time, Tracing You provokes these questions and more.

How it Works
Every time you visit a website, the computer serving that site records data about the visit. One piece of that data is the visitor’s Internet Protocol (IP) address. A numerical string (e.g. 203.0.113.4), the IP address uniquely identifies the device used to view the site, whether it’s your phone, laptop, or tablet. Every IP address is registered with the Internet Assigned Numbers Authority, and thus has data associated with the registration. Tracing You starts with this IP address and follows the trail it leaves. First it looks up the IP address using ipinfo to obtain geolocation. This is represented as a latitude/longitude pair (e.g. 48.8631831,2.3629368) that identifies a precise location on the earth. The latitude/longitude is sent to Google, where it queries the Street View, Static Maps, and Javascript Maps data services. Using these services, Tracing You searches for the closest available match it can find, whether it’s a street image in front of the location, an interior image inside the location, or, if nothing else, a satellite image from above (e.g. many locations in China). Once found, this image is combined with text information from ipinfo and shown on the Tracing You interface.

These queries happen so quickly that when you look at the Tracing You interface you should see an image related to you. You will be the site’s most recent visitor at that moment. The image you see may be very close to your current location, or even photographed from within the building you are in at that moment. Alternatively, the image may be down the block, a few blocks over, or even further. How close it gets is very much dependent on how networks are built, configured, operated, and distributed where you are, which network you use, and the accuracy of the data associated with those networks. The more you look at the site, the more it looks back at you. Big data is continually refining its “picture” of the world. As that picture becomes more resolved, Tracing You will get more accurate. As new data sources become available, I will integrate them into the work."
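The pipeline described above (visitor IP, then an ipinfo lookup, then a Street View request) is simple enough to sketch. The snippet below is not Grosser's code; it is a minimal illustration of that chain, assuming you have your own ipinfo.io token and a Google Maps API key with the Street View Static API enabled (both shown here as placeholders).

```python
# Minimal sketch of the IP -> geolocation -> Street View chain described above.
# Not Grosser's code; the token, key, and example IP are placeholders.
import requests

IPINFO_TOKEN = "YOUR_IPINFO_TOKEN"   # assumption: a registered ipinfo.io token
GOOGLE_KEY = "YOUR_GOOGLE_MAPS_KEY"  # assumption: Street View Static API enabled

def geolocate(ip):
    """Look up an IP address with ipinfo and return its (lat, lon) as floats."""
    r = requests.get(f"https://ipinfo.io/{ip}/json",
                     params={"token": IPINFO_TOKEN}, timeout=10)
    r.raise_for_status()
    lat, lon = r.json()["loc"].split(",")  # e.g. "48.8631831,2.3629368"
    return float(lat), float(lon)

def street_view_url(lat, lon, size="640x400"):
    """Build a Street View Static API request for the nearest available image."""
    return ("https://maps.googleapis.com/maps/api/streetview"
            f"?size={size}&location={lat},{lon}&key={GOOGLE_KEY}")

if __name__ == "__main__":
    # Replace with a real visitor address; the documentation-range example
    # used in the text above has no geolocation data of its own.
    visitor_ip = "8.8.8.8"
    lat, lon = geolocate(visitor_ip)
    print("Visitor traced to roughly:", lat, lon)
    print("Closest street-level image:", street_view_url(lat, lon))
```

The installation also falls back to interior and satellite imagery when no street-level photo exists, which would take additional calls (for example, to the Static Maps service) that this sketch leaves out.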

[See also: http://bengrosser.com/projects/tracing-you/ ]
2015  benjamingrosser  google  internet  ip  maps  mapping  googlestreetview  streetview  data  ipaddresses  bigdata  networks  online 
january 2016 by robertogreco
The Website Obesity Crisis
"Let me start by saying that beautiful websites come in all sizes and page weights. I love big websites packed with images. I love high-resolution video. I love sprawling Javascript experiments or well-designed web apps.

This talk isn't about any of those. It's about mostly-text sites that, for unfathomable reasons, are growing bigger with every passing year.

While I'll be using examples to keep the talk from getting too abstract, I’m not here to shame anyone, except some companies (Medium) that should know better and are intentionally breaking the web.

The Crisis

What do I mean by a website obesity crisis?

Here’s an article on GigaOm from 2012 titled "The Growing Epidemic of Page Bloat". It warns that the average web page is over a megabyte in size.

The article itself is 1.8 megabytes long."


"Here's an almost identical article from the same website two years later, called "The Overweight Web". This article warns that average page size is approaching 2 megabytes.

That article is 3 megabytes long.

If present trends continue, there is the real chance that articles warning about page bloat could exceed 5 megabytes in size by 2020.

The problem with picking any particular size as a threshold is that it encourages us to define deviancy down. Today’s egregiously bloated site becomes tomorrow’s typical page, and next year’s elegantly slim design.

I would like to anchor the discussion in something more timeless.

To repeat a suggestion I made on Twitter, I contend that text-based websites should not exceed in size the major works of Russian literature.

This is a generous yardstick. I could have picked French literature, full of slim little books, but I intentionally went with Russian novels and their reputation for ponderousness.

In Goncharov's Oblomov, for example, the title character spends the first hundred pages just getting out of bed.

If you open that tweet in a browser, you'll see the page is 900 KB big.
That's almost 100 KB more than the full text of The Master and Margarita, Bulgakov’s funny and enigmatic novel about the Devil visiting Moscow with his retinue (complete with a giant cat!) during the Great Purge of 1937, intercut with an odd vision of the life of Pontius Pilate, Jesus Christ, and the devoted but unreliable apostle Matthew.

For a single tweet.

Or consider this 400-word-long Medium article on bloat, which includes the sentence:

"Teams that don’t understand who they’re building for, and why, are prone to make bloated products."

The Medium team has somehow made this nugget of thought require 1.2 megabytes.

That's longer than Crime and Punishment, Dostoyevsky’s psychological thriller about an impoverished student who fills his head with thoughts of Napoleon and talks himself into murdering an elderly money lender.
Racked by guilt, so rattled by his crime that he even forgets to grab the money, Raskolnikov finds himself pursued in a cat-and-mouse game by a clever prosecutor and finds redemption in the unlikely love of a saintly prostitute.

Dostoyevsky wrote this all by hand, by candlelight, with a goddamned feather."
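The yardstick in the talk is easy to try for yourself. The sketch below is not from the talk; it is a rough illustration of the comparison, downloading a page and weighing its HTML against the plain text of a long novel. It counts only the HTML document itself, while the talk's figures include every image, script, and stylesheet the page pulls in; the URLs are placeholders and assumptions.

```python
# Rough sketch of the talk's comparison: fetch a page, count the bytes, and
# stack that against the plain text of a long Russian novel. The article URL
# is a placeholder; the novel URL assumes Project Gutenberg's plain-text
# edition of Crime and Punishment. Only the HTML document is counted here,
# not the images, scripts, and CSS that make up most real page weight.
import requests

PAGE_URL = "https://example.com/some-article"                       # placeholder
NOVEL_URL = "https://www.gutenberg.org/cache/epub/2554/pg2554.txt"  # assumption

def fetched_size(url):
    """Return the size in bytes of the body actually downloaded for a URL."""
    r = requests.get(url, timeout=30)
    r.raise_for_status()
    return len(r.content)

if __name__ == "__main__":
    page_kb = fetched_size(PAGE_URL) / 1024
    novel_kb = fetched_size(NOVEL_URL) / 1024
    print(f"Article HTML alone: {page_kb:,.0f} KB")
    print(f"Full text of the novel: {novel_kb:,.0f} KB")
    print(f"Ratio: {page_kb / novel_kb:.1f}x")
```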



"Everyone admits there’s a problem. These pages are bad enough on a laptop (my fan spun for the entire three weeks I was preparing this talk), but they are hell on mobile devices. So publishers are taking action.

In May 2015, Facebook introduced ‘Instant Articles’, a special format for news stories designed to appear within the Facebook site, and to load nearly instantly.

Facebook made the announcement on a 6.8 megabyte webpage dominated by a giant headshot of some dude. He doesn’t even work for Facebook, he’s just the National Geographic photo editor.

Further down the page, you'll find a 41 megabyte video, the only way to find out more about the project. In the video, this editor rhapsodizes about exciting misfeatures of the new instant format like tilt-to-pan images, which means if you don't hold your phone steady, the photos will drift around like a Ken Burns documentary.

Facebook has also launched internet.org, an effort to expand Internet access. The stirring homepage includes stories of people from across the developing world, and what getting Internet access has meant for them.
You know what’s coming next. When I left the internet.org homepage open in Chrome over lunch, I came back to find it had transferred over a quarter gigabyte of data.

Surely, you'll say, there's no way the globe in the background of a page about providing universal web access could be a giant video file?

But I am here to tell you, oh yes it is. They load a huge movie just so the globe can spin.

This is Facebook's message to the world: "The internet is slow. Sit and spin."

And it's not like bad connectivity is a problem unique to the Third World! I've traveled enough here in Australia to know that in rural places in Tasmania and Queensland, vendors treat WiFi like hundred-year-old brandy.

You're welcome to buy as much of it as you want, but it costs a fortune and comes in tiny portions. And after the third or fourth purchase, people start to look at you funny.

Even in well-connected places like Sydney, we've all had the experience of having a poor connection, and almost no battery, while waiting for some huge production of a site to load so we can extract a morsel of information like a restaurant address.

The designers of pointless wank like that Facebook page deserve the ultimate penalty.
They should be forced to use the Apple hockey puck mouse for the remainder of their professional lives. [shouts of horror from the audience]

Google has rolled out a competitor to Instant Articles, which it calls Accelerated Mobile Pages. AMP is a special subset of HTML designed to be fast on mobile devices.

Why not just serve regular HTML without stuffing it full of useless crap? The question is left unanswered.

The AMP project is ostentatiously open source, and all kinds of publishers have signed on. Out of an abundance of love for the mobile web, Google has volunteered to run the infrastructure, especially the user tracking parts of it.

Jeremy Keith pointed out to me that the page describing AMP is technically infinite in size. If you open it in Chrome, it will keep downloading the same 3.4 megabyte carousel video forever.
If you open it in Safari, where the carousel is broken, the page still manages to fill 4 megabytes.

These comically huge homepages for projects designed to make the web faster are the equivalent of watching a fitness video where the presenter is just standing there, eating pizza and cookies.

The world's greatest tech companies can't even make these tiny text sites, describing their flagship projects to reduce page bloat, lightweight and fast on mobile.

I can't think of a more complete admission of defeat."



"The other vision is of the web as Call of Duty—an exquisitely produced, kind-of-but-not-really-participatory guided experience with breathtaking effects and lots of opportunities to make in-game purchases.

Creating this kind of Web requires a large team of specialists. No one person can understand the whole pipeline, nor is anyone expected to. Even if someone could master all the technologies in play, the production costs would be prohibitive.

The user experience in this kind of Web is that of being carried along, with the illusion of agency, within fairly strict limits. There's an obvious path you're supposed to follow, and disincentives to keep you straying from it. As a bonus, the game encodes a whole problematic political agenda. The only way to reject it is not to play.

Despite the lavish production values, there's a strange sameness to everything. You're always in the same brown war zone.

With great effort and skill, you might be able make minor modifications to this game world. But most people will end up playing exactly the way the publishers intend. It's passive entertainment with occasional button-mashing.

Everything we do to make it harder to create a website or edit a web page, and harder to learn to code by viewing source, promotes that consumerist vision of the web.

Pretending that one needs a team of professionals to put simple articles online will become a self-fulfilling prophecy. Overcomplicating the web means lifting up the ladder that used to make it possible for people to teach themselves and surprise everyone with unexpected new ideas.

Here's the hortatory part of the talk:

Let’s preserve the web as the hypertext medium it is, the only thing of its kind in the world, and not turn it into another medium for consumption, like we have so many examples of already.

Let’s commit to the idea that as computers get faster, and as networks get faster, the web should also get faster.

Let’s not allow the panicked dinosaurs of online publishing to trample us as they stampede away from the meteor. Instead, let's hide in our holes and watch nature take its beautiful course.

Most importantly, let’s break the back of the online surveillance establishment that threatens not just our livelihood, but our liberty. Not only here in Australia, but in America, Europe, the UK—in every free country where the idea of permanent, total surveillance sounded like bad science fiction even ten years ago.

The way to keep giant companies from sterilizing the Internet is to make their sites irrelevant. If all the cool stuff happens elsewhere, people will follow. We did this with AOL and Prodigy, and we can do it again.

For this to happen, it's vital that the web stay participatory. That means not just making sites small enough so the whole world can visit them, but small enough so that people can learn to build their own, by example.

I don't care about bloat because it's inefficient. I care about it because it makes the web inaccessible.

Keeping the Web simple keeps it awesome."
pagebloat  webdesign  maciejceglowski  2015  webdev  participatory  openweb  internet  web  online  minecraft  accessibility  efficiency  aesthetics  cloud  cloudcomputing  amazonwebservices  backend  paypal  google  docker  websites  wired  theverge  medium  javascript  advertising  ads  acceleratedmobilepages  mobile  html  facebook  freebasics  jeremykeith  timkadlec  internet.org  facebookinstantarticles  maciejcegłowski 
january 2016 by robertogreco
I spent a weekend at Google talking with nerds about charity. I came away … worried. - Vox
"To be fair, the AI folks weren't the only game in town. Another group emphasized "meta-charity," or giving to and working for effective altruist groups. The idea is that more good can be done if effective altruists try to expand the movement and get more people on board than if they focus on first-order projects like fighting poverty.

This is obviously true to an extent. There's a reason that charities buy ads. But ultimately you have to stop being meta. As Jeff Kaufman — a developer in Cambridge who's famous among effective altruists for, along with his wife Julia Wise, donating half their household's income to effective charities — argued in a talk about why global poverty should be a major focus, if you take meta-charity too far, you get a movement that's really good at expanding itself but not necessarily good at actually helping people.

And you have to do meta-charity well — and the more EA grows obsessed with AI, the harder it is to do that. The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession. And it's hard to imagine that yoking EA to one of the whitest and most male fields (tech) and academic subjects (computer science) will do much to bring more people from diverse backgrounds into the fold.

The self-congratulatory tone of the event didn't help matters either. I physically recoiled during the introductory session when Kerry Vaughan, one of the event's organizers, declared, "I really do believe that effective altruism could be the last social movement we ever need." In the annals of sentences that could only be said with a straight face by white men, that one might take the cake.

Effective altruism is a useful framework for thinking through how to do good through one's career, or through political advocacy, or through charitable giving. It is not a replacement for movements through which marginalized peoples seek their own liberation. If EA is to have any hope of getting more buy-in from women and people of color, it has to at least acknowledge that."
charity  philanthropy  ethics  2015  altruism  dylanmatthews  google  siliconvalley  ai  artificialintelligence 
november 2015 by robertogreco
Google’s insidious shadow lobbying: How the Internet giant is bankrolling friendly academics—and skirting federal investigations - Salon.com
"Google habitually increases their presence in Washington and the ivory tower whenever their business lines are at risk. In 2011-2012, when the FTC and multiple state investigations opened, Google more than doubled their lobbying expenses, hiring twelve outside lobbying firms and a former Congresswoman from New York, Susan Molinari. And with the FTC’s decision to not prosecute, Google’s activities clearly were money well spent. (Google did pay the FTC $22.5 million in a separate case about privacy violations.)

The growing peril of corporations buying off academics to support their perspective recently came to light when Senator Elizabeth Warren called out Robert Litan, a nonresident senior fellow at the Brookings Institution think tank, for producing suspect industry-funded research. Litan was subsequently relieved of his position with Brookings. It should come as no surprise that Litan also received money from Google to issue a paper in May 2012.

Joshua Wright recently ended his tenure at the FTC, going back to George Mason to teach. With Google again threatened in the U.S. and Europe with additional lawsuits over anti-competitive practices, he is free to write on their behalf again and benefit from Google’s large storehouse of funds for academic underwriting."
google  lobbying  regulation  2015  policy  politics  influence  telecommunications  ftc  corruption  academia 
november 2015 by robertogreco
Google App Streaming: A Big Move In Building "The Web Of Apps"
"Imagine if, in order to use the web, you had to download an app for each website you wanted to visit. To find news from the New York Times, you had to install an app that let you access the site through your web browser. To purchase from Amazon, you first needed to install an Amazon app for your browser. To share on Facebook, installation of the Facebook app for your browser would be required.

That would be a nightmare. It would get even worse when you consider how this would impact search. Every day, millions of people are searching for answers to new things they’ve never realized they needed before. Each person could easily encounter 10, 20, or more sites they’re directed to from search that promise those answers. But if installing an app for each of those sites were required, the effortless way we currently enjoy web search would be a cumbersome mess.

This situation could have been the web today. For a short time before the web, it even seemed this was how online services would go. You had your AOL, your CompuServe, your Prodigy, your MSN — all online services that were disconnected from each other, some with unique content that could only be accessed if you installed (and subscribed to) that particular online service.

The web put an end to this. More specifically, the web browser did. The web browser became a universal app that let anyone open anything on the web. No need to download software for an online service. No need to download an app for a specific web site. Simply launch the web browser of your choice, and you could get to anything. Moreover, search engines like Google could point you anywhere, knowing you wouldn’t need to install any special apps.

The Disconnected World Of Apps

The growth of mobile and its app-centric world has been the opposite of the web. Until fairly recently, there’s been no seamless moving between apps. If you wanted New York Times news within an app environment, you had to download that app. If you wanted to interact with Facebook easily on mobile, you needed the Facebook app. If you wanted to purchase from Amazon, another app was required (and even then, with iOS, you couldn’t buy because Amazon doesn’t want to pay the “Apple Tax” cut that Apple wants from any iOS app that sells things).

The situation is worse when it comes to search. Again, until somewhat recently, if you searched for content using Google, its mobile search results would tend to push you to mobile web pages. Often, that’s a perfectly fine experience. But sometimes, it might be nicer to go into an app. Worse, there’s a small but growing number of app-only publishers and services. They have no web sites and thus nothing for Google or other search engines to point you at from mobile search results.

The Web Of Apps Begins

Wouldn’t it be nice if you could move between apps just as you do with the web? Major companies like Google, Apple, Facebook and Microsoft certainly believe so. That’s why over the past two years or so, they’ve all been pushing things like Google App Indexing, Apple Deep Linking & Universal Links, Facebook App Links and Bing App Linking.

For a general overview on these efforts, see our Marketing Land guide to app indexing and deep links. But the takeaway is that all these companies want to make it easier to go from any link — from a web page or within an app — and into another app, when appropriate.

There’s still lots of work to be done, as well as fragmentation remaining. Each company has its own system, though some of those systems can leverage or work with others, as with Google’s support of Apple Universal Links, if developers do a little extra work."
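For a concrete sense of the plumbing involved (this sketch is not from the article): Apple Universal Links and Android App Links both work by having a website publish a small association file that the operating system fetches and verifies before handing a link to an app. The snippet below simply checks whether a domain serves those files at their conventional well-known locations; the example domain is a placeholder.

```python
# Minimal sketch: check whether a domain publishes the association files that
# app links depend on. Apple Universal Links look for apple-app-site-association
# and Android App Links look for assetlinks.json, both under /.well-known/.
# The example domain is a placeholder for illustration.
import requests

CANDIDATE_PATHS = [
    "/.well-known/apple-app-site-association",  # Apple Universal Links
    "/apple-app-site-association",              # older root location Apple also checks
    "/.well-known/assetlinks.json",             # Android App Links
]

def check_app_link_files(domain):
    """Report the HTTP status for each app-association file a domain might serve."""
    results = {}
    for path in CANDIDATE_PATHS:
        url = f"https://{domain}{path}"
        try:
            r = requests.get(url, timeout=10)
            results[path] = r.status_code
        except requests.RequestException as exc:
            results[path] = f"error: {exc.__class__.__name__}"
    return results

if __name__ == "__main__":
    for path, status in check_app_link_files("example.com").items():
        print(f"{path}: {status}")
```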
applications  google  web  webapps  openweb  2015  aol  compuserve  prodigy  msn  facebook  apple  walledgardens  deeplinking  internet  worldwideweb  appleuniversallinks  links 
november 2015 by robertogreco
The Terror of the Archive | Hazlitt
"The digitally inflected individual is often not quite an individual, not quite alone. Our past selves seem to be suspended around us like ghostly, shimmering holograms, versions of who we were lingering like memories made manifest in digital, diaphanous bodies. For me, many of those past selves are people I would like to put behind me—that same person who idly signed up for Ashley Madison is someone who hurt others by being careless and self-involved. Now, over a decade on, I’m left wondering to what extent that avatar of my past still stands for or defines me—of the statute of limitations on past wrongs. Though we’ve always been an accumulation of our past acts, now that digital can splay out our many, often contradictory selves in such an obvious fashion, judging who we are has become more fraught and complicated than ever. How, I wonder, do we ethically evaluate ourselves when the conflation of past and present has made things so murky?

*

Sometimes, I aimlessly trawl through old and present email accounts, and it turns out I am often inadvertently mining for awfulness. In one instance—in a Hotmail account I named after my love for The Simpsons—I find myself angrily and thoughtlessly shoving off a woman’s renewed affection because I am, I tell her, “sick of this.” I reassure myself that I am not that person anymore—that I now have the awareness and the humility to not react that way. Most days, looking at how I’ve grown since then, I almost believe this is true.

Yet, to be human is to constantly make mistakes and, as a result, we often hurt others, if not through our acts then certainly our inaction. There is for each of us, if we are honest, a steady stream of things we could have done differently or better: could have stopped to offer a hand; could have asked why that person on the subway was crying; could have been kinder, better, could have taken that leap. But, we say, we are only who we are.

We joke about the horror of having our Google searches publicized, or our Twitter DMs revealed, but in truth, we know the mere existence of such a digital database makes it likely that something will emerge from the murky space in which digital functions as a canvas for our fantasies or guilt.

That is how we justify ourselves. Our sense of who we are is subject to a kind of recency bias, and a confirmation bias, too—a selection of memories from the recent past that conform to the fantasy of the self as we wish it to be. Yet the slow accretion of selective acts that forms our self-image is also largely an illusion—a convenient curation of happenings that flatters our ego, our desire to believe we are slowly getting better. As it turns out, grace and forgiveness aren’t the purview of some supernatural being, but temporality—the simple erasure of thought and feeling that comes from the forward passage of time."



"The line between evasiveness and forgiveness, cowardice and grace, is thin, often difficult to locate, but absolutely vital. It seems, though, that our ethical structures may slowly be slipping out of step with our subjectivities. If we have abandoned the clean but totalitarian simplicity of Kant’s categorical imperative, instead embracing that postmodern cliché of a fluid morality, we still cling to the idea that the self being morally judged is a singular ethical entity, either good or bad. It’s common on social media, for example, for someone to be dismissed permanently for one transgression—some comedian or actor who is good at race but bad at gender (or vice versa) to be moved from the accepted pile to the trash heap. If our concept of morality is fluid, our idea of moral judgment is not similarly so.

That notion of self assumes morality is accretive and cumulative: that we can get better over time, but nevertheless remain a sum of the things we’ve done. Obviously, for the Bill Cosbys or Jian Ghomeshis or Jared Fogles of the world, this is fine. In those cases, it is the repetition of heinous, predatory behaviour over time that makes forgiveness almost impossible—the fact that there is no distance between past and present is precisely the point. For most of us, though, that simple idea of identity assumes that selves are singular, totalized things, coherent entities with neat boundaries and linear histories that arrived here in the present as complete. Even if that ever were true, what digitality helps lay bare is that who we are is actually a multiplicity, a conglomeration of acts, often contradictory, that slips backward and forward and sideways through time incessantly."



"Is the difficulty of digitality for our ethics, then, not the multiplicity of the person judged, but our Janus-faced relation to the icebergs of our psyches—the fact that our various avatars are actually interfaces for our subconscious, exploratory mechanisms for what we cannot admit to others or ourselves?

Freud said that we endlessly repeat past hurts, forever re-enacting the same patterns in a futile attempt to patch the un-healable wound. This, more than anything, is the terror of the personal, digital archive: not that it reveals some awful act from the past, some old self that no longer stands for us, but that it reminds us that who we are is in fact a repetition, a cycle, a circular relation of multiple selves to multiple injuries. It’s the self as a bundle of trauma, forever acting out the same tropes in the hopes that we might one day change.

What I would like to tell you is that I am a better man now than when, years ago, I tried my best to hide from the world and myself. In many ways that is true. Yet, all those years ago, what dragged me out of my depressive spiral was meeting someone—a beautiful, kind, warm person with whom, a decade later, I would repeat similar mistakes. I was callous again: took her for granted, pushed her away when I wanted to, and couldn’t take responsibility for either my or her emotions. Now, when a piece of the past pushes its way through the ether to remind me of who I was or am, I can try to push it down—but in a quiet moment, I might be struck by the terror that some darker, more cowardly part of me is still too close for comfort, still there inside me. The hologram of my past self, its face a distorted, shadowy reflection of me with large, dark eyes, is my mirror, my muse. And any judgment of my character depends not on whether I, in some simple sense, am still that person, but whether I—whether we, multiple and overlapped—can reckon with, can meet and return the gaze of the ghosts of our past."
navneetalang  archives  internet  memory  grace  forgiveness  circulation  change  past  present  mistakes  ashleymadison  twitter  email  privacy  facebook  socialmedia  dropbox  google  secrets  instagram  self  ethics  morality  judgement  identity 
september 2015 by robertogreco
Apple’s Modernism, Google’s Modernism: Some reflections on Alphabet, Inc. and a suggestion that modernist architect Adolf Loos would be totally into Soylent | Works Cited
"These temporal aesthetics, Google’s included, tell us something about the repurposing of modernist style for post-Fordist capital. Modernist style still succeeds in evoking newnesses even when wholly “unoriginal” because it so successfully dehistoricizes.20) That it still totally works, and that it remains congenial to capital in the face of capital’s transformations, hints that we have in modernist ideology a powerful actor.

Consequently, the study of early twentieth-century style can be understood as neither irrelevant nor innocent. The quasi-Darwinian, developmentalist ideologies of Silicon Valley have their correlates in styles that disguise their basic violence as design. Its results are, among other things, political transformations of the Bay Area that seek to do to San Francisco what Rob Rinehart did to his apartment—rely heavily on exploited labor that has been geographically displaced. It imagines people of the future living side by side with people who lag behind—but not literally side by side of course! because the laggards commute from Vallejo. Anyone who isn’t on board with the spatial segregation of the temporally disparate is an “enemy of innovation.” Again, this is actually less about time than about hierarchy. After all, the temporal difference between any two people in existence at the same time is completely made up: it’s an effect of style, which is in turn (if we follow Loos’s logic) a proxy for economic dominance. Time is, so to speak, money."
modernism  nataliacecire  2015  apple  google  siliconvalley  design  economics  atemporality  robrinehart  adolfloos  childhood  primitivism  developmentalism  aphabet  puerility  naomischor  siannengai  power  systemsthinking  displacement  innovation  ideology  californianideology  history  newness  exploitation  labor  segregation  hierarchy  technology  technosolutionism  domination 
august 2015 by robertogreco
Is It Time to Give Up on Computers in Schools?
"This is a version of the talk I gave at ISTE today on a panel titled "Is It Time to Give Up on Computers in Schools?" with Gary Stager, Will Richardson, Martin Levins, David Thornburg, and Wayne D'Orio. It was pretty damn fun.

Take one step into that massive shit-show called the Expo Hall and it’s hard not to agree: “yes, it is time to give up on computers in schools.”

Perhaps, once upon a time, we could believe ed-tech would change things. But as Seymour Papert noted in The Children’s Machine,
Little by little the subversive features of the computer were eroded away: … the computer was now used to reinforce School’s ways. What had started as a subversive instrument of change was neutralized by the system and converted into an instrument of consolidation.

I think we were naive when we ever thought otherwise.

Sure, there are subversive features, but I think the computers also involve neoliberalism, imperialism, libertarianism, and environmental destruction. They now involve high stakes investment by the global 1% – it’s going to be a $60 billion market by 2018, we’re told. Computers are implicated in the systematic de-funding and dismantling of a public school system and a devaluation of human labor. They involve the consolidation of corporate and governmental power. They involve scientific management. They are designed by white men for white men. They re-inscribe inequality.

And so I think it’s time now to recognize that if we want education that is more just and more equitable and more sustainable, that we need to get the ideologies that are hardwired into computers out of the classroom.

In the early days of educational computing, it was often up to innovative, progressive teachers to put a personal computer in their classroom, even paying for the computer out of their own pocket. These were days of experimentation, and as Seymour teaches us, a re-imagining of what these powerful machines could enable students to do.

And then came the network and, again, the mainframe.

You’ll often hear the Internet hailed as one of the greatest inventions of mankind – something that connects us all and that has, thanks to the World Wide Web, enabled the publishing and sharing of ideas at an unprecedented pace and scale.

What “the network” introduced in educational technology was also a more centralized control of computers. No longer was it up to the individual teacher to have a computer in her classroom. It was up to the district, the Central Office, IT. The sorts of hardware and software that were purchased had to meet those needs – the needs and the desires of the administration, not the needs and the desires of innovative educators, and certainly not the needs and desires of students.

The mainframe never went away. And now, virtualized, we call it “the cloud.”

Computers and mainframes and networks are points of control. They are tools of surveillance. Databases and data are how we are disciplined and punished. Quite to the contrary of Seymour’s hopes that computers will liberate learners, this will be how we are monitored and managed. Teachers. Students. Principals. Citizens. All of us.

If we look at the history of computers, we shouldn’t be that surprised. The computers’ origins are as weapons of war: Alan Turing, Bletchley Park, code-breakers and cryptography. IBM in Germany and its development of machines and databases that it sold to the Nazis in order to efficiently collect the identity and whereabouts of Jews.

The latter should give us great pause as we tout programs and policies that collect massive amounts of data – “big data.” The algorithms that computers facilitate drive more and more of our lives. We live in what law professor Frank Pasquale calls “the black box society.” We are tracked by technology; we are tracked by companies; we are tracked by our employers; we are tracked by the government, and “we have no clear idea of just how far much of this information can travel, how it is used, or its consequences.” When we compel the use of ed-tech, we are doing this to our students.

Our access to information is constrained by these algorithms. Our choices, our students’ choices are constrained by these algorithms – and we do not even recognize it, let alone challenge it.

We have convinced ourselves, for example, that we can trust Google with its mission: “To organize the world’s information and make it universally accessible and useful.” I call “bullshit.”

Google is at the heart of two things that computer-using educators should care deeply and think much more critically about: the collection of massive amounts of our personal data and the control over our access to knowledge.

Neither of these are neutral. Again, these are driven by ideology and by algorithms.

You’ll hear the ed-tech industry gleefully call this “personalization.” More data collection and analysis, they contend, will mean that the software bends to the student. To the contrary, as Seymour pointed out long ago, instead we find the computer programming the child. If we do not unpack the ideology, if the algorithms are all black-boxed, then “personalization” will be discriminatory. As Tressie McMillan Cottom has argued “a ‘personalized’ platform can never be democratizing when the platform operates in a society defined by inequalities.”

If we want schools to be democratizing, then we need to stop and consider how computers are likely to entrench the very opposite. Unless we stop them.

In the 1960s, the punchcard – an older piece of “ed-tech” – had become a symbol of our dehumanization by computers and by a system – an educational system – that was inflexible, impersonal. We were being reduced to numbers. We were becoming alienated. These new machines were increasing the efficiency of a system that was setting us up for a life of drudgery and that was sending us off to war. We could not be trusted with our data or with our freedoms or with the machines themselves, we were told, as the punchcards cautioned: “Do not fold, spindle, or mutilate.”

Students fought back.

Let me quote here from Mario Savio, speaking on the stairs of Sproul Hall at UC Berkeley in 1964 – over fifty years ago, yes, but I think still one of the most relevant messages for us as we consider the state and the ideology of education technology:
We’re human beings!

There is a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can’t take part; you can’t even passively take part, and you’ve got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus, and you’ve got to make it stop. And you’ve got to indicate to the people who run it, to the people who own it, that unless you’re free, the machine will be prevented from working at all!

We’ve upgraded from punchcards to iPads. But underneath, a dangerous ideology – a reduction to 1s and 0s – remains. And so we need to stop this ed-tech machine."
edtech  education  audreywatters  bias  mariosavio  politics  schools  learning  tressuemcmillancottom  algorithms  seymourpapert  personalization  data  security  privacy  howwteach  howwelearn  subversion  computers  computing  lms  neoliberalism  imperialism  environment  labor  publicschools  funding  networks  cloud  bigdata  google  history 
july 2015 by robertogreco
Google Street View Comes to California's State Parks | State Park | SoCal Wanderer | KCET
"Last week, California State Parks and Google Maps unveiled a project that allows folks to experience the images and sights of various hikes throughout California State Parks. Rather than fitting a 360-degree camera on top of a car, Google used Trekker, its camera that fits onto a wearable backpack and snaps photos as one walks."
california  stateparks  2015  googlemaps  google  streetview  googlestreetview  parks 
july 2015 by robertogreco
art as industrial lubricant - Text Patterns - The New Atlantis
"Holy cow, does Nick Carr pin this one to the wall. Google says, "At any moment in your day, Google Play Music has whatever you need music for — from working, to working out, to working it on the dance floor — and gives you curated radio stations to make whatever you’re doing better. Our team of music experts, including the folks who created Songza, crafts each station song by song so you don’t have to."

Nick replies:
This is the democratization of the Muzak philosophy. Music becomes an input, a factor of production. Listening to music is not itself an “activity” — music isn’t an end in itself — but rather an enhancer of other activities, each of which must be clearly demarcated....  

Once you accept that music is an input, a factor of production, you’ll naturally seek to minimize the cost and effort required to acquire the input. And since music is “context” rather than “core,” to borrow Geoff Moore’s famous categorization of business inputs, simple economics would dictate that you outsource the supply of music rather than invest personal resources — time, money, attention, passion — in supplying it yourself. You should, as Google suggests, look to a “team of music experts” to “craft” your musical inputs, “song by song,” so “you don’t have to.” To choose one’s own songs, or even to develop the personal taste in music required to choose one’s own songs, would be wasted labor, a distraction from the series of essential jobs that give structure and value to your days. 

Art is an industrial lubricant that, by reducing the friction from activities, makes for more productive lives.

If music be the lube of work, play on — and we'll be Getting Things Done."
nicholascarr  2015  alanjacobs  music  google  muzak  geoffmoore  productivity  latecapitalism 
june 2015 by robertogreco
The Internet of Things You Don’t Really Need - The Atlantic
"We already chose to forego a future of unconnected software. All of your devices talk constantly to servers, and your data lives in the Cloud because there’s increasingly no other choice. Eventually, we won’t have unconnected things, either. We’ve made that choice too, we just don’t know it yet. For the moment, you can still buy toasters and refrigerators and thermostats that don’t talk to the Internet, but try to find a new television that doesn’t do so. All new TVs are smart TVs, asking you to agree to murky terms and conditions in the process of connecting to Netflix or Hulu. Soon enough, everything will be like Nest. If the last decade was one of making software require connectivity, the next will be one of making everything else require it. Why? For Silicon Valley, the answer is clear: to turn every industry into the computer industry. To make things talk to the computers in giant, secured, air-conditioned warehouses owned by (or hoping to be owned by) a handful of big technology companies.

But at what cost? What improvements to our lives do we not get because we focused on “smart” things? Writing in The Baffler last year, David Graeber asked where the flying cars, force fields, teleportation pods, space colonies, and all the other dreams of the recent past’s future have gone. His answer: Technological development was re-focused so that it wouldn’t threaten existing seats of power and authority. The Internet of Things exists to build a market around new data about your toasting and grilling and refrigeration habits, while duping you into thinking smart devices are making your lives better than you could have made them otherwise, with materials other than computers. Innovation and disruption are foils meant to distract you from the fact that the present is remarkably similar to the past, with you working even harder for it.

But it sure feels like it makes things easier, doesn’t it? The automated bike locks and thermostats all doing your bidding so you can finally be free to get things done. But what will you do, exactly, once you can monitor your propane tank level from the comfort of the toilet or the garage or the liquor store? Check your Gmail, probably, or type into a Google Doc on your smartphone, maybe. Or perhaps, if you’re really lucky, tap some ideas into Evernote for your Internet of Things startup’s crowdfunding campaign. “It’s gonna be huge,” you’ll tell your cookout guests as you saw into a freshly grilled steak in the cool comfort of your Nest-controlled dining room. “This is the future.”"
2015  ianbogost  iot  internetofthings  design  davidgraeber  labor  siliconvalley  technology  power  authority  innovation  disruption  work  future  past  present  marketing  propaganda  google  cloud  cloudcomputing  computers  code  googledocs  ubicomp  ubiquitouscomputing  everyware  adamgreenfield  amazon  dropbox  kickstarter 
june 2015 by robertogreco
Welcome to the Age of Digital Imperialism - NYTimes.com
“…the smartphone — for all its indispensability as a tool of business and practicality — is also a bearer of values; it is not a culturally neutral device.”



"And if digital imperialism is happening — if smartphones and other gadgets are bearing cultural freight as they cross borders — there is little doubt as to which nation’s values are hiding in the hold. As of 2013, eight of the world’s top 10 Internet companies by audience were based in the United States, though 81 percent of their online visitors were not. (This fact was made painfully obvious to those users and their governments that same year, when Edward Snowden’s trove of N.S.A. documents showed just how low these American Internet giants had stooped to cooperate with surveillance demands.) Smartphones themselves, from their precision-milled exteriors to their tiled grids of apps on-screen, are patterned largely on Apple’s blueprint, even when designed and made by companies based in South Korea or China. The question is not whether the spread of technology is promulgating, as Hollywood once did, an American vision of what the world should be. Rather, the question is how the rest of the world will respond.

In this Tech and Design issue, we try to see American technology as it looks from elsewhere. In some locales, we focus on industries that are mourning or battling (or both) the arrival of high-tech competition from afar. In others, we linger on homegrown technological creations that face the prospect of displacement as the American juggernaut rolls on. We chart the unexamined footprint of technology on landscapes and languages, on fashion and friendships, far from the California office parks in which so many of these tools are devised and honed.

In Silicon Valley, the notion that technology spreads values is part of the corporate culture — as evidenced in the manifesto that Facebook published, rather incongruously, in the filing papers for its $16 billion I.P.O. three years ago. Declaring at the outset that Facebook was “built to accomplish a social mission,” the document goes on to promise a sort of Facebook revolution: “By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible.” It continues: “Through this process, we believe that leaders will emerge across all countries who are pro-Internet and fight for the rights of their people, including the right to share what they want and the right to access all information that people want to share with them.” This evangelical stance, pervasive in the Valley, explains why a major part of Facebook’s and Google’s philanthropic efforts in the past two years has been concentrated on taking Internet access to the developing world. Executives of these companies genuinely believe that over the long run, information technology — including, naturally, the services they themselves provide — is crucial to bettering society.

From the Valley’s perspective, that is, the “power to share” looks less like an imposition of American values and more like a universal social good. But even if we agree with this proposition — as Thailand’s Culture Ministry, for one, might not — there is the more fraught question of what all that sharing adds up to. For individual users, everything about the smartphone nudges them by design to reveal more, to express and connect more. But all the resulting revelations then get rolled up as data that can be offered to governments and corporations — which feel practically compelled, once they know they can obtain it, to parse it all for usable intelligence. For institutions, as with consumers, all resistance recedes once they understand what is possible, once it’s all made to seem not merely acceptable but inevitable and desirable.

This double-edged quality is a hallmark of so many technological innovations today. The same facial recognition software that autotags your photos can autoflag dissidents at the border. The machine-translation engine that lets you flirt in passable French can help spy on multiple continents from a single cubicle. The fitness data you use to adjust your workout might soon forcibly adjust your health-insurance premium. And the stakes have risen considerably as the Valley’s ambitions, during the past few years, have clambered into physical space; in a phenomenon that the venture capitalist Marc Andreessen has famously called “software eating the world,” a new generation of tech companies has encroached on industries like hospitality (Airbnb), transportation (Uber and Lyft), office space (WeWork) and more, bringing a set of tech-inflected values with them.

In old-fashioned 19th-century imperialism, the Christian evangelists made a pretense of traveling separately from the conquering colonial forces. But in digital imperialism, everything travels as one, in the form of the splendid technology itself: salvation and empire, missionary and magistrate, Bible and gun. For all that the world-changing talk of Silicon Valley gets parodied, it is not just empty rhetoric. Over the past decade, it has helped draw so many of the nation’s most driven college graduates to Silicon Valley, the one place in 21st-century America that promises to satisfy both their overweening ambition and their restless craving for social uplift. These unquiet Americans have gone on to design tools that spread values as they create value — a virtuous circle for all who share their virtues."
digital  smartphones  internet  google  facebook  culture  imperialism  digitalimperialism  values  siliconvalley  technology  us  billwasik 
june 2015 by robertogreco
Virtual Field Trips and Education (Technology) Inequalities
"Field trips are sometimes dismissed as trivial distractions and unnecessarily deviations from the curriculum, but the enrichment they offer is actually quite important, particularly for low-income students who might not otherwise have the opportunities their wealthier peers do to visit museums and the like."



"Other research has found that field trips have a long-lasting impact on students, most of whom can still (like me) recall significant elements from the outings – who was there, what they saw, what they did – even years later"



"But let's be honest: virtual field trips are not field trips. Oh sure, they might provide educational content. They might, as Google’s newly unveiled “Expeditions” cardboard VR tool promises, boast "360° photo spheres, 3D images and video, ambient sounds -- annotated with details, points of interest and questions that make them easy to integrate into curriculum already used in schools." But virtual field trips do not offer physical context; they do not offer social context. Despite invoking the adjective “immersive,” they most definitely are not.

"So when Google says, as it did onstage today at its annual developer/marketing event Google IO, that its new tool will “take your students to places a school can’t,” let’s ask more questions and not simply parrot the tech giant’s PR.
Let’s ask why certain students from certain schools can’t go places -- even local places -- anymore (if, indeed, they ever were able to). Let’s consider how equating viewing 3D movies in the classroom with experiential learning off-campus could give even more schools an excuse to cut back further on funding actual field trips. And, please, let’s not conflate providing students a VR viewer made out of cardboard with actually addressing how education technology exacerbates inequalities."
google  inequality  audreywatters  vr  fieldtrips  virtualreality  googleio  googlexpeditions  experience  memory  2015 
june 2015 by robertogreco
Eyeo 2014 - Leah Buechley on Vimeo
"Thinking About Making – An examination of what we mean by making (MAKEing) these days. What gets made? Who makes? Why does making matter?"



[uninclusive covers of Make Magazine and composition of Google employment]

“Meet the new boss, same as the old boss”

"I'm really tired of setting up structures where we tell young women and young brown and black kids that they should aspire to be like rich white guys."

[RTd these back then, but never watched the video. Thanks, Sara, for bringing it back up.

https://twitter.com/arikan/status/477546169329938432
https://twitter.com/arikan/status/477549826498764801 ]

[Talk with some of the same content from Leah Buechley (and a lot of defensive comments from the crowd that Buechley addresses well):
http://edstream.stanford.edu/Video/Play/883b61dd951d4d3f90abeec65eead2911d
https://www.edsurge.com/n/2013-10-29-make-ing-more-diverse-makers ]
leahbuechley  making  makermovement  critique  equality  gender  race  2014  via:ablerism  privilege  wealth  glvo  openstudioproject  lcproject  democratization  inequality  makemagazine  money  age  education  electronics  robots  robotics  rockets  technology  compsci  computerscience  computing  computers  canon  language  work  inclusivity  funding  google  intel  macarthurfoundation  opportunity  power  influence  movements  engineering  lowriders  pottery  craft  culture  universality  marketing  inclusion 
may 2015 by robertogreco
Why One Silicon Valley City Said “No” to Google – Next City
"Big money and even bigger egos are colliding in the tech world’s new company towns."



"In 2012, Mountain View and Google entered into a $222,000 annual contract for Google to pay for city planning staff to handle all the reviews needed to get Google’s projects off the drawing board and into construction phases. Today, that contract is valued at $377,838. While the city normally charges companies an hourly rate for municipal services, the vetting of Google projects required more hours than the city had available. Instead of rejecting the company’s plans outright for lack of staff, Mountain View asked Google to fund the hiring of two additional planners. It was an unusual arrangement, the kind usually reserved for corporate polluters that must pay for large-scale government cleanups.

The agreement to have Google subsidize public servants didn’t necessarily raise many local eyebrows. After all, like it had before, Google solved the problem it had created, albeit by playing a major role in government affairs.

But local will for such involvement appears to have waned. In rejecting the vast majority of Google’s campus expansion, the Mountain View city council also rejected most of the company’s $240 million community benefits package, from the bike lanes and affordable housing, to the $15 million public safety center and ecological restoration, all planned at Google’s behest and design.

The vast majority of the North Bayshore area was instead granted to LinkedIn, which offered far fewer community benefits, but had one major factor in its favor: It’s not Google.

The political climate for tech companies in the Bay Area is, to a great extent, confused. The Googles of the world are blamed for a sharp rise in the cost of living and an increased strain on public services and infrastructure, but at the same time, no one can deny the huge boost they’ve given local government coffers.

Still, there is a discrepancy between the billions of dollars these companies make and the checks they write to the local governments that host them.

The sales tax model that served California cities for decades doesn’t work in the knowledge economy. While Apple remits local tax on the products it sells, Google and Facebook don’t collect sales tax on the digital ads we click away and the data we unwittingly share. Community benefit deals can potentially bridge the gap between those taxes and impacts, but they allow companies to determine which civic projects should be priorities. Facebook might want more police and Google might want more local ecology — but what do residents want?

If cities want to take greater control of their future, they’ll have to create and enforce new tax revenue streams — something Mountain View council member Lenny Siegel says he is working toward.

Without a significant local tax burden, companies can afford to drive policies and services, superseding the role of local government and advancing their own ideology. When that ideology includes bike lanes and public school support, this arrangement might work well.

But in a region in the grips of a controversial housing crisis spurred in no small part by an influx of high-paid tech talent, Silicon Valley companies on the whole appear comparatively disinterested in funding the affordable homes these cities so desperately need."



"Big companies in small cities are bound to exert some of their own power, either purposefully or passively. Much of this seems inevitable — it’s how this valley was named “Silicon” decades ago. But these companies are no longer dealing just in silicon. Regardless of Google’s loss in North Bayshore, soon Mountain View will feature Google-designed cars running on Google-funded roads planned by Google-paid city engineers. Where they once built semiconductors and software, tech is shaping the future of human communication, infrastructure, transit, law and collective lived experience — all the things that make up a city."

[Related: “New Balance Bought Its Own Commuter Rail Station [in Boston]: Instead of asking the cash-strapped public-transit system to add a stop, the company simply paid for one itself.”
http://www.theatlantic.com/business/archive/2015/05/new-balance-bought-its-own-commuter-rail-station/392711/ ]
siliconvalley  google  mountainview  california  infrastrcuture  taxes  2015  susiecagle  government  governance  economics  publictransit  transportation  housing  law  transit  boston 
may 2015 by robertogreco
Google Street View photography bookmarklets
"Google Street View photography bookmarklets

For Google Chrome. Drag them to your bookmark toolbar.

Lite mode

GSVliteSwitch: Switch Google Maps from Full mode to Lite mode
(To switch back to Full mode, click Google's lightning bolt icon at lower right.)
GSVlite: Hide Street View overlays
GSVliteCompass: Hide Street View overlays except compass and zoom
Full mode

GSVfull: Hide Street View overlays (still shows street names)
Toggle fullscreen: Cmd-shift-F on OSX, F11 on Windows

Corrections or suggestions: @erasing"
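
[A minimal, hypothetical sketch — written here in TypeScript — of what a "hide the Street View overlays" bookmarklet like the ones above could look like. The CSS selectors are placeholders for illustration only; they are not the selectors Google Maps uses or the ones @erasing's bookmarklets actually target:

```typescript
// Hypothetical bookmarklet body: hide overlay chrome on a Google Street View page.
// The selector names below are assumptions, not real Google Maps class names.
(function hideStreetViewOverlays(): void {
  const overlaySelectors: string[] = [
    ".widget-titlecard",    // assumed: the title card in the top-left corner
    ".widget-image-header", // assumed: header chrome drawn over the panorama
  ];
  for (const selector of overlaySelectors) {
    document.querySelectorAll<HTMLElement>(selector).forEach((el) => {
      el.style.display = "none"; // hide rather than remove, so a reload restores it
    });
  }
})();
```

To use something like this as a bookmarklet, the compiled JavaScript would be minified and saved to the bookmarks toolbar as a "javascript:" URL.]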

[via: http://erasing.tumblr.com/post/117779933235/buchr-yesterday-google-disabled-google-maps

"Yesterday, Google disabled Google Maps Classic mode. I was devastated. Amazingly, superhero erasing created these bookmarklets to strip out all the shit the “new” Google Maps overlays on the Street View images.

http://scottdavidherman.com/gsv/

Day = saved."]
googlestreetview  streetview  bookmarklets  imagery  google  maps  mapping  onlinetoolkit 
may 2015 by robertogreco
Google brings Chrome OS straight into Windows 8 in latest update | The Verge
"Google started dropping hints about its Chrome OS-like plans for Windows 8 back in October. At the time it was merely an experiment in the developer version of Chrome, but today Google is rolling out a new user interface to all Chrome Windows users alongside a noisy tabs tracking feature. The new "Metro" mode essentially converts Chrome for Windows 8 into Chrome OS. Just like Google's full Chrome OS, you can create multiple browser windows and arrange them using a snap to the left or right of the display or full-screen modes. There's even a shelf with Chrome, Gmail, Google, Docs, and YouTube icons that can be arranged at the bottom, left, or right of the screen.

An app launcher is also available in the lower left-hand corner, providing access to search and recent apps. It’s all clearly designed to work well with touch on Windows 8, something that the traditional desktop version of Chrome has not focused on so far. The "Metro" mode presents the keyboard automatically, and also includes the ability to navigate and resize windows within the Chrome OS-like environment. Some UI elements still require some touch optimization, but overall it’s a better experience than the existing desktop version with touch.

While the Chrome browser acts as a Windows 8 application, it's using a special mode that Microsoft has enabled specifically for web browsers. The software maker allows browsers on Windows 8 to launch in its "Metro-style" environment providing they're set as default. The applications are listed in the Windows Store and they're still desktop apps, but the exception allows them to mimic Windows 8 apps and access the app and snapping features of the OS. While Chrome runs in this mode on Windows 8, Microsoft does not permit this type of behavior on Windows RT.

Google’s latest update for Windows 8 is clearly a big step forwards in its Chrome Apps initiative. The search giant is working with developers to create apps that exist outside of the browser and extend Chrome’s reach into more of a platform for third parties to build upon. Having a Chrome OS-like environment directly inside of Windows 8 extends Google’s browser into a Trojan horse to eventually convince users to download more and more Chrome Apps and possibly push them towards Chrome OS in the future.

We’ve reached out to Microsoft for comment on whether Google’s latest Chrome OS update conforms with the Metro-style browser policies, and we’ll update you accordingly."
chromeos  windows8  edg  android  chrome  google  2014  applications 
april 2015 by robertogreco
You can now run Android apps on a Mac or PC with Google Chrome | The Verge
"Google’s convergence of Chrome and Android is taking a big step forward this week. After launching a limited App Runtime for Chrome (ARC) back in September, Google is expanding its beta project to allow Android apps to run on Windows, OS X, and Linux. It’s an early experiment designed primarily for developers, but anyone can now download an APK of an existing Android app and launch it on a Windows / Linux PC, Mac, or Chromebook.

You simply need to download the ARC Welder app and obtain APKs from Google’s Play Store. There are some limitations: only one app can be loaded at a time, and you have to select landscape or portrait layout and whether you want the app to run in phone- or tablet-style. However, you can load multiple apps by selecting the download ZIP option in Arc Welder and extracting it and then enabling extension developer mode to load the folder of the extracted APK. During my testing I’ve found that most apps run really well. There are some exceptions like Gmail and Chrome for Android that throw up Google Play Services errors, but that’s not because ARC doesn’t support them. Developers will need to optimize their apps for ARC, and some Google Play Services are also supported right now, making that process a lot easier.

ARC is based on Android 4.4, meaning a lot of standalone apps are immediately compatible. Twitter works well, and Facebook Messenger loads just fine but does continuously say it’s waiting for the network. I was impressed with Flipboard, and the ability to flick through using two finger gestures on a trackpad, and even Instagram works well for casual browsing. Of course, trying to use the camera in apps will immediately force the app to crash, and keyboard commands aren’t always recognized properly. The biggest issue is that most apps are simply designed for touch, or in the case of games to use a phone’s accelerometer.

I tried a variety of games, and while simple titles like Candy Crush Soda work very well, others refused to launch properly or couldn’t handle mouse input correctly. That’s not surprising for apps that aren’t even optimized, and it’s clear Google’s project has a bright future. While Microsoft is building out Windows 10 and the idea of universal apps across PCs, phones, tablets, and the Xbox One, Google is turning Android into its own universal app platform. Google already built a way to push Chrome OS straight into Windows 8, and this latest Android experiment brings Google even closer to a PC market dominated by Microsoft. Developers can now run their Android apps on phones, tablets, PCs, Macs, Chromebooks, and even Linux-powered devices, and that’s a big opportunity that will likely result in a lot of these apps arriving in the Chrome Web Store in the near future."
mac  osx  android  chrome  emulators  2015  google  googleplay  chromeos 
april 2015 by robertogreco
Mobile-Friendly Test
"This test will analyze a URL and report if the page has a mobile-friendly design.

Learn more about the mobile-friendly criteria and how it may affect Google's search results by reading our blog post."
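
[Not Google's actual test — just a toy TypeScript sketch of one signal a mobile-friendliness checker can look for: whether the page declares a responsive viewport meta tag. The URL is a placeholder:

```typescript
// Rough heuristic only: a real mobile-friendly test checks many more signals
// (tap-target size, legible font sizes, content wider than the screen, etc.).
async function hasViewportMeta(url: string): Promise<boolean> {
  const response = await fetch(url); // requires Node 18+ or a browser environment
  const html = await response.text();
  return /<meta[^>]+name=["']viewport["']/i.test(html);
}

hasViewportMeta("https://example.com").then((found) =>
  console.log(found ? "viewport meta tag found" : "no viewport meta tag"),
);
```
]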

[via: http://searchengineland.com/everything-need-know-googles-new-stance-mobile-216870 ]
mobile  google  webdev  tools  webdesign 
april 2015 by robertogreco
Warren Ellis Esquire Essay - Warren Ellis Technology Column
"Regardless of what you think of Uber and its corporate behavior, the lesson should not go unlearned: If you build your business on top of someone else's system, eventually they're going to notice. Just last week, the livestreaming app Meerkat, which uses Twitter to transmit, felt a cold breeze pass through the room when Twitter bought the competing system Periscope, which will doubtless be baked into Twitter as soon as possible. Digital businesses can murder and haunt their own parasites.

In the midst of all this? Rich, crazy Elon Musk, who intends to put large and efficient electric batteries into people's homes. Which may not be one of his weird side projects, like Hyperloop, especially since Apple is hiring his car-makers away, and their car sales and shipments are under the projected numbers. And because it fits right in with the "disruption" thing. You know Musk has a solar panel company, right? This seems quite clever: SolarCity will let you lease their panels, or you can take out a 30-year loan with them. SolarCity doesn't charge you for installing or maintaining the system, and you pay SolarCity for the power the system generates, thereby paying off the loan. Electricity as a mortgage. Now, combine that with a rechargeable fuel cell in your home that could probably power your house for at least a week all on its own. Welcome to Basic Utilities Disruption.

Have you been reading this and thinking, Hmm, I'm not very interested in technology and disruption and ghosts and whatever else the hell you're talking about? Well, I bet you're interested in a future where it remains cost-effective for your local electricity substations to be maintained even after a critical number of homes in your area have gone off the grid, or, in the extreme open-market scenario, if it remains cost-effective to even supply electricity to your town at all. And what unforeseeable haunting might happen in the chilly aftermath...

We only sleep at night because Facebook, Google, Apple, Amazon, Microsoft, and Elon Musk don't want our businesses. Yet.

Facebook and Google fighting with balloons and drones to bring internet to Africa. Apple making Big Phones. Android NFC wallets versus Apple Pay. iCloud and Amazon Storage. You know what'll happen once these self-driving consumer-facing services go online? They'll be doing same-day purchase deliveries, going head-to-head with Amazon in cities, a fuller and faster version of Google's piloted Shopping Express. Jeff Bezos owns a rocket development firm, by the way, so maybe go carefully with that. Oh, and Apple apparently want into enterprise support business, which will put them against Amazon, where all the enterprise data is stored, and, of course, sleepy, old Microsoft.

Keep breathing. Stay warm. Things are going to get weirder yet."
warrenellis  2015  elonmusk  tesla  energy  publicutilities  utilities  solar  apple  google  microsoft  amazon  facebook  uber  technology  capitalism  competition  electricity  batteries  cars  self-drivingcars  solarcity 
march 2015 by robertogreco
Bruce Sterling Closing Talk by SXSW on SoundCloud - Hear the world’s sounds
"World traveler, science fiction author, journalist, and future-focused design critic Bruce Sterling spins the globe a few rounds as he wraps up the Interactive Conference with his peculiar view of the state of the world. Always unexpected, invented on the fly, a hash of trends, trepidations, and creative prognostication. Don't miss this annual event favorite. What will he covered in 2015?"
makers  making  brucesterling  internetofthings  sxsw  2015  turin  torino  design  climatechange  makerspaces  ianbogost  via:steelemaley  3dprinting  economics  apple  google  amazon  microsoft  future  business  iot 
march 2015 by robertogreco
Google Art Project - Chrome Web Store
"Art Project masterpieces from Google Cultural Institute in your browser tabs
Breathe a little culture into your day! Discover a beautiful artwork from the Google Art Project each time you open a new tab in Chrome.

With this extension, you’ll see masterpieces from Van Gogh, Degas, Monet and other iconic artists from museums around the world in every new Chrome tab. The artwork is refreshed every day, or change the settings to see a new image every time you open a new tab.

If an artwork happens to spark your curiosity, click the image description to discover more on the Google Cultural Institute website."

[via: http://www.theverge.com/2015/3/12/8199135/chrome-extension-replaces-new-tabs-with-art ]
at  arthistory  chrome  extensions  google  artproject  googleartproject 
march 2015 by robertogreco
The Kleiner Perkins Lawsuit, and Rethinking the Confidence-Driven Workplace - NYTimes.com
"When a group of men and women took a science exam and scored the same, the women underestimated their performance and refused to enter a science fair, while the men did the opposite.

At Google, men were being promoted at a much higher rate than women — because they were nominating themselves for promotion and women were not.

And at Kleiner Perkins Caufield & Byers, the venture capital firm currently on trial for gender discrimination, employees have described a culture in which the people who got ahead were those who hyped themselves and talked over others. Again, they were usually men.

The confidence gap between men and women is well documented. But it is also clear that a lack of confidence does not necessarily equate to a lack of competence (or the other way around.) So the challenge for workplaces is to enable people without natural swagger to be heard and get promoted.

At Kleiner Perkins, one solution was to give Ellen Pao, the former junior partner who is suing the firm, coaching to improve her speaking skills to participate in the firm’s “interrupt-driven” environment. Testimony is continuing in Ms. Pao’s lawsuit, and the jury hasn’t yet been given the case to decide whether Kleiner Perkins is liable.

But the interruption coaching raises an interesting question. Is it the employees’ responsibility to learn how to interrupt, or is it the employers’ job to create a culture in which people without the loudest voice or most aggressive manner can still be heard?

“On the one hand, we need to lean in and be more confident and put ourselves out there, and there is more hesitance to do that for women than men,” said Joyce Ehrlinger, a psychology professor at Washington State University and an author of the study that found that women underestimated their performance on the science exam.

“But there’s this issue of how women are perceived,” she said. “We don’t put ourselves out there because we know it’s not going to be accepted in the same way.”

It is “the double bind of speaking while female,” as Sheryl Sandberg, the Facebook executive and “Lean In” author, and Adam Grant, a professor at the Wharton School, recently wrote in The New York Times.

People who coach executives on public speaking in Silicon Valley said interruption training was not common, but they said that to reach positions of power in the tech industry, people need to be able to aggressively speak in meetings. Coaching often includes work on how to effectively communicate in meetings and is sometimes described as assertiveness training. Kleiner has said it provided Ms. Pao with coaching so she could learn to “own the room.”

Venture capitalists describe typical partner meetings as full of verbal intimidation where the people who speak most confidently are the ones who succeed. Some women in the industry said that training such as Ms. Pao received would be helpful in that environment.

The confidence gap begins very early, Ms. Ehrlinger said, when parents overestimate the crawling ability of boys and underestimate that of girls, for example. It can be reinforced at work, where women are paid less than men and evaluated in more negative terms.

Some employers have figured out ways to address it, other than teaching women to interrupt. The show runner for “The Shield” banned interrupting during pitches by writers, Ms. Sandberg and Mr. Grant reported.

At Google, senior women began hosting workshops to encourage and prepare women to nominate themselves for promotion.

Women are often more comfortable asserting themselves when they talk about ideas in terms of a group, Ms. Ehrlinger said – describing a plan that has been vetted by multiple people or explaining how it would benefit the whole company, not just their own careers.

Still, in an age when the No. 1 piece of career advice is to build one’s personal brand, that is not necessarily a clear path to success, either."

[via: http://log.scifihifi.com/post/113466975721/at-kleiner-perkins-one-solution-was-to-give-ellen ]
gender  confidence  culture  2015  clairecainmiller  ellenpao  siliconvalley  technology  inclusion  vc  venturecapitalism  google  behavior  patriarchy  inlcusivity  inclusivity 
march 2015 by robertogreco
Why I’m Saying Goodbye to Apple, Google and Microsoft — Backchannel — Medium
"I’d periodically played with Linux and other alternatives on my PC over the years, but always found the exercise tedious and, in the end, unworkable. But I never stopped paying attention to what brilliant people like Richard Stallman and Cory Doctorow and others were saying, namely that we were leading, and being led, down a dangerous path. In a conversation with Cory one day, I asked him about his use of Linux as his main PC operating system. He said it was important to do what he believed in—and, by the way, it worked fine.

Could I do less, especially given that I’d been public in my worries about the trends?

So about three years ago, I installed the Ubuntu variant — among the most popular and well-supported — on a Lenovo ThinkPad laptop, and began using it as my main system. For a month or so, I was at sea, making keystroke errors and missing a few Mac applications on which I’d come to rely. But I found Linux software that worked at least well enough, and sometimes better than its Mac and Windows counterparts.

And one day I realized that my fingers and brain had fully adjusted to the new system. Now, when I used a Mac, I was a bit confused."



"As mobile computing has become more dominant, I’ve had to rethink everything on that platform, too. I still consider the iPhone the best combination of software and hardware any company has offered, but Apple’s control-freakery made it a nonstarter. I settled on Android, which was much more open and readily modified.

But Google’s power and influence worry me, too, even though I still trust it more than many other tech companies. Google’s own Android is excellent, but the company has made surveillance utterly integral to the use of its software. And app developers take disgusting liberties, collecting data by the petabyte and doing god-knows-what with it. (Security experts I trust say the iPhone is more secure by design than most Android devices.) How could I walk my talk in the mobile age?"



"So I keep looking for ways to further reduce my dependence on the central powers. One of my devices, an older tablet running Cyanogenmod, is a test bed for an even more Google-free existence.

It’s good enough for use at home, and getting better as I find more free software — most of it via the “F-Droid” download library — that handles what I need. I’ve even installed a version of Ubuntu’s new tablet OS, but it’s not ready, as the cliche goes, for prime time. Maybe the Firefox OS will be a player.

But I’ve given up the idea that free software and open hardware will become the norm for consumers anytime soon, if ever—even though free and open-source software is at the heart of the Internet’s back end.

If too few people are willing to try, though, the default will win. And the defaults are Apple, Google and Microsoft.

Our economic system is adapting to community-based solutions, slowly but surely. But let’s face it: we collectively seem to prefer convenience to control, at least for the moment. I’m convinced more and more people are learning about the drawbacks of the bargain we’ve made, wittingly or not, and someday we may collectively call it Faustian.

I keep hoping more hardware vendors will see the benefit of helping their customers free themselves of proprietary control. This is why I was so glad to see Dell, a company once joined at the hip with Microsoft, offer a Linux laptop. If the smaller players in the industry don’t themselves enjoy being pawns of software companies and mobile carriers, they have options, too. They can help us make better choices.

Meanwhile, I’ll keep encouraging as many people as possible to find ways to take control for themselves. Liberty takes some work, but it’s worth the effort. I hope you’ll consider embarking on this journey with me."
apple  google  microsoft  dangilmour  linux  opensource  2015  community  hardware  dell  cyanogenmod  ios  android  windows  mac  osx  f-droid  ubuntu  firefoxos  firefox  os  mozilla  lenovo  richardstallman  corydoctorow  libreoffice 
march 2015 by robertogreco
Matt Jones: Jumping to the End -- Practical Design Fiction on Vimeo
[Matt says (http://magicalnihilism.com/2015/03/06/my-ixd15-conference-talk-jumping-to-the-end/ ):

"This talk summarizes a lot of the approaches that we used in the studio at BERG, and some of those that have carried on in my work with the gang at Google Creative Lab in NYC.

Unfortunately, I can’t show a lot of that work in public, so many of the examples are from BERG days…

Many thanks to Catherine Nygaard and Ben Fullerton for inviting me (and especially to Catherine for putting up with me clowning around behind her while she was introducing me…)"]

[At ~35:00:
“[(Copy)Writers] are the fastest designers in the world. They are amazing… They are just amazing at that kind of boiling down of incredibly abstract concepts into tiny packages of cognition, language. Working with writers has been my favorite thing of the last two years.”]
mattjones  berg  berglondon  google  googlecreativelab  interactiondesign  scifi  sciencefiction  designfiction  futurism  speculativefiction  julianbleecker  howwework  1970s  comics  marvel  marvelcomics  2001aspaceodyssey  fiction  speculation  technology  history  umbertoeco  design  wernerherzog  dansaffer  storytelling  stories  microinteractions  signaturemoments  worldbuilding  stanleykubrick  details  grain  grammars  computervision  ai  artificialintelligence  ui  personofinterest  culture  popculture  surveillance  networks  productdesign  canon  communication  johnthackara  macroscopes  howethink  thinking  context  patternsensing  systemsthinking  systems  mattrolandson  objects  buckminsterfuller  normanfoster  brianarthur  advertising  experiencedesign  ux  copywriting  writing  film  filmmaking  prototyping  posters  video  howwewrite  cognition  language  ara  openstudioproject  transdisciplinary  crossdisciplinary  interdisciplinary  sketching  time  change  seams  seamlessness 
march 2015 by robertogreco