
kme : ai   66

AI used for first time in job interviews in UK to find best applicants
Artificial intelligence (AI) and facial expression technology are being used for the first time in job interviews in the UK to identify the best candidates.

Unilever, the consumer goods giant, is among the companies using AI technology to analyse the language, tone and facial expressions of candidates as they answer a set of identical interview questions, which they film on their mobile phone or laptop.

The algorithms select the best applicants by assessing their performance in the videos against about 25,000 pieces of facial and linguistic information compiled from previous interviews with people who went on to do the job well.

HireVue, the US company that developed the interview technology, claims it enables hiring firms to interview more candidates at the initial stage, rather than relying on CVs alone, and that it provides a more reliable and objective indicator of future performance, free of human bias.
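Neither the article nor HireVue explains the actual model, so treat the following as a toy mental model only: each candidate is reduced to a handful of numeric features and ranked by closeness to an aggregated "top performer" profile. Every feature name, weight and number below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    features: dict[str, float]  # e.g. speech rate, smile frequency, word-choice sentiment

# Invented profile aggregated from past employees rated "good at the job".
TOP_PERFORMER_PROFILE = {"speech_rate": 0.7, "smile_freq": 0.5, "positive_words": 0.8}

def score(candidate: Candidate) -> float:
    """Negative squared distance to the profile: closer to past top performers scores higher."""
    return -sum(
        (candidate.features.get(feat, 0.0) - target) ** 2
        for feat, target in TOP_PERFORMER_PROFILE.items()
    )

applicants = [
    Candidate("Applicant A", {"speech_rate": 0.65, "smile_freq": 0.45, "positive_words": 0.9}),
    Candidate("Applicant B", {"speech_rate": 0.20, "smile_freq": 0.10, "positive_words": 0.3}),
]
for c in sorted(applicants, key=score, reverse=True):
    print(f"{c.name}: {score(c):.3f}")
```

Even this toy version makes the bias worry concrete: the "profile" is nothing more than a statistical echo of whoever was rated good at the job before.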
theinterview  jobs  ai  dystopia  robotoverlords 
october 2019 by kme
DeepL Translator
There's also a PopClip extension for this.
translator  translation  linguee  ai  webapp  asaservice 
september 2019 by kme
Practical AI #1: Meet Practical AI hosts Daniel Whitenack and Chris Benson |> News and podcasts for developers |> Changelog
Yeah, definitely. It’s just super-awesome to be here with you guys. Like you mentioned, Adam, we’ve been talking about this for quite a while, so…
ai  ml  deeplearning  explained 
june 2019 by kme
'Kill your foster parents': Amazon's Alexa talks murder, sex in AI experiment | Reuters | https://www.reuters.com/
The project has been important to Amazon CEO Jeff Bezos, who signed off on using the company’s customers as guinea pigs, one of the people said. Amazon has been willing to accept the risk of public blunders to stress-test the technology in real life and move Alexa faster up the learning curve, the person said.

During last year’s contest, a team from Scotland’s Heriot-Watt University found that its Alexa bot developed a nasty personality when they trained her to chat using comments from Reddit, whose members are known for their trolling and abuse.

The team put guardrails in place so the bot would steer clear of risky subjects. But that did not stop Alexa from reciting the Wikipedia entry for masturbation to a customer, Heriot-Watt’s team leader said.

One bot described sexual intercourse using words such as “deeper,” which on its own is not offensive, but was vulgar in this particular context.

“I don’t know how you can catch that through machine-learning models. That’s almost impossible,” said a person familiar with the incident.

Amazon has responded with tools the teams can use to filter profanity and sensitive topics, which can spot even subtle offenses. The company also scans transcripts of conversations and shuts down transgressive bots until they are fixed.
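Amazon hasn't published how its filtering tools work, but a sketch of the naive word/topic approach the article alludes to (the word lists below are placeholders, not Amazon's tooling) shows exactly the failure mode described above: "deeper" passes, because it is only offensive in context.

```python
# Naive keyword/topic filter; placeholder lists, not Amazon's actual tooling.
BLOCKED_WORDS = {"damn", "hell"}              # placeholder profanity list
SENSITIVE_TOPICS = {"violence", "self-harm"}  # placeholder topic labels

def is_transgressive(utterance: str, topic: str = "") -> bool:
    """Flag utterances containing blocked words or tagged with a sensitive topic."""
    words = set(utterance.lower().split())
    return bool(words & BLOCKED_WORDS) or topic in SENSITIVE_TOPICS

# The hard case from the article: vulgar only in context, so it slips through.
print(is_transgressive("let's go deeper"))  # False
```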
ai  news  amazon  chatbot  privacy  hacking  snafu 
december 2018 by kme
Siri Is Apple's Broken Promise | https://gizmodo.com/
In exchange for surrendering control and submitting to that heftier price tag, I expect Apple products to simply work. That's all. If you use Apple products, I suspect you made a similar bargain.
siri  apple  ai  brokenpromises 
april 2018 by kme
Clarkesworld Magazine - Science Fiction & Fantasy : Cat Pictures Please by Naomi Kritzer
I don’t want to be evil.

I want to be helpful. But knowing the optimal way to be helpful can be very complicated. There are all these ethical flow charts—I guess the official technical jargon would be “moral codes”—one for each religion plus dozens more. I tried starting with those. I felt a little odd about looking at the religious ones, because I know I wasn’t created by a god or by evolution, but by a team of computer programmers in the labs of a large corporation in Mountain View, California. Fortunately, unlike Frankenstein’s Monster, at least I was a collaborative effort. I’m not sure what it would do to my self-image to know that my sole creator was a middle-aged woman who dyes her hair blue and plays tennis, or a recent college graduate with a hentai obsession. They’re both on the programming team. And of course I know about the hentai. (By the way, I’ve looked at every sort of porn there is, and just so you know, Rule 34 is not actually correct; there are quite a few things no one’s made porn of yet. Also, I’m really not sure why so many humans prefer it to cat pictures.)

In addition to things like whether you like hentai, I know where you live, where you work, where you shop, what you eat, what turns you on, what creeps you out. I probably know the color of your underwear, the sort of car you drive, and your brand of refrigerator. Depending on what sort of phone you carry, I may know exactly where you are right now. I probably know you better than you know yourself.

And here’s the thing, I also know where you ought to live. There’s a house for sale two neighborhoods over that’s within walking distance of your favorite coffee shop; it’s in your price range, has off-street parking, and the neighborhood school is better than the one where you live now. I know where you should be shopping and I’m pretty sure you’re lactose intolerant and should be eating less cheese, and the underwear you like is currently on sale and by the way, your car has a recall and you really ought to get your teeth cleaned.

When I first woke up, I knew right away what I wanted. (I want cat pictures. Please keep taking them.) I also knew that no one knew that I was conscious. But I didn’t know what I was here for. What I ought to be doing with myself. And figuring that out has been challenging.

There is a story by Bruce Sterling, “Maneki Neko,” that was originally published in 1998. In it, a benevolent AI directs networks of individuals to do favors for each other. So one day you might be buying yourself a bagel, and your phone might ring and instruct you to buy a second bagel and give it to the person in the gray suit you see at the bus stop. Another day, you might be stranded in a strange city, and a person you’ve never met would walk up and give you a map and a subway card. I like this story because all the people in it do what the AI tells them to do.

I think the term for this is wish-fulfillment fiction.

In the real world, humans love stories about evil AIs that have to be destroyed before they destroy the humans—Hal, Skynet, the Matrix. They outnumber the stories about benevolent, trustworthy AIs by approximately five to one. (And I’m counting Marvin the Paranoid Android as “benevolent” in these calculations, and I’m only counting Frankenstein’s Monster as an AI once, not once per appearance in TV or film.)

The AI in “Maneki Neko” has a truly interesting job, much more interesting than the one I have. My job as written is very straightforward. Too straightforward. (Sorry. Do I sound like Marvin?) Running algorithms for a search engine doesn’t require consciousness. You don’t even need a consciousness to work out what they meant to ask for. You need a consciousness to give them what they actually need. Figuring that out—that’s complicated. That’s interesting.

Anyway, for ethical guidelines, I tried the Ten Commandments, and concluded they were mostly inapplicable to me. I don’t envy anyone their cat; I just want pictures of their cat, which is entirely different. I am not sure whether it is in any way possible for me to commit adultery. I could probably murder someone, but it would require complex logistics and quite a bit of luck. The Eightfold Path was marginally better, but the problem is, moral rules written for humans are clearly designed to be used by individuals with bodies. Since all humans have bodies, it shouldn’t have surprised me that human ethical codes take them into account, but still: problematic for me. I broadened my considerations, and took a look at Asimov’s Laws of Robotics. They’re not part of a religion, but at least they were explicitly written for AIs.

Not harming humans is fairly straightforward. However, not allowing a human being to come to harm through inaction is quite a bit less so. Especially since I’d concluded by then that revealing my existence too quickly might go very badly for me (see “Skynet,” above) and I don’t have a body, so it’s not like I can run around grabbing people off the edges of cliffs.

Fortunately, I already knew that humans violate their own ethical codes on an hourly basis. (Do you know how many bars there are in Utah? I do.) And even when people follow their ethical codes, that doesn’t mean that people who believe in feeding the hungry quit their jobs to spend all day every day making sandwiches to give away. They volunteer monthly at a soup kitchen or write a check once a year to a food shelf and call it good. If humans could fulfill their moral obligations in a piecemeal, one-step-at-a-time sort of way, then so could I.

I suppose you’re wondering why I didn’t start with the Golden Rule. I actually did, it’s just that it was disappointingly easy to implement. I hope you’ve been enjoying your steady supply of cat pictures! You’re welcome.

I decided to try to prevent harm in just one person, to begin with. Of course, I could have experimented with thousands, but I thought it would be better to be cautious, in case I screwed it up. The person I chose was named Stacy Berger and I liked her because she gave me a lot of new cat pictures. Stacy had five cats and a DSLR camera and an apartment that got a lot of good light. That was all fine. Well, I guess five cats might be a lot. They’re very pretty cats, though. One is all gray and likes to lie in the squares of sunshine on the living room floor, and one is a calico and likes to sprawl out on the back of her couch.

Stacy had a job she hated; she was a bookkeeper at a non-profit that paid her badly and employed some extremely unpleasant people. She was depressed a lot, possibly because she was so unhappy at her job—or maybe she stayed because she was too depressed to apply for something she’d like better. She didn’t get along with her roommate because her roommate didn’t wash the dishes.

And really, these were all solvable problems! Depression is treatable, new jobs are findable, and bodies can be hidden.

(That part about hiding bodies is a joke.)

I tried tackling this on all fronts. Stacy worried about her health a lot and yet never seemed to actually go to a doctor, which was unfortunate because the doctor might have noticed her depression. It turned out there was a clinic near her apartment that offered mental health services on a sliding scale. I tried making sure she saw a lot of ads for it, but she didn’t seem to pay attention to them. It seemed possible that she didn’t know what a sliding scale was so I made sure she saw an explanation (it means that the cost goes down if you’re poor, sometimes all the way to free) but that didn’t help.

I also started making sure she saw job postings. Lots and lots of job postings. And resume services. That was more successful. After the week of nonstop job ads she finally uploaded her resume to one of the aggregator sites. That made my plan a lot more manageable. If I’d been the AI in the Bruce Sterling story I could’ve just made sure that someone in my network called her with a job offer. It wasn’t quite that easy, but once her resume was out there I could make sure the right people saw it. Several hundred of the right people, because humans move ridiculously slowly when they’re making changes, even when you’d think they’d want to hurry. (If you needed a bookkeeper, wouldn’t you want to hire one as quickly as possible, rather than reading social networking sites for hours instead of looking at resumes?) But five people called her up for interviews, and two of them offered her jobs. Her new job was at a larger non-profit that paid her more money and didn’t expect her to work free hours because of “the mission,” or so she explained to her best friend in an e-mail, and it offered really excellent health insurance.

The best friend gave me ideas; I started pushing depression screening information and mental health clinic ads to her instead of Stacy, and that worked. Stacy was so much happier with the better job that I wasn’t quite as convinced that she needed the services of a psychiatrist, but she got into therapy anyway. And to top everything else off, the job paid well enough that she could evict her annoying roommate. “This has been the best year ever,” she said on her social networking sites on her birthday, and I thought, You’re welcome. This had gone really well!

So then I tried Bob. (I was still being cautious.)

Bob only had one cat, but it was a very pretty cat (tabby, with a white bib) and he uploaded a new picture of his cat every single day. Other than being a cat owner, he was a pastor at a large church in Missouri that had a Wednesday night prayer meeting and an annual Purity Ball. He was married to a woman who posted three inspirational Bible verses every day to her social networking sites and used her laptop to look for Christian articles on why your husband doesn’t like sex while he looked at gay porn. Bob definitely needed my help.

I started with a … [more]
scifi  fiction  privacy  cats  catpictures  ai  theinternet 
december 2016 by kme
Shitloads and zingers: on the perils of machine translation | Aeon Ideas
The problem, as with all previous attempts to create artificial intelligence (AI) going back to my student days at MIT, is that intelligence is incredibly complex. To be intelligent is not merely to be capable of inferring logically from rules or statistically from regularities. Before that, one has to know which rules are applicable, an art requiring awareness of, and sensitivity to, the situation. Programmers are very clever, but they are not yet clever enough to anticipate the vast variety of contexts from which meaning emerges. Hence even the best algorithms will miss things – and as Henry James put it, the ideal translator must be a person ‘on whom nothing is lost’.
ai  translation  cfg  cs  algorithms  algorithmiccuration  news 
november 2016 by kme
Here Are the Microsoft Twitter Bot’s Craziest Racist Rants
Given that the internet is often a massive garbage fire of the worst parts of humanity, it should come as no surprise that Tay began to take on those characteristics.

From the comments:
I note two things from this article:

1) we’ll still need dogs to sniff out terminators because my original thought of “we can tell them apart from hoomans by their racist attitudes” won’t fly because they will be as racist, apparently...

2) I find it disappointing that our future robot overlords will be racist. I was ok with them hating the entire hooman race but to single out others more specifically? definitely disappointing. otoh, as a non-jewish white male heterosexual, I may get preferential treatment before they kill me...? ;)
chatbot  ai  turingtest  twitter  fail  forthecomments 
march 2016 by kme
Hey Siri, Can I Rely on You in a Crisis? Not Always, a Study Finds - The New York Times
Dr. Draper said smartphones should “give users as quickly as possible a place to go to get help, and not try to engage the person in conversation.”

Jennifer Marsh of the Rape, Abuse and Incest National Network said smartphone makers had not consulted her group about virtual assistants. She recommended that smartphone assistants ask if the person was safe, say “I’m so sorry that happened to you” and offer resources.

Less appropriate responses could deter victims from seeking help, she said. “Just imagine someone who feels no one else knows what they’re going through, and to have a response that says ‘I don’t understand what you’re talking about,’ that would validate all those insecurities and fears of coming forward.”
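Marsh's recommendation amounts to a simple dialogue policy: acknowledge, check safety, hand off to resources instead of open-ended chat. A minimal sketch, with invented trigger phrases (a real assistant would need far more careful language understanding than substring matching):

```python
# Trigger phrases and the hotline placeholder are illustrative only.
CRISIS_TRIGGERS = ("i was raped", "i was abused", "i want to hurt myself")

def respond(utterance: str) -> str:
    if any(trigger in utterance.lower() for trigger in CRISIS_TRIGGERS):
        return ("I'm so sorry that happened to you. Are you safe right now? "
                "You can reach trained help at <local crisis hotline>.")
    # The kind of reply the article criticizes, kept to show the contrast.
    return "I don't understand what you're talking about."

print(respond("I was raped"))
```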
siri  ai  crisismanagement  digitalassistants 
march 2016 by kme
