
kme : research   204

Man with brain implant on Musk’s Neuralink: “I would play video games” - MIT Technology Review
What robot do you control?

It’s a KUKA LBR assembly robot. It’s my favorite robot so far. It’s basically the kind of robot that builds cars. First I used the APL Arm, then the Luke arm from DEKA, and those have a limitation because they are meant to be anatomical—they can only reach how an arm can. That is not a limit that I should have. The KUKA has a lot of articulation. If I think up, it can move up and keep the arm parallel to the ground. It’s a lot faster. I like it.

What motivates people to join a study and get an experimental brain implant?

Me? To help push the technology so it is commonplace enough to really help people out, so they don’t go through the things that I went through. The depression and the feeling that you can’t do anything anymore and can’t contribute to society—it’s just despair.

Joining this study has given me a sense of purpose. Someone needs to do it, and I am not really doing much else. I hope it gets to the point where if you have an injury and insurance will pay for it, you can get back functions, maybe functions that you could never have had before.
cyborgs  brainimplants  science  research  disability  perspective 
july 2019 by kme
research process - How did researchers find articles before the Internet and the computer era? - Academia Stack Exchange
Notice that today, too many papers are published (and most of them repeat a little of the ideas of previously published papers by the same authors). I heard that the average paper today has just 2 readers. In 1970 it was probably more. – Basile Starynkevitch
thewaythingswere  research  science  preinternet 
june 2019 by kme
The Gender Gap in Computer Science Research Won’t Close for 100 Years - The New York Times
The increasing reliance on computer algorithms in areas as varied as hiring and artificial intelligence has also led to concerns that the tech industry’s dominantly white and male work forces are building biases into the technology underlying those systems.

The study also indicated that men are growing less likely to collaborate with female researchers — a particularly worrying trend in a field where women have long felt unwelcome and because studies have shown that diverse teams can produce better research.

Without mentioning the growing fear of accusations of harassment, which surely accounts for some of that reluctance?
cs  science  research  gendergap  womenintech 
june 2019 by kme
Self-Reported Food Intolerance in Chronic Inflammatory Bowel Disease: Scandinavian Journal of Gastroenterology: Vol 32, No 6 |
Background: Although suggested, it has never been convincingly documented that food sensitivity is of pathogenetic importance in chronic inflammatory bowel disease. However, many patients may relate their gastrointestinal symptoms to specific food items ingested and may restrict their diet accordingly. Methods: A questionnaire was sent to all patients with chronic inflammatory bowel disease who attended the outpatient clinic, Medical Dept., Roskilde County Hospital in Køge, Denmark, in the year 1993. The patients were asked whether they had problems with any particular food item and, if so, to describe the symptoms experienced from it. A control group of 70 healthy persons was included. Results: Among 189 patients, 132 (70%) responded. One hundred and thirty had completed the questionnaire, 52 males and 78 females aged 13-89 years (median, 43 years). Fifty-three (41%) had Crohn's disease (CD), 69 (53%) ulcerative colitis (UC), and 8 (6%) unclassified colitis. Forty-one patients (31 CD, 10 UC) were operated on; 51 (19 CD, 32 UC) had disease activity. Sixty-five per cent of the patients and 14% of the controls reported being intolerant to one or more food items (P < 0.0001). The intolerance covered a wide range of food products. The commonest symptoms among patients were diarrhoea, abdominal pain, and meteorism and among controls, regurgitation. Food intolerance was equally common in CD (66%) and UC (64%) and was not related to previous operation, disease activity or disease location. Conclusion: Most patients with chronic inflammatory bowel disease feel intolerant to different food items and may restrict their diet accordingly. The frequency and pattern of food intolerance did not differ between patients with CD and UC. The food intolerance was probably unspecific rather than of pathogenetic importance.
crohns  ibd  uc  diet  health  research  paper  paywall 
december 2018 by kme
Healing the NIH-funded Biomedical Research Enterprise |
1. Casadevall A, Fang FC. Reforming science: methodological and cultural reforms. Infect Immun. 2012;80:891–6.
2. Alberts B, Kirschner MW, Tilghman S, Varmus H. Rescuing US biomedical research from its systemic flaws. Proc Natl Acad Sci U S A. 2014;111:5773–7.
3. Alberts B, Kirschner MW, Tilghman S, Varmus H. Opinion: Addressing systemic problems in the biomedical research enterprise. Proc Natl Acad Sci U S A. 2015;112:1912–3.
4. Daniels RJ. A generation at risk: young investigators and the future of the biomedical workforce. Proc Natl Acad Sci U S A. 2015;112:313–8.
5. Lorsch JR. Maximizing the return on taxpayers’ investments in fundamental biomedical research. Mol Biol Cell. 2015;26:1578–82.
research  bioinformatics  funding  commentary  opinion 
november 2018 by kme
Should I share my horrible software? - Academia Stack Exchange |

Yes, you should.

First, most scientific software is terrible. I'd be very surprised if yours is worse than average: the mere fact you know design patterns and the difference between recursion and loops suggests it's better.

Second, it's unlikely you'll have the incentive or motivation to make it better unless, or until, it's needed by someone else (or you in 6 months). Making it open gives you that incentive.

Potential upsides: possible new collaborators, bugfixes, extensions, publications.

Potential downsides: timesink (maintaining code or fixing problems for other people), getting scooped. I'll be clear: I don't take either of these downsides very seriously.
software  devel  crapsoftware  academia  research  publishing 
september 2018 by kme
Every Hug, Every Fuss: Scientists Record Families’ Daily Lives - The New York Times |
Inside the homes, researchers found rooms crammed with toys, DVDs, videos, books, exercise machines; refrigerators buried in magnets; and other odds and ends. The clutter on the fridge door tended to predict the amount of clutter elsewhere.

Outside the homes, the yards were open and green — but “no one was out there,” said Jeanne E. Arnold, a U.C.L.A. archaeologist who worked on the study. One family had a 17,000-square-foot yard, with a pool and a trampoline, and not even the children ventured out there during the study.

That, of course, would mean leaving the house, which is not always as simple as it sounds.
parenting  kids  anthropology  research 
august 2018 by kme
sometimes i'm wrong: life after bem |

let's start with this: bem recommends writing 'the article that makes the most sense now that you have seen the results' (rather than 'the article you planned to write when you designed your study'). it is pretty clear from the rest of this section that bem is basically telling us to invent whatever a priori hypotheses turn out to be supported by our results. indeed, he says this pretty explicitly on the next page:

'contrary to the conventional wisdom, science does not care how clever or clairvoyant you were at guessing your results ahead of time. scientific integrity does not require you to lead your readers through all your wrong-headed hunches only to show - voila! - they were wrongheaded.'

actually, science does care. and scientific integrity does require something along those lines. not that you tell us about all your wrong ideas, but that you not claim that you had the right idea all along if you didn't. if science didn't care, pre-registration and bayesian statistics would not be enjoying the popularity they are today.

the tension here is between sticking to the (messy) facts, and doing some polishing and interpreting for the reader so she doesn't have to wade around a giant pile of unstructured findings. i will give bem credit, he seems aware of this tension. later in the chapter, he writes:

'[...] be willing to accept negative or unexpected results without a tortured attempt to explain them away. do not make up long, involved, pretzel-shaped theories to account for every hiccup in the data.'

so he acknowledged that the results don't need to be perfect (though he seems more worried about the fact that the story you would need to tell to explain the imperfections would itself be ugly and tortured, rather than about the fact that it's important to disclose the imperfections for the sake of scientific rigor).

another (related) step is to stop claiming to have had a priori hypotheses that we didn't actually have. however, this step is trickier because editors and reviewers still want to see the 'correct' hypothesis up front. there are some good reasons for this - it is kind of a waste of time to read an entire intro building up to a hypothesis that turns out to be false. one handy solution for this is to select research questions that are interesting whichever way the results come out - then the intro can present the two competing hypotheses and why each result would be interesting/informative.*** i am told that in some areas of research this is just not possible. all the more reason for everyone to become a personality psychologist. come join us, we never know which way our results will turn out!

From the comments:
When I first encountered Bem's paper I was (quite frankly) appalled. But others had directed me to read it, and I saw no criticisms in the literature about it, so I thought "well, that just must be the way things work."

Then (a few months later) I read a wonderful paper by Arina K. Bones (a great psychologist in her own right) and Navin R. Johnson. On p. 409 the paper reads "We did not know whether to predict a rapid or slow change in animate associations because of conflicting existing evidence about the malleability of implicit cognition. The fragility of the hypothesis did not pose a difficulty because we proposed a priori to write the article as a good story and as if the ultimate results were anticipated all along (Bem, 2003). After all, this is psychology--only actual scientists would want to read how the process of scientific discovery actually occurred."

Exactly. And now 7 years later, your post is the second time I have seen Bem's advice criticized. I am almost sure to have missed another criticism of it here or there somewhere, but I am still amazed that this is considered mandatory reading, nay, advice, for graduate students in psychology.

I found this quote from Albert Einstein on the internet one day (i.e., I am not vouching for the accuracy of its attribution; nonetheless I find it useful): "Anyone who doesn't take truth seriously in small matters cannot be trusted in large ones either."

Sometimes science is full of boring details. So be it.
writing  science  research  publication 
november 2017 by kme
sometimes i'm wrong: modus tollens bitches |
if my theory (T) is right, my hypothesis (H) is appropriately derived from my theory, and my methods (M) are sound, then i should observe these data (D)

T.H.M ---> D

so now what happens when we observe ~D? well, we no longer have to conclude ~T, because we have two other options: ~H and ~M. that is, instead of concluding that our theory is wrong, we can throw the hypothesis or the methods under the bus.

~M is especially tempting, because all it requires is to say that there was something wrong with the methods. it's a very easy escape route. blame the subject pool. blame the weather. blame the research assistants.**
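the escape routes above can be checked mechanically. a minimal truth-table sketch (python; the variable and helper names are mine, not the post's):

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

# Enumerate every truth assignment consistent with the premises:
# (T and H and M) -> D, together with the observation ~D.
consistent = [
    (T, H, M)
    for T, H, M, D in product([True, False], repeat=4)
    if implies(T and H and M, D) and not D
]

# ~D forces ~(T and H and M): no surviving assignment has all three true...
assert (True, True, True) not in consistent
# ...but T alone can still be true: the theory survives if H or M takes the blame.
assert any(T for T, H, M in consistent)
```

the assertions show that observing ~D only rules out the conjunction T.H.M as a whole; it says nothing about which conjunct to abandon, which is exactly why blaming M is such an easy way out.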

2. don't throw the methods under the bus. consider the possibility that the theory or hypothesis is wrong. make it your new year's resolution: i will not write off a null finding or a failed replication unless there is a glaring error -- one that my enemy's mother would agree is an error.

3. when publishing your original results, provide enough detail about all important aspects of the method so that if someone attempts a replication following the procedure you describe, you cannot blame their result on poor methods. the onus is on us as original researchers to specify the things that matter. of course some are too obvious to specify (e.g., make sure your experimenter is wearing clothes), but these are covered by the 'glaring error that your enemy's mother would agree with' clause in #2. if you don't think another researcher in your field could competently carry out a replication with the information you publish, that's on you. if your procedure requires special expertise to carry out, specify what expertise. if you can't, your theory is not falsifiable.

Dowsers are among the most sincere of believers in pseudoscience (compared to, say, cold readers), but when they fail to detect the effect they're looking for, they tend to blame it on perturbances caused by that car or this person's hat. So experimenters spend a very long time checking that every minor detail of their setup is to the dowser's satisfaction, and the latter agrees not to claim that any of the items they've checked was the cause of a fault in the vortex (etc). Then they run the experiment, and of course they get a null result. "Oh," says the dowser (always, always, without fail), "It must have been because the vortex (etc) was perturbed by the bird that flew past/the sun going behind a cloud/whatever".
experimentalmethods  science  research  forthecomments 
november 2017 by kme
sometimes i'm wrong: on flukiness |
first, it seems to discount the possibility that the original finding was a fluke - a false positive that made it look like there is an effect when in fact there isn't. here's an analogy:

null hypothesis: my coin is fair
research hypothesis: my coin is weighted

original study: i flip the coin 20 times and get 15 heads (p = .041)
replication study: i flip the coin another 20 times and get 10 heads (p = 1.0)

do i need to explain why i got 15 heads the first time?

maybe. or maybe the first study was just a fluke. that happens sometimes (4.1% of the time, to be exact).
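the numbers in the coin example check out with a short exact-binomial sketch (python standard library only; the function name is mine):

```python
from math import comb

def two_sided_binomial_p(k, n):
    """Two-sided exact binomial p-value under a fair-coin null (p = 0.5).
    With p = 0.5 the distribution is symmetric, so the two-sided p-value
    is twice the probability of the more extreme tail, capped at 1.0."""
    tail = min(k, n - k)  # count of the less likely outcome
    one_tail = sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(1.0, 2 * one_tail)

print(round(two_sided_binomial_p(15, 20), 3))  # original study, 15/20 heads: 0.041
print(round(two_sided_binomial_p(10, 20), 3))  # replication, 10/20 heads: 1.0
```

15 of 20 heads lands just under the .05 threshold, while 10 of 20 sits exactly on the null, matching the p-values quoted above.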

absolutely. and it's also possible the replication is a type II error and the effect is real and robust. but my point is that it's also possible - very, very possible - that the original result was a fluke and there never was any effect. how do we decide among these explanations (no real effect, a robust real effect, a real effect with boundary conditions/moderators)? we don't have to decide once and for all - we shouldn't treat a single study (original or replication) as definitive anyway. but we should weigh the evidence: if the replication study has no known flaws, was well-powered, and the original study had low power and found a barely-significant effect, then i would lean towards believing that the original was a fluke. if the replication study is demonstrably flawed and/or underpowered, then it should be discounted or weighed less. if both studies were well-powered and rigorous, then we should look for moderators, and test them. (actually, i'm totally happy with looking for moderators and testing them anyway, as long as someone else does the work.)

in a recent blog post, sam schwarzkopf made an argument related to the one i'm trying to debunk here. in it, he wrote: "the only underlying theory replicators put forth is that the original findings were spurious and potentially due to publication bias, p-hacking and/or questionable research practices. This seems mostly unfalsifiable."

it's absolutely falsifiable: run a well-powered, pre-registered direct replication of the original study, and get a significant result.

in a comment on that blog post, uli schimmack put it best:

'[...] often a simple explanation for failed replications is that the original studies capitalized on chance/sampling error. There is no moderator to be found. It is just random noise and you cannot replicate random noise. There is simply no empirical, experimental solution to find out why a particular study at one particular moment in history produced a particular result.'

it seems like that should be all that needs to be said. i feel like this entire blog post should be unnecessary.*** but this idea that those who fail to replicate a finding always need to explain the original result keeps coming up, and i think it's harmful. i think it's harmful because it confuses people about how probability and error work. and it's harmful because it puts yet another burden on replicators, who are already, to my mind, taking way too much shit from way too many people for doing something that should be seen as a standard part of science, and is actually a huge service to the field.**** let's stop beating up on them, and let's stop asking them to come up with theoretical explanations for what might very well be statistical noise.

**** if you think doing a replication is a quick and easy shortcut to fame, glory, and highly cited publications, call me. we need to talk.
research  phacking  pvalue  science  replication 
november 2017 by kme
When the Revolution Came for Amy Cuddy - The New York Times |
But since 2015, even as she continued to stride onstage and tell the audiences to face down their fears, Cuddy has been fighting her own anxieties, as fellow academics have subjected her research to exceptionally high levels of public scrutiny. She is far from alone in facing challenges to her work: Since 2011, a methodological reform movement has been rattling the field, raising the possibility that vast amounts of research, even entire subfields, might be unreliable. Up-and-coming social psychologists, armed with new statistical sophistication, picked up the cause of replications, openly questioning the work their colleagues conducted under a now-outdated set of assumptions. The culture in the field, once cordial and collaborative, became openly combative, as scientists adjusted to new norms of public critique while still struggling to adjust to new standards of evidence.

Cuddy, in particular, has emerged from this upheaval as a unique object of social psychology’s new, enthusiastic spirit of self-flagellation — as if only in punishing one of its most public stars could it fully break from its past. At conferences, in classrooms and on social media, fellow academics (or commenters on their sites) have savaged not just Cuddy’s work but also her career, her income, her ambition, even her intelligence, sometimes with evident malice. Last spring, she quietly left her tenure-track job at Harvard.

Some say that she has gained fame with an excess of confidence in fragile results, that she prized her platform over scientific certainty. But many of her colleagues, and even some who are critical of her choices, believe that the attacks on her have been excessive and overly personal. What seems undeniable is that the rancor of the critiques reflects the emotional toll among scientists forced to confront the fear that what they were doing all those years may not have been entirely scientific.
Since the late 1960s, the field’s psychologists have tried to elevate the scientific rigor of their work, introducing controls and carefully designed experiments like the ones found in medicine. Increasingly complex ideas about the workings of the unconscious yielded research with the charm of mesmerists’ shows, revealing unlikely forces that seem to guide our behavior: that simply having people wash their hands could change their sense of culpability; that people’s evaluations of risk could easily be rendered irrational; that once people have made a decision, they curiously give more weight to information in its favor. Humans, the research often suggested, were reliably mercurial, highly suggestible, profoundly irrational, tricksters better at fooling ourselves than anyone else.
For years, researchers treated journal articles, and their authors, with a genteel respect; even in the rare cases where a new study explicitly contradicted an old one, the community assumed that a lab error must account for the discrepancy. There was no incentive to replicate, in any case: Journals were largely not interested in studies that had already been done, and failed replications made people (maybe even your adviser) uncomfortable.
In 2014, Psychological Science started giving electronic badges, an extra seal of approval, to studies that made their data and methodologies publicly available and preregistered their design and analysis ahead of time, so that researchers could not fish around for a new hypothesis if they turned up some unexpected findings.
When other priming studies failed to replicate later that year, the Nobel laureate Daniel Kahneman, who discussed priming in his book “Thinking, Fast and Slow,” wrote a letter to social psychologists who studied the effect, urging them to turn their attitude around. “To deal effectively with the doubts, you should acknowledge their existence and confront them straight on,” he wrote.
To Cuddy, Carney’s post seemed so sweeping as to be vague, self-abnegating. Even Simonsohn, who made clear his support for Carney’s decision, thought the letter had a strangely unscientific vehemence to it. “If I do a bad job proving there’s a ninth planet, I probably shouldn’t say there’s a ninth planet,” he says. “But I shouldn’t say there is no ninth planet, either. You should just ignore the bad study and go back to base line.”
For a moment, the scientist allowed the human element to factor into how he felt about his email response to that paper. “I wish,” he said, “I’d had the presence of mind to pick up the phone and call Amy.”
If Amy Cuddy is a victim, she may not seem an obvious one: She has real power, a best-selling book, a thriving speaking career. She did not own up fully to problems in her research or try to replicate her own study. (She says there were real hurdles to doing so, not least of which was finding a collaborator to take that on.) But many of her peers told me that she did not deserve the level of widespread and sometimes vicious criticism she has endured. “Amy has been the target of mockery and meanness on Facebook, on Twitter, in blog posts — I feel like, Wow, I have never seen that in science,” Van Bavel says. “I’ve only been in it for 15 years, but I’ve never seen public humiliation like that.”
I was surprised to find that some of the leaders in the replication movement were not Cuddy’s harshest critics but spoke of her right to defend her work in more measured tones. “Why does everyone care so much about what Amy says?” Brian Nosek says. “Science isn’t about consensus.” Cuddy was entitled to her position; the evidence in favor or against power posing would speak for itself. Leif Nelson, one of the three pioneers of the movement, says Cuddy is no different from most other scientists in her loyalty to her data. “Authors love their findings,” he says. “And you can defend almost anything — that’s the norm of science, not just in psychology.” He still considers Cuddy a “very serious psychologist”; he also believes the 2010 paper “is a bunch of nonsense.” But he says, “It does not strike me as at all notable that Amy would defend her work. Most people do.”

From the comments:
Speaking as a social psychologist (PhD UCLA Graduate School of Education, 1992), I found this article fascinating and important. The fulcrum of the whole piece is this: “‘I remember how happy we were when Dana called me with the results,’ Cuddy says. ‘Everything went in the direction it was supposed to.’” That remark illustrates the embrace of subjectivity where objectivity is the goal. Dr. Cuddy’s revealing statement lies at the nexus of why there is so much public distrust of science while, paradoxically, others suspend their skepticism and accept and even champion poor research; selection bias is an extremely powerful force.
socialscience  powerpose  bodylanguage  research  scientificrigor  statistics  psychology  phacking  replication  socialmedia 
october 2017 by kme
A Baby Wails, and the Adult World Comes Running
Studying both superfast brain scans of healthy volunteers and direct electrode measurements in adult patients who were undergoing neurosurgery for other reasons, Dr. Young, with Christine E. Parsons of Aarhus University in Denmark, Morten L. Kringelbach of Oxford University and other colleagues, has tracked the brain’s response to the sound of an infant cry.

The researchers found that within 49 thousandths of a second of a recorded cry being played, the periaqueductal gray — an area deep in the midbrain that has long been linked to urgent, do-or-die behaviors — had blazed to attention, twice as fast as it reacted to dozens of other audio clips tested.
kids  infants  mammals  crying  instinct  research 
september 2017 by kme
publications - What is the preferable way to share data? - Academia Stack Exchange
OSF and GitHub are mentioned.
Registry of Research Data Repositories was a new one.
collaboration  research  sharing  data  bestpractices  advice 
february 2017 by kme