
kme : science   168

Man with brain implant on Musk’s Neuralink: “I would play video games” - MIT Technology Review
What robot do you control?

It’s a KUKA LBR assembly robot. It’s my favorite robot so far. It’s basically the kind of robot that builds cars. First I used the APL Arm, then the Luke arm from DEKA, and those have a limitation because they are meant to be anatomical—they can only reach how an arm can. That is not a limit that I should have. The Kuka has a lot of articulation. If I think up, it can move up and keep the arm parallel to the ground. It’s a lot faster. I like it.

What motivates people to join a study and get an experimental brain implant?

Me? To help push the technology so it is commonplace enough to really help people out, so they don’t go through the things that I went through. The depression and the feeling that you can’t do anything anymore and can’t contribute to society—it’s just despair.

Joining this study has given me a sense of purpose. Someone needs to do it, and I am not really doing much else. I hope it gets to the point where if you have an injury and insurance will pay for it, you can get back functions, maybe functions that you could never have had before.
cyborgs  brainimplants  science  research  disability  perspective 
july 2019 by kme
research process - How did researchers find articles before the Internet and the computer era? - Academia Stack Exchange
Notice that today too many papers are published (and most of them repeat, to some degree, the ideas of previously published papers by the same authors). I heard that the average paper today has just two readers; in 1970 it was probably more. – Basile Starynkevitch 5 hours ago
thewaythingswere  research  science  preinternet 
june 2019 by kme
The Gender Gap in Computer Science Research Won’t Close for 100 Years - The New York Times
The increasing reliance on computer algorithms in areas as varied as hiring and artificial intelligence has also led to concerns that the tech industry’s dominantly white and male work forces are building biases into the technology underlying those systems.

The study also indicated that men are growing less likely to collaborate with female researchers, a particularly worrying trend both because women have long felt unwelcome in the field and because studies have shown that diverse teams can produce better research.


Without mentioning the growing fear of accusations of harassment, which surely accounts for some of that reluctance?
cs  science  research  gendergap  womeinintech 
june 2019 by kme
How a Rant Against Short Shorts Overturned the ‘Good Ol’ Turtle Boy Club’ - The Chronicle of Higher Education
When Kirsten Hecht read Dodd’s letter, she said, she was "horrified" but considered it a symptom of a larger sickness. Hecht, a Ph.D. candidate at the University of Florida, said most of her male peers are welcoming. Still, some men with little expertise in her area of study have nonetheless tried to explain it to her. Or they’ve expressed concern about her productivity because she’s a mother, she said.
shortshorts  science  conferences 
june 2019 by kme
ensuring.pdf
Ensuring the Longevity of Digital Information
archive  digitization  archival  datastorage  permanence  tipsandtricks  science  article 
january 2019 by kme
HOWTO: Get tenure | http://matt.might.net/
Simpler advice would be: “Find a problem where your passions intersect society’s needs.” The rest will follow.

Doing a good job with teaching is perversely seen as a cardinal sin in some departments.

Focusing on teaching gets interpreted as a lack of dedication to research.

Let’s be clear: refusing to improve one’s teaching is morally unacceptable.

Torturing a captive audience every semester with soul-sapping lectures is criminal theft of tuition.

On metrics

Pre-tenure professors are often bombarded with metrics, targets and benchmarks to hit for tenure.

Everyone has heard horror stories of departments obsessing over specific metrics for tenure, and of the golden yet square pegs that failed to fit into round holes.

Goodhart’s law applies:

“When a measure becomes a target, it ceases to be a good measure.”

And, a quote I once heard on NPR:

“We can’t measure what counts, so we count what we can measure.”

Good departments will find a way of side-stepping metrics to judge what counts.

I realize that few patients or parents have the ability to do what I did, and they never will, until all of academic medicine goes open access.

In computer science, academic paywalls stifle.

In medicine, academic paywalls kill.
science  academia  tenure  phd  highered  advice  education  teaching  openaccess  publishing 
october 2018 by kme
Amy Cuddy’s power pose research is the latest example of scientific overreach. | https://www.slate.com/
If you read the New York Times, watch CBS News, or tune in to TED talks, you will encounter the power pose as solid science with a human touch. These media organizations portray it as a laboratory-tested idea that can help people live their lives better. But TV-approved and newspaper-endorsed social science is for outsiders. Insiders who are aware of the replication crisis in psychology research are suspicious of these sorts of dramatic claims based on small experiments. And you should be too.
powerpose  socialscience  replication  science 
november 2017 by kme
sometimes i'm wrong: life after bem | https://web.archive.org/://sometimesimwrong.typepad.com/wrong/2014/03/life-after-bem.html

let's start with this: bem recommends writing 'the article that makes the most sense now that you have seen the results' (rather than 'the article you planned to write when you designed your study'). it is pretty clear from the rest of this section that bem is basically telling us to invent whatever a priori hypotheses turn out to be supported by our results. indeed, he says this pretty explicitly on the next page:

'contrary to the conventional wisdom, science does not care how clever or clairvoyant you were at guessing your results ahead of time. scientific integrity does not require you to lead your readers through all your wrong-headed hunches only to show - voila! - they were wrongheaded.'

actually, science does care. and scientific integrity does require something along those lines. not that you tell us about all your wrong ideas, but that you not claim that you had the right idea all along if you didn't. if science didn't care, pre-registration and bayesian statistics would not be enjoying the popularity they are today.

the tension here is between sticking to the (messy) facts, and doing some polishing and interpreting for the reader so she doesn't have to wade around a giant pile of unstructured findings. i will give bem credit, he seems aware of this tension. later in the chapter, he writes:

'[...] be willing to accept negative or unexpected results without a tortured attempt to explain them away. do not make up long, involved, pretzel-shaped theories to account for every hiccup in the data.'

so he acknowledged that the results don't need to be perfect (though he seems more worried about the fact that the story you would need to tell to explain the imperfections would itself be ugly and tortured, rather than about the fact that it's important to disclose the imperfections for the sake of scientific rigor).

another (related) step is to stop claiming to have had a priori hypotheses that we didn't actually have. however, this step is trickier because editors and reviewers still want to see the 'correct' hypothesis up front. there are some good reasons for this - it is kind of a waste of time to read an entire intro building up to a hypothesis that turns out to be false. one handy solution for this is to select research questions that are interesting whichever way the results come out - then the intro can present the two competing hypotheses and why each result would be interesting/informative.*** i am told that in some areas of research this is just not possible. all the more reason for everyone to become a personality psychologist. come join us, we never know which way our results will turn out!


From the comments:
When I first encountered Bem's paper I was (quite frankly) appalled. But others had directed me to read it, and I saw no criticisms in the literature about it, so I thought "well, that just must be the way things work."

Then (a few months later) I read a wonderful paper by Arina K. Bones (a great psychologist in her own right) and Navin R. Johnson (http://pps.sagepub.com/content/2/4/406.full.pdf+html). On p. 409 the paper reads "We did not know whether to predict a rapid or slow change in animate associations because of conflicting existing evidence about the malleability of implicit cognition. The fragility of the hypothesis did not pose a difficulty because we proposed a priori to write the article as a good story and as if the ultimate results were anticipated all along (Bem, 2003). After all, this is psychology--only actual scientists would want to read how the process of scientific discovery actually occurred."

Exactly. And now 7 years later, your post is the second time I have seen Bem's advice criticized. I am almost sure to have missed another criticism of it here or there somewhere, but I am still amazed that this is considered mandatory reading, nay, advice, for graduate students in psychology.

I found this quote from Albert Einstein on the internet one day (i.e., I cannot vouch for the accuracy of its attribution; nonetheless I find it useful): "Anyone who doesn't take truth seriously in small matters cannot be trusted in large ones either."

Sometimes science is full of boring details. So be it.
writing  science  research  publication 
november 2017 by kme
sometimes i'm wrong: modus tollens bitches | https://web.archive.org/://sometimesimwrong.typepad.com:80/wrong/2015/01/modus-tollens.html
if my theory (T) is right, my hypothesis (H) is appropriately derived from my theory, and my methods (M) are sound, then i should observe these data (D)

or:
(T ∧ H ∧ M) → D

so now what happens when we observe ~D? well, we no longer have to conclude ~T, because we have two other options: ~H and ~M. that is, instead of concluding that our theory is wrong, we can throw the hypothesis or the methods under the bus.
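Spelled out as an inference (my notation, not the blog's), the modus tollens step is

\[ (T \land H \land M) \rightarrow D, \quad \neg D \;\;\vdash\;\; \neg(T \land H \land M) \;\equiv\; \neg T \lor \neg H \lor \neg M \]

so observing ~D only refutes the conjunction; any one of ~T, ~H, or ~M will do, which is exactly the escape hatch described next.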

~M is especially tempting, because all it requires is to say that there was something wrong with the methods. it's a very easy escape route. blame the subject pool. blame the weather. blame the research assistants.**

2. don't throw the methods under the bus. consider the possibility that the theory or hypothesis is wrong. make it your new year's resolution: i will not write off a null finding or a failed replication unless there is a glaring error -- one that my enemy's mother would agree is an error.

3. when publishing your original results, provide enough detail about all important aspects of the method so that if someone attempts a replication following the procedure you describe, you cannot blame their result on poor methods. the onus is on us as original researchers to specify the things that matter. of course some are too obvious to specify (e.g., make sure your experimenter is wearing clothes), but these are covered by the 'glaring error that your enemy's mother would agree with' clause in #2. if you don't think another researcher in your field could competently carry out a replication with the information you publish, that's on you. if your procedure requires special expertise to carry out, specify what expertise. if you can't, your theory is not falsifiable.

Dowsers are among the most sincere of believers in pseudoscience (compared to, say, cold readers), but when they fail to detect the effect they're looking for, they tend to blame it on perturbances caused by that car or this person's hat. So experimenters spend a very long time checking that every minor detail of their setup is to the dowser's satisfaction, and the latter agrees not to claim that any of the items they've checked was the cause of a fault in the vortex (etc). Then they run the experiment, and of course they get a null result. "Oh," says the dowser (always, always, without fail), "It must have been because the vortex (etc) was perturbed by the bird that flew past/the sun going behind a cloud/whatever".
experimentalmethods  science  research  forthecomments 
november 2017 by kme
sometimes i'm wrong: on flukiness | https://web.archive.org/://sometimesimwrong.typepad.com/wrong/2015/04/on-flukiness.html
first, it seems to discount the possibility that the original finding was a fluke - a false positive that made it look like there is an effect when in fact there isn't. here's an analogy:

null hypothesis: my coin is fair
research hypothesis: my coin is weighted

original study: i flip the coin 20 times and get 15 heads (p = .041)
replication study: i flip the coin another 20 times and get 10 heads (p = 1.0)

do i need to explain why i got 15 heads the first time?

maybe. or maybe the first study was just a fluke. that happens sometimes (4.1% of the time, to be exact).
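A quick sanity check of those numbers (a minimal sketch in Python, assuming SciPy is available; the function name and the rounding are mine, not the blog's):

from scipy.stats import binom

def two_sided_p(heads, flips):
    """Exact two-sided binomial p-value against a fair coin (p = 0.5)."""
    upper = binom.sf(heads - 1, flips, 0.5)   # P(X >= heads) under the null
    lower = binom.cdf(heads, flips, 0.5)      # P(X <= heads) under the null
    return min(1.0, 2 * min(upper, lower))    # symmetric null: double the smaller tail

print(round(two_sided_p(15, 20), 3))  # 0.041 -- the "original study"
print(round(two_sided_p(10, 20), 3))  # 1.0   -- the "replication"

Under a fair coin, a result at least as extreme as 15 heads in 20 flips turns up about 4.1% of the time, which is the fluke rate the post is describing.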

absolutely. and it's also possible the replication is a type II error and the effect is real and robust. but my point is that it's also possible - very, very possible - that the original result was a fluke and there never was any effect. how do we decide among these explanations (no real effect, a robust real effect, a real effect with boundary conditions/moderators)? we don't have to decide once and for all - we shouldn't treat a single study (original or replication) as definitive anyway. but we should weigh the evidence: if the replication study has no known flaws, was well-powered, and the original study had low power and found a barely-significant effect, then i would lean towards believing that the original was a fluke. if the replication study is demonstrably flawed and/or underpowered, then it should be discounted or weighed less. if both studies were well-powered and rigorous, then we should look for moderators, and test them. (actually, i'm totally happy with looking for moderators and testing them anyway, as long as someone else does the work.)

in a recent blog post, sam schwarzkopf made an argument related to the one i'm trying to debunk here. in it, he wrote: "the only underlying theory replicators put forth is that the original findings were spurious and potentially due to publication bias, p-hacking and/or questionable research practices. This seems mostly unfalsifiable."

it's absolutely falsifiable: run a well-powered, pre-registered direct replication of the original study, and get a significant result.
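What "well-powered" means here can be made concrete with the same coin example (a hedged sketch; the function and the sample sizes are my own choices, assuming the original 15/20 estimate were the true bias):

from scipy.stats import binom

def replication_power(n, true_p=0.75):
    """Chance a direct replication with n flips reaches two-sided p < .05
    if the coin really is biased at true_p."""
    # smallest head count that a fair coin would produce with two-sided p < .05
    k_crit = next(k for k in range(n + 1) if 2 * binom.sf(k - 1, n, 0.5) < 0.05)
    return binom.sf(k_crit - 1, n, true_p)    # probability of reaching that count

print(round(replication_power(20), 2))  # ~0.62: 20 flips is far from well-powered
print(round(replication_power(60), 2))  # a larger replication pushes power above 0.9

A 20-flip replication of a truly 0.75-biased coin would itself come up non-significant more than a third of the time, so a convincing direct replication needs more data than the original, not just the same design rerun.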

in a comment on that blog post, uli schimmack put it best:

'[...] often a simple explanation for failed replications is that the original studies capitalized on chance/sampling error. There is no moderator to be found. It is just random noise and you cannot replicate random noise. There is simply no empirical, experimental solution to find out why a particular study at one particular moment in history produced a particular result.'

it seems like that should be all that needs to be said. i feel like this entire blog post should be unnecessary.*** but this idea that those who fail to replicate a finding always need to explain the original result keeps coming up, and i think it's harmful. i think it's harmful because it confuses people about how probability and error work. and it's harmful because it puts yet another burden on replicators, who are already, to my mind, taking way too much shit from way too many people for doing something that should be seen as a standard part of science, and is actually a huge service to the field.**** let's stop beating up on them, and let's stop asking them to come up with theoretical explanations for what might very well be statistical noise.

**** if you think doing a replication is a quick and easy shortcut to fame, glory, and highly cited publications, call me. we need to talk.
research  phacking  pvalue  science  replication 
november 2017 by kme
Sokal affair - Wikipedia
In an interview on the U.S. radio program All Things Considered, Sokal said he was inspired to submit the bogus article after reading Higher Superstition (1994), in which authors Paul R. Gross and Norman Levitt claim that some humanities journals would publish anything as long as it had "the proper leftist thought" and quoted (or was written by) well-known leftist thinkers.[6][7]
science  publication  ethics  fakescience 
february 2017 by kme
How domestication changes species, including the human | Aeon Essays
Keeping pets meant inviting animals into the family. It also created new relationships of inequality. The anthropologist Tim Ingold at the University of Aberdeen in Scotland, who has spent years studying the reindeer herders of Lapland, argues that it is a mistake to regard domestication as a form of progress, from living in opposition to nature to harnessing it for our benefit. In The Perception of the Environment (2000), he notes that foraging peoples generally regard animals as their equals. Hunting is not a form of violence so much as a willing sacrifice on the part of the animal. Pastoralists, on the other hand, tend to regard animals as servants, to be mastered and controlled. Domestication doesn’t entail making wild animals tame, Ingold says. Instead, it means replacing a relationship founded on trust with one ‘based on domination’.
aminals  science  homosapiens  domestication 
december 2016 by kme
Deep time’s uncanny future is full of ghostly human traces | Aeon Ideas
Although ostensibly inert, like Chernobyl’s ‘undead’ isotopes, plastics are in fact intensely lively, leaching endocrine-disrupting chemicals. Single-use plastic might seem to disappear when I dispose of it, but it (and therefore I) will nonetheless continue to act on the environments in which it persists for millennia.
science  anthropocene  time  pollution  plastics  agw  climatechange  thefuture 
november 2016 by kme