
pierredv : statistics   210

What’s next for psychology’s embattled field of social priming - Dec 2019
Many researchers say they now see social priming not so much as a way to sway people’s unconscious behaviour, but as an object lesson in how shaky statistical methods fooled scientists into publishing irreproducible results.
NatureJournal  psychology  reproducibility  replication  statistics 
5 weeks ago by pierredv
The gene-based hack that is revolutionizing epidemiology - Nature news Dec 2019
"Mendelian randomization offers a simple way to distinguish causation from correlation. But are scientists overusing it? "
"... genetics has transformed how people untangle correlation from causation. But it has come to raise epidemiology, not bury it. Genetic differences, it turns out, can help remove confounding variables from analyses, by standing in as proxies for environmental exposure. The technique is called Mendelian randomization."

"In principle, a Mendelian randomization analysis can be done wherever a genetic variant can be found to naturally mimic the effects of an environmental exposure."

"To an economist, Mendelian randomization looks a lot like something called instrumental variable analysis, in which a variable referred to as the instrument is used to help unpick hidden relationships between two other observations."
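A minimal two-stage least squares sketch of that instrumental-variable logic, with entirely made-up effect sizes (the genotype g is a valid instrument by construction here, which real Mendelian randomization has to argue for):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    g = rng.binomial(2, 0.3, n)                   # genotype: 0/1/2 risk alleles (the instrument)
    u = rng.normal(size=n)                        # unobserved confounder
    x = 0.5 * g + u + rng.normal(size=n)          # exposure, shifted by g and by u
    y = 0.2 * x + u + rng.normal(size=n)          # outcome; true causal effect of x is 0.2

    naive = np.polyfit(x, y, 1)[0]                # ~0.68: biased upward by u
    x_hat = np.polyval(np.polyfit(g, x, 1), g)    # stage 1: keep only the g-driven part of x
    causal = np.polyfit(x_hat, y, 1)[0]           # stage 2: ~0.2, the confounding drops out
    print(naive, causal)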
NatureJournal  genetics  epidemiology  causation  causal-inference  statistics  bias 
5 weeks ago by pierredv
Benford’s Law can detect malicious social bots | Golbeck | First Monday
Abstract: "Social bots are a growing presence and problem on social media. There is a burgeoning body of work on bot detection, often based in machine learning with a variety of sophisticated features. In this paper, we present a simple technique to detect bots: adherence with Benford’s Law. Benford’s Law states that, in naturally occurring systems, the frequency of numbers first digits is not evenly distributed. Numbers beginning with a 1 occur roughly 30 percent of the time, and are six times more common than numbers beginning with a 9. In earlier work, we established that Benford’s Law holds for social connections across online social networks. In this paper, we show that this principle can be used to detect bots because they violate the expected distribution. In three studies — an analysis of a large Russian botnet we discovered, and studies of purchased retweets on Twitter and purchased likes on Facebook — we show that bots’ social patterns consistently violate Benford’s Law while legitimate users follow it closely. Our results offer a computationally efficient new tool for bot detection. There are also broader implications for understanding fraudulent online behavior. Benford’s Law is present in many aspects of online social interactions, and looking for violations of the distribution holds promise for a range of new applications."
bots  socialmedia  statistics  maths  FirstMonday 
august 2019 by pierredv
J. Richard Gott - Wikipedia - "Copernicus method" of lifetime estimation
Via David Runciman, Talking Politics, Apr 2019

"Gott first thought of his "Copernicus method" of lifetime estimation in 1969 when stopping at the Berlin Wall and wondering how long it would stand. Gott postulated that the Copernican principle is applicable in cases where nothing is known; unless there was something special about his visit (which he didn't think there was) this gave a 75% chance that he was seeing the wall after the first quarter of its life. Based on its age in 1969 (8 years), Gott left the wall with 50% confidence that it wouldn't be there in 1993 (1969 + 8•(1.5/0.5)).

In fact, the wall was brought down in 1989, and 1993 was the year in which Gott applied his "Copernicus method" to the lifetime of the human race. His paper in Nature was the first to apply the Copernican principle to the survival of humanity...

He made a major effort subsequently to defend his form of the Doomsday argument from a variety of philosophical attacks, and this debate (like the feasibility of closed time loops) is still ongoing. To popularize the Copernicus method, Gott gave The New Yorker magazine a 95% confidence interval for the closing time of forty-four Broadway and Off Broadway productions based only on their opening dates. He was more or less 95% correct."
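The arithmetic generalizes to any confidence level; a small sketch (the 0.5 case reproduces the 1993 figure):

    def gott_interval(past_age, confidence):
        """Copernican estimate: with probability `confidence`, the future
        lifetime lies between past_age*(1-c)/(1+c) and past_age*(1+c)/(1-c)."""
        c = confidence
        return past_age * (1 - c) / (1 + c), past_age * (1 + c) / (1 - c)

    # Berlin Wall, 8 years old in 1969: the 50% upper bound is 8 * 3 = 24
    # more years, i.e. 1993; the Broadway predictions used the 95% version.
    print(gott_interval(8, 0.5))    # (2.67, 24.0)
    print(gott_interval(8, 0.95))   # (0.21, 312.0)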
Wikipedia  probability  statistics  prediction 
april 2019 by pierredv
[pdf] The Nile River Records Revisited - How good were Joseph's predictions? Ghil_NileTalk.pdf
Michael Ghil, Ecole Normale Supérieure, Paris, and University of California, Los Angeles
Egypt  modeling  statistics 
april 2019 by pierredv
Is the FDA Too Conservative or Too Aggressive?: A Bayesian Decision Analysis of Clinical Trial Design, Aug 2015, Vahid Montazerhodjat & Andrew W. Lo
Via Tom Hazlett, Nov 2017

NBER Working Paper No. 21499
Issued in August 2015
NBER Program(s):Health Care, Health Economics

Implicit in the drug-approval process is a trade-off between Type I and Type II error. We explore the application of Bayesian decision analysis (BDA) to minimize the expected cost of drug approval, where relative costs are calibrated using U.S. Burden of Disease Study 2010 data. The results for conventional fixed-sample randomized clinical-trial designs suggest that for terminal illnesses with no existing therapies such as pancreatic cancer, the standard threshold of 2.5% is substantially more conservative than the BDA-optimal threshold of 27.9%. However, for relatively less deadly conditions such as prostate cancer, 2.5% is more risk-tolerant or aggressive than the BDA-optimal threshold of 1.2%. We compute BDA-optimal sizes for 25 of the most lethal diseases and show how a BDA-informed approval process can incorporate all stakeholders’ views in a systematic, transparent, internally consistent, and repeatable manner.
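A toy version of the idea (not the paper's calibrated model): pick the significance threshold that minimizes expected cost, where wrongly rejecting an effective drug for a deadly disease costs far more than wrongly approving an ineffective one. All numbers below are illustrative assumptions.

    from math import sqrt
    from statistics import NormalDist

    Z = NormalDist()

    def expected_cost(alpha, n, effect, p_effective, cost_fp, cost_fn):
        """Expected cost of a one-sided fixed-sample trial at threshold alpha."""
        z_crit = Z.inv_cdf(1 - alpha)
        power = 1 - Z.cdf(z_crit - effect * sqrt(n))
        # Type I: approve an ineffective drug; Type II: reject an effective one.
        return (1 - p_effective) * alpha * cost_fp + p_effective * (1 - power) * cost_fn

    alphas = [i / 1000 for i in range(1, 500)]
    # cost_fn >> cost_fp (terminal illness, no alternative) pushes the optimum
    # far above the conventional 2.5%; reverse the costs and it drops below.
    best = min(alphas, key=lambda a: expected_cost(a, n=100, effect=0.25,
                                                   p_effective=0.5,
                                                   cost_fp=1.0, cost_fn=10.0))
    print(best)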
NBER  medicine  risk-assessment  probability  statistics  decision-making  Bayesian  research  healthcare  cancer  BDA  FDA 
december 2018 by pierredv
MIMO testbed with an insight into signal strength distribution around transmitter/receiver sites - IEEE Conference Publication
Abstract:
This paper presents a high precision test-bed to obtain an insight into the operation of a multiple element antenna (MEA) communication system in an indoor environment. In this system, transmitter and receiver are equipped with 180° or 90° 3 dB hybrids whose two output ports are terminated with co-polar monopole antennas. By feeding the signal to one of the remaining two ports of the hybrids, various radiation patterns are established. These in turn create various communication channels formed by the MEA and a surrounding scattering environment. The testbed allows for signal strength measurements around the receiver/transmitter sites for various radiation pattern modes generated by the transmitting and receiving array antennas operating in conjunction with the hybrids. Details concerning mechanical, electrical and electronics hardware and associated measurement software are described. Initial measurement results in the ISM 2.4 GHz band for a 2×2 MIMO system that includes Bluetooth modules equipped with planar monopoles and 180°/90° hybrids are given.
IEEE-Xplore  MIMO  antennas  signal-strength  statistics 
october 2018 by pierredv
South Florida’s Hurricane Building Code is Strong—And North Florida’s Could Be Stronger by Bob Henson | Category 6 | Weather Underground
"Figure 1 (below) shows the 3-second wind gusts used in Florida since 2010 to set the minimum building code that applies to most homes and other structures apart from hospitals and other health care facilities. These values were calculated based on extensive computer modeling and observations, drawing in part on the state’s multiple hurricane landfalls in 2004. They’re designed to represent the highest gusts one would expect to recur at a given point in a typical 700-year period."

Can apply to spectrum statistics: uses combination of "extensive computer modeling and observation"

From Sun Sentinel quote: '' Insurance Journal added: “The shift toward less rigorous codes is driven by several factors, experts say: Rising anti-regulatory sentiment among state officials, and the desire to avoid anything that might hurt home sales and the tax revenue that goes with them. And fierce lobbying from home builders.” ''
Wunderground  risk-assessment  weather  building-codes  interference  statistics 
october 2018 by pierredv
What is the relationship between R-squared and p-value in a...
The R-square value tells you how much variation is explained by your model: an R-square of 0.1 means the model explains 10% of the variation in the data, and the greater the R-square, the better the model. The p-value, by contrast, comes from the F-statistic test of the hypothesis that "the fit of the intercept-only model and your model are equal". If the p-value is less than the significance level (usually 0.05), your model fits the data significantly better than the intercept-only model.
Thus you have four scenarios: high or low R-square, crossed with a significant or non-significant p-value.
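One of those scenarios (low R-square, tiny p-value) is easy to demonstrate on synthetic data; a sketch using scipy's linregress, whose rvalue squared is the R-square:

    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)
    y = 0.1 * x + rng.normal(size=10_000)   # weak but real effect

    fit = linregress(x, y)
    print(fit.rvalue ** 2)   # ~0.01: the model explains ~1% of the variation
    print(fit.pvalue)        # tiny: yet the fit is highly "significant"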
statistics 
may 2018 by pierredv
Probability of precipitation - Wikipedia
According to the U.S. National Weather Service (NWS), PoP is the probability of exceedance that more than 0.01 inches (0.25 mm) of precipitation will fall in a single spot, averaged over the forecast area. This can be expressed mathematically:

PoP = C × A
C = the confidence that precipitation will occur somewhere in the forecast area.
A = the percent of the area that will receive measurable precipitation, if it occurs at all.
For instance, if there is a 100% probability of rain covering one half of a city, and a 0% probability of rain on the other half of the city, the PoP for the city would be 50%. A 50% chance of a rainstorm covering the entire city would also lead to a PoP of 50%. The PoP thus usually expresses a combination of degree of confidence and geographic coverage.
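In code the definition is just a product; both of the article's examples give 0.5:

    def pop(confidence, area_fraction):
        """NWS PoP: confidence that precipitation occurs somewhere in the area,
        times the fraction of the area that would receive it."""
        return confidence * area_fraction

    print(pop(1.0, 0.5))   # certain rain over half the city -> 0.5
    print(pop(0.5, 1.0))   # 50% chance of rain over the whole city -> 0.5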
weather  statistics  probability 
february 2018 by pierredv
Dying for a Paycheck: These Jobs Are More Dangerous Than Military Service
"82 servicemembers per 100,000 have died each year to all possible causes, and the leading cause of death is ... [accident related, at] 40.8 deaths per 100,000 servicemembers per year since 1980."

"Logging: 127.8 deaths per 100,000 workers in 2012
Fishing: 117.0 deaths per 100,000 workers in 2012"

Per http://www.governing.com/gov-data/law-enforcement-fatality-rates-by-state.html, South Dakota had the highest combined death rates for police officers, at 22 per 100,000 officers

Per https://calwatchdog.com/2013/01/23/firefighter-one-of-nations-safest-jobs/, "Firefighters die at a rate of 2.5 per 100,000 workers, which is slightly above the rate for cashiers (1.6)."

"Police officers died at the rate of 18.6 per 100,000"
statistics  mortality  military  police  firefighting 
february 2018 by pierredv
New companies needed to maintain small satellite market growth - SpaceNews.com Jan 2018
"The number of small satellites launched in 2017 exceeded even the most optimistic forecasts, but continued growth of the market will require new companies to start deploying constellations in the next few years."

"The increase represents a rebound after two years of declines in the number of such satellites launches, which the company said was linked to a lack of launch capacity from launch failures and other delays."

"That growth was also due, in large part, to a single satellite operator, Planet. "
SpaceNews  space  small-sats  launch  Planet  SpaceWorks  data  statistics 
january 2018 by pierredv
How statistical deception created the appearance that statins are safe and effective in primary and secondary prevention of cardiovascular disease: Expert Review of Clinical Pharmacology: Vol 8, No 2
David M Diamond & Uffe Ravnskov

https://doi.org/10.1586/17512433.2015.1012494

Abstract: "We have provided a critical assessment of research on the reduction of cholesterol levels by statin treatment to reduce cardiovascular disease. Our opinion is that although statins are effective at reducing cholesterol levels, they have failed to substantially improve cardiovascular outcomes. We have described the deceptive approach statin advocates have deployed to create the appearance that cholesterol reduction results in an impressive reduction in cardiovascular disease outcomes through their use of a statistical tool called relative risk reduction (RRR), a method which amplifies the trivial beneficial effects of statins. We have also described how the directors of the clinical trials have succeeded in minimizing the significance of the numerous adverse effects of statin treatment."

Per https://www.sciencedaily.com/releases/2015/02/150220110850.htm, "According to Diamond and Ravnskov, statins produce a dramatic reduction in cholesterol levels, but they have "failed to substantially improve cardiovascular outcomes." They further state that the many studies touting the efficacy of statins have not only neglected to account for the numerous serious adverse side effects of the drugs, but supporters of statins have used what the authors refer to as "statistical deception" to make inflated claims about their effectiveness."
statins  ASCVD  disease  health  healthcare  statistics 
december 2017 by pierredv
What is ‘stationarity’, and why does it matter? - martingeddes
"[stationarity] Technically it is the absence of change in the probability distribution of some random variable (in this case of packet loss and delay). You can think of it as being the ‘predictability’ of the network transport."
Martin-Geddes  statistics  stationarity  broadband 
november 2017 by pierredv
Stationarity is the new speed - Martin Geddes, Oct 2017
"The goal of this presentation is to share exemplars of important broadband Internet access performance phenomena. In particular, we highlight the critical role of stationarity. When they have non-stationarity, networks are useless for most applications. We show real-world examples of both stationarity and non-stationarity, and discuss the implications for broadband stakeholders. These phenomena are only visible when using state-of-the-art high-fidelity metrics and measures that capture instantaneous flow"
Slideshare  Martin-Geddes  broadband  statistics 
october 2017 by pierredv
Earth stations on moving platforms - IEEE Xplore Document
Via Dale Hatfield
Enrique Cuevas, Vijitha Weerackody et al
Abstract:
Earth stations on moving platforms (ESOMPs) are a new generation of satellite terminals designed to provide on-the-move broadband communication services to land vehicles, aircraft, and ships. ESOMPs use very small antennas and require tracking systems to maintain accurate pointing to the target satellite. Because they operate while moving, there may be instances when antenna mispointing may produce interference to other satellites or other radio systems. To account for pointing errors and other time-varying characteristics of a network of ESOMPs, it is necessary to use statistical approaches for interference analysis. This paper provides an overview of ESOMPs, their technical and operational characteristics, statistical approaches for interference analysis, and the standards and regulatory challenges that must be addressed for their successful operation.
satellite  Interference  statistics 
may 2017 by pierredv
Cholesterol wars: Does a pill a day keep heart attacks away? | New Scientist issue 3112, Feb 2017
"For a start, heart attacks may have halved in the JUPITER trial, but the absolute incidence of heart attacks in the study population was low anyway. Only 99 people had a fatal heart attack during the trial period, 31 of whom were taking the statin. Viewed that way, less than 0.5 per cent of the people treated with rosuvastatin benefited, casting a different light on the drug’s effectiveness.

Similar caveats arise in other analyses. As highlighted in a 2014 editorial in the Annals of Internal Medicine, for example, two meta-analyses of studies from 2012 and 2013 managed to come to opposite conclusions about statins’ effectiveness, despite the mortality levels they found differing by less than half a per cent."

"An alternative measure of a drug’s efficacy is “number needed to treat” (NNT), the number of people that have to be given a therapy for a specified time for one to benefit"

"Muscle pain, or myalgia, is the most commonly cited side-effect of statins. "

"Last year, modelling of available data by Judith Finegold at Imperial College London showed that a 50-year-old, non-smoking man without diabetes and with average cholesterol and blood pressure will increase his life expectancy by seven months on average after starting preventative statin therapy. But that average is highly misleading, Finegold says: it disguises the fact that 7 of that 100 will gain an average of 99 months (8.25 years) of life – while the remaining 93 get nothing at all."
ASCVD  heart  health  cholesterol  statins  NewScientist  NNT  statistics  myalgia  side-effects 
may 2017 by pierredv
Significant Digits: Responsible Use of Quantitative Information - European Commission
In this workshop we will review a seminal essay by Andrea Saltelli and Mario Giampietro, The Fallacy of Evidence Based Policy. That paper contains positive recommendations for the development of a responsible quantification. The conference will be devoted to the analysis and development of those ideas.
EU  statistics  research  workshop  report  policy  science  Saltelli  Giampietro  economics 
october 2016 by pierredv
Significant digits - Information policy - EU Bookshop
Responsible use of quantitative information : inspirational workshop 2
On 9-10 June 2015 the DG JRC organised a workshop, "Significant Digits: Responsible Use of Quantitative Information", mostly targeted at European Commission colleagues, which invited reflection on the problem of the irresponsible misuse of quantitative information in policy-relevant matters.

Corporate author(s): European Commission, Joint Research Centre

Private author(s): Ângela Guimarães Pereira, Jerome Ravetz, Andrea Saltelli

Themes: Information policy, Social sciences research

Target audience: Scientific
Key words: information society, information policy, information analysis, quantitative analysis, social sciences, information processing, research report
EU  statistics  research  workshop  report 
october 2016 by pierredv
[pdf] Pay no attention to the model behind the curtain - Philip Stark
To appear in Significant Digits: Responsible Use of Quantitative Information, Andrea Saltelli and Ângela Guimarães Pereira, Eds.
statistics  * 
october 2016 by pierredv
Taking the American Pulse This Election Season | RAND
Background story
"For the past decade, a RAND survey has asked and answered that question on topics as diverse as the economic crisis, health care reform, and the political force of one Donald J. Trump.
It's called the RAND American Life Panel, and whereas many surveys present snapshots of public opinion, this one offers a stop-motion film of changing tastes and evolving attitudes. That approach helped make it one of the most accurate surveys in the 2012 presidential elections—and it's about to face another test in the crucible of 2016."
polling  surveys  RAND  USA  politics  statistics 
may 2016 by pierredv
A new kind of logic: How to upgrade the way we think
Example of Simpson's paradox: "Say a clinical trial involving a large group of people shows that a new drug is effective at treating an illness. Break things down, though, and it turns out that the recovery rate is lower for men who got the drug compared with those who didn’t – and it is also lower for women. . . the reason is that the drug does not cause better recovery – it’s actually your sex that does"
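Made-up numbers make the reversal concrete; a sketch where the drug looks better in aggregate but worse within each sex (treatment here goes mostly to men, who recover more often regardless):

    # (recoveries, n) for treated and control, per group
    groups = {
        "men":   {"treated": (700, 800), "control": (80, 87)},
        "women": {"treated": (20, 80),   "control": (200, 700)},
    }

    for arm in ("treated", "control"):
        rec = sum(g[arm][0] for g in groups.values())
        n = sum(g[arm][1] for g in groups.values())
        print(arm, round(rec / n, 2))   # aggregated: treated wins, 0.82 vs 0.36

    for name, g in groups.items():
        print(name,
              round(g["treated"][0] / g["treated"][1], 2),
              round(g["control"][0] / g["control"][1], 2))
        # within each sex the control arm does better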

Pearl's "causal structural model": "Rather than combing a data set for things you think might be cause and effect, the algorithms instead create a hypothetical set of relationships and see if they fit with the data in every instance."It" can be a mammoth task"; reminds me of computational complexity of Tononi's integrated information theory (IIT) of consciousness.
NewScientist  statistics  causality  correlation  Judea-Pearl  paradox 
april 2016 by pierredv
ResearchGate paper: Correlations between the Interference Statistics of Finnish Radio Systems and EMC Market Surveillance In Finland
Abstract: In this paper two sets of interference statistics have been compared against the EMC market surveillance statistics compiled by the Safety Technology Authority (TUKES). The main focus has been on electrical equipment. The aim of the paper is to find out whether interference statistics can usefully be applied when evaluating the effectiveness of EMC market surveillance and when allocating EMC market surveillance resources. The costs of EMC market surveillance have also been compared against the costs of resolving interference problems.
interference  statistics 
march 2016 by pierredv
Climate change means the flood defence rule book needs a rewrite - New Scientist - Opinion 16 Jan 2016
"The chaotic behaviour of weather systems makes it impossible to accurately predict rainfall, river flows and the like more than a few weeks in advance. However, hydrologists tend to assume that these variables fluctuate randomly in the long run, which means that their average value, or the probability of exceeding a given threshold, can be estimated accurately from lots of observations. What’s more, these results do not change over time – a property known as “stationarity”– and so form the basis of flood defence plans."

But some hydrologists are saying that climate change => stationarity is dead.
NewScientist  opinion  climate-change  modeling  statistics  weather 
march 2016 by pierredv
Statisticians issue warning over misuse of P values : Nature News & Comment March 2016
"Misuse of the P value — a common test for judging the strength of scientific evidence — is contributing to the number of research findings that cannot be reproduced, the American Statistical Association (ASA) warns in a statement released today. The group has taken the unusual step of issuing principles to guide use of the P value, which it says cannot determine whether a hypothesis is true or whether results are important."
statistics  NatureJournal 
march 2016 by pierredv
When U.S. air force discovered the flaw of averages | Toronto Star
"In the early 1950s, a young lieutenant realized the fatal flaw in the cockpit design of U.S. air force jets. Todd Rose explains in an excerpt from his book, The End of Average."
averages  statistics  aviation 
february 2016 by pierredv
Science Stories - Unexpected - YouTube
"We need mathematical help to tell the difference between a real discovery and the illusion of one. Fellow of the Royal Society and future President of the Royal Statistical Society, Sir David Spiegelhalter visits Dr Nicole Janz to discuss reproducibility in scientific publications."
RoyalSociety  David-Spiegelhalter  reproducibility  video  statistics 
january 2016 by pierredv
Secret source code pronounces you guilty as charged | Ars Technica
"The results from a Pennsylvania company's TrueAllele DNA testing software have been used in roughly 200 criminal cases, from California to Florida, helping put murderers and rapists in prison. Criminal defense lawyers, however, want to know whether it's junk science. Defense attorneys have routinely asked, and have been denied, access to examine the software's 170,000 lines of source code in a bid to challenge the authenticity of its conclusions. The courts generally have agreed with Cybergenetics, the company behind TrueAllele, that an independent examination of the code is unwarranted, that the code is a proprietary trade secret, and disclosing it could destroy the company financially. A new challenge, pending before the California Supreme Court, concerns some of the company's latest conclusions"
DNA-testing  statistics  ArsTechnica 
october 2015 by pierredv
'Great Pause' Among Prosecutors As DNA Proves Fallible : NPR
"Over the summer, the Texas Forensic Science Commission, which sets standards for physical evidence in state courts, came to an unsettling conclusion: There was something wrong with how state labs were analyzing DNA evidence. ... But when a state lab reran the analysis of a DNA match from a murder case about to go to trial in Galveston, Texas, it discovered the numbers changed quite a bit. Under the old protocol, says defense lawyer Roberto Torres, DNA from the crime scene was matched to his client with a certainty of more than a million to one. ... "When they retested it, the likelihood that it could be someone else was, I think, one in 30-something, one in 40. So it was a significant probability that it could be someone else," Torres says." "What happened in Texas, he says, is that labs have been using cutting-edge "testing kits" that can extract tiny traces of DNA from crime scenes, but those samples were then analyzed with math that's not suited to "weak" samples that combine "
NPR  probability  statistics  law 
october 2015 by pierredv
Dempster–Shafer theory - Wikipedia, the free encyclopedia
"The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory (DST), is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories."
statistics  probability  epistemology  wikipedia 
september 2015 by pierredv
The Average - 2014 : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? - Edge.org
"Our focus on averages should be retired. Or, if not retired, we should give averages an extended vacation. During this vacation, we should catch up on another sort of difference between groups that has gotten short shrift: we should focus on comparing the difference in variance (which captures the spread or range of measured values) between groups."
Edge.org  statistics 
september 2015 by pierredv
Reproducibility - 2014 : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? - www.edge.org
Victoria Stodden distinguishes 3 kinds of reproducibility: empirical, computational, statistical.
Edge.org  reproducibility  statistics  method  scientific-method 
september 2015 by pierredv
Chance: Peace talks in the probability wars - physics-math - 16 March 2015 - Control - New Scientist
"statisticians are slowly coming to a new appreciation: in a world of messy, incomplete information, the best way might be to combine the two very different worlds of probability – or at least mix them up a little." "a crucial distinction between two different sorts of uncertainty: stuff we don't know, and stuff we can't know" Can't know -> frequentist Don't know -> divides frequentists and Bayesians Strengths & Weaknesses "Where data points are scant and there is little chance of repeating an experiment, Bayesian methods can excel in squeezing out information." -- e.g. astrophysics "Where [well-grounded theories that provide good priors] don't exist, a Bayesian analysis can easily be a case of garbage in, garbage out." "Frequentism in general works well where plentiful data should speak in the most objective way possible." -- e.g. Higgs boson "frequentism's main weakness: the way it ties itself in knots through its disdain for all don't-know uncertainties."
statistics  bayesian 
june 2015 by pierredv
Anscombe's quartet - Wikipedia, the free encyclopedia
"Anscombe's quartet comprises four datasets that have nearly identical simple statistical properties, yet appear very different when graphed" via Margaret Schwertner
statistics  visualization 
may 2015 by pierredv
Statistics: P values are just the tip of the iceberg : Nature News & Comment
"Ridding science of shoddy statistics will require scrutiny of every step, not merely the last one, say Jeffrey T. Leek and Roger D. Peng."
statistics  comment  NatureJournal 
april 2015 by pierredv
Doing Bayesian Data Analysis: Now in JAGS! Now in JAGS!
"I have created JAGS versions of all the BUGS programs in Doing Bayesian Data Analysis. Unlike BUGS, JAGS runs on MacOS, Linux, and Windows. JAGS has other features that make it more robust and user-friendly than BUGS. I recommend that you use the JAGS versions of the programs. "
bayesian  statistics  books  JAGS  [R]  textbooks  Kruschke 
april 2015 by pierredv
Science’s Significant Stats Problem - Issue 4: The Unlikely -- Nautil.us
-- Columbia University political scientist and statistician Andrew Gelman puts it bluntly: “The scientific method that we love so much is a machine for generating exaggerations.”
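Gelman's point is easy to simulate: condition on statistical significance and the published estimates of a small true effect come out inflated. A sketch with illustrative numbers:

    import numpy as np
    from scipy.stats import ttest_1samp

    rng = np.random.default_rng(0)
    true_effect, n = 0.2, 20
    significant = []
    for _ in range(5000):
        sample = rng.normal(true_effect, 1.0, n)
        if ttest_1samp(sample, 0).pvalue < 0.05:
            significant.append(sample.mean())

    print(np.mean(significant))   # ~0.57: filtering on p < 0.05 nearly
                                  # triples the true effect of 0.2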
statistics  science  Nautil.us 
march 2015 by pierredv
Science’s Significant Stats Problem - Issue 4: The Unlikely - Nautilus
"Researchers’ rituals for assessing probability may mislead as much as they enlighten." BY TOM SIEGFRIED
statistics  nautil.us  science  scientific-method  research 
february 2015 by pierredv
Statistical and causal approaches to machine learning - YouTube
Published on Dec 16, 2014 "Where would you take machine learning? 2014's Milner Award winner Professor Bernhard Schölkopf, Max Planck Institute for Intelligent Systems, talks through basic concepts of machine learning to pioneering research now widely used in science and industry."
RoyalSociety  statistics  causality  video  machine-learning  lectures 
december 2014 by pierredv
Cause And Effect: The Revolutionary New Statistical Test That Can Tease Them Apart — The Physics arXiv Blog — Medium
"The basis of the new approach is to assume that the relationship between X and Y is not symmetrical. In particular, they say that in any set of measurements there will always be noise from various cause. The key assumption is that the pattern of noise in the cause will be different to the pattern of noise in the effect. That’s because any noise in X can have an influence on Y but not vice versa."
statistics  causality  arxiv 
december 2014 by pierredv
Bayes rule
Nice short explanation of Bayes rule
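For the record, the rule and the standard screening example as a short sketch:

    # P(disease | positive) = P(positive | disease) P(disease) / P(positive)
    def posterior(prior, sensitivity, false_positive_rate):
        p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
        return sensitivity * prior / p_positive

    # 99%-sensitive test, 5% false positives, 1% base rate:
    print(posterior(0.01, 0.99, 0.05))   # ~0.17, far below the naive 0.99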
statistics  bayes-rule  probability 
november 2014 by pierredv
Science joins push to screen statistics in papers : Nature News & Comment July 2014
"The journal Science is adding an extra round of statistical checks to its peer-review process, editor-in-chief Marcia McNutt announced today. The policy follows similar efforts from other journals, after widespread concern that basic mistakes in data analysis are contributing to the irreproducibility of many published research findings.. . . Working with the American Statistical Association, the journal has appointed seven experts to a statistics board of reviewing editors (SBoRE). Manuscript will be flagged up for additional scrutiny by the journal’s internal editors, or by its existing Board of Reviewing Editors (more than 100 scientists whom the journal regularly consults on papers) or by outside peer reviewers. The SBoRE panel will then find external statisticians to review these manuscripts."
Science  statistics  research  NatureJournal 
july 2014 by pierredv
The Internet in Real-Time
from Sigurdur Hjalmarsson via Gudjon Mar Gudjonsson
web  statistics  visualization 
may 2014 by pierredv
Maths: Not Ofcom's strong suit? - Wireless Waffle May 2014
See this follow-up for the Wireless Waffle recalculation: http://www.wirelesswaffle.com/index.php?entry=entry140421-123413 "[I]t seems that following the ESOA submission to Ofcom concerning the apparent errors in the RealWireless study on spectrum demand for mobile data reported by Wireless Waffle on 15 February, the offending report has now been re-issued (note the publication date is now 11 April 2014) with the axis on Figure 44, which shows data traffic density, re-labelled from 'PB/month/km²' (PetaBytes) to 'TB/month/km²' (TeraBytes), thereby reducing the calculated data traffic by a factor of 1000 and now making the document internally consistent." "But... even a 10th grade student could complete the sum that is behind the ITU data forecasts and realise that the axis should have read 'PB' all along (and therefore that the internal inconsistencies are not fixed and that the data in the ITU and RealWireless models is still hundreds of times too large)."
Ofcom  statistics  analysis  spectrum  spectrum-shortage  ESOA  Wireless  Waffle  RealWireless 
may 2014 by pierredv
Hidden depths: Brain science is drowning in uncertainty - life - 17 October 2013 - New Scientist
"The edifice of research being built with brain scans is flawed. It’s time to rethink the approach to build a more complete understanding of the mind" Built on critiques by Ioannidis. See also editorial at http://www.newscientist.com/article/mg22029391.500-neuroscience-wrongs-will-make-a-right.html
neuroscience  false-positives  Gallant  method  NewScientist  research  Ioannidis  statistics 
november 2013 by pierredv
Gun Homicide Rate Down 49% Since 1993 Peak; Public Unaware | Pew Social & Demographic Trends
"National rates of gun homicide and other violent gun crimes are strikingly lower now than during their peak in the mid-1990s, paralleling a general decline in violent crime, according to a Pew Research Center analysis of government data. Beneath the long-term trend, though, are big differences by decade: Violence plunged through the 1990s, but has declined less dramatically since 2000."
Pew  Research  Center  homicide  crime  media  statistics 
october 2013 by pierredv
Sampling in Statistical Inference - statistic vs. parameter
Quote: The use of randomization in sampling allows for the analysis of results using the methods of statistical inference. Statistical inference is based on the laws of probability, and allows analysts to infer conclusions about a given population based on results observed through random sampling. Two of the key terms in statistical inference are parameter and statistic: A parameter is a number describing a population, such as a percentage or proportion. A statistic is a number which may be computed from the data observed in a random sample without requiring the use of any unknown parameters, such as a sample mean.
tutorial  ex  Yale  statistics 
september 2013 by pierredv
Bias | ACM Interactions - Jonathan Grudin
"Confirmation bias is built into us. Ask me to guess what a blurry image is, then bring it slowly into focus. When it has become clear enough to be recognizable by someone seeing it this way for the first time, I will still not recognize it. My initial hypothesis blinds me. Quantitative studies are no guard against these problems. In fact, they often exhibit a seductive form of confirmation bias: inference of a causal relationship from correlational data, a major problem in conference and journal submissions I have reviewed over the years. Am I biased about the importance of confirmation bias? I’m convinced that we must relentlessly seek it out in our own work and that of our colleagues, knowing that we won’t always succeed. Perhaps now I see it everywhere and overlook more significant obstacles. So decide how important it is, and be vigilant."
qualititative  methods  experiment  grounded  theory  confirmation  bias  correlation  vs  causation  Francis  Bacon  bias  statistics 
august 2013 by pierredv
Measuring risk: Snakes and ladders | The Economist book review
The Norm Chronicles: Stories and Numbers About Danger. By Michael Blastland and David Spiegelhalter. Introduces the idea of the "MicroMort".
risk  books  reviews  TheEconomist  book  reviews  statistics 
july 2013 by pierredv