

Forecasting for data-driven decision making
forecasting  python  from twitter
3 days ago by Cdr6934
OSF Preprints | Explanation, prediction, and causality: Three sides of the same coin?
In this essay we make four interrelated points. First, we reiterate previous arguments (Kleinberg et al., 2015) that forecasting problems are more common in social science than is often appreciated. From this observation it follows that social scientists should care about predictive accuracy in addition to unbiased or consistent estimation of causal relationships. Second, we argue that social scientists should be interested in prediction even if they have no interest in forecasting per se. That is, whether they do so explicitly or not, causal claims necessarily make predictions; thus it is both fair and arguably useful to hold them accountable for the accuracy of the predictions they make. Third, we argue that prediction, used in either of the above two senses, is a useful metric for quantifying progress. Important differences between social science explanations and machine learning algorithms notwithstanding, social scientists can still learn from approaches like the Common Task Framework (CTF), which have successfully driven progress in certain fields of AI over the past 30 years (Donoho, 2015). Finally, we anticipate that as the predictive performance of forecasting models and explanations alike receives more attention, it will become clear that it is subject to some upper limit which lies well below deterministic accuracy for many applications of interest (Martin et al., 2016). Characterizing the properties of complex social systems that lead to higher or lower predictive limits therefore poses an interesting challenge for computational social science.
social_science  prediction  explanation  forecasting  causality  philosophy_of_science  duncan.watts  for_friends  teaching 
5 days ago by rvenkat
Statistical and Machine Learning forecasting methods: Concerns and ways forward
"Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. ... After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward."
statistics  machinelearning  forecasting 
12 days ago by aapl
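The comparison described in that abstract can be illustrated with a minimal sketch: fit one simple statistical forecaster (simple exponential smoothing) and one ML-style forecaster (a linear autoregression fit by least squares) on a synthetic series, then score post-sample accuracy with sMAPE. All names, parameters, and the toy data here are illustrative assumptions, not the paper's actual methods or results.

```python
import numpy as np

# Synthetic monthly-style series: trend + seasonality + noise (invented data).
rng = np.random.default_rng(0)
t = np.arange(120)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120)
train, test = series[:100], series[100:]

def ses_forecast(y, horizon, alpha=0.3):
    """Simple exponential smoothing: flat forecast from the final level."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return np.full(horizon, level)

def ar_forecast(y, horizon, lags=12):
    """Linear autoregression fit by ordinary least squares, rolled forward."""
    X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])
    coef, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(len(X)), X]), y[lags:], rcond=None)
    history = list(y[-lags:])
    preds = []
    for _ in range(horizon):
        preds.append(coef[0] + coef[1:] @ np.array(history[-lags:]))
        history.append(preds[-1])
    return np.array(preds)

def smape(actual, forecast):
    """Symmetric mean absolute percentage error (0-200 scale)."""
    return 200 * np.mean(np.abs(forecast - actual) /
                         (np.abs(actual) + np.abs(forecast)))

for name, fc in [("SES", ses_forecast(train, len(test))),
                 ("AR(12)", ar_forecast(train, len(test)))]:
    print(f"{name}: sMAPE = {smape(test, fc):.2f}")
```

On real benchmark data the paper reports the opposite of what one might expect: the statistical methods dominated; this toy setup only shows the shape of such a post-sample comparison, not its outcome.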
Perceptions of probability
It is interesting that some phrases (for example, "We believe," "Likely," and "Probable") have the same median value but wildly different interquartile ranges
probability  visualization  language  analysis  forecasting  statistics 
12 days ago by yorksranter
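The observation above (same median, very different interquartile range) is easy to reproduce with a small sketch. The response values below are invented for illustration, not drawn from the bookmarked survey data.

```python
import numpy as np

# Two hypothetical sets of numeric probability estimates for two phrases.
# Both share a median of 72, but the spread differs sharply.
likely  = np.array([60, 65, 70, 72, 75, 78, 80])  # tightly clustered
believe = np.array([30, 50, 60, 72, 85, 90, 95])  # same median, wide spread

for name, resp in [("Likely", likely), ("We believe", believe)]:
    q1, med, q3 = np.percentile(resp, [25, 50, 75])
    print(f"{name!r}: median={med:.0f}, IQR={q3 - q1:.1f}")
```

The median alone would suggest the two phrases mean the same thing; the IQR reveals how much less consensus there is about one of them.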
RT : Results From Comparing Classical and #MachineLearning Methods for Time Series #Forecasting:…
Forecasting  MachineLearning  from twitter_favs
15 days ago by cdrago