
data_mining

[2002.05193] A Hierarchy of Limitations in Machine Learning
"All models are wrong, but some are useful", wrote George E. P. Box (1979). Machine learning has focused on the usefulness of probability models for prediction in social systems, but is only now coming to grips with the ways in which these models are wrong---and the consequences of those shortcomings. This paper attempts a comprehensive, structured overview of the specific conceptual, procedural, and statistical limitations of models in machine learning when applied to society. Machine learning modelers themselves can use the described hierarchy to identify possible failure points and think through how to address them, and consumers of machine learning models can know what to question when confronted with the decision about if, where, and how to apply machine learning. The limitations go from commitments inherent in quantification itself, through to showing how unmodeled dependencies can lead to cross-validation being overly optimistic as a way of assessing model performance.
machine_learning  data_mining  ethics  philosophy_of_technology  sociology_of_technology  via:cshalizi 
5 days ago by rvenkat
Should We Trust Algorithms? · Harvard Data Science Review
"There is increasing use of algorithms in the health care and criminal justice systems, and corresponding increased concern with their ethical use. But perhaps a more basic issue is whether we should believe what we hear about them and what the algorithm tells us. It is illuminating to distinguish between the trustworthiness of claims made about an algorithm, and those made by an algorithm, which reveals the potential contribution of statistical science to both evaluation and ‘intelligent transparency.’ In particular, a four-phase evaluation structure is proposed, parallel to that adopted for pharmaceuticals."
to:NB  algorithmic_fairness  statistics  data_mining  spiegelhalter.david  to_teach:data-mining  to_teach:statistics_of_inequality_and_discrimination 
15 days ago by cshalizi
Hollywood’s Next Great Studio Head Will Be a Computer
Evidence that data-mining social media is actually better at prediction than 1930s-vintage audience research is conspicuously absent from this.
Also, it misses the equilibrium point: suppose data-analytics firm X can improve predictions about how popular a film will be, and this would be worth $Y to a studio. A risk-neutral studio will pay up to $Y-\epsilon for this information, and be no better off. (And, of course, predictions are _also_ an experience good...)
movies  marketing  data_mining  have_read  shot_after_a_fair_trial 
4 weeks ago by cshalizi
How Does Natural Language Processing Work in Practice? An Overview
Natural Language Processing (NLP, also known in German as Computerlinguistik) is considered a subfield of machine learning and linguistics.

In principle, NLP is about extracting and processing information contained in natural languages. In NLP, the computer converts natural language into sequences of numbers, which it can in turn use to draw conclusions about our world. In short, NLP allows the computer to process our language in its various forms.

A more detailed definition of NLP was given on the Data Science Blog by Christopher Kipp.

In this post, by contrast, I will give an overview of the specific steps of NLP as a process, for NLP proceeds in several phases that follow one another and can partly be understood as a cycle. In their fundamentals these phases are similar for every NLP application, be it building a chatbot or sentiment analysis.

1. Data Cleaning / Normalization

In this phase the raw language data is extracted from its original format, so that in the end only plain text data without formatting remains.

For example, the text data for our analysis may come from web pages and, after collection, be embedded in HTML code.

The text of such an example page is still embedded in an HTML context. The first step must therefore be to strip the various HTML tags from the text.
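A minimal sketch of this cleaning step in Python, assuming the requests and BeautifulSoup (bs4) libraries are installed; the URL is only a placeholder:

    import requests
    from bs4 import BeautifulSoup

    # Fetch a page and keep only its visible text.
    html = requests.get("https://example.com/article").text
    soup = BeautifulSoup(html, "html.parser")

    # Script and style blocks contain no natural language; drop them.
    for tag in soup(["script", "style"]):
        tag.decompose()

    raw_text = soup.get_text(separator=" ", strip=True)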


2. Tokenization and Normalization

After the first step the result is, ideally, plain text, which however still contains language elements such as periods, commas, and upper and lower case.

This is where the next step comes in: removing the punctuation from the text. In this way the text is reduced to its word components, the so-called tokens.

In addition to this step, upper and lower case can also be removed (normalization). Above all, this saves computing capacity.

This turns the following passage:

In this way we can aggregate the data and analyze it in subsets. We do not always have to run the entire machine learning process in Hadoop and Spark on the whole dataset.

into the following text:

in this way we can aggregate the data and analyze it in subsets we do not always have to run the entire machine learning process in hadoop and spark on the whole dataset
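In Python this step can be sketched, for example, with NLTK (the tokenizer model must be downloaded once; other libraries such as spaCy work just as well):

    import nltk
    from nltk.tokenize import word_tokenize

    nltk.download("punkt")  # one-time download of the tokenizer model

    text = ("In this way we can aggregate the data and analyze it in subsets. "
            "We do not always have to run the entire machine learning process "
            "in Hadoop and Spark on the whole dataset.")

    # Tokenize, lower-case, and drop pure punctuation tokens.
    tokens = [t.lower() for t in word_tokenize(text) if t.isalnum()]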


3. Stop Word Removal

In the next step we remove the so-called stop words, such as "and", "as well as", etc. The common stop words are already stored in the corresponding Python libraries and can easily be removed. Nevertheless, caution is warranted here: the meaning of stop words in a language changes with context. For this reason this step is optional, and the stop words to be removed must be chosen depending on the context.

After this step, the following text remains in our example:

way aggregate data analyze subsets not always run entire machine learning process hadoop spark whole dataset
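Continuing the sketch above with NLTK's English stop word list (the exact output depends on the list used, which is precisely why the caution about context matters):

    import nltk
    from nltk.corpus import stopwords

    nltk.download("stopwords")  # one-time download of the stop word lists
    stop_words = set(stopwords.words("english"))

    # Keep only the tokens that are not stop words.
    filtered = [t for t in tokens if t not in stop_words]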


4. Parts of Speech (POS) Tagging
As a further step, the words can be tagged with their correct part of speech: the computer marks them as verbs, nouns, adjectives, etc. This step can be necessary for some cases of lemmatization (more on that just below).
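A sketch with NLTK's default English tagger (tag sets and accuracy differ between libraries and languages):

    import nltk

    nltk.download("averaged_perceptron_tagger")  # one-time download

    # Each token gets a part-of-speech tag, e.g. ('data', 'NNS') for a plural noun.
    tagged = nltk.pos_tag(["we", "aggregate", "the", "data"])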


5. Stemming and Lemmatization

Further steps may include the so-called stemming and lemmatization. In principle, the individual words are reduced here to their base form or dictionary form.

In stemming, word endings are simply cut off and the words are reduced to their stem. For example, the German verb forms "gehen" (to go) and "geht" (goes) would both be reduced to the stem "geh".

In lemmatization, words are brought back to their original dictionary form: the verb "geht" would then be transformed into "gehen".

Part-of-speech tagging, stemming, and lemmatization are all advantageous for reducing complexity. They therefore lead to more efficiency and faster applicability, but this comes at the cost of precision: in a search engine, for example, word lists built this way can return less relevant results.
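The difference between the two can be sketched with NLTK; the WordNet lemmatizer shown here covers English, and for German one would reach for a library such as spaCy:

    import nltk
    from nltk.stem import PorterStemmer, WordNetLemmatizer

    nltk.download("wordnet")  # one-time download for the lemmatizer

    # Stemming simply cuts off endings: "studies" -> "studi".
    print(PorterStemmer().stem("studies"))

    # Lemmatization returns the dictionary form: "studies" -> "study".
    print(WordNetLemmatizer().lemmatize("studies"))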

The subsequent NLP steps transform the text into sequences of numbers that the computer can understand. How we proceed here depends strongly on the actual goal of the project. There is a wide range of Python packages that construct these numerical representations differently depending on the project goal.


6a. Bag-of-Words Methods in Python (https://en.wikipedia.org/wiki/Bag-of-words_model)

The bag-of-words methods in Python include the so-called TF-IDF vectorizer. The TF-IDF transformation is suitable, for example, for building a spam detector, since the TF-IDF vectorizer weighs each word in the context of the document collection as a whole.
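A sketch with scikit-learn's TfidfVectorizer; the three toy documents are made up for illustration:

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["win a free prize now",            # spam-like toy document
            "meeting notes for the project",
            "free notes about the prize"]

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(docs)   # sparse matrix: documents x vocabulary

    # Each word is weighted by how characteristic it is for a document
    # relative to the whole collection.
    print(vectorizer.get_feature_names_out())
    print(X.toarray())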


6b. Word Embedding Methods in Python: Word2Vec, GloVe (https://en.wikipedia.org/wiki/Word_embedding)

As the name suggests, Word2Vec transforms individual words into vectors (sequences of numbers), mapping similar words to similar vectors. Methods from the word-embedding family are better suited, for example, for building a chatbot.
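A sketch with the gensim library; in practice the model would be trained on a large corpus or loaded pre-trained, and the two-sentence corpus here only shows the interface:

    from gensim.models import Word2Vec

    # Toy corpus: a list of already tokenized sentences.
    sentences = [["machine", "learning", "on", "hadoop"],
                 ["machine", "learning", "on", "spark"]]

    # vector_size is the dimension of the word vectors (gensim >= 4.0).
    model = Word2Vec(sentences, vector_size=50, min_count=1)

    vec = model.wv["hadoop"]                 # the 50-dimensional vector
    print(model.wv.most_similar("hadoop"))   # nearest words by cosine similarity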

In the last step of NLP we can feed the language processed in this way into the usual machine learning models. The best thing about the NLP techniques described above is that they transform language into sequences of numbers that any ML algorithm can analyze; how to proceed from here depends only on the goal of the project.

This has been an overview of the necessary (and optional) steps of an NLP pipeline. The concrete application of course depends on the use case. The NLP phases described here accept many inaccuracies, such as reducing words to their stems or discarding capitalization. In practice, costs and benefits must always be weighed and the procedure adapted to the particular case.

Sources:

Mandy Gu: "Spam or Ham: Introduction to Natural Language Processing Part 2", https://towardsdatascience.com/spam-or-ham-introduction-to-natural-language-processing-part-2-a0093185aebd

Christopher D. Manning, Prabhakar Raghavan & Hinrich Schütze: "Introduction to Information Retrieval", Cambridge University Press, https://nlp.stanford.edu/IR-book/

Hobson Lane, Cole Howard, Hannes Max Hapke: "Natural Language Processing in Action: Understanding, analyzing, and generating text with Python", Manning, Shelter Island
Business_Analytics  Data_Mining  Data_Science  Data_Science_News  Tutorial  computerlinguistik  Machine_Learning  Natural_Language_Processing  NLP  nlp_pipeline  Text_Mining 
4 weeks ago by roeglinl
What data on myself I collect and why? | Mildly entertainingᵝ
How I am using 50+ sources of my personal data
This is a list of the personal data sources I use or am planning to use, with rough guides on how to get your hands on that data if you want it as well.

It's still incomplete and I'm going to update it regularly.

My goal is automating data collection to the maximum extent possible and making it work in the background, so one can set up the pipelines once and hopefully never think about them again.

This is kind of a follow-up on my previous post on the sad state of personal data, and part of my personal way of getting around this sad state.
privacy  data  data_mining 
5 weeks ago by archangel
Rev. Mod. Phys. 91, 045002 (2019) - Machine learning and the physical sciences
"Machine learning (ML) encompasses a broad range of algorithms and modeling tools used for a vast array of data processing tasks, which has entered most scientific disciplines in recent years. This article reviews in a selective way the recent research on the interface between machine learning and the physical sciences. This includes conceptual developments in ML motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross fertilization between the two fields. After giving a basic notion of machine learning methods and principles, examples are described of how statistical physics is used to understand methods in ML. This review then describes applications of ML methods in particle physics and cosmology, quantum many-body physics, quantum computing, and chemical and material physics. Research and development into novel computing architectures aimed at accelerating ML are also highlighted. Each of the sections describe recent successes as well as domain-specific methodology and challenges."
to:NB  machine_learning  data_mining  physics  data_analysis 
6 weeks ago by cshalizi
[1909.06342] Explainable Machine Learning in Deployment
"Explainable machine learning offers the potential to provide stakeholders with insights into model behavior by using various methods such as feature importance scores, counterfactual explanations, or influential training data. Yet there is little understanding of how organizations use these methods in practice. This study explores how organizations view and use explainability for stakeholder consumption. We find that, currently, the majority of deployments are not for end users affected by the model but rather for machine learning engineers, who use explainability to debug the model itself. There is thus a gap between explainability in practice and the goal of transparency, since explanations primarily serve internal stakeholders rather than external ones. Our study synthesizes the limitations of current explainability techniques that hamper their use for end users. To facilitate end user interaction, we develop a framework for establishing clear goals for explainability. We end by discussing concerns raised regarding explainability."
to:NB  prediction  data_mining  explainable_machine_learning  to_teach:data-mining  via:rvenkat 
9 weeks ago by cshalizi
Too Much Data: Prices and Inefficiencies in Data Markets
"When a user shares her data with an online platform, she typically reveals relevant information about other users. We model a data market in the presence of this type of externality in a setup where one or multiple platforms estimate a user’s type with data they acquire from all users and (some) users value their privacy. We demonstrate that the data externalities depress the price of
data because once a user’s information is leaked by others, she has less reason to protect her data and privacy. These depressed prices lead to excessive data sharing. We characterize conditions under which shutting down data markets improves (utilitarian) welfare. Competition between platforms does not redress the problem of excessively low price for data and too much data sharing, and may further reduce welfare. We propose a scheme based on mediated data-sharing that improves efficiency."

--- My usual issue with Acemoglu's theoretical papers is that the assumptions are _very_ much stacked in favor of certain conclusions. (My issue with his empirical papers is that he has apparently never heard of regression diagnostics.) In this case, the conclusions are ones that reinforce my prejudices, but the last tag applies.
to:NB  privacy  data_mining  market_failures_in_everything  economics  acemoglou.daron  to_be_shot_after_a_fair_trial 
november 2019 by cshalizi
[1910.12207] An Active Approach for Model Interpretation
"Model interpretation, or explanation of a machine learning classifier, aims to extract generalizable knowledge from a trained classifier into a human-understandable format, for various purposes such as model assessment, debugging and trust. From a computaional viewpoint, it is formulated as approximating the target classifier using a simpler interpretable model, such as rule models like a decision set/list/tree. Often, this approximation is handled as standard supervised learning and the only difference is that the labels are provided by the target classifier instead of ground truth. This paradigm is particularly popular because there exists a variety of well-studied supervised algorithms for learning an interpretable classifier. However, we argue that this paradigm is suboptimal for it does not utilize the unique property of the model interpretation problem, that is, the ability to generate synthetic instances and query the target classifier for their labels. We call this the active-query property, suggesting that we should consider model interpretation from an active learning perspective. Following this insight, we argue that the active-query property should be employed when designing a model interpretation algorithm, and that the generation of synthetic instances should be integrated seamlessly with the algorithm that learns the model interpretation. In this paper, we demonstrate that by doing so, it is possible to achieve more faithful interpretation with simpler model complexity. As a technical contribution, we present an active algorithm Active Decision Set Induction (ADS) to learn a decision set, a set of if-else rules, for model interpretation. ADS performs a local search over the space of all decision sets. In every iteration, ADS computes confidence intervals for the value of the objective function of all local actions and utilizes active-query to determine the best one."
to:NB  experimental_design  statistics  data_mining 
october 2019 by cshalizi
[1905.12101] Differential Privacy Has Disparate Impact on Model Accuracy
"Differential privacy (DP) is a popular mechanism for training machine learning models with bounded leakage about the presence of specific points in the training data. The cost of differential privacy is a reduction in the model's accuracy. We demonstrate that in the neural networks trained using differentially private stochastic gradient descent (DP-SGD), this cost is not borne equally: accuracy of DP models drops much more for the underrepresented classes and subgroups.
"For example, a gender classification model trained using DP-SGD exhibits much lower accuracy for black faces than for white faces. Critically, this gap is bigger in the DP model than in the non-DP model, i.e., if the original model is unfair, the unfairness becomes worse once DP is applied. We demonstrate this effect for a variety of tasks and models, including sentiment analysis of text and image classification. We then explain why DP training mechanisms such as gradient clipping and noise addition have disproportionate effect on the underrepresented and more complex subgroups, resulting in a disparate reduction of model accuracy."

--- Isn't this fairly intuitive? DP guarantees that adding or removing one record can't change the probability too much. But if the minority is distinctive, so the over-all distribution is bimodal, each individual minority member has more influence on the minority mode, so the DP-ensuring noise is going to mess things up for the minority more than for the majority.
to:NB  differential_privacy  algorithmic_fairness  data_mining 
october 2019 by cshalizi
[1910.11410] Almost Politically Acceptable Criminal Justice Risk Assessment
"In criminal justice risk forecasting, one can prove that it is impossible to optimize accuracy and fairness at the same time. One can also prove that it is impossible optimize at once all of the usual group definitions of fairness. In the policy arena, one is left with tradeoffs about which many stakeholders will adamantly disagree. In this paper, we offer a different approach. We do not seek perfectly accurate and fair risk assessments. We seek politically acceptable risk assessments. We describe and apply to data on 300,000 offenders a machine learning approach that responds to many of the most visible charges of "racial bias." Regardless of whether such claims are true, we adjust our procedures to compensate. We begin by training the algorithm on White offenders only and computing risk with test data separately for White offenders and Black offenders. Thus, the fitted algorithm structure is exactly the same for both groups; the algorithm treats all offenders as if they are White. But because White and Black offenders can bring different predictors distributions to the white-trained algorithm, we provide additional adjustments if needed. Insofar are conventional machine learning procedures do not produce accuracy and fairness that some stakeholders require, it is possible to alter conventional practice to respond explicitly to many salient stakeholder claims even if they are unsupported by the facts. The results can be a politically acceptable risk assessment tools."
to:NB  prediction  crime  algorithmic_fairness  statistics  data_mining  to_teach:data-mining  berk.richard  not_often_you_see_an_abstract_framed_as_"fine_have_it_your_way"  to_teach:statistics_of_inequality_and_discrimination 
october 2019 by cshalizi
Dissecting racial bias in an algorithm used to manage the health of populations | Science
"Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts."
to:NB  to_read  medicine  prediction  social_measurement  measurement  racism  data_mining  algorithmic_fairness  scores_and_classes 
october 2019 by cshalizi
[1910.10871] Preventing Adversarial Use of Datasets through Fair Core-Set Construction
We propose improving the privacy properties of a dataset by publishing only a strategically chosen "core-set" of the data containing a subset of the instances. The core-set allows strong performance on primary tasks, but forces poor performance on unwanted tasks. We give methods for both linear models and neural networks and demonstrate their efficacy on data.
data_mining  computational_statistics  statistics  prediction  compression 
october 2019 by cshalizi
