
robustness

[1909.10117] Sensitivity of collective outcomes identifies pivotal components
A social system is susceptible to perturbation when its collective properties depend sensitively on a few, pivotal components. Using the information geometry of minimal models from statistical physics, we develop an approach to identify pivotal components to which coarse-grained, or aggregate, properties are sensitive. As an example we introduce our approach on a reduced toy model with a median voter who always votes in the majority. With this example, we construct the Fisher information matrix with respect to the distribution of majority-minority divisions and study features of the matrix that pinpoint the unique role of the median. More generally, these features identify pivotal blocs that precisely determine collective outcomes generated by a complex network of interactions. Applying our approach to data sets from political voting, finance, and Twitter, we find remarkable variety from systems dominated by a median-like component (e.g., California State Assembly) to those without any single special component (e.g., Alaskan Supreme Court). Other systems (e.g., S&P sector indices) show varying levels of heterogeneity in between these extremes. By providing insight into such sensitivity, our information-geometric approach presents a quantitative framework for considering how nominees might change a judicial bench, serve as a measure of notable temporal variation in financial indices, or help analyze the robustness of institutions to targeted perturbation.
collective-intelligence  voting  mechanism-design  robustness  rather-interesting  decision-making  to-write-about  to-simulate  consider:graph-templates 
5 days ago by Vaguery
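A minimal sketch of the sensitivity idea, using independent voters rather than the paper's statistical-physics (maximum-entropy) model: each voter i votes "up" with probability p_i, the coarse-grained outcome is the size of the majority bloc, and the Fisher information of that outcome distribution with respect to the p_i flags which voters are pivotal. The model, parameter values, and function names below are illustrative, not from the paper.

```python
# Toy sensitivity analysis: independent voters, outcome = majority bloc size.
import itertools
import numpy as np

def outcome_distribution(p):
    """P(majority size k) for independent voters with up-probabilities p."""
    n = len(p)
    dist = np.zeros(n + 1)
    for votes in itertools.product([0, 1], repeat=n):
        prob = np.prod([pi if v else 1 - pi for pi, v in zip(p, votes)])
        k = max(sum(votes), n - sum(votes))      # size of the majority bloc
        dist[k] += prob
    return dist

def fisher_information(p, eps=1e-5):
    """Numerical Fisher information matrix of the majority-size distribution
    with respect to each voter's up-probability p_i."""
    base = outcome_distribution(p)
    grads = []
    for i in range(len(p)):
        bumped = np.array(p, dtype=float)
        bumped[i] += eps
        grads.append((outcome_distribution(bumped) - base) / eps)
    grads = np.array(grads)                      # shape (n, n+1)
    mask = base > 0
    return (grads[:, mask] / base[mask]) @ grads[:, mask].T

# One voter sits near p = 0.5 (a median-like swing voter); the rest are partisan.
p = [0.9, 0.9, 0.5, 0.1, 0.1]
F = fisher_information(p)
print(np.round(np.diag(F), 3))   # per-voter sensitivity of the outcome distribution
```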
[1809.02104] Are adversarial examples inevitable?
A wide range of defenses have been proposed to harden neural networks against adversarial attacks. However, a pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable? This paper analyzes adversarial examples from a theoretical perspective, and identifies fundamental bounds on the susceptibility of a classifier to adversarial attacks. We show that, for certain classes of problems, adversarial examples are inescapable. Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples.
classification  adversarial-examples  robustness  machine-learning  neural-networks  problems-with-continuous-embeddings-for-discrete-problems  to-write-about  to-simulate 
18 days ago by Vaguery
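A small numeric illustration of the dimensionality point (my construction, not the paper's analysis): for a linear classifier sign(w·x), the smallest ℓ∞ perturbation that flips the decision has size |w·x| / ||w||₁, attained by δ = -sign(w·x)·ε·sign(w). For dense random weights this budget shrinks as the dimension grows.

```python
# How the minimal adversarial budget scales with input dimension for a
# random linear classifier.
import numpy as np

rng = np.random.default_rng(0)

def min_linf_budget(d, trials=200):
    """Average l_inf perturbation needed to flip a random linear classifier."""
    budgets = []
    for _ in range(trials):
        w = rng.normal(size=d)
        x = rng.normal(size=d)
        budgets.append(abs(w @ x) / np.abs(w).sum())
    return np.mean(budgets)

for d in [10, 100, 1000, 10000]:
    print(d, round(min_linf_budget(d), 4))   # shrinks roughly like 1/sqrt(d)
```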
[1906.00001] Functional Adversarial Attacks
We propose functional adversarial attacks, a novel class of threat models for crafting adversarial examples to fool machine learning models. Unlike a standard ℓp-ball threat model, a functional adversarial threat model allows only a single function to be used to perturb input features to produce an adversarial example. For example, a functional adversarial attack applied on colors of an image can change all red pixels simultaneously to light red. Such global uniform changes in images can be less perceptible than perturbing pixels of the image individually. For simplicity, we refer to functional adversarial attacks on image colors as ReColorAdv, which is the main focus of our experiments. We show that functional threat models can be combined with existing additive (ℓp) threat models to generate stronger threat models that allow both small, individual perturbations and large, uniform changes to an input. Moreover, we prove that such combinations encompass perturbations that would not be allowed in either constituent threat model. In practice, ReColorAdv can significantly reduce the accuracy of a ResNet-32 trained on CIFAR-10. Furthermore, to the best of our knowledge, combining ReColorAdv with other attacks leads to the strongest existing attack even after adversarial training.
machine-learning  neural-networks  adversarial-attacks  robustness  rather-interesting  define-your-terms  to-write-about  to-simulate 
4 weeks ago by Vaguery
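A minimal sketch of the functional threat model, not the ReColorAdv implementation: every pixel receives the same colour change, and we grid-search the single global per-channel shift that most increases the target model's loss. `predict_loss` is a placeholder for the classifier under attack.

```python
# Functional (global, single-function) perturbation of an image's colours.
import itertools
import numpy as np

def functional_attack(predict_loss, image, max_shift=0.1, steps=5):
    """predict_loss(img) -> scalar loss of the target model on img (placeholder).
    image: HxWx3 array with values in [0, 1].
    Searches one global per-channel shift applied identically to every pixel."""
    grid = np.linspace(-max_shift, max_shift, steps)
    best_loss, best_image = -np.inf, image
    for shift in itertools.product(grid, repeat=3):
        candidate = np.clip(image + np.reshape(shift, (1, 1, 3)), 0.0, 1.0)
        loss = predict_loss(candidate)           # same change at every pixel
        if loss > best_loss:
            best_loss, best_image = loss, candidate
    return best_image, best_loss
```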
[1906.11908] Approximate Solutions of 4-regular Matchstick Graphs with 51 -- 62 Vertices
A 4-regular matchstick graph is a planar unit-distance graph whose vertices all have degree 4. Examples of 4-regular matchstick graphs are currently known for all numbers of vertices ≥ 52 except 53, 55, 56, 58, 59, 61, and 62. In this article we present 32 different examples with 51 -- 62 vertices which contain two, three, or four distances that differ slightly from the unit length. These graphs should show why this subject is so extraordinarily difficult to deal with, and should also be an incentive for the interested reader to find solutions for the missing numbers of vertices. The first example consisting of 51 or fewer vertices, or a proof that such examples cannot exist, will be honored by the author with 10,000 US dollars.
graph-theory  constraint-satisfaction  butterfly-collecting  rather-interesting  approximation  robustness  consider:looking-to-see  consider:robustness  consider:lexicase-sorting 
5 weeks ago by Vaguery
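A small sketch of the verification step the abstract implies: given a candidate drawing (vertex coordinates plus edge list), check 4-regularity and report how far each edge deviates from unit length. Planarity and edge crossings are not checked, and the toy square-with-diagonals graph is only there to exercise the function; it is not one of the paper's matchstick graphs.

```python
# Check degree-4 regularity and near-unit edge lengths of a candidate drawing.
from collections import Counter
from math import dist

def check_matchstick(coords, edges, tol=1e-9):
    degrees = Counter()
    for u, v in edges:
        degrees[u] += 1
        degrees[v] += 1
    regular4 = all(degrees[v] == 4 for v in coords)
    deviations = {e: abs(dist(coords[e[0]], coords[e[1]]) - 1.0) for e in edges}
    worst = max(deviations.values())
    return regular4, worst, {e: d for e, d in deviations.items() if d > tol}

coords = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
print(check_matchstick(coords, edges))   # only 3-regular; diagonals deviate from unit length
```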
[1907.10924] Influence of the size of the intruder on the reorganization of a 2D granular medium
We consider the rearrangements of a vertical two-dimensional granular packing induced by the withdrawal of an intruder. Here, we focus on the influence of the size of the intruder on the reorganization process. The long-term evolution of the granular packing is investigated, as well as the avalanche dynamics that characterize the short-term rearrangements around the intruder. For a small enough intruder, we observe the formation of arches that periodically destabilize and influence the reorganization dynamics of the two-dimensional packing through larger rearrangement events.
granular-materials  nonlinear-dynamics  complexology  rather-interesting  experiment  looking-to-see  robustness  self-organization  mesoscale-structure-formation  to-simulate  consider:dominoes 
5 weeks ago by Vaguery
[1907.03902] Uncertainty and causal emergence in complex networks
The connectivity of a network conveys information about the dependencies between nodes. We show that this information can be analyzed by measuring the uncertainty (and certainty) contained in paths along nodes and links in a network. Specifically, we derive from first principles a measure known as effective information and describe its behavior in common network models. Networks with higher effective information contain more information within the dependencies between nodes. We show how subgraphs of nodes can be grouped into macro-nodes, reducing the size of a network while increasing its effective information, a phenomenon known as causal emergence. We find that causal emergence is common in simulated and real networks across biological, social, informational, and technological domains. Ultimately, these results show that the emergence of higher scales in networks can be directly assessed, and that these higher scales offer a way to create certainty out of uncertainty.
network-theory  rather-interesting  robustness  looking-to-see  to-write-about  to-understand  feature-construction  consider:causal-gp-dynamics 
8 weeks ago by Vaguery
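A minimal sketch of effective information as used in this line of work (the example graph is arbitrary): treat the row-normalised adjacency matrix as per-node out-transition distributions W_i^out and compute EI = H(⟨W_i^out⟩) − ⟨H(W_i^out)⟩, the entropy of the averaged transition distribution minus the average per-node entropy.

```python
# Effective information of a directed network from its adjacency matrix.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def effective_information(adj):
    adj = np.asarray(adj, dtype=float)
    row_sums = adj.sum(axis=1, keepdims=True)
    w = np.divide(adj, row_sums, out=np.zeros_like(adj), where=row_sums > 0)
    w = w[row_sums.ravel() > 0]               # drop nodes with no out-links
    return entropy(w.mean(axis=0)) - np.mean([entropy(row) for row in w])

# A directed 4-cycle is deterministic and non-degenerate: EI = log2(4) = 2 bits.
cycle = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [1, 0, 0, 0]])
print(effective_information(cycle))   # 2.0
```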
[1908.05659] Distributionally Robust Optimization: A Review
"The concepts of risk-aversion, chance-constrained optimization, and robust optimization have developed significantly over the last decade. Statistical learning community has also witnessed a rapid theoretical and applied growth by relying on these concepts. A modeling framework, called distributionally robust optimization (DRO), has recently received significant attention in both the operations research and statistical learning communities. This paper surveys main concepts and contributions to DRO, and its relationships with robust optimization, risk-aversion, chance-constrained optimization, and function regularization."
to:NB  optimization  robustness  probability 
8 weeks ago by cshalizi
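A minimal sketch of the DRO idea, not taken from the review: with a finite scenario set, take the ambiguity set to be a total-variation ball of radius eps around the empirical distribution. The worst-case expected loss is then obtained by shifting up to eps of probability mass from the lowest-loss scenarios onto the highest-loss one. The scenario losses and eps below are made up for illustration.

```python
# Worst-case expected loss over a total-variation ambiguity set (finite scenarios).
import numpy as np

def worst_case_expectation(losses, p_hat, eps):
    """max over p with TV(p, p_hat) <= eps of E_p[loss], solved greedily."""
    losses, p = np.asarray(losses, float), np.asarray(p_hat, float).copy()
    budget = eps
    worst = np.argmax(losses)
    for i in np.argsort(losses):                 # take mass from cheap scenarios
        if i == worst or budget <= 0:
            continue
        move = min(p[i], budget)
        p[i] -= move
        p[worst] += move
        budget -= move
    return float(losses @ p)

losses = np.array([1.0, 2.0, 5.0, 20.0])         # loss of one fixed decision per scenario
p_hat = np.array([0.4, 0.3, 0.2, 0.1])
print(losses @ p_hat)                            # nominal expected loss
print(worst_case_expectation(losses, p_hat, eps=0.1))  # robust (worst-case) value
```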
[1902.06705] On Evaluating Adversarial Robustness
Correctly evaluating defenses against adversarial examples has proven to be extremely difficult. Despite the significant amount of recent work attempting to design defenses that withstand adaptive attacks, few have succeeded; most papers that propose defenses are quickly shown to be incorrect.
We believe a large contributing factor is the difficulty of performing security evaluations. In this paper, we discuss the methodological foundations, review commonly accepted best practices, and suggest new methods for evaluating defenses to adversarial examples. We hope that both researchers developing defenses as well as readers and reviewers who wish to understand the completeness of an evaluation consider our advice in order to avoid common pitfalls.
machine-learning  adversarial-examples  robustness  security 
april 2019 by arsyed
[1807.05620] NEUZZ: Efficient Fuzzing with Neural Program Smoothing
Fuzzing has become the de facto standard technique for finding software vulnerabilities. However, even state-of-the-art fuzzers are not very efficient at finding hard-to-trigger software bugs. Most popular fuzzers use evolutionary guidance to generate inputs that can trigger different bugs. Such evolutionary algorithms, while fast and simple to implement, often get stuck in fruitless sequences of random mutations. Gradient-guided optimization presents a promising alternative to evolutionary guidance. Gradient-guided techniques have been shown to significantly outperform evolutionary algorithms at solving high-dimensional structured optimization problems in domains like machine learning by efficiently utilizing gradients or higher-order derivatives of the underlying function. However, gradient-guided approaches are not directly applicable to fuzzing, as real-world program behaviors contain many discontinuities, plateaus, and ridges where gradient-based methods often get stuck. We observe that this problem can be addressed by creating a smooth surrogate function approximating the discrete branching behavior of the target program. In this paper, we propose a novel program smoothing technique using surrogate neural network models that can incrementally learn smooth approximations of a complex, real-world program's branching behaviors. We further demonstrate that such neural network models can be used together with gradient-guided input generation schemes to significantly improve fuzzing efficiency. Our extensive evaluations demonstrate that NEUZZ significantly outperforms 10 state-of-the-art graybox fuzzers on 10 real-world programs, both at finding new bugs and at achieving higher edge coverage. NEUZZ found 31 previously unknown bugs that the other fuzzers failed to find and achieved 3X more edge coverage than all of the tested graybox fuzzers over 24-hour runs.
robustness  neural-all-the-things  neural-networks  fuzz-testing  stress-testing  software-development  performance-measure  ruthless-efficiency  to-write-about  I-told-him-we-already-had-one 
april 2019 by Vaguery
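A minimal sketch of the program-smoothing idea, assuming a toy fixed-length byte-vector input representation (this is not the authors' implementation): fit a small neural surrogate mapping inputs to edge-coverage bitmaps, then use the surrogate's input gradient to choose which bytes to mutate. Collecting the (input, coverage) training pairs from the instrumented target program is not shown.

```python
# Neural surrogate of a program's branching behavior + gradient-guided mutation.
import torch
import torch.nn as nn

INPUT_LEN, N_EDGES = 64, 128   # illustrative sizes

surrogate = nn.Sequential(
    nn.Linear(INPUT_LEN, 256), nn.ReLU(),
    nn.Linear(256, N_EDGES), nn.Sigmoid(),   # predicted probability each edge is hit
)

def train_surrogate(inputs, coverage, epochs=50):
    """inputs: (N, INPUT_LEN) float tensor of bytes scaled to [0, 1];
    coverage: (N, N_EDGES) float tensor of 0/1 edge-hit bitmaps."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(surrogate(inputs), coverage)
        loss.backward()
        opt.step()

def gradient_guided_mutant(seed, target_edge, k=8):
    """Mutate the k bytes whose gradient most increases the chance of hitting target_edge."""
    x = seed.clone().requires_grad_(True)
    surrogate(x.unsqueeze(0))[0, target_edge].backward()
    hot = x.grad.abs().topk(k).indices
    mutant = seed.clone()
    mutant[hot] = torch.where(x.grad[hot] > 0, torch.ones(k), torch.zeros(k))
    return mutant
```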
[1802.08188] The spatial Lambda-Fleming-Viot process with fluctuating selection
We are interested in populations in which the fitness of different genetic types fluctuates in time and space, driven by temporal and spatial fluctuations in the environment. For simplicity, our population is assumed to be composed of just two genetic types. Short bursts of selection acting in opposing directions serve to maintain both types at intermediate frequencies, while fluctuations due to 'genetic drift' work to eliminate variation in the population.
We consider first a population with no spatial structure, modelled by an adaptation of the Lambda (or generalised) Fleming-Viot process, and derive a stochastic differential equation as a scaling limit. This amounts to a limit result for a Lambda-Fleming-Viot process in a rapidly fluctuating random environment. We then extend to a population that is distributed across a spatial continuum, which we model through a modification of the spatial Lambda-Fleming-Viot process with selection. In this setting we show that the scaling limit is a stochastic partial differential equation. As is usual with spatially distributed populations, in dimensions greater than one, the 'genetic drift' disappears in the scaling limit, but here we retain some stochasticity due to the fluctuations in the environment, resulting in a stochastic p.d.e. driven by a noise that is white in time but coloured in space.
We discuss the (rather limited) situations under which there is a duality with a system of branching and annihilating particles. We also write down a system of equations that captures the frequency of descendants of particular subsets of the population and use this same idea of 'tracers', which we learned from Hallatschek and Nelson (2008) and Durrett and Fan (2016), in numerical experiments with a closely related model based on the classical Moran model.
population-biology  genetics  simulation  theoretical-biology  evolutionary-biology  nonlinear-dynamics  robustness  noise 
april 2019 by Vaguery
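A minimal sketch of a Moran model with fluctuating selection, in the spirit of the numerical experiments mentioned at the end of the abstract (all parameter values are illustrative): two types compete in a population of size N, and a selection coefficient that occasionally flips sign pushes the type-1 frequency in alternating directions while drift erodes variation.

```python
# Moran model with a selection coefficient that switches sign at random times.
import numpy as np

rng = np.random.default_rng(1)

def moran_fluctuating(N=500, generations=1000, s_magnitude=0.05, switch_prob=0.1, x0=0.5):
    x = x0                        # frequency of type 1
    s = s_magnitude
    freqs = []
    for _ in range(N * generations):          # ~N Moran steps per generation
        if rng.random() < switch_prob / N:    # occasional reversal of selection
            s = -s
        # birth: fitness-weighted choice; death: uniform choice
        w1 = x * (1 + s)
        birth_is_type1 = rng.random() < w1 / (w1 + (1 - x))
        death_is_type1 = rng.random() < x
        x += (int(birth_is_type1) - int(death_is_type1)) / N
        x = min(max(x, 0.0), 1.0)
        freqs.append(x)
    return np.array(freqs)

traj = moran_fluctuating()
print(traj[::100000])   # a handful of type-1 frequencies sampled along the trajectory
```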
