Does God roll dice? Neutrality and determinism in evolutionary ecology

A tension between perspectives that emphasize deterministic versus stochastic processes has sparked controversy in ecology since pre-Darwinian times. The most recent manifestation of the contrasting perspectives arose with Hubbell’s proposed “neutral theory”, which hypothesizes a paramount role for stochasticity in ecological community composition. Here we shall refer to the deterministic and the stochastic perspectives as the niche-based and neutral-based research programs, respectively. Our goal is to represent these perspectives in the context of Lakatos’ notion of a scientific research program. We argue that the niche-based program exhibits all the characteristics of a robust, progressive research program, including the ability to deal with disconfirming data by generating new testable predictions within the program. In contrast, the neutral-based program succeeds as a mathematical tool to capture, as epiphenomena, broad-scale patterns of ecological communities but appears to handle disconfirming data by incorporating hypotheses and assumptions from outside the program, specifically, from the niche-based program. We conclude that the neutral research program fits the Lakatosian characterization of a degenerate research program.


Introduction
Any well-trained naturalist can identify both similarities and differences among species, regardless of how closely related they are. Through much of the development of ecology, from its beginnings in natural philosophy and descriptive natural history, those characteristics were used to help explain patterns of distribution and abundance of species. Ernst Haeckel (quoted in Mayr 1977) defined ecology as the study of the complex interactions among species and their environment that determine relative abundance and distribution. Darwin (1859) referred to those interactions as the "struggle for existence." This perspective led first to Grinnell's (1917) and then to Elton's (1927) concepts of the niche, both fully integrated in Hutchinson's (1957) hypervolume.
Despite the order perceived by naturalists from von Humboldt (1849) to modern times (e.g., Diamond 1975), some questioned whether these ecological communities (or assemblages) were in fact distinguishable from communities assembled randomly (e.g., Simberloff 1978; Connor and Simberloff 1979). The tension between perspectives that emphasize deterministic versus stochastic processes has sparked controversy since pre-Darwinian times (see White 1789). The most recent manifestation of the contrasting perspectives arose with Hubbell's proposed "neutral theory" (Bell 2000; Caswell 1976; Hubbell and Foster 1986; Hubbell 2001). Here we shall refer to these perspectives as the niche-based and neutral-based research programs. Our goal is to represent these perspectives in the context of Lakatos' notion of a scientific research program (Lakatos 1978).
The idea that natural communities assemble via deterministic processes appeared in the early twentieth century with the super-organism framework of Clements (1916). His work represented an early attempt to use species differences to predict community-level phenomena. Clements' view of communities is often contrasted with Gleason's (1926) individualistic concept of communities, which posits a role for some random processes in the assembly of community structure. But even Gleason recognized that communities were not purely random assemblages of species (Nicolson et al. 2002). In Gleason's scheme, chance plays a role in dispersal and establishment of a species within an area, but success and coexistence of species in that area are governed by adaptive traits like physiology, resource exploitation, morphology, and behavior (Brown 1995). Later in the twentieth century, ecologists employed Hutchinson's niche hypervolume concept to seek a mechanistic understanding of community structure (Hutchinson 1957; MacArthur and Levins 1967; MacArthur 1972). More recently, the niche concept has come to include additional mechanistic processes such as resource-consumer dynamics (Chase and Leibold 2003; Holt 2009). Then, during the last decade of the twentieth century, a new type of theory was proposed that sharply challenged the underlying assumptions of niche theory. This "neutral" theory (Hubbell 2001; Bell 2001) assumes that patterns of community process and structure can be most parsimoniously understood without reference to niches or to differences between species that manifest in their demographic rates. Neutral theory seemed to score some early successes (Hubbell 1997; Bell 2000), but subsequent reviews highlighted some shortcomings. As all theories do, the neutral theory has seen modifications and enhancements since its inception (Rosindell et al. 2011).
And recently, some workers have suggested that the two theories each have their strengths and should be integrated (Wennekes et al. 2012; Matthews and Whittaker 2014; Janzen et al. 2015).
Here we examine the niche-based and the neutral-based approaches through the lens of scientific research programs (Lakatos 1978; Mitchell and Valone 1990). We argue that the niche-based program exhibits all the characteristics of a robust, progressive research program, including the ability to deal with disconfirming data by generating new testable predictions within the program. In contrast, the neutral-based program succeeds as a mathematical tool to capture, as epiphenomena, broad-scale patterns of ecological communities but appears to handle disconfirming data by incorporating hypotheses and assumptions from outside the program (see Glossary).

Niche versus neutral theory
Evolutionary ecologists working within the framework of the niche-based program typically hold the bedrock principle that natural selection optimizes fitness (e.g., Hutchinson 1957; Williams 1966; Maynard Smith 1982; Vincent and Brown 2005; Brown 2016). The species' niche is defined by a combination of its environment and adaptive traits that allow individuals to gather resources, evade enemies, and perform other functions influencing their relative birth and death rates, which in turn influence species distributions, abundances, coexistence, and thus community composition.
To illustrate, consider the example of habitat selection by animals. Individual animals, subject to constraints on information and to their physiological, morphological, and behavioral characteristics, select the habitat that maximizes their expected offspring production. This, however, produces a paradox: if all individuals in a population choose to live in the same (preferred) habitat, that habitat will become crowded, thereby reducing average fitness. As a habitat becomes crowded, the best response of an additional individual is to select instead the next best habitat, until the expected pay-off in all available habitats is equal (Fretwell and Lucas 1969; Morris 1988). The dispersion of individuals among habitats within a heterogeneous landscape in this scenario will achieve temporary ecological stability when no individual can gain by dispersing elsewhere. The distribution at this time is at a dynamic equilibrium known as the ideal free distribution (Krivan et al. 2008). Competition for ever-diminishing resources thus influences habitat preference.
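The equalization of pay-offs described above can be sketched numerically. The following toy model is our own construction, not drawn from the cited literature: the habitat qualities and the 1/(n+1) crowding function are arbitrary assumptions chosen only to make the logic visible. Individuals settle one at a time, each choosing the habitat with the highest current per-capita pay-off:

```python
def payoff(quality, occupants):
    """Per-capita pay-off declines with crowding (arbitrary 1/(n+1) form)."""
    return quality / (occupants + 1)

def settle(qualities, n_individuals):
    """Each arriving individual best-responds to the current densities."""
    counts = [0] * len(qualities)
    for _ in range(n_individuals):
        # best response: the habitat with the highest current pay-off
        best = max(range(len(qualities)),
                   key=lambda h: payoff(qualities[h], counts[h]))
        counts[best] += 1
    return counts

qualities = [10.0, 5.0]          # habitat 1 is intrinsically twice as good
counts = settle(qualities, 150)
payoffs = [payoff(q, n) for q, n in zip(qualities, counts)]
```

At the resulting distribution the two pay-offs are nearly equal even though the better habitat holds roughly twice as many individuals: no individual can gain by moving, which is the equilibrium Fretwell and Lucas called the ideal free distribution.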
When two species choose habitats, differences in their abilities to acquire resources in the different habitats can lead to species-specific density-dependent optimal habitat choice (Rosenzweig 1981). This occurs when trade-offs in resource exploitation strategies allow different species to possess a fitness advantage in different habitats. In this case, diversity is maintained by the separation of species into different ecological niches that result from different adaptations to environmental heterogeneity. A community is then a composition of species whose competitive interaction strengths are determined by their eco-evolutionary trade-offs over one or more axes of environmental heterogeneity (Kotler and Brown 1988; Chase and Leibold 2003).
Critics of the niche-based program argue that niches are unnecessary to explain community patterns and processes. Instead, they propose that many characteristics of natural communities can be explained parsimoniously with stochastic or neutral models that invoke no niche differences among species (e.g., Connor and Simberloff 1979; Storch and Frynta 1999; Hubbell 2001; Bell 2001; Jonzén et al. 2004; Alonso et al. 2006; Rosindell et al. 2011, 2012). These neutral models assume that individuals within a trophic level are functionally equivalent in birth, death, dispersal, and speciation (Hubbell 2001). In this paradigm, the success or failure of any individual is determined simply by random "ecological drift": random variation in births, deaths, or dispersal. Consequently, random variation is what creates differences and structure within a community or ecosystem. Apparent habitat partitioning among species is simply the random assortment of individuals among habitats.
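Ecological drift under these assumptions is easy to simulate. The sketch below is our own illustration (the community size and number of steps are arbitrary): a zero-sum community in which every death is matched by a birth drawn at random from the remaining individuals, with species identity playing no role except bookkeeping.

```python
import random

def drift(community, steps, rng):
    """Zero-sum ecological drift: at each step one random individual dies
    and is replaced by the offspring of another randomly chosen individual."""
    community = list(community)
    for _ in range(steps):
        dead = rng.randrange(len(community))
        parent = rng.randrange(len(community))
        community[dead] = community[parent]
    return community

rng = random.Random(42)
start = [sp for sp in range(10) for _ in range(20)]  # 10 species x 20 individuals
end = drift(start, 50_000, rng)
richness = len(set(end))
```

Without speciation or immigration, richness only declines: relative abundances take a random walk until species drift to extinction one by one. Hubbell's full model adds immigration from a metacommunity and speciation precisely to balance this loss.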
Randomness is used by scientists to handle two parts of their systems: the unknown and the irrelevant. The unknown flows from the fact that scientists lack perfect information about their systems; they characterize their uncertainty by using confidence intervals, maximum likelihood estimations, and Bayesian probabilities. The irrelevant part arises because scientists typically expect that some variables will have little effect on their questions of interest. Inclusion of such irrelevant variables renders the theory less tractable without increasing its empirical content or success. As von Neumann (1947) once said, "[the] truth…is much too complicated to allow anything but approximations". Therefore, any model will necessarily be a simplification of reality. Often these features incidental to the model's message are expressed as random variation. In the next two sections, we explore these uses of randomness in the context of niche and neutral theory.

Randomness as the unknown in niche and neutral explanations
Niche theory, which can be traced to Darwin (1859), was developed into its modern form through the works of Hutchinson (1957), MacArthur and Levins (1967), MacArthur (1972), and Chase and Leibold (2003). Evolutionary ecologists have recently enhanced the mechanistic underpinnings and predictive power of niche theory by incorporating resource-consumer models (Chase and Leibold 2003; Holt 2009). The Hutchinson-MacArthur school (Hutchinson 1957; MacArthur 1972; Vincent and Brown 2005; Slack 2010) works with the assumption that, even with tremendous complexity and a myriad of ecological interactions, natural communities are dynamic systems organized by regular and deterministic forces resulting from species' differences (e.g., Mitchell and Valone 1990; Brown 2001). Fundamental to these models is the notion that ecological systems are regulated by dynamic attractors in the space of species densities. Such attractors could be points, orbits, or more complicated structures. Because of these attractors, communities respond to disturbance in predictable ways, either converging on their previous attractor or on a new one, depending on the perturbation (Pimm 1991). So, while species densities may fluctuate in response to disturbances, they are in principle predictable; they do not wander in a random walk of densities. Or, to paraphrase Einstein, God does not roll dice.
Consider the initial position of an entity, a population for instance, in a dynamic system with a set of forces. In deterministic models, the subsequent position of the entity is specific and predictable. Modelers include in their model systems only a subset of the processes that occur in nature, with stochastic elements standing in for the residual variation (Clark 2009). They focus on these subsets because the remaining forces are unknown or cannot be measured with current tools, technology, and budgets. But the unknown forces could, in principle, be measured, and at least some will be included in future models as the means become practical. The expanded model increases the apparent determinism and predictability of the system by providing more explanation for what had previously been unknown.
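The notion of a dynamic attractor can be made concrete with a minimal two-species Lotka-Volterra competition model (the parameter values below are arbitrary illustrations, not taken from any cited study). Starting far from equilibrium, and again after an imposed disturbance, the densities converge on the same interior point:

```python
def lv_step(n1, n2, r=1.0, k=100.0, a=0.5, dt=0.01):
    """One Euler step of symmetric Lotka-Volterra competition."""
    dn1 = r * n1 * (1 - (n1 + a * n2) / k)
    dn2 = r * n2 * (1 - (n2 + a * n1) / k)
    return n1 + dn1 * dt, n2 + dn2 * dt

def run(n1, n2, t=50.0, dt=0.01):
    """Integrate the dynamics for time t and return the final densities."""
    for _ in range(int(t / dt)):
        n1, n2 = lv_step(n1, n2, dt=dt)
    return n1, n2

eq = 100.0 / 1.5            # analytic coexistence equilibrium: K / (1 + a)
n1, n2 = run(5.0, 120.0)    # converge from arbitrary initial densities
n1, n2 = run(n1 * 0.1, n2)  # disturb species 1, then converge again
```

The disturbance shifts the densities but not the attractor; the system's response is predictable rather than a random walk, which is the sense in which these communities are deterministic.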
Clark (2009) suggests a heuristic model of this process:

Response = f(covariates, parameters) + error    (1)

The first term is deterministic and explains how processes contribute to the response. The second term is not explanatory; it takes up uncertainty by representing variation that cannot be accounted for in the first term. This is the stochastic element.
Progress in science, according to Clark (2009), occurs when variation moves from the second term (unknown) to the first term (known). With movement in this direction, processes or phenomena shift from being attributed to stochastic variation to being explained by deterministic causation. Neutral models, Clark (2009) states further, emphasize movement in the opposite direction. At their essence, neutral models hold that the second, stochastic term better represents explanations for species co-occurrence than do known, observable processes whose cause-and-effect relationships are well understood. Advocates of neutral models thus downplay observable differences in species characteristics as mechanisms of coexistence.
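Clark's point about moving variation from the error term into the deterministic term can be illustrated with an ordinary regression on synthetic data (the covariate, slope, and noise level here are invented solely for the example). Adding a relevant covariate to f shrinks the unexplained residual variance:

```python
import random
import statistics

rng = random.Random(0)
x = [i / 10 for i in range(100)]
# response generated by a deterministic process plus residual noise
y = [2.0 * xi + rng.gauss(0.0, 0.5) for xi in x]

# Model 1: no covariate -- all variation lands in the stochastic term
resid1 = [yi - statistics.mean(y) for yi in y]

# Model 2: include the covariate in the deterministic term f(...)
mx, my = statistics.mean(x), statistics.mean(y)
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx
resid2 = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]

var1 = statistics.pvariance(resid1)
var2 = statistics.pvariance(resid2)
```

Progress, in Clark's sense, is exactly this movement of variance from the second term to the first; neutral models, on his reading, move explanation in the opposite direction.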

Randomness as the incidental in niche and neutral explanations
Physicists in the nineteenth century knew well that it is impossible in practice to know the positions of all the molecules in a volume of gas. This spurred them to develop the methods of statistical physics, which led to an understanding of many properties of systems, such as gases, that are composed of large numbers of molecules. Laplace's (1902) probability theory, however, does not promise perfect knowledge or universal predictability; rather, it tells us how to proceed in the absence of such complete knowledge.
Consider a compact sphere dropped in a vacuum. Its motion will obey Newton's second law (F = ma). Now drop a banknote from a height in the real world. It will flutter and eventually come to rest far from where it was dropped. Does this falsify the second law? No. The banknote's deviation from a free-fall trajectory is explained by other forces (including wind and air resistance) not captured in Newton's fundamental laws of motion (Hoefer 2003, p. 1406). Analogous situations arise in population ecology. That a population's growth does not follow a perfectly sigmoidal curve does not imply that the logistic or Lotka-Volterra equations are not predictive, nor that the niche concept is flawed.
Deterministic population models may tell us nothing about why a particular population does not exhibit perfect sigmoid growth; there may be a host of factors affecting population growth. What the deterministic models of population growth do tell us is that population growth is a function of each species' own population size and that of the other species with which it interacts. The sigmoid growth model says nothing about when and how a population may be doomed when struck by a catastrophic hurricane. These factors are incidental to the message of sigmoid models of population growth. The last decade witnessed many analyses of population regulation (e.g., Ziebarth et al. 2010; Knape and de Valpine 2012; Kalyuzhny et al. 2014) demonstrating that regulation may be weak because of noise in ecological systems. This should not prompt ecologists to seek shelter under stochasticity, but rather to recognize that our knowledge of the system is genuinely imperfect, and to keep directing our efforts toward knowing more about it.
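The point can be sketched with a logistic model (parameter values arbitrary, chosen for illustration): the deterministic skeleton converges smoothly on the carrying capacity, while a stochastically perturbed copy fluctuates around it without invalidating the underlying equation.

```python
import random

def logistic_step(n, r=0.5, k=100.0, dt=0.1):
    """One Euler step of logistic growth."""
    return n + r * n * (1 - n / k) * dt

# deterministic trajectory: sigmoid approach to carrying capacity K = 100
n_det = 5.0
for _ in range(2000):
    n_det = logistic_step(n_det)

# same skeleton plus small multiplicative environmental noise
rng = random.Random(7)
n_sto = 5.0
for _ in range(2000):
    n_sto = logistic_step(n_sto) + n_sto * rng.gauss(0.0, 0.01)
```

That the noisy run never traces a perfect sigmoid does not falsify the logistic equation, any more than the fluttering banknote falsifies Newton; the noise term stands in for factors incidental to the model's message.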

Lakatosian research program
We suggest that the niche-based and neutral schools correspond to two different types of scientific research programs (sensu Lakatos 1978). According to Lakatos, a scientific research program comprises two kinds of assumptions (or hypotheses): hard core and auxiliary. Hard core assumptions are not directly testable in isolation, but must be conjoined with auxiliary hypotheses to construct models that explain observations and generate predictions. An example of a hard core would be Newton's laws of attraction, which must be combined with auxiliary assumptions regarding the masses of and distances between planets (among other things) to produce a model capable of predicting planetary motion. Auxiliary assumptions may be specific to a system, such as the number and masses of planets. They may also include simplifying assumptions about nature known to be false, but which allow scientists to do the math required to make predictions.
In the early stages of Newton's program, he made the auxiliary assumptions that the solar system consisted of the sun and a single planet, and that the planet was a point mass with no volume. Even with these simplifications, Newton made new and successful predictions about planetary motion. There were of course some data inconsistent with this simple model. But Newton already knew that would be the case; indeed, he could even anticipate what new auxiliary assumptions would be needed to explain the anomalies. The models incorporating the new auxiliary hypotheses explained not only previous anomalies but also made new, testable predictions. The positive heuristic of a research program directs the scientist to alter only the auxiliary hypotheses while leaving the hard core untouched. In this manner, the hard core itself can be evaluated even as simplifying assumptions and failed auxiliary hypotheses are replaced. According to Lakatos, a progressive research program is not one that never fails an empirical test, but rather one that, in such cases, replaces auxiliary hypotheses to produce a new model that can explain the previously disconfirming observation and make new testable predictions. If some of these predictions are successful, then the program is said to be empirically progressive. A degenerating program, on the other hand, is one in which new auxiliary hypotheses are produced primarily to explain previous anomalies and data, but these new hypotheses either do not generate new predictions, or the predictions they make are not successful. Lakatos proposed that scientific progress occurs when different research programs show different rates of success at anticipating anomalies and at modifying auxiliary assumptions to generate new successful predictions.
In our view, the niche-based research program includes several hard core assumptions: (1) fitness is density-dependent, (2) organisms experience fitness trade-offs along one or more axes of environmental heterogeneity, such that traits improving fitness in one environment decrease fitness in a different environment, and (3) organisms respond adaptively to the densities and traits of conspecifics and heterospecifics. These hard core assumptions must be conjoined with auxiliary hypotheses to make testable predictions. Examples of such auxiliary hypotheses might include the nature of a trade-off between speed and efficiency of foraging among competing species (Kotler and Brown 1988; Wilson et al. 1999), the adaptive responses by a prey species to predation-risk (e.g., Brown 1999), or the information regarding heterospecific densities available to individuals selecting habitats (Rosenzweig 1981; Morris 1988). Some predictions may succeed, while others may fail. But the failures can be handled using the positive heuristic of modifying auxiliary hypotheses to explain anomalies and generate novel predictions (Lakatos 1978); perhaps we had assumed that prey have fixed, unchanging information about the predation-risk in a habitat. We can replace that assumption with one that allows prey to update their information about predation-risk (Welton et al. 2003; Martín and Lopez 2005). The new model explains the previous anomaly and makes new predictions which we can then test with a new experiment. Thus, we use the data to iterate through a sequence of increasingly better, more predictive models in a manner consistent with a progressive research program. Indeed, this is how we see in models variation shifting from the stochastic element to the deterministic element (Clark 2009, 2012). If we try to frame the neutral approach as a Lakatosian research program, we encounter a problem with how proponents treat its apparent hard core.
This hard core would seem to be: (1) fitness is density-dependent, and (2) "all individuals within a particular trophic level have the same chances of reproduction and death regardless of their species identity" (Rosindell et al. 2011). The auxiliary assumptions of the program would include the demographic parameters for birth, death and dispersal, common to all species within a trophic level. A major distinction between the neutral and niche programs is that, because the neutral hard core assumes that species are functionally similar, the neutral program has no role for environmental heterogeneity. We allow that neutral theory can produce models, including some of great complexity, that are consistent with previous observations (e.g., some species-area distributions). While that would seem to be a success, the same could be said of the Ptolemaic model of planetary motion, which could be made to account for almost any planetary or heavenly body motion with the addition of more epicycles. What the Ptolemaic program lacked, but the Copernican (and later, Newtonian) program possessed, was the positive heuristic that allowed it to predict new data that were corroborated by observations. Lakatos and Zahar (1978) state the problem with the Ptolemaic program thusly: "Each move in the geocentric programme had dealt with certain anomalies but had done so in an ad hoc way. No novel predictions were produced, anomalies still abounded and certainly each move had deviated from the original Platonic heuristic." We believe that the neutral program in community ecology suffers similar shortcomings; there does not appear to be a positive heuristic that anticipates anomalies, or permits observed anomalies to motivate new models with new testable predictions that are then corroborated. In effect, the neutral research program does not appear progressive but degenerating.
A critical feature of a progressive Lakatosian research program is that it can handle empirical anomalies by revising auxiliary assumptions while retaining its hard core. But the neutral program's hard core tends to be revised in response to empirical anomalies. For example, the hard core assumption that species are functionally similar is replaced by adding guild structure to a community to improve the fit of the model to tree data from Barro Colorado Island (e.g., Janzen et al. 2015). While this may be characterized as "integrating" neutral and niche theory, the improved model invokes the hard core of the niche program, but not that of the neutral program. So, while it may be the case that some patterns can be predicted by both niche and neutral models, failed predictions lead to revision of the niche program's auxiliary assumptions, but of the neutral program's hard core.
The distinction between the neutral and niche research program is not whether you favor simple or complex models, but rather the role that simplicity plays. In a Lakatosian research program, simplicity can be found in the auxiliary assumptions, especially early in the development of a program. Simple auxiliary assumptions are important for making predictions, and when those fail you replace a simplifying assumption with another, more complex hypothesis. Each new addition adds empirical content because it makes new predictions. Lakatos (1978) describes how Newton begins with the simplest of systems, a single point-like planet revolving around a fixed point-like sun. He applied his hard core assumptions to this system to generate predictions. And then he successively added more reality by allowing heavenly bodies mass, spin and shape. Eventually he included interplanetary forces. But even as he increased the complexity of the systems he retained the same immutable hard core. Each new addition to the auxiliary assumptions added complexity and improved the empirical content without changing that hard core. Contrast this with the neutral program where it is the hard core itself whose simplicity is replaced with other hypotheses of more empirical content capable of generating novel predictions.
Proponents of neutral theory, in defense of their assumption of demographic neutrality, cite the example of an ideal gas, where the molecules are assumed to behave as point-like particles that do not interact and that exchange energy only with the walls of the container in which they are kept at a given temperature. But in contrast to inanimate matter, in ecosystems we deal with entities that mutate, evolve, change, and fine-tune their interactions with partners. Thus the problem at the core of the statistical physics of ecological systems is to identify the key elements one needs to incorporate in models to reproduce the known emergent patterns and eventually discover new ones (Azaele et al. 2016). Moreover, the ultimate goal is to discover, and critically test, causal mechanisms producing emergent patterns (Bunin 2017).
Neutral program proponents claim that some of the program's auxiliary assumptions can be relaxed, including speciation (Etienne et al. 2007), spatial structure (Rosindell and Cornell 2007, 2009), and the zero-sum rule (Etienne et al. 2007; Haegeman and Etienne 2008), and that doing so often does not affect the theory's predictions, or produces predictions that match observations better (see Wennekes et al. 2012). Some of these auxiliary assumptions, such as an unrealistic panmictic source pool, are nonetheless difficult to defend. Neutral theory cannot, because of the symmetry assumption of its hard core, predict which species will be rare or common (Wootton 2005; Leigh 2007). Because the hard core is essentially devoid of empirical content, we can never be sure to what new topics it can be applied; we know neither the domain of the theory nor when its assumptions are more likely to apply. It is not progressive.
One of the main advantages of the niche relative to the neutral approach is the ability to generate additional mechanistic hypotheses when the established mechanistic models fail to predict observed patterns. Rosindell et al. (2012) argue that many mechanistic extensions of the neutral model can also test additional hypotheses (see also Jabot and Chave 2011; Janzen et al. 2015). These studies, however, use the neutral model as a null hypothesis and then build onto it to further describe anomalies in the data not described by the neutral model. Is this use of the neutral model characteristic of a robust, progressive research program? No. A null model, as Gotelli and McGill (2006) posit, is a tool to test the existence of some process. It does this by predicting how data would look in the absence of the process. A null model does not include novel processes that generate novel predictions. So, while null models can play a critical role in the heuristic of a research program, they would not seem to constitute part of the hard core.

Neutral theory as a null hypothesis
Some neutral theory proponents treat the neutral model both as a statistical null model and as a process-based model (see Gotelli and McGill 2006; Munoz and Huneman 2016). Statistical (traditional) null models are based on randomization of empirical data (stochasticity applied to existing data), while dynamical process-based models incorporate a stochastic process into a biological model (stochasticity applied to a process-based model; see Gotelli and McGill 2006). This duality, as Gotelli and McGill (2006) point out, should be made explicit to avoid confusion. If the neutral model is used as a null hypothesis (e.g., Bell 2001), its alternative hypothesis needs to be made explicit. If it is considered a process-based model, which seems to be the case (see Hubbell 2001; Wennekes et al. 2012), then it should be compared to the predictions of a (simpler) null model, such as the log-normal (a weaker, less explicit test). Those advocating the neutral model as a process-based predictive model (see, for instance, Munoz and Huneman 2016) rather than as a null model cannot take refuge in null models and use weaker tests (Harte 2003; Gotelli and McGill 2006). For traditional null models of community assembly, the alternative hypothesis is that species interactions are important, whereas, for neutral models, the alternative hypothesis encompasses species interactions and species differences (Gotelli and McGill 2006), as well as an axis of environmental heterogeneity on which species differences manifest fitness differences.
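The distinction matters in practice. A traditional statistical null model randomizes the data themselves, as in the toy co-occurrence test below (the presence-absence data and the 999-permutation design are invented purely for illustration):

```python
import random

# presence (1) / absence (0) of two species across ten sites (made-up data)
sp_a = [1, 1, 1, 0, 0, 1, 1, 0, 0, 1]
sp_b = [1, 1, 0, 0, 0, 1, 1, 0, 0, 0]

def co_occurrences(a, b):
    """Number of sites where both species are present."""
    return sum(x * y for x, y in zip(a, b))

observed = co_occurrences(sp_a, sp_b)

# null model: shuffle one species' presences across sites, keeping totals fixed
rng = random.Random(3)
null_dist = []
for _ in range(999):
    shuffled = sp_b[:]
    rng.shuffle(shuffled)
    null_dist.append(co_occurrences(sp_a, shuffled))

# one-tailed p-value: how often does randomization match the observed aggregation?
p = sum(1 for v in null_dist if v >= observed) / len(null_dist)
```

A process-based neutral model, by contrast, generates data from assumed demographic rules, and so should be tested against explicit alternatives rather than borrowing the weaker logic of randomization tests.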
Some studies show that models with stochasticity can make novel, testable predictions, but they are mixed neutral-niche models (see Kalyuzhny et al. 2015). For example, Kadmon and Allouche's (2007) model predicts that increased habitat heterogeneity leads to more stochastic extinctions and possibly lower richness, a prediction that was later corroborated (Allouche et al. 2012). Other neutral models have been used to predict extinction debts (Halley and Iwasa 2011) and times to origination of clades (Maruvka et al. 2013). In this last case, using neutral models as a null, Maruvka et al. (2013) were motivated to deviate from simpler stochastic models for the sake of prediction.

Discussion and conclusion
We ought then to regard the present state of the universe as the effect of its anterior state and the cause of the one which is to follow-Pierre-Simon Laplace.
Prediction is difficult, especially the future-Niels Bohr.

Several attempts to integrate niche-based and neutral-based programs have been proposed (Vellend 2010; Fisher and Mehta 2014; O'Dwyer and Chisholm 2014). But we see two fundamental problems with integrating the niche-based and neutral-based research programs. First, the neutral program is, as we argue above, a degenerate scientific research program. While it does succeed at capturing static, aggregate, emergent patterns like species abundance distributions, it does not appear to succeed at generating novel predictions when the hard core is combined with new auxiliary hypotheses from within that program. A second problem is that attempts to merge the two conceptual frameworks do so by replacing the neutral hard core with the niche hard core (environmental heterogeneity and functional non-equivalence) and then introducing some stochasticity. Of course, this means the theory is no longer neutral (Gewin 2006) in the sense originally advocated by Hubbell and other proponents (see also Munoz and Huneman 2016).
Niche-based, deterministic frameworks and neutral-based, stochastic frameworks have been viewed as lying on opposite sides of a continuum (Gravel et al. 2006). We view this continuum, however, to be misleading. Species differ in vital rates, physiologies, morphologies, and many other ways that render the ecological equivalence assumption of neutral theory wrong. Likewise, the world does not seem to be wholly deterministic-we know that demographic and environmental stochasticity is real and may, at some times and some places, trump deterministic causal pathways. Hence, we advocate not so much an integration of the two approaches, but a resolution.
To some extent, the argument over niche versus neutral research programs, and attempts to integrate, synthesize or resolve them, is an exercise in re-inventing the wheel. May (1972) demonstrated that increased community complexity (increased species richness), contrary to the prevailing views at the time (MacArthur 1955), decreased stability. Subsequent analyses (Yodzis 1981; Pimm 1984) showed this surprising result arose from the random assembly of the species comprising May's (1972) community matrices. Communities based on actual food webs (Yodzis 1981; James et al. 2015; Jacquet et al. 2016) or with non-random interaction coefficients (Pimm 1984; Allesina and Tang 2012) were much more likely to exhibit stability and achieve an equilibrium. These latter analyses built upon and strengthened the earlier analysis: while random communities are more likely to be unstable as species are added, some non-random communities, specifically those that resemble communities we observe in nature, have properties that engender stability even at high species richness.
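May's result can be reproduced numerically. The sketch below is our own construction, not May's code: the matrix size, interaction strengths, and integration scheme are arbitrary choices. It builds fully connected random community matrices with self-limitation of -1 and checks whether small perturbations decay or grow, in line with May's heuristic that stability requires roughly sigma * sqrt(S * C) < 1.

```python
import math
import random

def random_community(s, sigma, rng):
    """Fully connected community matrix: self-limitation -1 on the diagonal,
    random interactions of typical strength sigma elsewhere."""
    return [[-1.0 if i == j else rng.gauss(0.0, sigma) for j in range(s)]
            for i in range(s)]

def norm_after(a, t=8.0, dt=0.01):
    """Integrate the linearized dynamics dx/dt = A x from a unit perturbation
    (Euler steps) and return the final norm; the starting norm is 1."""
    s = len(a)
    x = [1.0 / math.sqrt(s)] * s
    for _ in range(int(t / dt)):
        x = [x[i] + dt * sum(a[i][j] * x[j] for j in range(s))
             for i in range(s)]
    return math.sqrt(sum(xi * xi for xi in x))

rng = random.Random(1)
weak = norm_after(random_community(64, 0.05, rng))    # sigma*sqrt(S) = 0.4
strong = norm_after(random_community(64, 0.25, rng))  # sigma*sqrt(S) = 2.0
```

Weak random interactions let the perturbation decay; strong ones amplify it. This is the sense in which May's randomly assembled communities become unstable at high richness, and why subsequent work asked which non-random structures restore stability.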
These developments, perhaps delayed or waylaid in recent years by the excursion into neutral theory, illustrate how niche theory exemplifies a progressive research program as envisioned by Lakatos. When May (1972, 1974) examined the question of diversity and stability, he used randomness because of mathematical and computational constraints, and because of ecologists' general lack of knowledge about real-world community structure; randomness stood in for the incidental and the unknown. To May, real ecological communities were never random; randomness was only an auxiliary, never a core, assumption (e.g., May 1999). In comparison, attempts to rescue neutral theory proceed by relaxing assumptions of ecological equivalence, in essence adding niche differences and associated trade-offs. Recently, Munoz and Huneman (2016) interpreted ecological equivalence in light of strong and weak versions, the former being a mechanistic view of neutral theory at the individual level, while the latter was shown to involve both equalizing (no competition) and stabilizing (competition) factors, adopting Chesson's (2000) framework. Referring to these chimeras as "neutral" theories seems misguided at best, equivocal (Munoz and Huneman 2016, p. 330), and ultimately futile. A "neutral" research program that builds in niche differences has sacrificed its hard core and is no longer neutral. Hubbell (in Gewin 2006), referring to neutral theory, stated: "Other theories don't suffer the ignominy of having a self-destruct button." We would say instead that neutral theory seems, like a cat, to have nine (or more) lives. Inclusion of niche differences in "neutral" models reflects, as we argue above, the empirical success of niche-based approaches.
Where does this leave neutral theory? We find no fault with the mathematical principles used by neutral theorists. Averaging and aggregating are valid techniques used across scientific disciplines to understand and analyze the natural world. Developing a model of reality necessarily involves simplification, with the best models tailoring that simplification to the scale of the phenomenon to be understood (Levin 1992). Unfortunately, this is simply model building 101. Instead, we argue that the resurgence of niche theory (e.g., Levine and HilleRisLambers 2009; HilleRisLambers et al. 2012; Letten et al. 2016) and proposals to synthesize niche theory with Chesson's (2000) coexistence theory (Letten et al. 2016) represent the best way forward. This synthesis is fully consistent with the conception of a progressive research program (Lakatos 1978). Bertrand Russell (1997) said it most aptly: that scientists "… should seek causal laws is as obvious as the maxim that mushroom-gatherers should seek mushrooms."

Acknowledgements
Thanks are due to D. W. Morris for intellectual discussions when Som B. Ale was at Lakehead University, with support from Canada's International Polar Year program "Arctic Wildlife Observatories Linking Vulnerable EcoSystems" and Canada's Natural Sciences and Engineering Research Council. The authors also thank Burt Kotler, an anonymous reviewer, and Linus Svensson for crisp comments on a previous draft of the manuscript. Abdel Halloway wishes to thank the National Science Foundation (NSF) Graduate Research Fellowship (DGE-0907994 and DGE-1444315) for support. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s).

Dispersal limitation
Limitation of distribution or abundance in the vicinity of its parents because of either constraints on dispersal or inadequate production of dispersing individuals.

Ecological drift
Under ecological drift (Hubbell 2001), all individuals in the community, regardless of species, have equal probabilities of giving birth, dying, immigrating to another location, and (in one version of the model) acquiring a mutation that will eventually result in speciation. This does not mean that all species have an equal chance: abundant species have a greater likelihood of being drawn, but only by virtue of their abundance. Individuals are equal, but species, as collective entities, are not (Norris 2003).

Ecological equivalence
When differences among individuals belonging to different species do not translate into differences in their probabilities of being, and persisting, in the present and future community.

Neutrality, equivalence, and symmetry
That different individuals from different species belonging to the same functionally uniform ecological community have similar birth, death, and dispersal rates (Hubbell 2001; Etienne and Olff 2005). The neutrality hypothesis is that differences in species traits neither affect the chances of a species being present or absent in a community nor influence changes in its relative abundance.

Niche theory
Species can stably coexist in an ecological community if their characteristics (or traits) allow them to specialize on one particular set of resources or environmental conditions (niches) in which they are superior to their competitors (Grinnell 1917; Hutchinson 1957; Chase and Leibold 2003).

Relative species abundance
The probability that a species has n individuals in a given region. When multiplied by the total number of species in the region, this gives the number of species with n individuals, known as the species-abundance distribution.