It is a controversial opinion, but I hold the belief that there are fundamental flaws in the Darwinist macro theory of evolution. This belief stems from the intersection of my studies in microbiology, as it relates to genetic mutations and protein synthesis, and computation theory. The space and time complexity of protein synthesis, from the moment an instruction set (mRNA) begins to be created to the completion of a protein, is too great. As such, the complexity and variation we see in life at the macro level is infeasible under the constraints of Darwinian evolution: not enough time has passed to make that complexity at all probable. This implies that either our understanding of microbiology is flawed, or the Darwinist theory of random mutation is flawed. I find the latter more likely, as the empirical evidence for our microbiology is far vaster. Having said all of this, I still find macro evolution necessary, because there is certainly strong evidence for biological change. I am merely suggesting that a different theory of change may be more plausible, given the problems that arise when viewing biological processes from a computational standpoint.
How did you reach that conclusion? Wouldn't you need to have a way to accurately estimate the time it takes for micro-organisms to evolve to something approximating the current stage of life...? I imagine that's difficult.
..But then again I'm just an uneducated buffoon.
The mathematics and the population model are rather straightforward, though it does become a more complicated story when bringing computation theory into the mix.
I will explain this further below.
I propose that individual events seem random but that, overall, macro evolution is deterministic. The illusion of the stochastic nature of the process derives from sensitive initial conditions; hence it is chaotically deterministic. In Schrödinger's What Is Life? he alludes to this as he compares life to statistical processes in thermodynamics. At the start of the book he asks a peculiar question, "Why are atoms so small?", and then goes on to describe them thermodynamically. This only makes sense later when he asks, "Why are organisms so big?", and answers that life must be of such a size that stochastic randomness does not rule over it - his reason being that if life were small enough to be dictated by such randomness, its vast complexity could not arise as we see it. Hence, he reaches the same conclusion, but purely in an intuitive manner informed by physics. His work now makes me think that there is in fact a law dictating these 'random' processes. The law is something like: "Biological systems optimize themselves towards lower local entropy."
Evolution through "natural selection" is not random. It's guided by "natural selection," including sexual selection. That's your lower local entropy.
Keep up with the theories though... It's refreshing.
Both of your points are deeply related, given the fundamental claim of Darwinism: that evolution is derived from two processes, random heritable mutation and natural selection. To begin assessing the validity of Darwinism, we must know the probability of a functional mutation and whether enough chances have occurred to produce such functional mutations.
It is widely accepted that the emergence of new heritable traits is a matter of generating a new protein shape. This begins with DNA, which is made up of a sequence of nucleotides. Sections of these nucleotides form what we call genes, which are instruction sets for constructing proteins. There are fundamentally two types of instruction sets: proven instruction sets that outline the construction of a functional protein, and sets consisting of a useless sequence that does not - proteins from the former do useful things, while proteins from the latter do nothing. Biologists agree that most mutations arise in the latter type of gene, so that is the type we will deal with. Proteins are sequences of amino acids, and the 3-D shape of a protein is determined by that sequence; hence, generating a new protein shape requires the sequence of amino acids to change.
The length of proteins varies, so we can generalize by saying a protein consists of n elements. Each element of the sequence is a single amino acid, of which there are 20, so there are 20 possible choices for each position. As such, the total number of possible protein sequences is 20^n. The average size of a protein is 150 elements, though many geneticists and evolutionary biologists think it's closer to 250. For a sequence of 150 elements there are 20^150 possible proteins. As stated previously, most of these proteins do not do anything, because they do not fold into a stable shape. So it is also necessary to know the probability that a mutation leads to a stable shape: for a protein of 150 elements it is estimated at 1/10^74. We not only need a stable shape, but one that serves a useful function; the probability of a 150-element protein forming such a shape is estimated at 1/10^77. In other words, for any given mutation the probability of producing a stable, functional protein is near 0.
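A quick way to sanity-check the sizes involved is to compute them directly. Here is a minimal sketch in Python, where the 1/10^74 and 1/10^77 figures are taken as given from the estimates cited above rather than derived:

```python
from fractions import Fraction

n = 150                               # average protein length, per the text
total_sequences = 20 ** n             # 20 amino acid choices per position
p_stable = Fraction(1, 10 ** 74)      # cited estimate: random sequence folds stably
p_functional = Fraction(1, 10 ** 77)  # cited estimate: stable AND useful

# The number of decimal digits gives the order of magnitude of the space.
print(f"20^{n} has {len(str(total_sequences))} digits (~10^195)")
print(f"functional fraction: 1/{p_functional.denominator}")
```

Exact integers and rationals are used here deliberately: floating-point numbers cannot represent 20^150 without losing the exact magnitudes being discussed.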
It is now necessary to account for population genetics, and we can use bacteria to do this, given that they are by far the most numerous organisms and have been around the longest. It is estimated that 5.95*10^39 bacteria have ever existed; we will round this to 10^40 total bacteria. We will assume one mutation per bacterium, making the total number of mutations in the bacterial population across time 10^40 (keep in mind there have been fewer mutations than this). If there have been 10^40 chances and the probability of generating a stable, functional protein of 150 elements is 1/10^77, then the probability of a single functional protein being synthesized via mutation over the course of the population's entire existence is 10^40*(1/10^77), which is a 1/10^37 chance. As you can see, the probability is practically 0 that even a single useful mutation occurs for this population, let alone the much greater number we have actually observed; hence random mutation, as Darwinism explains it, is hopeless in producing the complexity we see in the biological world.
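The arithmetic in this step can be checked exactly with rational numbers. A small sketch, restating the paragraph's assumed figures (10^40 trials, 1/10^77 per-trial probability):

```python
from fractions import Fraction

trials = 10 ** 40                     # approx. bacteria that have ever existed,
                                      # assuming one mutation each (per the text)
p_per_trial = Fraction(1, 10 ** 77)   # assumed chance a mutation yields a
                                      # stable, functional 150-element protein

# Expected number of functional proteins across the whole population's history.
expected_hits = trials * p_per_trial  # exactly 1/10^37
print(expected_hits)
```

The expected-hits quantity is just the mean of a binomial with 10^40 trials; since it comes out to 1/10^37, the chance of even one hit is essentially the same number.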
Using computation theory to model the process is tricky due to the complexity of the system that produces proteins. But as described above, it is easy to see that the time complexity is exponential. Generalized, we have a search space of 20^n that the ribosome works through one sequence at a time, under tight resource allocation given the availability of the necessary amino acids (not to mention the search for them and their transport to the ribosome). Using the bacterial population as an example once more, under the assumption of this exponential time complexity, roughly 8 billion years would have to transpire to guarantee the population generates a single functional protein. That amount of time is slightly less than twice the age of the Earth and approximately twice the age of life on Earth.
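To make the exponential growth claim concrete - this is only a sketch of how the search space scales with sequence length n, not a model of ribosome kinetics or of any particular trial rate:

```python
# The sequence space grows as 20^n: each additional element multiplies the
# space by 20, so any blind search over it inherits this exponential cost.
for n in (10, 50, 150):
    digits = len(str(20 ** n))
    print(f"n={n:3d}: 20^{n} has {digits} decimal digits")
```

Going from n=10 to n=150 multiplies the length by 15 but multiplies the size of the space by roughly 10^182, which is the sense in which the problem is exponential rather than polynomial in n.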
If we attribute the generation of new traits to random mutation, we quickly run into problems, and in my opinion they cannot be ignored. We are either forced to accept that a massive number of improbable events have occurred and continue to occur, or that new traits are not a consequence of random mutation. I find the latter of the two more palatable, and as such I conclude that there must be something nonrandom driving these mutations towards the generation of functional proteins. I also find sexual selection bias in the population and natural selection to be less fundamental than the 'law' that encourages the generation of useful traits. I believe natural selection can explain variation in local populations, but I do not find that it, paired with random mutation, explains the bigger picture: the origin and variability of species.