Mathematical and Computational Statistics/Probability, along with a more general obsession with justification, verification, and inference.
I spend time in some mathematical communities where analyzing the statistical results of papers from various fields is considered an exciting hobby. We literally take the models used in a paper, set them up exactly the same way, and then feed them random noise. When random noise alone spits out values like the ones the authors reported, the validity of their work should be met with extreme skepticism.
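For the curious, here is a minimal sketch in Python of the kind of check I mean. The function names and numbers are my own, and I've swapped in a simple permutation test on a correlation as a stand-in for whatever metric a given paper actually uses: run pure noise through the same analysis many times and see how often it clears the usual p < 0.05 bar.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed by hand to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sx * sy)

def noise_check(n_trials=500, n_obs=25, n_perms=200, alpha=0.05, seed=1):
    """Fraction of pure-noise datasets whose x-y correlation looks
    'statistically significant' under a two-sided permutation test."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # Both variables are independent Gaussian noise: there is no effect.
        xs = [rng.gauss(0, 1) for _ in range(n_obs)]
        ys = [rng.gauss(0, 1) for _ in range(n_obs)]
        observed = abs(pearson_r(xs, ys))
        # Permutation null: shuffle y and see how often the shuffled
        # correlation is at least as extreme as the observed one.
        ys_perm = ys[:]
        extreme = 0
        for _ in range(n_perms):
            rng.shuffle(ys_perm)
            if abs(pearson_r(xs, ys_perm)) >= observed:
                extreme += 1
        p = (extreme + 1) / (n_perms + 1)
        if p <= alpha:
            hits += 1
    return hits / n_trials
```

On pure noise the "significant" rate hovers around the nominal 5%, which is the honest baseline; the red flag is when a paper's analysis pipeline lights up far more often than that on noise alone.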
Social Science is the worst, followed by Psychology, and then Economics.
Though I find the economics situation the most egregious; the other two are understandable given their difficulty and the fact that your average psychologist isn't really in it for the statistics.
Okay, but how would you suggest they fix it?
It would take radical change, but I think the following is at least possible.
If you do a master's or PhD in Psychology, then by the end of the program you should have the equivalent of an associate's or bachelor's degree in Statistics, respectively. Maybe there are programs where this requirement could be less strict.
I stress this because I have been studying statistics and probability theory pretty hardcore for the last two years, and I'd say I'm at about the bachelor's degree level. I did this to understand the value dynamics of financial products, which is really hard to do right. Applying statistical methods and models to psychological experimentation is harder than that, yet nearly every Psych person I talk to has a Stats 101 level of understanding. That's concerning given my response to your next set of paragraphs.
Beyond that, as I noted in a previous thread, publishing incentives need to change. A researcher's career is tied to their ability to publish papers in research journals, and research journals do not publish papers that conclude a result is not statistically significant. Metric-gaming mechanisms hidden in the mathematics of a model are what lead to false conclusions that some results are statistically significant. As such, you are financially better off being ignorant of how a metric functions within a particular model of an experiment.
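To make one such hidden mechanism concrete, here is a hedged sketch (all names and numbers are my own illustration, not from any particular study) of the most common one, multiple comparisons: a study that measures 20 unrelated noise outcomes and reports whichever clears p < 0.05 will "find something" in roughly 1 - 0.95^20 ≈ 64% of attempts.

```python
import math
import random

def two_sided_p(sample):
    """Two-sided p-value for H0: mean 0, under a z-test with known unit variance."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)  # standard error of the mean is 1/sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))  # erfc gives 2*(1 - Phi(|z|))

def fraction_with_a_hit(n_studies=2000, n_outcomes=20, n_obs=50, seed=2):
    """Fraction of simulated studies that get at least one 'significant'
    result despite every outcome being pure noise."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        p_values = []
        for _ in range(n_outcomes):
            # Each outcome is Gaussian noise: the true effect is exactly zero.
            sample = [rng.gauss(0, 1) for _ in range(n_obs)]
            p_values.append(two_sided_p(sample))
        if min(p_values) < 0.05:  # report the best-looking outcome
            hits += 1
    return hits / n_studies
```

A researcher who understands this mechanism would correct for it (and lose the publishable result); one who doesn't gets a "significant" finding most of the time, which is exactly the ignorance the incentives reward.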
As someone who verifies studies as a hobby, it would be nice if the data used in an experiment had to be made public upon publication. In almost all cases the data is hidden, so people like me are left recreating results by using the model constructed in the paper and adding noise to see what happens to the metrics. With access to the data we could also check whether the given model itself is even correct. Hidden data is a massive hurdle for anyone trying to recreate an experiment as well.
The study of the mind isn't really at the point of math yet; the tools haven't gotten there. Statistics usually belongs less to Psychology than to Sociology, with Psych rooted in seeing how behaviors between people show up as symptoms of larger bodies of patterns.
Sociology is really really bad, certainly worse off than Psychology at the moment.
I am not familiar with Psychology in its totality as a field; I just know there is one specific area I have a bone to pick with. That is whatever you would call research-based psychology, the kind that runs experiments and attempts to verify results statistically.
There are areas of the field that can be approached scientifically and quantitatively, I am sure there are other areas that cannot. My specific issue is that those who work in the more scientific area suck at what they do, have few real results, yet treat their findings as objective and then use these to consult on how to deal with patients and inform policy. They essentially push misinformation.
You have to somewhat eyeball it at first, then refine it into something more exacting further down its development. With experience it becomes easier to recognize the patterns, to the point of being able to gauge mental activity from a patient's physical tells and stories. That forms the basis for a case study that can be compared against prior data from other patients on a similar path, or from an admixture or recombination of several.
Again, when you sort people into those who do and don't conform to these clusters of symptoms, you begin to find things they have in common even outside of that categorization. It works as a means of sorting people, which can be useful for finding more uniform solutions that work broadly across the board, and otherwise for helping them get through the similar struggles they seem predestined to find themselves in given their own tendencies and perceptions, sort of like how groups such as AA appeal to a mass audience by members being more relatable to one another.
This has to be treated with great care if we are to actually believe it to be robust knowledge. We are so good at seeing patterns where there are none. This is exactly why if psychology and social science are to become actually respected and believable, then rigorous forms of verification are necessary. Statistics is the only way we know of to deal with this atm.
If that is not achieved, then a big part of the field is equivalent to philosophy in my eyes, not as a subject matter but in how epistemically sound it is. It would be like Marxism, for instance. Marxism is formed by viewing seemingly real patterns in the world; it becomes a method of analysis, and as a theory, if you apply it to the world, it really seems to make sense of everything.
I am sure you can reference theories that do consistently reproduce. If that's the case, they can be analyzed statistically. If the groups and associations being described are too ambiguous to be analyzed statistically, I will find them interesting and will support their further development. However, I will be skeptical of them as legitimate categories and associations.
I actually knew someone in the field who was a lab tech/monkey for brain scans; it's a budding field but looks pretty optimistic.
It's much easier to apply metrics to a set of images, especially in the age of sophisticated image processing, than to things like testimony.
You can still observe, over enough test subjects, what kinds of responses are and aren't more typical; then for every person who accuses it of not having a large enough sample, they can improve upon the data.
Indeed you can, and in the same way you would deal with images. The only difference is it's easier to set up an experiment with images, because they are easily defined and analyzed.
Or pick fields where you study the mind rather than heal it; it's not like the field is just a bunch of therapists and counselors, with half of them able to give you drugs.
True, and if you do this you add to the encyclopedia which is an honorable pursuit.
As to your comments on medication, it does seem crazy that a kid will be given amphetamine every day starting at a young age.
I am entering pure fantasy now, but sometimes I wonder if a completely alternative education system would make this method obsolete. The system we impose on kids seems really unnatural, and it's the expectations within that system that create the apparent need for ADHD meds.
The average school day is nearly 7 hours, with approximately 6 of those being instruction. I wonder how effective it would be if we limited instruction to just 2-3 hours and made the rest elective. Maybe you don't have to pump a kid full of speed because they fail to pay attention for 6 hours. The most productive aspect of school seems to be as a place of socialization anyway.
I went to a Montessori school for a period, and that experience had a big effect on me.