Well, I think it's by far the best mechanism we have for determining useful knowledge at this point. That being said, there are some problems with it.
One of the biggest problems is the church-of-science mentality that the public has. This isn't a problem with the sciences per se, but it does affect how science operates in our society. Rather than treating scientific findings as the output of a progressive (not politically, but in the sense of linear, stepwise progress), falsifiable method that establishes things by showing them to be consistently repeatable, these people treat science as if it "proves" or shows something to be outright truth. These are the people who will find a study that says something and post it to social media saying "Science proves ______" - and that's that. Then it gets spread around as a totally misguided appeal to authority, which doesn't do much good for anyone. My response: if there's a scientific claim, find the study making it, check its methodology, and if it seems reasonable, file it away as "that's a reasonable claim" and wait for it to be falsified. Until then, treat it as a reasonable, but not fully proven, claim. Not many people enact even these imperfect checks and balances, though; they treat scientific studies as if they were the word of God from on high. I believe this is what the OP is frustrated with and, to an extent, he's right - just because "science says!" doesn't mean it's true. The onus is on you to be a good and critical reader of scientific studies rather than just a trusting consumer.
Another element of this is that publishing journals aren't always credible, but they look credible, and with people treating science as gospel, bad studies enter the public mindspace as if they were good ones. For an example of this:
"
In the latest ploy, reported Wednesday, a group of researchers at the University of Wroclaw, in Poland, tried to seat a fictional scholar onto the editorial boards of 360 academic publications.
The goal: to test whether, with just a CV — full of fake scientific degrees — and a profile on Academia.edu as well as a fake university, some would accept a scholar named “Anna O. Szust” (which translates to “Anna, a Fraud” in English) as a member of their editorial boards.
And many did. The sting, reported in Nature, netted 48 journals — nearly all of which were so-called “predatory” journals. Such journals accept manuscripts without reviewing them, print them without editing them, and otherwise make a mockery of the scientific literature by pumping out low-quality work.
Some offered Anna potentially lucrative profit-sharing. Others required payment from her.
Not one of the 120 legitimate publications included in the scheme fell for the ruse.
Although the operation was cute, those results weren’t surprising. After all, this isn’t the first such stunt; a top bovine excrement researcher, Hoss Cartwright, has ended up on boards, too. And at legitimate journals, editors are generally very wary of scientists who try to get themselves on editorial boards."
https://www.scientificamerican.com/article/science-sting-exposes-corrupt-journal-publishers/
The system has cracks, and not all studies are good. Much of the public, though, will take any study as gospel if it serves their purposes because of, again, the "church of science" mentality. We often treat scientific findings as an appeal to authority rather than as the kind of testing of falsifiable claims the method itself is built around.
The other side of it is a combination of money and corruption in peer review. Science isn't just a bunch of goofs in lab coats doing stuff because they love it and just really want the truth - there is a political apparatus around publication and funding that creates very clear incentives to either abuse the system or conduct studies with the intent of finding a certain result.
A few more articles of interest:
"
This human tendency is not limited to the media. Science, oft sold as the clear-headed, unbiased answer to confirmation bias, is open to the same human manipulations, purposeful and accidental — and significantly more often than we might guess.
Research scientists are under pressure to get published in the most prominent journals possible, and their chances increase considerably if they find positive (thus “impactful”) results. For journals, the appeal is clear, writes Philip Ball for Nautilus: they’ll make a bigger splash if they discover some new truth, rather than if they simply refuted old findings. The reality is that science rarely produces data so appealing.
The quest for publication has led some scientists to manipulate data, analysis, and even their original hypotheses. In 2014, John Ioannidis, a Stanford professor conducting research on research (or ‘meta-research’), found that across the scientific field, “many new proposed associations and/or effects are false or grossly exaggerated.” Ioannidis, who estimates that 85 percent of research resources are wasted, claims that the frequency of positive results well exceeds how often one should expect to find them. He pleads with the academic world to put less emphasis on “positive” findings."
https://wilsonquarterly.com/stories/sciences-under-discussed-problem-with-confirmation-bias/
Another:
"
A Wall Street Journal op-ed warning emphatically about problems with scientific peer review begins by summarizing an especially extensive case of scientific fraud (also outlined in a Physics Today Online News Pick):
Academic publishing was rocked by the news on July 8 that a company called Sage Publications is retracting 60 papers from its Journal of Vibration and Control, about the science of acoustics. The company said a researcher in Taiwan and others had exploited peer review so that certain papers were sure to get a positive review for placement in the journal. In one case, a paper's author gave glowing reviews to his own work using phony names.
The op-ed appeared a few days after the New York Times ran the commentary “Crack down on scientific fraudsters” by the cofounders of the blog Retraction Watch, which had brought the 60-retractions scandal into wide public view. Citing other cases, that piece argued that the penalties for scientific fraud are generally insufficient, with too little repayment of misused funding, with too little professional ostracism of offenders, and with resignations forced—and criminal charges filed—too rarely.
The WSJ op-ed’s author, Hank Campbell, condemns the “absence at many journals” of “sound peer-review practices” and cautions that some “errors can have serious consequences if bad science leads to bad policy.” Linking peer-review problems to the problem of irreproducibility (nonreplicability) of research results, he invokes the authority of National Institutes of Health leaders Francis Collins and Lawrence Tabak. They began a January Nature commentary by reporting that a “growing chorus of concern, from scientists and laypeople, contends that the complex system for ensuring the reproducibility of biomedical research is failing and is in need of restructuring.” They agree with that chorus and declare that recent evidence showing this “irreproducibility of significant numbers of biomedical-research publications demands immediate and substantive action.”"
https://physicstoday.scitation.org/do/10.1063/PT.5.8057/full/
Then, if you add in an external political sphere that oftentimes hugely influences the purse strings of science, you've added moneyed incentives to produce politically expedient findings. What this means is that if you want to pursue scientific truth in a politically unpopular area, good luck getting funded. If you want to conduct even trivial work in a popular area, though? Funding gets a lot easier. The end result? Massive overrepresentation of "consensus science" in certain areas, incentives to support hypotheses which are popular and well funded, and exceeding difficulty getting funding for alternative scientific pursuits. Once again, the public will take this as a sign that the God of science has cast its judgment, and we must all pray before its sanctified word.
Part of why I enjoy working in the humanities is that everyone knows we're kind of flim-flam men/women, so they take us with a grain of salt and are more likely to trust us when we have something really provocative to say. The hard sciences, on the other hand? They're viewed as objective, unbiased strivings for the truth. People trust them outright because of that - trusting their authority rather than skeptically trusting sound methods. This can, unfortunately, be a mistake. Our job? Trust but verify.