First, Tiwari et al. [1] included seven studies: five cross-sectional [2,3,4,5,6], one case-control [7], and one cohort [8]. The syntheses (forest plots) are highly questionable, since studies with different designs were pooled in the same analysis, which can produce misleading results. Estimated intervention effects from non-randomized studies of interventions (NRSI) with different design features can be expected to be influenced to varying degrees by different sources of bias, so results from NRSI with different combinations of design features should be expected to differ systematically, increasing heterogeneity. Because the heterogeneity detected in the meta-analysis is expected to be substantial given this diversity of designs, NRSI with different design features should be analyzed separately. Meta-analysis methods based on estimates and standard errors, in particular the generic inverse-variance method, are suitable for NRSI [9, 10]. The authors should have performed a subgroup analysis by study design and removed the overall pooled estimates (the diamonds).
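As a minimal sketch of the subgroup approach we recommend, the Python code below pools studies with the generic inverse-variance method separately within each design stratum; all effect estimates and standard errors are hypothetical and are not the values reported in the review.

```python
import numpy as np

def inverse_variance_pool(effects, ses):
    """Fixed-effect, generic inverse-variance pooling of effect estimates."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    weights = 1.0 / ses**2                     # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Hypothetical mean differences and standard errors, grouped by study design;
# each design is pooled separately and no overall diamond is computed.
subgroups = {
    "cross-sectional": ([-0.40, -0.25, -0.55, -0.30, -0.45],
                        [0.20, 0.15, 0.25, 0.18, 0.22]),
    "case-control":    ([-0.60], [0.30]),
    "cohort":          ([-0.20], [0.12]),
}

for design, (effects, ses) in subgroups.items():
    pooled, se = inverse_variance_pool(effects, ses)
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    print(f"{design}: pooled effect = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```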
Second, most of the semen (semen volume, sperm concentration, total sperm number, progressive motility, sperm motility, vitality) and sex hormone (follicle-stimulating hormone, luteinizing hormone, testosterone, prolactin, and estradiol) analyses showed substantial heterogeneity in the forest plots. A meta-analysis would therefore not be recommended, because different study designs and populations are being synthesized in the same analysis. If the authors chose to pool anyway, they should have conducted a proper sensitivity analysis and explored the sources of heterogeneity, which may reflect statistical, methodological, and clinical diversity among studies [11].
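Heterogeneity could, for instance, be quantified and explored with Cochran's Q, I², and a simple leave-one-out sensitivity analysis; the sketch below uses the same hypothetical inputs as above and is not the authors' analysis.

```python
import numpy as np

def heterogeneity(effects, ses):
    """Cochran's Q and the I^2 statistic under fixed-effect inverse-variance pooling."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)                # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # I^2 in percent
    return q, i2

# Hypothetical effect estimates and standard errors (illustration only).
effects = [-0.40, -0.25, -0.55, -0.30, -0.45, -0.60, -0.20]
ses     = [ 0.20,  0.15,  0.25,  0.18,  0.22,  0.30,  0.12]

q, i2 = heterogeneity(effects, ses)
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")

# Leave-one-out sensitivity analysis: recompute I^2 omitting each study in turn.
for i in range(len(effects)):
    _, i2_i = heterogeneity(effects[:i] + effects[i + 1:], ses[:i] + ses[i + 1:])
    print(f"omitting study {i + 1}: I^2 = {i2_i:.1f}%")
```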
Third, although the results are statistically significant, they are not clinically relevant: the differences are very small and would not affect any decision-making.
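To make the distinction concrete with purely hypothetical numbers: a trivially small difference can reach conventional statistical significance once the pooled sample is large enough, while remaining far below any minimal clinically important difference.

```python
import math

# Hypothetical example: a 0.1 mL difference in semen volume between two groups.
mean_diff = 0.1          # mL; far smaller than any plausible clinically important difference
sd = 1.0                 # common standard deviation (mL), assumed
n_per_group = 2000       # large pooled sample

se = sd * math.sqrt(2.0 / n_per_group)                                # SE of the difference
z = mean_diff / se
p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))     # two-sided normal p-value

print(f"z = {z:.2f}, p = {p:.4f}")   # statistically significant, yet clinically negligible
```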
Fourth, the funnel plot presented to investigate publication bias is not recommended given the limited number of included studies; with so few studies, asymmetry tests have too little power to distinguish chance from real asymmetry. Any conclusion drawn from it is therefore not valid.
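For illustration, the sketch below implements Egger's regression test, a common formal test of funnel-plot asymmetry, on hypothetical data; with only seven studies (five residual degrees of freedom) such a test carries almost no power, which is precisely the problem.

```python
import numpy as np

def eggers_regression(effects, ses):
    """Egger's test sketch: regress the standardized effect (effect/SE) on precision (1/SE).
    An intercept far from zero suggests funnel-plot asymmetry."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    y = effects / ses                      # standardized effects
    x = 1.0 / ses                          # precision
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    df = len(y) - 2
    cov = (resid @ resid / df) * np.linalg.inv(X.T @ X)
    intercept, se_intercept = beta[0], np.sqrt(cov[0, 0])
    return intercept, se_intercept, intercept / se_intercept, df

# Hypothetical effects/SEs; with n = 7 studies (df = 5) the test is badly underpowered,
# so neither a "significant" nor a "non-significant" result is informative.
effects = [-0.40, -0.25, -0.55, -0.30, -0.45, -0.60, -0.20]
ses     = [ 0.20,  0.15,  0.25,  0.18,  0.22,  0.30,  0.12]

b0, se_b0, t_stat, df = eggers_regression(effects, ses)
print(f"Egger intercept = {b0:.2f} (SE {se_b0:.2f}), t = {t_stat:.2f}, df = {df}")
```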
In conclusion, there is a major concern that future studies will be designed on the basis of this review without appraising it critically. The point of a systematic review is to map the field, synthesize effects, and present the best available evidence to support decision-making and the planning of future studies.