Experimental and observational studies
Combining information from reviews of experimental and observational studies is problematic, as the methodological differences have consequences for the interpretation of internal and external validity. For example, confounding bias is prevented by design in a well-performed randomised trial but must be adjusted for in the statistical analysis of an observational study. This issue should be addressed in more detail. See also Faber T, Ravaud P, Riveros C, Perrodeau E, Dechartres A. Meta-analyses including non-randomized studies of therapeutic interventions: a methodological review. BMC Medical Research Methodology 2016;16:35. DOI: 10.1186/s12874-016-0136-0.
Random effects meta-analysis
The statistical analysis includes both fixed-effect and random-effects models. In contrast to a fixed-effect model, which estimates a common effect, a random-effects model estimates an average effect, and the variability of the effects around that average may have clinical implications. This variability can be discussed using a prediction interval; see Riley RD, Higgins JPT, Deeks JJ. Research Methods & Reporting: Interpretation of random effects meta-analyses. Br Med J 2011;342:d549.
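To illustrate the point for the authors, a prediction interval can be computed directly from the study estimates and variances. The sketch below is hypothetical (not the authors' analysis): it uses DerSimonian-Laird estimation of the between-study variance and the approximate 95% prediction interval of Riley, Higgins and Deeks, which is based on a t-distribution with k − 2 degrees of freedom. The function name and the toy data are my own.

```python
# Illustrative sketch only: DerSimonian-Laird random-effects meta-analysis
# with the approximate 95% prediction interval of Riley, Higgins & Deeks
# (BMJ 2011;342:d549). Data and function name are hypothetical.
import math
from scipy.stats import t


def random_effects_with_pi(estimates, variances, alpha=0.05):
    """Return (pooled estimate, confidence interval, prediction interval)."""
    k = len(estimates)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # DL between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    ci = (mu - 1.96 * se, mu + 1.96 * se)                  # CI for the average effect
    tcrit = t.ppf(1 - alpha / 2, df=k - 2)                 # t with k-2 df per Riley et al.
    half = tcrit * math.sqrt(tau2 + se ** 2)               # adds between-study spread
    pi = (mu - half, mu + half)
    return mu, ci, pi


# Toy log-odds-ratio data from five hypothetical studies
mu, ci, pi = random_effects_with_pi(
    [0.2, 0.5, -0.1, 0.4, 0.3],
    [0.04, 0.05, 0.06, 0.04, 0.05],
)
```

The key contrast is that the confidence interval describes uncertainty about the average effect, whereas the prediction interval (which adds the between-study variance τ²) describes where the effect in a new study is likely to fall, and is therefore always at least as wide.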
The authors claim that their study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. PRISMA is, however, a reporting guideline (an evidence-based minimum set of items for reporting systematic reviews and meta-analyses), not a guideline on how to conduct such studies. I recommend rephrasing the sentence.