Robin looks for meta-analysis alternatives 1: JamoviMeta.

Meta-analyses. So much meta, many analyses. I’ve done a few, two are under review, and two almost ready for submission. The red thread in all this is the Comprehensive Meta-Analysis (CMA) meta-analysis software package. CMA has brought the practice of meta-analysis (or ‘an exercise in mega-silliness‘, as Eysenck called it) to a broader audience because of its relative ease of use. The downside of this relative ease of use is the unbridled proliferation of biased meta-analyses that serve only to ‘prove’ something works, but let’s not get into that – my blood pressure is high enough as it is.

Some years back, CMA changed from one-off purchases to an annual subscription plan, ranging from $195-$895 per year per user, obviously taking hints from other lucrative subscription-based plans (I’m looking at you, Office365). Moreover, CMA has a number of very irritating bugs and glitches: just to name a few, there are issues with copying and pasting data, ‘high resolution’ graphics exports that produce nothing but a black screen, issues with system locale, etc. etc. On the whole, CMA is a bit cumbersome and expensive to work with, and I’ve been telling myself to go and learn R for years now; if anything to use the Metafor package, which is widely regarded as excellent.

Would I like some cheese with my whine?

However, I never found the time to take up the learning curve needed for R (i.e., I’m too stupid and lazy), and while recently whining on Twitter about how someone (most definitely not me) should make a graphical front-end for R that doesn’t presuppose advanced degrees in computer science, voodoo black arts and advanced nerdery, Wolfgang Viechtbauer pointed me to JamoviMeta.

In my quest to find a suitable alternative to CMA that even full-on unapologetic troglodytes like me can understand – let’s give it a test drive!

DISCLAIMER: Most of the time I have no idea what I’m doing, as will be readily apparent to any expert after even a cursory glance.

INSTALLING AND FIRST GLANCE

I was redirected to a github page, which instructed me to first download Jamovi, and add the module MetaModel.jmo.

Never heard of Jamovi before, but let’s give it a shot – the installer seems straightforward, MetaModel is an add-on for the Jamovi software package, which is itself a fairly new initiative at an “open” statistics package. I’m not entirely sure if Jamovi itself is an add-on to R, but at this point that’s not particularly relevant for what I want to do.

The main screen of Jamovi looks simple, clean and friendly. Now, to ‘sideload’ MetaModel. There’s nothing in the menu, so: click Modules, choose sideload, find the downloaded MetaModel.jmo and import it.

ENTERING DATA

JamoviMeta main window

It’s not immediately apparent where I should start – the boxes with labels like “Group one sample size” look inviting as text boxes, but entering information doesn’t work. Using the horizontal arrow to shift the 3 bubbles with “A” on the left panel to the right doesn’t work and flashes the little yellow ruler(?) in the text box which isn’t a text box.

Entering variables (note how the dialogue box resembles SPSS).

The grey arrow pointing to the right brings me to a spreadsheet-like… Well, spreadsheet. Ah! The A, B, C refer to columns in this spreadsheet, and the software’s expecting data as you’d expect: study name, sample size, mean, standard deviations. Jamovi seems to automatically recognise the type of data I’ve entered, but also seems thrown off by my use of a comma instead of a period. Incidentally, this is/was a major issue with CMA, which depends on your computer’s ‘locale’ settings – if you’re from a country that uses dots for thousands and commas for decimals (e.g., €10.000,00) and you send a data file to a colleague who has US numbering (e.g., $10,000.00), the data would be all screwed up. Adding variable labels isn’t immediately apparent either, but double-clicking a column header and then double-clicking the letter of the column lets you change the label.

Variable labels & type window
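For the record, the locale headache itself is easy to work around before importing: strip the thousands separators and swap the decimal comma for a point. A minimal sketch in Python (the function name is my own; this assumes the European ‘dot for thousands, comma for decimals’ convention):

```python
def parse_euro_number(s: str) -> float:
    """Parse a European-format number like '10.000,00' into a float.

    Assumes '.' is the thousands separator and ',' the decimal mark,
    so we drop the dots first and then turn the comma into a point.
    """
    return float(s.replace(".", "").replace(",", "."))

print(parse_euro_number("10.000,00"))  # -> 10000.0
```

Run this over a data file before pasting it into any meta-analysis package and the locale mismatch stops mattering.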

Having entered the data, I go back to “Analyse”, and try to enter my newly made data into MetaModel. Everything works, except… It won’t accept the sample sizes for my data. When I try to, it flashes the yellow ruler (?) in red – Ah, this probably means it wants continuous data, but the sample sizes had been interpreted as ordinal data as denoted by the three bubbles (same icons as in SPSS).

This being corrected, MetaModel goes straight to work (apparently), and tells me “Need to specify ‘vi’ or ‘sei’ argument”. Well obviously. More random clicking is in order, I think – that’s never failed me, since psychology students are taught to keep clicking until the window says p<0.05 or smaller*). I’ve only just entered data, and haven’t actually told MetaModel what to do so it’s no surprise that nothing works.

I flip open ‘Model options’, ‘plots’ and ‘publication bias’.

…I quickly close ‘publication bias’ again, as it only shows options for Fail-safe N. Let us never mention Fail-safe N again, and I hope the developer removes this option ASAP. I am aware of the current discussion of how Trim & Fill probably doesn’t work very well either (nor does anything else, apart from 3PSM, apparently), but I think everyone can agree that Fail-safe N should never be used.

Clicking around a bit (I won’t go into all the different types of meta-analysis model estimators), I find out that I have to choose either ‘Raw Mean Difference’ or ‘Log Transformed Ratio of Means’ to make the “Need to specify ‘vi’ or ‘sei’ argument” message go away. Not sure what this is about. However, all this looks encouraging, and it’s time for real data.
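As far as I can tell, the “‘vi’ or ‘sei’” message is the underlying model asking for a sampling variance (vi) or standard error (sei) for each study’s effect size; choosing an effect size measure lets the software compute these itself. For a raw mean difference the computation is standard textbook stuff – a quick sketch of my own (not MetaModel’s actual code):

```python
def raw_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Raw mean difference between two groups, plus its vi and sei.

    The sampling variance of a difference of two independent means is
    the sum of the variances of the two means: sd1^2/n1 + sd2^2/n2.
    """
    yi = m1 - m2                     # the effect size itself
    vi = sd1**2 / n1 + sd2**2 / n2   # sampling variance ('vi')
    sei = vi ** 0.5                  # standard error ('sei')
    return yi, vi, sei
```

So once the software knows the measure, it can derive vi/sei from the means, SDs and sample sizes you entered, and the error goes away.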

I prepared a small data file in CMA, based on a meta-analysis we’re currently working on, using Excel as an intermediary, as CMA’s data import/export capabilities are non-existent and I needed to change all decimal commas to decimal points; then I copy-pasted the data into MetaModel. Small issue: there’s no fixed column for subgroups within studies (or maybe I’m just doing it wrong), so I renamed the studies to Kok 2014 A, B, etc.

JamoviMeta data window

CMA data window

THE ANALYSES

However, running the analyses from here on was straightforward, easy and quick. The results are pretty much consistent with CMA (I used a DerSimonian-Laird model estimator, which I believe is the CMA default). I saw no strange differences or outliers, apart from a few (not particularly large) differences in effect sizes; I take it CMA and MetaModel each make slightly different assumptions in their calculations, which would explain the small variations. Kendall’s tau was even spot on.

MetaModel main results

CMA main results (click for bigger image)
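For the curious: the DerSimonian-Laird estimator both packages offer here is simple enough to sketch in a few lines. This is my own toy illustration of the standard formulas (not CMA’s or MetaModel’s actual code), which also throws in the I² heterogeneity statistic mentioned further down:

```python
def dersimonian_laird(yi, vi):
    """DerSimonian-Laird random-effects meta-analysis.

    yi: list of study effect sizes; vi: their sampling variances.
    Returns (pooled estimate, tau^2, I^2 as a percentage).
    """
    wi = [1 / v for v in vi]                                   # fixed-effect weights
    fixed = sum(w * y for w, y in zip(wi, yi)) / sum(wi)       # fixed-effect mean
    q = sum(w * (y - fixed) ** 2 for w, y in zip(wi, yi))      # Cochran's Q
    df = len(yi) - 1
    c = sum(wi) - sum(w ** 2 for w in wi) / sum(wi)
    tau2 = max(0.0, (q - df) / c)                              # between-study variance
    wr = [1 / (v + tau2) for v in vi]                          # random-effects weights
    pooled = sum(w * y for w, y in zip(wr, yi)) / sum(wr)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0        # I^2 (%)
    return pooled, tau2, i2
```

Different packages layer their own choices on top (which effect size measure, continuity corrections, confidence interval methods), which is presumably where the small discrepancies between CMA and MetaModel come from.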

EXPORTING OUTPUT

MetaModel has tackled one of my biggest gripes with CMA: high-quality images. CMA’s so-called ‘high resolution’ outputs have always been quirky, ugly and too low in resolution for most journals, as it would only export to Word (ugh), Powerpoint (really?) and .WMF (WTF?). In MetaModel, right-clicking e.g. the funnel plot gives you the option

Right-click graphics export options

to export the image to a high-quality PDF which looks crisp and clear (download sample PDFs of the MetaModel funnel plot and MetaModel Forest plot here).

MetaModel forest plot

CMA “high resolution” forest plot

MetaModel funnel plot

CMA funnel plot (with imputed studies)

THE VERDICT:

If this is a ‘beta’, it looks and works better than OpenMetaAnalyst ever did (although to be fair, I should revisit that some time). The developer (Kyle Hamilton) has done an impressive job in coding a relatively simple but very usable module for meta-analysis. It is lightyears faster than CMA (which can crawl to a virtual standstill on my i3 laptop) and can output high-quality graphics. Also, it does real-time analyses, so there’s no need to keep mashing that “-> Run analyses” button after making small changes. Choosing Jamovi as a front-end was a good bet – its interface looks friendly, modern and crisp. Of course, features are missing and this was just a very quick test run, but my first impression is very good. I’d very much like to see where this is going.

THE GOOD:

  • Pretty much MWAM (Moron Without A Manual) proof.
  • Feels much more modern than CMA. Looks better. MUCH faster.
  • More model estimators than CMA.
  • Contour-enhanced funnel plots and prediction intervals. Nice addition.
  • So far, no glitches or crashes.
  • It’s free!

THE BAD:

  • Hover-over hints (contextual information if you hover over a button) would be nice
  • Error messages aren’t especially helpful

THE UGLY:

  • Fail-safe N.

THE REQUESTS:

  • Modern alternatives for publication bias, e.g. p-curve, p-uniform, PET(-PEESE) or 3PSM.
  • 95% CIs around I²
  • Support for multiple subgroups and timepoints?

*) Only a slight exaggeration: this is what students teach themselves.

Good heavens, my h-index is still irrelevant.

My H-index rose from 6 (“HaaaLOSER“) to 7 (“mind-numbingly tedious and uninteresting“). At some point this year maybe it’ll rise to 8 – “Like a fully gorged woodlouse penis“.

Meanwhile, here’s a silly comparison about the parallels between being in a band vs. being in academia. Good to know that if one failing career fails me, I can always go back to another failing career that failed me.

High-resolution Risk of Bias assessment graph… in Excel!

Some years ago, I found myself ranting and raving at the RevMan software kit, which is the official Cochrane Collaboration software suite for doing systematic reviews. Unfortunately, either because I’m an idiot or because the software is an idiot (possibly both), I found it impossible to export a Risk of Bias assessment graph at a resolution that was even remotely acceptable to journals. These days journals tend to accept only vector-based graphics or bitmap images in HUGE resolutions (presumably so they can scale these down to unreadable smudges embedded in a .pdf). At that time I had a number of meta-analyses on my hands so I decided to recreate the RevMan-style risk of bias assessment graph, but in Excel. This way anyone can make crisp-looking risk of bias assessment graphs at a resolution higher than 16dpi (or whatever pre-1990 graphics resolution RevMan appears to use…)

The sheet is relatively easy to use, just follow the embedded instructions. You need (1) the percentages from your own risk of bias assessment and (2) basic colouring skills that I’m sure you picked up before the age of 3. All you basically do to make the risk of bias assessment graph is colour it in using Excel. It does involve a bit of fiddling with column and row heights and widths, but it gives you nice graphs like these:

Sample Risk of Bias assessment graph

Sample Risk of Bias Graph

Like anything I ever do, this comes with absolutely no guarantee of any kind, so don’t blame me if this Excel file blows up your computer, kills your pets, unleashes the Zombie Apocalypse or makes Jason Donovan record a new album.


Download available here (licensed under Creative Commons BY-SA):

Risk of Bias Graph in Excel – v2.6

MD5: 1FF2E1EED7BFD1B9D209E408924B059F
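If you want to check your download against the MD5 above, you can hash the file yourself; for example, in Python with the standard library (the filename here is just a placeholder for whatever you saved the sheet as):

```python
import hashlib

def md5_of_file(path: str) -> str:
    """Return the uppercase MD5 hex digest of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest().upper()

# Compare against the checksum listed above, e.g.:
# md5_of_file("RiskOfBiasGraph-v2.6.xlsx") == "1FF2E1EED7BFD1B9D209E408924B059F"
```

If the digests match, the download arrived intact (MD5 is fine for integrity checks like this, though not for security purposes).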

Changelog:

UPDATE November 2017 – I only just noticed that the first criterion says “Random sequence allocation” where it should of course say “generation“. Version 2.6 fixes this.

UPDATE January 2017 – another friendly person noted that I’m an idiot and hadn’t fixed the column formatting problem in the full Cochrane version of the Excel. Will I ever learn? Probably not. Version 2.5 corrects this (and undoubtedly introduces new awful bugs).

UPDATE September 2016 – a friendly e-mailer noted that the sheet was protected to disallow column formatting (which makes the thing useless). Version 2.4 corrects this.

eMental Health interview with VGCt [Dutch]

Nothing like an interview on eMental Health to make you feel important

I’m still reeling from the festivities surrounding my H-index increase from 3 (“aggressively mediocre“) to 4 (“impressively flaccid but with mounting tumescence“)*. Best gift I got: a sad, weary stare from my colleagues. Yay! But back to eMental Health (booooo hisssss).

Some while back I did an interview (in Dutch) with Anja Greeven from the Dutch Association for Cognitive Behavioural Therapy [Vereniging voor Gedragstherapie en Cognitieve Therapie] for their Science Update newsletter in December 2015. It’s about life, the universe and everything; but mostly about eHealth and eMental Health; implementation (or lack thereof), wishful thinking, perverse incentives (you have a filthy mind) and that robot therapist we’ve all been dreaming about (sorry, Alan Turing).

Kudos to me for the wonderful contradiction where I call everyone predicting the future a liar and a charlatan; after which I blithely shoot myself in the foot by trying to predict the future. In my defense, I never claimed I wasn’t a liar and a charlatan. It was great fun blathering on about all kinds of things, and massive respect to Anja who had to wade through a 2-hour recording of my irritating voice to find things that might pass as making sense to someone, presumably.

Anyway, the interview is in Dutch, so good luck Google Translating it!


Link to the VGCt interview in .pdf [Dutch]

*) Real proper technical sciencey descriptions for these numbers, actually. The views expressed in this interview are my own; and nobody I know or work for would ever endorse the silly incoherent drivel I’ve put forward in this interview.

Corrected JMIR citation style for Mendeley desktop

Endnooooooooo!te.

100 out of 100 academics agree that working with Endnote is about as enjoyable as putting your genitals through a rusty meat grinder while listening to Justin Bieber’s greatest hits at full blast and being waterboarded with liquid pig shit. I’ve spent countless hours trying to salvage the broken mess that Endnote leaves and have even lost thousands of carefully cleaned and de-duplicated references for a systematic review due to a completely moronic ‘database corruption’ that was unrecoverable.

Thankfully, there is an excellent alternative in the free, open source (FOSS) form of Mendeley Desktop, available for Windows, OS X, iToys and even Linux (yay!).

One of the big advantages of Mendeley over Endnote, apart from it not looking like the interface from a 1980s fax machine, is the ability to add, customise and share your own citation styles in the .csl (basically xml/Zotero) markup. While finishing my last revised paper I found out that the shared .csl file for the Journal of Medical Internet Research (a staple journal for my niche) is quite off, throwing random, unnecessary fields into the bibliography that do not conform to JMIR’s instructions for authors.

The online repository of Mendeley is pretty wonky and the visual editor isn’t too user-friendly, so I busted out some seriously nerdy h4xx0rz-skillz (which chiefly involved pressing backspace a lot).

Get it.

Well, with some judicious hacking, I present to you a fixed JMIR .csl file for Mendeley (and probably Zotero, too). Download the JMIR .csl HERE (probably need to click ‘save as’, as your browser will try to display the xml stream). It’s got more than a few rough edges but it works for the moment. Maybe I’ll update it some time.

According to the original file, credits mostly go out to Michael Berkowitz, Sebastian Karcher and Matt Tracy. And a bit of me. And a bit of being licensed under a Creative Commons Attribution-ShareAlike 3.0 License. Don’t forget to set the Journal Abbreviation Style correctly in the Mendeley user interface.

Oh, I also have a Mendeley profile. Which may or may not be interesting. I’ve never looked at it. Tell me if there’s anything interesting there. So, TL;DR: Mendeley is FOSS (Free Open Source Software), Endnote is POSS (Piece of Shit Software).

Update: A friendly blogger from Zoteromusings informed me in the comments that I was wrong: Mendeley is indeed not FOSS but just free to use, and not open source. Endnote is still a piece of shit, though. I was right about that 😉

Academic journal recognition, not of a very savoury kind

You know an academic field has come to full maturity when the null results come rolling in. But you really know an academic discipline has come to maturity when the first shady academic journals pop up. Enter “E-Health Telecommunication Systems and Networks” from Scientific Research Publishing (love their description “An Academic Publisher“!).

Much publish. Very journal. So impact factor. Wow

An invitation mass-mail just popped into my spam e-mail box alongside invitations to submit to other “rapid peer review and publishing” journals, with names that range from could-be reputable (‘British Journal of Medicine and Medical Research’) to downright weird. The basic premise of these journals is well-known to me as a musician: the much-maligned ‘pay-to-play’ principle has many a musician grunting (ha!) and complaining – it looks like the pay-to-publish racket is not very different.

Should you send an eHealth-related manuscript to these people? Is Scientific Research Publishing a predatory journal? Who knows. But the word on the virtual streets is not good. And a company publishing 200+ English-language journals in just over 6 years is ambitious, to say the least. Oh yes, and they gladly accepted a randomly generated maths paper (read: total bullshit) for publication after a mere 10 days of review – only the lead ‘author’ didn’t pony up the processing charges.

Long story short, if you want to publish your eHealth-related manuscript in one of my 666+ Bottom Of The Barrel Research Publishing Co. journals, just send me the manuscript and $2000. I’ll personally do some ‘peer-reviewing’. Or do it the boring old way and send your eHealth-related manuscript to reputable academic journals like the Journal of Medical Internet Research, Internet Interventions or the Journal of Telemedicine and Telecare.