# good machine learning in medicine

<time id="post-date">2022-12-29</time>

<p id="post-excerpt">
Because most machine learning in medicine sucks, I thought you might like to see an example of it done well.
</p>

<https://doi.org/10.1056/CAT.22.0214>

At least on the surface, this looks good.
The article is paywalled so I haven't gotten to dig into their methods yet,
but will update when I am able to read the whole thing.
The approach is the thing I'm keyed in on -
implementation details of course matter,
but the biggest problem with machine learning in medicine is not technique, but angle.
Too often, ML is a new shiny with no meat on its bones
(I'm not discounting some of the delightful advances in, e.g., the automated reading of ophthalmologic images,
that's kickass tech and truly practice-changing,
but it does amount to image processing.
The stuff I'm talking about here is not a smarter camera,
but automated gestalt handed to you by a mechanical Turk).


## ML is mostly for automation, but automating medicine is scary


The big idea is that
while machine learning is great for many things,
the most important thing in industrial machine learning is
teaching a machine to make independent decisions,
so things are easier for humans who already have too much cognitive load,
or not enough hands,
and a pesky attachment to oxygen, food, and sleep.
E.g. if there's a knob someone needs to turn when a certain complex set of things happens,
and sometimes the human forgets because they were occupied with all the switches and buttons instead,
or they were on lunch,
or sleeping off the Super Bowl party,
it sure would be nice to make an algorithm
that can independently assess whether the complex condition has been sufficiently met,
and go ahead and turn the knob for you
(and maybe, while it's at it, get you a few ibuprofen for that hangover headache).

In medicine we usually don't want a bot making independent decisions,
but this paper from UPMC is a great example of the kind of independent decision we could stomach, or even welcome.


## build a model that is high-yield and low-risk


This system builds a mortality model, which is fun in itself,
but then goes to the next level to automate an e-consult to the palliative care team
for patients at the highest risk of mortality after discharge.

(Avati et al. did the same basic thing at Stanford using a neural network,
which is a fine technology,
but their explanatory models were these shitty,
ugly text-based tables that make you want to stab yourself in the eyes - <https://doi.org/10.1186%2Fs12911-018-0677-8>).
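
To make the shape of that workflow concrete, here's a minimal, hypothetical sketch in Python - not UPMC's actual implementation, and the threshold, field names, and statuses are all made up: the mortality model scores patients nearing discharge, and anyone above a risk cutoff gets an e-consult queued that a human still triages.

```python
# Hypothetical sketch only: score -> threshold -> queue e-consult for human triage.
from dataclasses import dataclass

RISK_THRESHOLD = 0.30  # illustrative cutoff, not from the paper


@dataclass
class DischargingPatient:
    mrn: str
    predicted_mortality_risk: float  # output of the mortality model, 0-1


def queue_palliative_econsults(patients: list[DischargingPatient]) -> list[dict]:
    """Queue e-consults for high-risk patients; nothing is ordered without a human."""
    return [
        {
            "mrn": p.mrn,
            "risk": round(p.predicted_mortality_risk, 2),
            "status": "pending_palliative_care_triage",
        }
        for p in patients
        if p.predicted_mortality_risk >= RISK_THRESHOLD
    ]


if __name__ == "__main__":
    roster = [DischargingPatient("A123", 0.12), DischargingPatient("B456", 0.41)]
    print(queue_palliative_econsults(roster))  # only B456 gets queued
```

The point of the sketch is the division of labor: the model only nominates, a human disposes.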

It's beautiful.
Nothing bad bubbles up if the algorithm falls down altogether (we're just back where we were before the model went live).
Nothing horrible happens if the algorithm makes a weird claim (every consult will still be triaged through a human).
The possible positives are numerous.
The conversation itself may be one of the most pivotal in the patient's end-of-life journey,
the hospital system will likely see reduced readmissions for cases that should be managed at home with hospice,
and we will have more data to put toward identifying the modifiable risk factors for early post-discharge death.

This team used the same tech I used when I built a mortality model for CCF:
my favorite kind of algorithm,
tree-based models called gradient-boosting machines (GBMs).
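
For the curious, here's roughly what training one of these looks like - a minimal sketch with synthetic data and made-up feature names, using scikit-learn's histogram-based GBM as a stand-in for whatever implementation UPMC or I actually used:

```python
# Hypothetical sketch: train a gradient-boosting machine on tabular discharge data
# to predict post-discharge mortality. All data and feature names are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "albumin": rng.normal(3.5, 0.6, n),
    "prior_admissions_12mo": rng.poisson(1.2, n),
    "metastatic_cancer": rng.integers(0, 2, n),
})
# Fake outcome: death within 90 days of discharge, generated from a made-up risk formula.
logit = (-3 + 0.04 * X["age"] - 0.8 * X["albumin"]
         + 0.5 * X["prior_admissions_12mo"] + 1.5 * X["metastatic_cancer"])
y = rng.binomial(1, (1 / (1 + np.exp(-logit))).to_numpy())

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = HistGradientBoostingClassifier(max_iter=300, learning_rate=0.05)
model.fit(X_train, y_train)
print("test AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```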


## interpretability - gotta do it, maybe they did


What I can't see yet is whether they took the next obvious step,
which is to apply interpretability models on top.
The main reason to use a GBM,
in my mind, other than that they're fast to train compared to neural networks
and perform just as well if there's enough data and you tune them properly,
is that they're inherently compatible with the best meta-models that allow you to interrogate,
at both a per-prediction and a cohort level,
why the model is saying whatever it's saying -
they're actually less of a black box than many standard statistical models,
believe it or not.

The best tool for doing this is called SHAP,
and the output is gorgeous - [check it out](https://shap.readthedocs.io/en/latest/example_notebooks/overviews/An%20introduction%20to%20explainable%20AI%20with%20Shapley%20values.html).
(We used it here, I think to lovely effect: <https://www.nature.com/articles/s41746-020-0249-z>,
and it's only gotten better since then.)
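
If you want to see what that looks like in practice, here's a minimal, hypothetical sketch - synthetic data, made-up feature names, XGBoost standing in for whichever GBM implementation a given team uses - of pulling both the cohort-level and per-patient views out of SHAP:

```python
# Hypothetical sketch: SHAP on a tree-based model, cohort-level and per-prediction views.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 2_000
X = pd.DataFrame({
    "age": rng.integers(18, 95, n).astype(float),
    "albumin": rng.normal(3.5, 0.6, n),
    "prior_admissions_12mo": rng.poisson(1.2, n).astype(float),
})
y = rng.binomial(1, 0.1, n)  # fake mortality labels, for illustration only

model = XGBClassifier(
    n_estimators=200, max_depth=3, learning_rate=0.05, eval_metric="logloss"
).fit(X, y)

# TreeExplainer computes exact Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)

shap.plots.beeswarm(shap_values)      # cohort level: which features push risk up or down
shap.plots.waterfall(shap_values[0])  # per prediction: why the model said what it said for one patient
```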

The other thing I love about pairing the interpretability models
with the predictive models
is that now you have something you can really learn from, and, hence, teach from.


## but, will this worsen burnout?


The main issue (given that the model works and has been proven trustworthy),
and one I don't think I've heard anyone talk about in great depth,
is the new alert fatigue this kind of system would create
for already overworked palliative care teams,
and what mitigations are in place to keep the firehose of new possible consults manageable.
One thing we could do,
and I have faith our house staff could do it well,
would be to implement the same system and have it first trigger an alert to the primary team,
with a recommendation to have the convo and reach out to pall care if there is any hint of a loose end,
or an automated pivot to pall care if the notes don't document goals of care (GOC) within a certain number of days of the alert
(or it could pop up one of those boxes we love so... much, with a radio button asking you if you've had "the talk" yet).
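
In code, that escalation is just a small decision rule - here's a hypothetical sketch, with the seven-day window and the action names invented purely for illustration:

```python
# Hypothetical sketch of the escalation logic: alert the primary team first,
# then pivot to an automatic palliative care e-consult if goals of care go undocumented.
from datetime import date, timedelta

GOC_DOCUMENTATION_WINDOW = timedelta(days=7)  # made-up window, not a recommendation


def next_action(alert_date: date, goc_note_dates: list[date], today: date) -> str:
    """Decide the system's next step for one flagged patient."""
    if any(alert_date <= d <= today for d in goc_note_dates):
        return "none"  # a goals-of-care conversation is already documented
    if today - alert_date < GOC_DOCUMENTATION_WINDOW:
        return "remind_primary_team"
    return "auto_econsult_palliative_care"  # still triaged by a human when it lands


if __name__ == "__main__":
    print(next_action(date(2022, 12, 1), [], date(2022, 12, 10)))  # -> auto_econsult_palliative_care
```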

Anywho,
I'm not saying I want to actually do this project,
I've got other stuff going on,
but if you're reading this you're the kind of person who is interested in what tech can do for a hospital system,
and this is a model (ha) combination of the very cutting edge of tech
and the oldest technique we have,
which is to offer a hand to the dying and with them face the abyss.
My vision of the future is less Skynet, cold and isolated,
and more humans at the forefront,
with machines that run in the background to nudge us into, and help make room for, more
(non-digital, non-Apple-mediated) facetime.