<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" type="text/css" href="/style.css">
<link rel="icon" href="data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2290%22>🏕️</text></svg>">
<title>good machine learning in medicine</title>
</head>
<body>
<div id="page-wrapper">
<div id="header" role="banner">
<header class="banner">
<div id="banner-text">
<span class="banner-title"><a href="/">beauhilton</a></span>
</div>
</header>
<nav>
<a href="/about">about</a>
<a href="/now">now</a>
<a href="/thanks">thanks</a>
<a class="nav-active" href="/posts">posts</a>
<a href="https://notes.beauhilton.com">notes</a>
<a href="https://talks.beauhilton.com">talks</a>
<a href="https://git.beauhilton.com">git</a>
<a href="/contact">contact</a>
<a href="/atom.xml">rss</a>
</nav>
</div>
<main>
<h1>
good machine learning in medicine
</h1>
<p>
<time id="post-date">2022-12-29</time>
</p>
<p id="post-excerpt">
Because most machine learning in medicine sucks, I thought you might like to see an example of it done well.
</p>
<p>
<a href="https://doi.org/10.1056/CAT.22.0214">https://doi.org/10.1056/CAT.22.0214</a>
</p>
<p>
At least on the surface, this looks good. The article is paywalled, so I haven’t gotten to dig into their methods yet, but I will update when I’m able to read the whole thing. The approach is the thing I’m keyed in on - implementation details of course matter, but the biggest problem with machine learning in medicine is not technique, but angle. Too often, ML is a new shiny with no meat on its bones (I’m not discounting some of the delightful advances in e.g.
automatic reads of ophthalmologic images - that’s kickass tech and truly practice-changing, but it does amount to image processing. The stuff I’m talking about here is not a smarter camera, but automated gestalt handed to you by a mechanical Turk).
</p>
<h2>
ML is mostly for automation, but automating medicine is scary
</h2>
<p>
The big idea: machine learning is great for many things, but the most important thing in industrial machine learning is teaching a machine to make independent decisions, to take load off humans who already have too much cognitive burden, not enough hands, and a pesky attachment to oxygen, food, and sleep. For example: if there’s a knob someone needs to turn when a certain complex set of things happens, and sometimes the human forgets because they were occupied with all the switches and buttons instead, or they were at lunch, or sleeping off the Super Bowl party, it sure would be nice to have an algorithm that can independently assess whether the complex condition has been sufficiently met, and go ahead and turn the knob for you (and maybe, while it’s at it, get you a few ibuprofen for that hangover headache).
</p>
<p>
In medicine we usually don’t want a bot making independent decisions, but this paper from UPMC is a great example of the kind of independent decision we could stomach, or even welcome.
</p>
<h2>
build a model that is high-yield and low-risk
</h2>
<p>
This system builds a mortality model, which is fun in itself, but then goes to the next level: it automates an e-consult to the palliative care team for patients at highest risk of mortality after discharge.
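</p>
<p>
To make the shape of that pipeline concrete, here’s a minimal sketch of the trigger logic as I understand it from the abstract - the threshold, field names, and helper functions are all hypothetical, not from the paper:
</p>

```python
# Hypothetical sketch of the trigger described above: a mortality model
# scores each patient, scores above a cutoff draft a palliative-care
# e-consult, and a human still triages every drafted order.
# The threshold and all names are made up for illustration.

RISK_THRESHOLD = 0.30  # hypothetical cutoff for "highest risk"

def should_generate_econsult(risk_score, already_followed_by_pall_care=False):
    """Return True if an automated e-consult order should be drafted."""
    if already_followed_by_pall_care:
        return False  # don't duplicate an existing consult
    return risk_score >= RISK_THRESHOLD

def route_econsult(patient_id, risk_score):
    """Draft the e-consult; note it still waits on human triage."""
    return {
        "patient_id": patient_id,
        "risk_score": risk_score,
        "order": "palliative care e-consult",
        "status": "pending human triage",  # the key safety property
    }

# A patient scoring above the cutoff gets a drafted, human-reviewed order.
if should_generate_econsult(0.42):
    order = route_econsult("pt-001", 0.42)
```

<p>
The point of the sketch is the failure modes: if the model never fires, nothing is drafted; if it fires wrongly, a human catches it at triage.
</p>
<p>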
</p>
<p>
(Avati et al. did the same basic thing at Stanford using a neural network, which is a fine technology, but their explanatory models were these shitty, ugly text-based tables that make you want to stab yourself in the eyes - <a href="https://doi.org/10.1186%2Fs12911-018-0677-8">https://doi.org/10.1186%2Fs12911-018-0677-8</a>.)
</p>
<p>
It’s beautiful. Nothing bad bubbles up if the algorithm falls down altogether (we’re just back where we were before the model went live). Nothing horrible happens if the algorithm makes a weird claim (every consult will still be triaged through a human). The possible positives are numerous. The conversation itself may be one of the most pivotal in the patient’s end-of-life journey, the hospital system will likely see reduced readmissions for cases that should be managed at home with hospice, and we will have more data to put towards identifying the modifiable risk factors for early post-discharge death.
</p>
<p>
This team used the same tech I did when I built a mortality model for CCF: my favorite kind of algorithm, tree-based models called gradient-boosting machines (GBMs).
</p>
<h2>
interpretability - gotta do it, maybe they did
</h2>
<p>
What I can’t see yet is whether they took the next obvious step, which is to apply interpretability models on top. The main reason to use a GBM, in my mind - other than that GBMs are fast to train compared to neural networks, and perform just as well if there’s enough data and you tune them properly - is that they’re inherently compatible with the best meta-models, which let you interrogate, at both the per-prediction and cohort level, why the model is saying whatever it’s saying. They’re actually less of a black box than many standard statistical models, believe it or not.
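</p>
<p>
The meta-models in question are Shapley-value explainers. As a toy illustration of the underlying idea (this is the concept, not the production library, and the risk model and its numbers are invented): a feature’s attribution is its marginal contribution to the model’s output, averaged over every ordering in which features could be revealed.
</p>

```python
# Toy illustration of the Shapley-value idea behind SHAP-style explainers
# (not the shap library itself): average each feature's marginal
# contribution over all orderings of the features.
from itertools import permutations

def exact_shapley(model, features):
    """Exact Shapley values by brute force (fine for a handful of features)."""
    phi = {f: 0.0 for f in features}
    count = 0
    for order in permutations(features):
        present = set()
        prev = model(present)
        for f in order:
            present.add(f)
            cur = model(present)
            phi[f] += cur - prev  # marginal contribution of f in this order
            prev = cur
        count += 1
    return {f: v / count for f, v in phi.items()}

# Hypothetical toy mortality-risk model over binary risk factors:
# baseline 0.05, additive effects, plus one interaction term.
def toy_risk(present):
    risk = 0.05
    if "metastatic_cancer" in present:
        risk += 0.20
    if "recent_icu_stay" in present:
        risk += 0.10
    if "metastatic_cancer" in present and "recent_icu_stay" in present:
        risk += 0.05  # interaction: worse together than the sum of parts
    return risk

phi = exact_shapley(toy_risk, ["metastatic_cancer", "recent_icu_stay"])
# Efficiency property: the attributions sum to model(all) - model(none),
# so the per-patient explanation always accounts for the full prediction.
```

<p>
The brute force above is exponential in the feature count; the appeal of tree-based models is that tree-specific algorithms compute these same values efficiently, which is exactly the GBM-plus-meta-model pairing described here.
</p>
<p>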
</p>
<p>
The best tool for doing this is called SHAP, and the output is gorgeous - <a href="https://shap.readthedocs.io/en/latest/example_notebooks/overviews/An%20introduction%20to%20explainable%20AI%20with%20Shapley%20values.html">check it out</a>. We used it here, I think to lovely effect - <a href="https://www.nature.com/articles/s41746-020-0249-z">https://www.nature.com/articles/s41746-020-0249-z</a> - and it’s only gotten better since then.
</p>
<p>
The other thing I love about pairing the interpretability models with the predictive models is that now you have something you can really learn from, and, hence, teach from.
</p>
<h2>
but, will this worsen burnout?
</h2>
<p>
The main issue (given that the model works and has been proven trustworthy), and one I don’t think I’ve heard anyone talk about in great depth, is the new alert fatigue this kind of system would create for already overworked palliative care teams, and what mitigations they are taking to keep the firehose of new possible consults manageable. One thing we could do, and I have faith our house staff could do it well, would be to implement the same system but have it first trigger an alert to the primary team, with a recommendation to have the convo and reach out to pall care if there is any hint of a loose end, and an automated pivot to pall care if the notes don’t document GOC within a certain number of days of the alert (or it could pop up one of those boxes we love so… much, with a radio button asking whether you’ve had “the talk” yet).
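</p>
<p>
That escalation boils down to a small decision function. A sketch, with the grace period and every name being hypothetical placeholders rather than anything from the paper:
</p>

```python
# Hypothetical sketch of the escalation described above: the alert goes to
# the primary team first, and only pivots to palliative care if goals-of-care
# (GOC) documentation hasn't appeared within a grace period. The window and
# the strings are invented for illustration.

GOC_GRACE_PERIOD_DAYS = 7  # hypothetical window for the primary team to act

def next_action(days_since_alert, goc_documented):
    """Decide who, if anyone, gets nudged next for this patient."""
    if goc_documented:
        return "no action"  # the conversation happened; consult burden avoided
    if days_since_alert >= GOC_GRACE_PERIOD_DAYS:
        return "route e-consult to palliative care"
    return "remind primary team"
```

<p>
The design choice doing the work is that palliative care only ever sees the residue - patients whose primary teams didn’t, or couldn’t, close the loop - which is what keeps the firehose at a trickle.
</p>
<p>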
</p>
<p>
Anywho, I’m not saying I want to actually do this project - I’ve got other stuff going on - but if you’re reading this, you’re the kind of person who is interested in what tech can do for a hospital system, and this is a model (ha) combination of the very cutting edge of tech and the oldest technique we have, which is to offer a hand to the dying and with them face the abyss. My vision of the future is less Skynet, cold and isolated, and more humans at the forefront, with machines that run in the background to nudge us into, and help make room for, more (non-digital, non-Apple-mediated) facetime.
</p>
</main>
<div id="footnotes"></div>
<footer></footer>
</div>
</body>
</html>