Monoarticular Arthritis, Questions

1.  When do you tap a painful, swollen joint? When do you obtain imaging before arthrocentesis?

2.  In a patient with monoarticular arthritis, do you send any serum labs such as WBC, ESR, or CRP? How do they guide your management?

3.  Which synovial fluid studies do you send in order to help make the diagnosis? Which of these rule out septic arthritis?

4.  Do you inject the joint with any medication for symptomatic relief? If so, which medication?



GI Imaging, “Answers”

1. When do you get abdominal plain films before CT in suspected SBO?
2. How do plain films guide your management in patients with suspected intraperitoneal free air?

With advances in radiologic technology and the increased availability of CT, ultrasound, and MRI, the contemporary role of plain abdominal radiographs (AXR) in the evaluation of acute abdominal pain is poorly defined (Hampson, 2010). A broad spectrum of indications is listed by the American College of Radiology. Even for these, however, the accuracy of AXR is notoriously low, and it is rarely the ideal first-line imaging study. A prospective study of patients presenting to the emergency department with non-traumatic abdominal pain estimated the overall sensitivity, specificity, and accuracy of an AXR series for all pathology to be 30%, 87.8%, and 56%, respectively (MacKersie, 2005). A retrospective review by Kellow, et al. found that 72% of “normal” AXRs and 78% of “nonspecific” AXRs actually had pathology on follow-up imaging (Kellow, 2008).

Several recent studies have looked at the utility of obtaining AXR, including appropriate uses, diagnostic significance, and whether this imaging modality affects management. One study of 861 patients estimated that only 3% of AXRs obtained significantly impacted management (Kellow, 2008).

Two diagnoses for which abdominal radiography is still commonly used are small bowel obstruction (SBO) and pneumoperitoneum.

1. Small bowel obstruction

Despite being one of the few abdominal pathologies with distinct plain film abnormalities, findings of obstruction on AXR are difficult to interpret. Markus, et al. found that inter-observer agreement (by kappa values) between radiologists for the diagnosis of SBO was only “fair to good.” Agreement was only “poor to fair” for determining large bowel obstruction and for the location or completeness of SBO (Markus, 1989).

Several studies estimate AXR sensitivity at 45-90% and specificity at approximately 50% for diagnosing SBO. CT, in comparison, has a reported sensitivity of 93% and specificity of 100% (Suri, 1999; Frager, 1994). In patients with suspected SBO, Maglinte reports that AXR yields an accurate diagnosis in 50-60%, indifferent or nonspecific findings in 20-30%, and a misleading read in 10-20%. For partial SBO, sensitivity falls to 30% for AXR (Frager, 1994), versus around 60% for CT (Maglinte, 1996).

Despite these data, bowel obstruction remains one of the most common indications for AXR in the ED evaluation of abdominal pain. It is important to understand if, when, and how these images should affect patient management (Kellow, 2008).

A large study found that the addition of AXR to clinical assessment in ED evaluation of abdominal pain significantly increased the sensitivity of clinical diagnosis from 57% to 74%; the positive predictive value, however, was not significantly changed. Moreover, the addition of radiographs in suspected obstruction did not significantly change ED physicians’ initial diagnosis or confidence in their diagnosis (Van Randen, 2011).

In addition to its poor diagnostic accuracy, AXR (unlike CT) cannot distinguish partial from complete obstruction, determine a transition point, or identify the cause, information vital to clinical management and surgical planning. Thus, despite increasing diagnostic sensitivity, AXR is unlikely to be sufficient to preclude further imaging. In a retrospective study, the majority (53%) of patients with dilated loops of bowel on AXR deemed “significant” proceeded to CT scan (Jackson, 2001). Of these 47 patients, 9 had CTs without evidence of obstruction, contributing to the authors’ conclusion that the yield of initial AXR in SBO was low (Jackson, 2001). In another review, only 5% of AXRs performed to evaluate obstruction confirmed the diagnosis and were managed without further imaging (Kellow, 2008). Again, the majority of all abnormal AXRs were followed by CT. In one study of patients with suspected acute SBO, CT corrected erroneous diagnoses and management in 21% of cases (Taourel, 1995).

MacKersie found nonenhanced CT to be more sensitive and specific than a three-view AXR series. In equipped facilities, the time to obtain a noncontrast CT should be comparable to that of a three-view AXR, suggesting that if time is critical, the test of choice is the one with diagnostic superiority (MacKersie, 2005). Given that the majority of suspected obstructions, regardless of AXR outcome, are followed by CT, radiation is rarely spared; instead, the patient ends up with a greater exposure than if the more definitive test had been used initially.

The reasoning behind initial AXR for suspected SBO likely falls into one of two clinical scenarios. In the first, clinical suspicion for obstruction is low and AXR acts to confirm or support a negative diagnosis. In the second, clinical suspicion is high and AXR is obtained in hopes of expediting disposition (surgical consult, operative intervention, etc.) while saving the patient time and radiation.

In both scenarios AXR is not ideal. In the first, AXR may provide a false sense of security, as its sensitivity is too low to comfortably rule out obstruction, especially early or partial. In the second scenario, AXR may support clinical suspicion, but rarely provides enough evidence to dictate management, and may actually delay treatment. Our surgical colleagues typically still want a CT even with a markedly positive AXR.

Bottom Line: AXR has a limited role in bowel obstruction. It may be useful in patients with recent surgery or known bowel adhesions, who are likely to be taken to the OR for obstruction with an already known etiology, or who are too unstable to go to CT (Jackson, 2001).

2. Pneumoperitoneum

Intraabdominal free air seen on plain radiographs has been used to dictate surgical intervention for decades. Several studies have examined the accuracy of AXR in detecting free intraperitoneal air secondary to perforated viscus, with varied results; sensitivities ranging from 15-83% have been reported (Gans, 2012). Common causes of this variability include the adequacy of films, the amount of air present, and the use of proper positioning techniques.

Miller and Nelson’s 1971 paper demonstrated the importance of patient positioning and compared different radiographic views. The study aimed to find the best technique to detect extraluminal air by injecting subjects with small volumes of air intraperitoneally at McBurney’s point, followed by radiographic evaluation. The highest sensitivity came from the following sequence: first, 10-20 minutes in the left lateral decubitus position, followed by AXR; second, careful placement into an upright position for 10 minutes, then CXR (AP or PA) and upright AXR; third, recumbency and supine AXR (Miller, 1971). This technique theoretically supports the movement of air to below the right hemidiaphragm, avoiding the superimposed gastric bubble on the left. Using this technique, it was reported “possible to consistently demonstrate as little as 1cc [of air] under the right hemidiaphragm” (Markus, 1989).

For a patient with peritonitis, transport and positioning are difficult, so utilizing the imaging views with the best diagnostic yield is crucial. One study of free air detection across AXR views reported the accuracy of left lateral decubitus, upright, and supine films to be 96%, 60%, and 56%, respectively (Roh, 1983). Despite their lower accuracy, supine films are often the easiest to obtain in an unstable patient. Detection of pneumoperitoneum on supine films requires significantly more extraluminal air than other views; the most frequent findings include Rigler’s sign (gas on both sides of the bowel wall) and linear or triangular right upper quadrant gas (Levine, 1991).

Upright CXR has repeatedly been demonstrated to be superior to upright AXR in free air detection (Flak, 1993; Miller, 1971), with a reported sensitivity of 85% (Gans, 2012). Although AP or PA CXR is commonly utilized, upright lateral CXR may have better sensitivity, as noted in a small retrospective review (Markowitz, 1986). Field, et al. questioned the utility of the erect AXR, claiming it added nothing to the upright CXR and supine AXR. CXR has the additional benefit of identifying diagnostically significant chest pathology (Field, 1985).

CT has revolutionized evaluation of the acute abdomen, with several advantages over plain film including better sensitivity and higher accuracy. In a study of trauma patients after introduction of intraperitoneal air by diagnostic peritoneal lavage (DPL), upright CXR was only 38% sensitive, missing all patients with minimal air and most with moderate free air; CT within 24 hours of DPL was 100% sensitive for free air (Stapakis, 1992). CT can also identify contained perforation and localize the site of perforation in a majority of cases, guiding management and surgical intervention (Mindelzun, 1997).

A recent study of the value of plain radiographs in abdominal pain found that among patients with confirmed perforated viscus, the sensitivity of initial AXR was only 15% (Van Randen, 2011). Of thirteen perforations, four were contained and not visible on AXR. The addition of AXR to clinical assessment did not significantly increase the sensitivity or positive predictive value, nor did it significantly change the suspected diagnosis.

Bottom Line: While AXR performs better in detecting free air than it does for detecting other pathologies, its diagnostic use is technique-dependent, and it is insufficient to rule out perforated viscus in patients with moderate to high clinical suspicion. For patients too unstable to be taken to CT, upright CXR should be the test of choice for emergent detection of free intraperitoneal air; supine or left lateral decubitus AXR may be of limited benefit.

3. When do you order a CT scan in the work up of pancreatitis?

In acute pancreatitis (AP) the diagnosis of disease, identification of a treatable cause, and determination of disease severity are important parts of evaluation. CT scanning can theoretically aid in all of these.

AP is most commonly diagnosed by the presence of at least two of the following three criteria: characteristic abdominal pain (constant upper abdominal pain with radiation to the back), elevated amylase/lipase levels (>3 times the upper limit of normal), and consistent findings on imaging (Tenner, 2013). When history and labs clearly indicate AP, CT is unlikely to add important information. However, abdominal pain and symptoms of AP may be atypical, and amylase and lipase have limited sensitivity and specificity for AP: both may be elevated in other causes of abdominal pain such as appendicitis, cholecystitis, and bowel ischemia. Contrast-enhanced CT has been shown to have greater than 90% sensitivity and specificity for the diagnosis of AP (Balthazar, 2002). Additionally, it provides the advantage of simultaneously ruling out other causes of abdominal pain.
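The two-of-three diagnostic rule above can be expressed as a simple predicate; this is a minimal sketch, with function and argument names of my own choosing rather than from any guideline text:

```python
def meets_ap_criteria(characteristic_pain: bool,
                      lipase_over_3x_uln: bool,
                      imaging_consistent: bool) -> bool:
    """Acute pancreatitis is diagnosed when at least two of the
    three criteria are present (Tenner, 2013)."""
    return sum([characteristic_pain, lipase_over_3x_uln, imaging_consistent]) >= 2

# Typical pain plus lipase >3x the upper limit of normal meets
# criteria without any imaging, which is the point of the paragraph:
print(meets_ap_criteria(True, True, False))   # True
print(meets_ap_criteria(True, False, False))  # False
```

The rule is deliberately symmetric: imaging is just one of three interchangeable criteria, so CT adds nothing diagnostically when the other two are already present.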

Identifying the cause of pancreatitis may be crucial in guiding management. Gallstones are the leading cause of pancreatitis, and abdominal biliary ultrasound is recommended for all patients with undifferentiated AP to evaluate for gallstones (Tenner, 2013). However, ultrasound is limited in its evaluation of distal stones. Contrast CT can visualize evidence of obstruction such as biliary dilatation; however, it is only moderately sensitive for detecting gallstones and biliary stones (Anderson, 2006; Anderson, 2008). While contrast CT and MRI are comparable studies for early assessment of AP, MR adds sensitivity in detecting choledocholithiasis and pancreatic duct disruption (Macari, 2010). MRCP, endoscopic ultrasound, or ERCP should be considered when biliary obstruction is strongly suspected.

Because mortality increases significantly with severity, early prediction of severe disease is important for proper management and disposition, but may be difficult on initial presentation to the ED. Severity scoring systems such as Ranson’s criteria are generally less accurate within the first 48 hours of disease and have been widely criticized by intensivists. Even APACHE II is only 75% sensitive on presentation (Osvaldt, 2001). AP severity is now separated into three categories after the 2012 revision of the Atlanta classification (Banks, 2012). Mild AP is the absence of organ failure or local complications, with expected improvement within 48 hours. Moderately severe AP includes local complications and/or <48 hours of organ failure. Severe AP is defined only by persistent organ failure >48 hours.
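The three Atlanta categories reduce to two questions, the duration of organ failure and the presence of local complications; a simplified sketch (names are illustrative, and real classification obviously requires clinical judgment):

```python
def atlanta_severity(organ_failure_hours: float,
                     local_complications: bool) -> str:
    """Simplified 2012 revised Atlanta classification (Banks, 2012):
    severe = persistent organ failure >48 h; moderately severe =
    transient organ failure and/or local complications; mild = neither."""
    if organ_failure_hours > 48:
        return "severe"
    if organ_failure_hours > 0 or local_complications:
        return "moderately severe"
    return "mild"

print(atlanta_severity(0, False))   # mild
print(atlanta_severity(24, False))  # moderately severe
print(atlanta_severity(72, False))  # severe
```

Note that local complications never make a case "severe" on their own: only persistent organ failure does, which is why early CT findings rarely change acute management.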

Two phases of disease are recognized as peaks of mortality: early (<1 week from symptom onset) and late (>1 week). The early phase is characterized by the systemic inflammatory response syndrome (SIRS); morbidity and mortality reflect the presence of end-organ failure (defined as SBP <90 mmHg, creatinine >2 mg/dL, PaO2 <60 mmHg, or GI bleeding >500cc/24hr). In this phase, management is based on disease presentation rather than imaging, as findings on contrast CT often underestimate disease severity and rarely prompt urgent intervention. The later phase connotes development of local complications (including peripancreatic fluid collections, necrosis, and pseudocysts) and/or persistent SIRS. Infected pancreatic necrosis is associated with significant morbidity and may, like other local complications, require intervention. Contrast CT is an effective technique to detect and characterize these complications. More than four days from symptom onset and with proper technique, contrast CT is reported to identify necrosis with 87% accuracy, 50-100% sensitivity (depending on whether the patient has minor or extended areas of necrosis, respectively), and specificity nearing 100% (Balthazar, 2002).

Most cases of AP are mild and will clinically improve with supportive care by 48 hours. In more severe cases, local complications, including necrosis, typically will not be present on initial presentation and are likely not clinically important in the first week of symptoms. Given this, recent guidelines published in the American Journal of Gastroenterology regarding management of AP state that contrast CT is not recommended as part of the routine evaluation of AP. These guidelines recommend that contrast CT or MRI be limited to those in whom the diagnosis is unclear on initial presentation, those who fail to improve after 2-3 days, or those who exhibit acute decline, in order to evaluate for local complications (Tenner, 2013).

Bottom Line: Contrast CT on initial presentation of acute pancreatitis does not routinely contribute to management and should be reserved for those who have not improved after 2-3 days or have decompensated.

Bonus: Do you use dextrose-containing IV fluids in the resuscitation of kids with vomiting?

Gastroenteritis in children is a frequent cause of ED visits. Dehydration and carbohydrate depletion from vomiting, diarrhea, and poor oral intake lead to decreased tissue perfusion, anaerobic metabolism, and glucagon release. Glucagon, in turn, promotes breakdown of glycogen with resultant ketogenesis. Ketones contribute to metabolic acidosis, which has been associated with oral intolerance and shown to be predictive of hospital admission (Friedman, 2005; Gorelick, 2013). Ketones themselves are postulated to be associated with persistent nausea, vomiting, and anorexia (Wheless, 2001). The idea of using dextrose-containing fluids in these dehydrated, ketotic patients makes physiologic sense: giving carbohydrate stimulates insulin production, suppresses glucagon, stops lipolysis, resolves ketosis, and improves oral intake. Several studies have looked at the effects of dextrose-containing solutions versus fluids without dextrose in relatively small sample populations. Two small studies showed, as one might expect, increased blood glucose in the treatment arm but no significant clinical benefits (Juca, 2005). Rahman, et al., in a randomized controlled trial of 67 children, showed no adverse outcomes with dextrose-containing solution, noting comparable urine output, suggesting osmotic diuresis did not occur in the treatment arm (Rahman, 1988).

Levy and Bachur performed a non-blinded retrospective case control study in which they found a significant inverse association between the amount of IV dextrose received on initial visit for acute gastroenteritis with dehydration and return visits with admission (Levy, 2007).

Levy, et al., in a double-blind randomized controlled trial, looked at the effects of an initial bolus of normal saline (NS) versus D5NS in children with dehydration from gastroenteritis (Levy, 2013). A significantly greater decrease in serum ketones was seen in the treatment group at 1 and 2 hours. A trend toward a lower admission rate was seen (9% absolute risk reduction in the treatment arm); however, no statistically significant decrease in hospitalization was found. Of the discharged patients reached by phone for follow-up, a trend toward more unscheduled medical care was seen in the normal saline control group. This trend was exaggerated when analyzing only the discharged patients who had had an acidosis. Further research to increase the power of this study may help determine whether these trends are meaningful and should influence general practice.

Bottom Line: Despite the logic behind the addition of dextrose to IVF in gastroenteritis-associated dehydration with ketosis, no compelling evidence exists yet to show its association with an improvement in clinically significant outcomes.


GI Imaging, Questions

Our second GI topic for April:

1. When do you get abdominal plain films before CT in suspected small bowel obstruction?

2. How do plain films guide your management in patients with suspected intraperitoneal free air?

3. When do you order a CT scan in the work up of pancreatitis?

Bonus: Do you use dextrose-containing IV fluids in the resuscitation of kids with vomiting?



Upper GI Bleeding, “Answers”

1. Do you use a PPI infusion in patients with undifferentiated UGIB?

Acute upper gastrointestinal hemorrhage (UGIB) is a potentially life-threatening condition with a number of etiologies. It is defined as bleeding from any lesion proximal to the ligament of Treitz, including Mallory-Weiss tears, Boerhaave’s syndrome, esophageal varices, and arteriovenous malformations. The most common cause of UGIB, however, is peptic ulcer disease (PUD) (Lau, 2007). Proton pump inhibitors (PPIs) were first investigated for use in patients with PUD. PPIs decrease the production and secretion of gastric acid by irreversibly blocking the hydrogen/potassium ATPase in gastric parietal cells. Gastric acid has been shown in in vitro experiments to impair clot formation, promote fibrinolysis, and impair platelet aggregation (Chaimoff, 1978; Green, 1978). In theory, then, inhibition of gastric acid would allow the pH to rise, promoting clot stability and decreasing the likelihood of rebleeding. However, this benefit is only theoretical: the goal of raising the gastric pH above 6 has not been shown to be a reliable proxy for treatment efficacy (Gralnek, 2008).

Intravenous PPIs have become standard care post-endoscopy and post-operatively to prevent rebleeding in patients with PUD (Gralnek, 2008). Recent meta-analyses show that PPIs decrease the rate of rebleeding and surgical intervention in patients with PUD after endoscopic intervention. A pooled analysis of 16 randomized controlled trials found that a bolus dose of PPI followed by an infusion is more effective than bolus dosing alone for reducing the rebleed rate and the need for surgery, leading to the current recommendation (Morgan, 2002). However, this study clearly states: “Intravenous proton pump inhibitors appear to be useful in the prevention of rebleeding in patients with acute peptic ulcer bleeding that has been successfully treated with endoscopic hemostasis.”

A meta-analysis of 24 randomized trials (4373 patients) from the Cochrane group reached similar conclusions (Leontiadis, 2004). In patients with PUD, PPI treatment reduced rebleeding (NNT = 15), surgical intervention (NNT = 32), and repeat endoscopy (NNT = 10). However, they found no change in mortality (OR = 1.01). Overall, outcomes were modest: PPIs prevented rebleeds in 6.6% of patients, surgical interventions in 3.2% of patients, and repeat endoscopy in 10% of patients. Interestingly, the Cochrane group did separate analyses of Western and Asian populations. Trials conducted in Asia demonstrated benefits of PPI infusions in peptic ulcer disease in terms of mortality (NNT = 34), rebleeding (NNT = 6), and surgical intervention (NNT = 23). Conversely, Western patients showed a suggestion of increased harm in PPI groups, although this was not statistically significant.
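The NNTs quoted above follow directly from the absolute event-rate reductions via the standard relationship NNT = 1/ARR; a quick illustration using the Cochrane figures from the paragraph:

```python
def nnt(absolute_risk_reduction: float) -> float:
    """Number needed to treat is the reciprocal of the
    absolute risk reduction (ARR)."""
    return 1.0 / absolute_risk_reduction

# PPIs prevented rebleeding in 6.6% of PUD patients (ARR = 0.066),
# which is the quoted NNT of 15:
print(round(nnt(0.066)))  # 15

# Repeat endoscopy was prevented in 10% of patients (ARR = 0.10):
print(round(nnt(0.10)))   # 10
```

Framing the results as NNTs makes the modesty of the effect concrete: fifteen patients must receive a PPI infusion to prevent one rebleed.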

So, the current literature suggests PPI infusions in patients with known PUD offer only marginal benefits overall and possible harm in certain populations. What about in undifferentiated UGIB? Are PPIs beneficial in undifferentiated UGIB? Are they beneficial when given prior to endoscopy?

Fortunately, a Cochrane review published in 2010 helps to address these questions (Sreedharan, 2010). This group found six randomized controlled trials (RCTs) relevant to the question. Of these, four compared PPI to either placebo (Daneshmend, 1992; Hawkey, 2001; Lau, 2007) or no treatment (Naumovski, 2005), and the other two compared PPI to H2 blockers. The studies comparing PPI to placebo (n = 1983) are the most relevant to our question.

The Cochrane review found no difference in mortality comparing PPI to placebo (OR 1.19, 95% CI 0.75-1.68), no difference in rebleeding within 30 days (OR 0.87, 95% CI 0.66-1.16), and no difference in surgery within 30 days (OR 0.90, 95% CI 0.64-1.27). One of the limitations of this review was that PPI treatment was not the same in all studies. Only the Lau study compared a PPI bolus and infusion with placebo. In this trial, the authors reported that fewer patients in the PPI arm required intervention during endoscopy. However, there were no differences in any patient-oriented outcome (death, rebleeding, or surgery within 30 days).

Bottom Line:

–PPI treatment in undifferentiated UGIB does not appear to decrease any clinically important effects including rebleeding, need for surgery, or death.

–PPI treatment prior to endoscopy for undifferentiated UGIB decreases the number of patients who require an endoscopic therapy during endoscopy.

2. Do you use octreotide in patients with bleeding varices?

There is little in the world of Emergency Medicine that gets a clinician’s pulse racing as much as the massive upper GI bleeder with a history of esophageal or gastric varices. These patients have high morbidity and mortality rates even with aggressive ED management, transfusion, intensive care, and gastroenterology consultation. Six-week mortality is estimated at 11-20% (Dell’Era, 2008). Patients often are aware that they have varices, which aids in guiding treatment. However, in patients with no prior diagnosis or those too sick to communicate, establishing the presence of varices may be more difficult.

Octreotide is a somatostatin analog that acts by decreasing blood flow into the portal circulation, thus decreasing portal pressure, particularly postprandial flow (Abraldes, 2002). It is widely used for the treatment of variceal bleeding. In clinical trials, octreotide has only been noted to be beneficial when paired with endoscopic therapy (Dell’Era, 2008). A meta-analysis in 2001 demonstrated an improved efficacy of endoscopic therapy in terms of early rebleeding when octreotide was given concomitantly (Corley, 2001). The study found an NNT of 6 when compared to placebo for rebleeding and transfusion requirement. The reduction in transfusion was modest (about 0.7 units of PRBCs). Additionally, no study found any reduction in mortality or overall rebleeding (only benefit in early rebleeding) (Corley, 2001; Dell’Era, 2008; Longacre, 2006). A more recent randomized, placebo controlled study found that sclerotherapy plus octreotide was equal to sclerotherapy plus placebo in terms of 7-day mortality, rebleeding, transfusion requirements, and ICU stay (Morales, 2007).

In 2008, the Cochrane group performed a systematic review of all randomized trials looking at octreotide in the treatment of varices (Gøtzsche, 2008). The group found 21 randomized trials of octreotide versus placebo (or no treatment). They concluded that the use of octreotide did not reduce mortality. It did reduce the amount transfused by about half a unit of blood in the studies with a low risk of bias (1.5 units in those with a high risk of bias), but this result was not thought to be clinically significant by a number of commentators. They found no difference in rebleed rates in the low-risk of bias studies (but a substantial reduction in the high-risk of bias studies). The Cochrane group did find a lower rate of failed initial hemostasis in the octreotide treatment group.

Overall, it seems the incremental benefit of octreotide in addition to endoscopic therapy for variceal bleeding is seen only in surrogate, non-patient-oriented outcomes. No study, systematic review, or meta-analysis has shown a mortality benefit. Moreover, these data apply to patients known to have varices as the cause of their UGIB; there is no data on the use of octreotide in undifferentiated UGIB, nor a pathophysiologic basis for its use in these patients.

Bottom Line:

–Octreotide in combination with endoscopic therapy in patients with bleeding varices has not been shown to reduce rebleed rate or mortality, though it may lower the rate of failed initial hemostasis.

–Octreotide did show a modest reduction in blood transfusions required in this clinical scenario.

3. In which patients with UGIB do you place a nasogastric tube (NGT) and for what purpose (diagnostic vs. therapeutic)?

Historically, nasogastric intubation (NGI) in patients with suspected upper gastrointestinal bleeding (UGIB) has served multiple roles: therapeutic, diagnostic, and prognostic. However, its utility has been controversial for years. Given the severe discomfort of this basic procedure and its rare but real complications, scrutiny of the extant data is warranted to decide what, if any, benefit NGI provides.

In a patient with GI bleeding, distinguishing an upper from a lower source (i.e., bleeding proximal or distal to the ligament of Treitz) is essential for determining advanced management. Gross evaluation of nasogastric aspirate (NGA) is a common ED test for the presence of a proximal bleed. Hematemesis is virtually always the result of a briskly bleeding lesion proximal to the pylorus (Peura, 1997), making NGA unnecessary in the diagnostic evaluation of patients with hematemesis. In patients with melena or hematochezia without hematemesis, the source is not as straightforward, and NGA has been recommended for diagnosis. It stands to reason, then, that studies of the diagnostic utility of NGA should focus on patients without hematemesis. In one such study, the yield of positive (i.e., bloody) NGA was significantly lower than in prior studies that included subjects with and without hematemesis (Witting, 2004). This likely reflects that UGIB without hematemesis often results from either a slower bleed or one distal to the pylorus, such as a duodenal lesion; both scenarios are much less likely to yield a positive result with NGI. A technically adequate NGI should include duodenal aspirate, as evidenced by the presence of bile, but often does not. Although a positive NGA strongly predicted an upper GI lesion on endoscopy, with a positive likelihood ratio (LR) of 11 and a positive predictive value (PPV) of 92%, a negative NGA showed minimal utility, with an LR of 0.6 and NPV of 64% (Witting, 2004).
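Likelihood ratios operate on odds rather than probabilities, so applying the quoted LRs of 11 and 0.6 requires converting back and forth; a sketch of the standard calculation (the 50% pretest probability is an arbitrary illustration, not a figure from the study):

```python
def posttest_probability(pretest_prob: float, lr: float) -> float:
    """Convert probability to odds, apply the likelihood ratio,
    and convert back to a probability."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# With an assumed 50% pretest probability of an upper GI lesion:
print(f"{posttest_probability(0.5, 11):.3f}")   # 0.917 (positive NGA, LR+ 11)
print(f"{posttest_probability(0.5, 0.6):.3f}")  # 0.375 (negative NGA, LR- 0.6)
```

The asymmetry is the paragraph's point: a positive aspirate moves the probability a long way up, while a negative one barely moves it down, which is why a negative NGA cannot rule out an upper source.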

Other diagnostic models have been shown to have utility without incorporating NGI. Three strong independent risk factors (age <50, black stool, and BUN/creatinine ratio ≥30) were identified to predict UGIB in patients without hematemesis, and the model compared favorably to NGA. The absence of all three risk factors corresponded to a 5% risk of UGIB; of those with two or more risk factors, 93% had an UGIB (Witting, 2006).
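The Witting (2006) model amounts to counting risk factors; a minimal sketch with the thresholds as stated in the text (the function name and interface are illustrative, and this is not a validated calculator):

```python
def ugib_risk_factors(age: int, black_stool: bool, bun_cr_ratio: float) -> int:
    """Count the three independent predictors of UGIB in patients
    without hematemesis (Witting, 2006): age < 50, black stool,
    and BUN/creatinine ratio >= 30."""
    return sum([age < 50, black_stool, bun_cr_ratio >= 30])

# In the derivation study, zero factors corresponded to ~5% risk of
# UGIB, while 93% of patients with two or more factors had an UGIB.
print(ugib_risk_factors(45, True, 35))   # 3
print(ugib_risk_factors(60, False, 10))  # 0
```

A simple count like this can be applied from history and basic labs alone, which is exactly why it competes with, and here outperforms, an uncomfortable invasive test.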

Historically, NGI has also contributed to assessing prognosis, helping to risk-stratify patients, guide the type and timing of intervention, and influence disposition. The prognostic value of NGI has been explored in several studies. A positive NGA has been associated with higher mortality rates (Leung, 2004) and worse outcomes (Stollman, 1997). Coffee-ground or bloody NGA has been shown to strongly predict the presence of high-risk endoscopic lesions (HRL), defined as an active bleed or visible vessel (Perng, 1994). Bloody NGA demonstrated an increased association with HRL compared with clear or bilious NGA and with coffee-ground NGA, with odds ratios (OR) of 4.8 and 2.8, respectively (Aljebreen, 2004). For a test to be clinically useful, however, it must be sensitive enough to rule out the dangerous condition. If “positive NGA” refers only to gross blood, the PPV and NPV for HRL are 45% and 78%, respectively; if “positive NGA” includes all aspirate except clear or bilious, the PPV and NPV are 32% and 85%, respectively (Aljebreen, 2004). Therefore, even using the most inclusive definition of a positive NGA, this technique misses a significant portion of patients with high-risk lesions. If NGA is grossly positive, then such a lesion is likely. However, based on these data, a negative result has a poor negative likelihood ratio and lacks the sensitivity to rule out an HRL.

The ability to identify patients with HRL, lesions likely to rebleed, and those with higher mortality is important. These correlations, however, do not intrinsically translate to useful information for acute management decisions. A more clinically useful endpoint would differentiate those who would benefit from urgent endoscopy from those who may safely wait. Noninvasive prognostic scales, such as the Glasgow-Blatchford and pre-endoscopic Rockall scores, have been developed and validated to accurately predict which patients require intervention and which can safely wait (Barkun, 2010). The Glasgow-Blatchford score has sensitivity approaching 100%, allowing the provider to rule out significant bleeding (Chen, 2007; Srygley, 2012).

No one test or scoring system has yet been devised to stratify those who require emergent versus urgent endoscopy. Likewise, the benefit of rapid endoscopy (<24 hours) is not clear (Targownik, 2007; Spiegel, 2009). A 2010 prospective study showed a mortality benefit from endoscopy within thirteen hours of presentation in high-risk patients (based on the Glasgow-Blatchford score) (Chin, 2011). Although a frankly positive NGA highly predicts an HRL, the clinical picture of the patient is more likely to determine the rapidity with which endoscopy should be performed.

In terms of therapeutic use, the importance of adequate gastric lavage was highlighted in a retrospective study revealing increased morbidity and mortality associated with the inability to clear fundal blood during endoscopy (Stollman, 1997). The authors postulated that inadequate pre-endoscopy gastric lavage may be to blame for poor visualization and thus for worse outcomes. No data exist examining the impact of gastric lavage on the “fundal pool.” However, multiple adjunctive measures have been shown to be effective in clearing blood, such as endoscopic lavage or the use of promotility agents like metoclopramide or erythromycin (Leung, 2004). International consensus guidelines recommend the use of promotility agents only in the select group of patients in whom a significant amount of blood is anticipated (Barkun, 2010).

Looking at the overall impact of NGI on patients, Huang, et al. observed the outcomes of clinically matched patients with UGIB, comparing those who underwent NGI with those who did not. Although patients with NGI obtained endoscopy more quickly on average than those without NGI, no significant difference in length of hospital stay or 30-day mortality was found (Huang, 2011).

The diagnostic utility of NGI is limited. It adds no useful data in patients with hematemesis and is not sensitive in patients without. The ability of NGI to predict patients with high-risk lesions is good but, again, lacks the sensitivity to rule them out, proving inferior to some noninvasive prediction scores. NGA alone is not adequate to risk-stratify patients and change the urgency of obtaining endoscopy. The therapeutic value of NGI for pre-endoscopic gastric lavage is unclear and reasonable alternatives exist which spare patient discomfort. NGI has not been shown to improve patient outcomes.

Bottom Line:

–A negative NGI cannot rule out UGIB or the presence of high risk lesions

–The color of NGA is inadequate data on which to base management decisions

–NGI does not clearly improve endoscopic results or patient outcomes

4. What is the utility of fecal occult blood test for patients in whom we suspect UGIB?

Fecal occult blood testing (FOBT) is intended as a screening tool for lower GI malignancy but is commonly used in the emergency department as part of the work-up for suspected UGIB. Virtually all of the literature surrounding the test relates to its use in the outpatient setting. Without dedicated evidence to support the use of this test in the ED, we should understand the test itself and be thoughtful about when to use it and how to interpret it.

Different types of FOBT are available, and knowing the exact test used is important for proper interpretation. Non-guaiac-based FOBTs have increased specificity for lower GI bleeding (LGIB) because they use immunochemical assays to detect human hemoglobin rather than heme. They are not useful in the detection of UGIB, as most hemoglobin is digested in the small intestine and not present in rectal stool (Allison, 2007). Guaiac-based tests (gFOBT) detect heme by using it as a catalyst in the oxidation of the guaiac-impregnated card, producing an immediate blue color where heme is present. The result can be altered by a variety of substances that participate in this reaction: heme-containing red meat or peroxidase-containing foods (turnips, radishes) can produce a false positive, while vitamin C may produce a false negative due to its antioxidant effect. The sensitivity varies depending on the specific test used and increases with the amount of blood present. The Hemoccult II (a guaiac-based test) requires 10 mL of fecal blood loss per day (10 mg blood/gram of stool) for 50% sensitivity, but may be positive with <1 mg/g (Stroehlein, 1976). In contrast, melena production requires 50 mL of gastric blood, based on a study by Schiff, et al. in 1942. Despite the apparent simplicity of the gFOBT, it is user-dependent: in a survey of 173 medical providers, 12% did not accurately interpret results (Selinger, 2003).

To determine whether FOBT enhances our management of a patient with suspected UGIB, certain questions should be answered. Do the results of FOBT have diagnostic value in these patients? Is FOBT sensitive enough that a negative test can rule out UGIB? Does the false positive rate lead to significant undue intervention such that the risks of the test outweigh the benefits?

Stool color is among the best clinical predictors of UGIB. According to a literature review, if a patient reports melena, the likelihood of an UGIB is increased more than five-fold; if melena is found on exam, UGIB is 25 times more likely (Srygley, 2012). Black stools were shown to be 80% sensitive and 84% specific for an UGIB (Witting, 2006). Conversely, blood clots present in stool make UGIB 20 times less likely (Srygley, 2012). Given these opposing correlations with UGIB based on stool color, and since both black and red stools would theoretically be guaiac-positive, the role of gFOBT in either black or red stool would only be to distinguish blood clot from red food particles. The significance of guaiac-positive brown stool in a patient with a history concerning for UGIB, however, is not evident. Similarly enigmatic is the significance of guaiac-negative stool.
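The quoted figures for black stool (80% sensitive, 84% specific) can be converted into likelihood ratios with simple arithmetic. This worked example (ours, for illustration) shows why the positive finding is clinically useful while the negative one is not:

```python
def likelihood_ratios(sensitivity, specificity):
    """Compute positive and negative likelihood ratios from sens/spec."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Black stool for UGIB: 80% sensitive, 84% specific (Witting, 2006).
lr_pos, lr_neg = likelihood_ratios(0.80, 0.84)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")  # LR+ = 5.0, LR- = 0.24
# An LR+ of 5 matches the "five-fold" increase in likelihood cited above;
# an LR- of ~0.24 shifts probability too little to rule out UGIB on its own.
```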

One to two liters of ingested blood may cause melena for up to five days, starting approximately four to 20 hours after ingestion (Wilson, 1990). It can be inferred that guaiac-negative stool may occur in active UGIB if the blood-containing stool has not had sufficient time to reach the rectum, or if the bleeding has been intermittent and the sample obtained represents a non-bleeding interval. Although it is difficult to imagine a significant bleed ongoing for more than 24 hours failing to produce a positive result, a negative test cannot exclude the possibility.

The false positive rate of gFOBT in predicting acute UGIB is not known. A review article examined the utility of endoscopy to detect upper GI lesions in non-emergency patients with a positive screening FOBT and a negative colonoscopy. Of patients with a guaiac-positive screen, 37-53% had negative colonoscopies. Among these patients, endoscopy was positive for UGI cancer in <1%, positive for nonmalignant sources of bleeding in 11-21%, and showed incidental, likely unrelated, findings in 10-36%. The review did not stratify results for patients with anemia or other significant symptomatology, which would have been interesting for purposes of this discussion. Although these data do not apply to our patient demographic, they give some insight into the low specificity and low PPV (in this population) of guaiac-positive, non-melenic stool for predicting endoscopic pathology (Allard, 2010).

As with any diagnostic test, to have utility gFOBT must have reasonable sensitivity and specificity to avoid both missed diagnoses and overtreatment. While the sensitivity of gFOBT to detect blood is high, its ability to detect UGIB is unknown; therefore, it cannot be used to rule out UGIB if high clinical suspicion exists. The extremely low specificity poses the dilemma of what to do with guaiac-positive brown stool. Without a sufficient amount of blood to produce melenic stool, can a positive guaiac test be discounted as clinically insignificant in the ED, or does it commit the provider to pursuing medical and endoscopic management? Undoubtedly, the results of gFOBT alone should not dictate care, but the question remains whether occult blood testing should be obtained at all in the ED evaluation of UGIB. Unfortunately, using this test in a setting for which it was neither intended nor researched limits our ability to interpret its results and imparts the risks associated with misinterpretation.

Bottom Line

–The role of gFOBT in evaluating acute UGIB has not been sufficiently studied

–Stool color (black or red) has more diagnostic value than gFOBT results

–Positive gFOBT does not rule in UGIB and carries the potential risk of unnecessary treatment or procedures

–Negative gFOBT does not rule out UGIB and risks a false sense of security and under-treatment of true disease

Posted in Uncategorized | Tagged , , , , , , | 7 Comments

Upper GI Bleeding, Questions

1. Do you use a proton pump inhibitor (PPI) infusion in patients with undifferentiated upper GI bleeding (UGIB)?

2. Do you use octreotide in patients with bleeding varices?

3. In which patients with UGIB do you place a nasogastric tube and for what purpose (diagnostic vs. therapeutic)?

4. What is the utility of fecal occult blood test for patients in whom we suspect UGIB?

EML Upper GI Bleeding Questions


Abscess, “Answers”

1.) Do you routinely pack abscesses after I & D? If so, what is your endpoint (i.e., when do you stop re-packing, and when do you stop ED follow-up)?

Incision and drainage remains the cornerstone of therapy for simple cutaneous abscesses. The procedure entails administering analgesia/anesthesia, incising the abscess, probing to break up loculations, and (for some) irrigating the abscess pocket. Many physicians place gauze strips inside the abscess pocket to keep the cavity open, the thought being that the wick facilitates further drainage and prevents premature wound closure. Little high quality evidence exists in support of routine packing of abscesses after I & D, and packing may actually be harmful due to increased patient discomfort and increased need for follow-up visits.

One of the first pilot studies in the EM literature to evaluate packing of abscesses was a prospective, randomized, single blinded study which randomized 48 patients with simple cutaneous abscesses < 5 cm into packing versus no packing, and assessed pain scores and need for further intervention at 48 hour follow-up (O’Malley, 2009). Patients in the packing group reported higher pain scores and used more pain medication compared to the non-packing group, with no decrease in morbidity or requirement for further intervention. Though the study was small and only followed patients for 48 hours post-procedure, the data suggests that packing after I & D may be unnecessary for simple cutaneous abscesses < 5 cm. Further large-scale randomized studies are needed, and no recommendations can be inferred from this data for abscesses > 5cm.

Similar conclusions are seen in the pediatric literature. A randomized, single blinded, prospective study compared packing after I & D to no packing in 57 immunocompetent pediatric patients with abscesses > 1 cm (Kessler, 2012). Patients were randomized into two groups, and had follow-up at 48 hours to assess treatment failure, need for re-intervention, and pain scores. Phone interviews were conducted at 1 week and 1 month to assess abscess healing and recurrence. The study found similar rates of treatment failure/intervention, pain, and healing between the two groups.

Despite the lack of evidence regarding packing and follow-up, a recent study demonstrates that the majority of physicians still routinely pack abscesses (Schmitz, 2013). The authors analyzed results from 350 surveys of attending physicians, residents, and mid-level providers across 15 US emergency departments, and found that only 48% of providers routinely irrigated after I & D, and 91% packed abscess cavities after I & D. Follow-up visits were most often recommended at 48 hours unless the provider deemed the wound concerning enough for sooner follow-up.

Data pertaining to the follow-up care after an abscess is packed is lacking. Though no evidence exists to support the recommendation, general guidelines for abscess management suggest having the patient return within 48 hours for initial follow up, at which time the packing is either removed or changed. No evidence-based data exists to guide the duration or frequency of follow-up visits and packing changes, although it is important to advise patients to return for worsening symptoms.

2.) Do you ever use primary closure after abscess I & D? What about loop drainage?

Dating back to the 1950’s, several variations of procedural abscess management have been proposed, including primary closure after incision and drainage, and loop drainage. As opposed to secondary closure (whereby the tissue edges are left open to heal via secondary intention), primary closure entails placing sutures immediately after abscess intervention to approximate the opposing edges of the abscess pocket. This can be done using simple interrupted sutures, or more commonly by placing deeper mattress sutures in an attempt to obliterate the remaining cavity space. Primary closure is an attractive option based on the potential to speed healing, reduce pain, and improve scarring when compared to secondary closure.

Though the majority of the literature pertaining to primary versus secondary closure of abscesses comes from the surgical literature, some studies have been done in ED patient populations. Adam Singer has published two studies, the first a systematic review/meta-analysis pertaining mostly to surgical patients, followed by a randomized controlled trial in an ED patient population. The systematic review (Singer, 2011) searched Medline, PubMed, EMBASE, CINAHL, and the Cochrane database for articles from 1950-2009 and retrieved 7 randomized controlled trials, which collectively assessed 915 patients randomized to primary versus secondary closure. The objective of the study was to compare time to healing and recurrence rates between the two groups. The review found that primary closure resulted in faster healing (7.8 days versus 15 days) and a shorter time to return to work (4.1 days versus 14.6 days) compared to secondary closure, with similar rates of abscess recurrence and complications. The review was limited by the fact that most of the included trials were older, conducted before the outbreak of community-acquired MRSA in the 1990’s, so the results may not be applicable to today’s patient population. Also, the majority of patients received I & D under general anesthesia in the OR, and about half of the cases were abscesses in the anogenital region, which is not generalizable to the ED setting. It is plausible to hypothesize that breaking up loculations while the patient is under general anesthesia is much more effective than doing so during bedside I & D under local anesthesia, which may have contributed to the low rates of recurrence and complications in the primary closure group.

Two years later, Singer published a randomized controlled trial which was more specific to the emergency department patient population (Singer, 2013). The study included 56 immunocompetent patients in two diverse academic emergency departments, and randomized patients to primary versus secondary closure after I & D of abscesses < 5 cm. Patients were assessed at 48 hours and again at 7 days for degree of wound healing, complications/treatment failure, and patient satisfaction. Results showed similar healing at seven days, similar failure rates, and similar patient satisfaction scores between the two groups. The study demonstrated non-inferiority of primary closure compared to secondary closure. Without convincing evidence to suggest superiority of primary closure of abscesses in the ED at this time, more studies should be done before generalizing this technique to the ED patient population. However, there may be a role for primary closure, especially for abscesses in areas where cosmesis is of particular concern.

Another alternative to traditional I & D is loop drainage. The loop drainage procedure (Roberts, 2013) involves making a small stab incision (approximately 5 mm) at the “head” of the abscess (either the center or any area that is spontaneously draining), inserting a hemostat to break up loculations and explore the abscess cavity, manually expressing purulent drainage, irrigating, and then using the hemostat tip to “tent” the skin from beneath at the opposite end of the abscess pocket in order to guide placement of an additional stab incision at this point. A Penrose drain or silicone vessel loop is then placed into the tip of the hemostat, pulled through both incisions, and the ends tied loosely above the skin surface. Though official recommendations are lacking, most sources recommend keeping the drain in place for 7-10 days. The appealing aspects of loop drainage include no packing changes, only one follow-up visit to remove the loop, and the potential for improved healing, better cosmetic outcome, and decreased pain compared to standard I & D with packing.

Two studies (Ladd, 2010; Tsoraides, 2010) looked at loop drainage for cutaneous abscesses in children, performed in the operating room. The Ladd study was a retrospective review focusing on larger, complex abscesses (76% were MRSA), and found that the average duration of loop drainage was 8 days, with need for only one follow-up visit, and no recurrences, complications, or increased morbidity. The Tsoraides study also looked retrospectively at pediatric patients undergoing OR placement of loop drains for cutaneous abscesses, with similar results except that 5.5% of the 110 patients required re-operation. These studies were both limited by their retrospective nature, lack of a control group, and restriction to the pediatric population and the operating room setting. There are anecdotal reports of loop drainage being used successfully in emergency departments for both adult and pediatric patients, but data in this setting is lacking and further studies are needed to determine the efficacy and safety of loop drainage as an alternative to standard I & D.

3.) Which patients do you consider treating with antibiotics after I & D?

With the increasing incidence of community acquired MRSA (CA-MRSA), there has been a great deal of debate pertaining to optimal treatment strategies for simple cutaneous abscesses in ED patient populations, especially regarding the need for antibiotics after abscess I & D. Recent data suggests that for simple cutaneous abscesses, routine use of antibiotics is unnecessary. This position is further backed by recommendations of the Infectious Diseases Society of America and the Centers for Disease Control, which state that with the exception of severe, recurrent or persistent abscesses, I & D alone is sufficient for uncomplicated abscesses in immunocompetent hosts. Given the potential adverse effects of antibiotic overuse and misuse, including but not limited to allergic reactions, antibiotic associated diarrhea, and increased resistance, identification of scenarios in which antibiotic use is appropriate is of utmost importance.

The EM literature on this issue dates back to the 1980’s, when the first randomized controlled trial on the topic was published (Llera, 1985). The patient population included 50 immunocompetent adult patients randomized into two groups after I & D (cephradine vs. placebo), and similar treatment outcomes were seen in the two groups.

In a meta-analysis published in the Annals of Emergency Medicine, five studies and one abstract spanning a thirty-year period were identified which addressed clinical outcomes of I & D with and without outpatient oral antibiotics (Hankin, 2007). The majority of studies concluded that simple cutaneous abscesses without overlying cellulitis can be managed with I & D alone, with no added benefit from antibiotics. Three of the included studies found no difference in abscess resolution between patients with MRSA abscesses treated with I & D followed by appropriate antibiotics (active against the cultured strain) versus inappropriate/discordant antibiotics (inactive against the cultured strain), further supporting the role of I & D alone. The meta-analysis was limited by the small sample size of the studies and the fact that none of the studies looked at abscesses with overlying cellulitis.

A more recent meta-analysis (Singer, 2013) looked at four randomized controlled trials totaling 589 ED patients (428 adults, 161 children) randomized to one of three antibiotics or to placebo. Though the end points of the included studies varied, the meta-analysis concluded that when added to I & D, systemic antibiotics did not significantly improve the percentage of patients with complete resolution of their abscesses 7-10 days after treatment, and that resolution of abscesses was high in both groups after I & D (88% versus 86%). Two of the trials followed patients for 30-90 days after I & D and found no difference in recurrence between the groups who got antibiotics versus placebo.

Luckily, our practice habits coincide with these recommendations. A recent survey study evaluated 350 providers from 15 different US emergency departments and showed that most providers (68%) do not routinely prescribe antibiotics for simple cutaneous abscesses in healthy patients (Schmitz, 2013).

If the data supports not using antibiotics routinely for simple cutaneous abscesses, in which patient populations should we be prescribing antibiotics after I & D? Unfortunately, most of the available research is primarily focused on healthy, immunocompetent adults with simple, uncomplicated abscesses. Substantial data exists to recommend when to not use antibiotics, with very little data available to advise when antibiotics should be used. Though research is lacking, in patients with diabetes or other immunocompromised states, large abscesses with overlying cellulitis, or recurrent or persistent abscesses, it may be reasonable to prescribe an antibiotic after I & D. In their 2011 guidelines, the Infectious Diseases Society of America emphasizes that I & D is sufficient for most simple cutaneous abscesses, but gives a level A-III recommendation for antibiotics for abscesses associated with severe or extensive disease, rapid progression with associated cellulitis, signs/symptoms of systemic illness, associated comorbidities or immunosuppression, extremes of age, abscesses in difficult-to-drain areas (i.e. face, hand, genitalia), septic phlebitis, and lack of response to I & D alone (Liu, 2011).

4.) What antibiotics do you select for treatment of abscesses after I & D? When do you consider sending wound cultures?

Antibiotic choice has been a topic of much debate, especially in the era of CA-MRSA. MRSA was first discovered in 1961, but it wasn’t until the 1990’s that outbreaks in the community became prevalent. Risk factors for MRSA include recent antibiotic use, contact with a healthcare worker or nursing home resident, recent hospitalization, diabetes and/or immunosuppression, incarceration, intravenous drug use, indwelling catheters, daycare, contact sports, and a previous history of MRSA. Though risk factors may help guide decisions regarding antibiotic coverage, many patients with cutaneous CA-MRSA do not have any risk factors, making speciation and sensitivities difficult to predict.

While I & D alone is probably sufficient for the majority of abscesses (including those caused by MRSA), in patients who require antibiotics it is reasonable to cover for MRSA. Large randomized controlled trials of antibiotic choice are lacking. According to several review articles, commonly prescribed oral antibiotics include trimethoprim-sulfamethoxazole, clindamycin, and the tetracyclines (Elston, 2007; Cohen, 2007). Trimethoprim-sulfamethoxazole has good activity against MRSA but does not cover beta-hemolytic Streptococcus, so it is often paired with a first generation cephalosporin. Some concern has been raised regarding resistance to trimethoprim-sulfamethoxazole in patients with HIV, since they are frequently on this medication for pneumocystis prophylaxis. However, one large study showed that 100% of MRSA isolates in this patient population in Oakland, California were susceptible (Mathews, 2005). In patients with sulfa allergy or other contraindications to trimethoprim-sulfamethoxazole, clindamycin is another option. Some strains of S. aureus have inducible resistance to clindamycin due to the presence of the “erm” gene. This means that the infection can initially be susceptible, but then develop resistance to clindamycin during therapy. The presence of the erm gene (and thus the ability to predict strains with the potential for inducible resistance) can be detected by a double disk diffusion test (D-test), though this is not 100% sensitive. Unlike trimethoprim-sulfamethoxazole, clindamycin is effective against beta-hemolytic Streptococcal species. In addition to the above-mentioned antibiotics, tetracyclines are another option. Like trimethoprim-sulfamethoxazole, tetracyclines are questionably active against beta-hemolytic Streptococcus and require a second agent for full empiric coverage.

So, how do you choose the appropriate antibiotic for your patient? One study looked retrospectively at susceptibility of CA-MRSA amongst ED patients in University of Utah affiliated hospitals, and found that 98% of isolates were susceptible to trimethoprim-sulfamethoxazole, 86% were susceptible to tetracycline, and 81% were susceptible to clindamycin (Walraven, 2011). It is important to note that local resistance patterns vary according to geographic location and patient population, and can even vary hospital to hospital within the same region. It is vital to refer to your hospital antibiogram for treatment recommendations. A recent survey study assessed ED provider practices in the management of cutaneous abscesses, and found that 68% do not routinely prescribe antibiotics after abscess I & D. When antibiotics were given, 33% prescribed trimethoprim-sulfamethoxazole alone, 8% prescribed cephalexin alone, 8% prescribed clindamycin alone, and 47% used a combination of two or more antibiotics (Schmitz, 2013).

In the 1990’s, wound cultures were routinely obtained after abscess I & D given the emergence of CA-MRSA. Current research, however, suggests that the routine practice of obtaining wound cultures in uncomplicated cutaneous abscesses is not necessary. An article published in the Annals of Emergency Medicine emphasizes that there is a lack of data to suggest which clinical situations pose a greater chance of treatment failure (for which the addition of antibiotics would be potentially useful) (Abrahamian, 2007). From a cost-consciousness perspective, there is no benefit to sending tests that won’t change management. The authors suggest that wound cultures may be reserved for immunocompromised patients, those with significant surrounding cellulitis, systemic toxicity, recurrent or multiple abscesses, and those who have previously failed treatment. Even in patients who fail treatment, factors other than inappropriate antibiotic choice may have contributed to the failure, including inadequate I & D, poor wound care, or noncompliance with the antibiotic regimen.

Editor’s note: After we completed our “answers,” the New England Journal of Medicine published an excellent expert review article by Drs. Singer (referenced above) and Talan. It covers some of the same ground as our “answers,” as well as some pearls on diagnosis and prevention. Check it out here.



Abscess, Questions

1.) Do you routinely pack abscesses after incision and drainage (I & D)? If so, what is your endpoint (i.e. when do you stop re-packing, and when do you stop ED follow-up)?

2.) Do you ever use primary closure after abscess I & D? What about loop drainage?

3.) Which patients do you treat with antibiotics after I & D?

4.) Which antibiotics do you select for treatment of abscesses after I & D? When do you consider sending wound cultures?

EML Abscess Questions


Intracranial Hemorrhage, “Answers”

1. What immediate steps in management do you take when a patient with intracranial hemorrhage (ICH) exhibits signs of elevated intracranial pressure (ICP)?

The immediate steps in the management of intracranial hypertension (ICP >20 mmHg for 5 minutes) in the setting of ICH follow the mantra of emergency medicine and include evaluation of, and intervention on, the airway, breathing, and circulation. In some instances (e.g. major trauma, initial GCS ≤ 8, etc.), patients found to have ICH on head CT will have previously been intubated. However, in many situations this will not be the case. In such patients with rapidly declining neurological status, intubation is crucial to protect the airway and maintain adequate oxygenation and ventilation. When possible, a moment of pause should be taken at this point to perform a rapid (1 to 2 minutes) but detailed pre-sedation/pre-intubation neurologic exam, as outlined by the Emergency Neurological Life Support (ENLS) protocol on airway, ventilation, and sedation (Seder, 2012). We sometimes forget this step because it may not affect our management as emergency physicians, but it may critically influence later neurosurgical decision-making.

EML ICH AnswersAs a brief review, the four classic indications for intubation include failure of maintenance of airway protection, failure of oxygenation, failure of ventilation, and anticipated clinical deterioration. The latter is commonly the reason for intubation in ICH.

The chosen method of airway protection in the setting of intracranial hypertension is rapid sequence intubation (RSI), as it offers protection against reflex responses to laryngoscopy that raise ICP (Sagarin, 2005; Li, 1999; Sakles, 1998; Walls, 1993). Importantly, ENLS recommends the administration of the appropriate pretreatment and induction agents even in the presence of presumed coma, as laryngoscopy may still stimulate reflexes that raise ICP (Bedford, 1980). In terms of pretreatment medications, perhaps none is more controversial than lidocaine. Proponents of its use often highlight its safety profile and its ability to blunt the direct laryngeal reflex, which otherwise raises ICP (Salhi, 2007). Detractors, on the other hand, point out that there are no human trials showing benefit, that only one trial evaluated its effect on ICP at the time of intubation, and that this study was in brain tumor patients rather than traumatic brain injury (TBI) patients (Vaillancourt, 2007). The debate is likely to continue, as it would be logistically very difficult to design an outcome study. Nonetheless, if chosen, the pretreatment lidocaine dose is 1.5 mg/kg three minutes before intubation. Other options include fentanyl 2-3 mcg/kg and esmolol 1-2 mg/kg, both of which blunt the reflex sympathetic response (the increase in heart rate and blood pressure); however, caution is advised in hypotension.

In terms of induction agents, etomidate has minimal hemodynamic effects. Propofol is also popular, although it can cause hypotension through its vasodilatory effects. Ketamine, despite previously being avoided, is gaining recognition, particularly for its hemodynamic profile. When weighing these options, it is prudent to remember that in head trauma patients, a single systolic blood pressure (SBP) below 90 mmHg is associated with a 150% increase in mortality (Chesnut, 1993).

The choice between depolarizing (e.g. succinylcholine) and non-depolarizing (e.g. rocuronium) neuromuscular blocking agents may be similarly difficult. Succinylcholine has a rapid onset and short duration of action, allowing for a more rapid full neurological reevaluation following intubation. These benefits must be weighed against the risk of hyperkalemia in patients with immobility and chronic motor deficits; additionally, if more than one intubation attempt is made, a delay for succinylcholine re-dosing may be required. In contrast, rocuronium has a longer duration of action, which affects the timing of repeat neurological exams; however, it does not require re-dosing on repeat intubation attempts and it avoids the risk of hyperkalemia. Once intubation is achieved, the head of the bed (HOB) should be raised to 30 degrees to improve venous return and aid in the reduction of ICP (Winters, 2011; Feldman, 1992; Ng, 2004; Winkelman, 2000; Moraine, 2000).
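As a simple worked example of the weight-based pretreatment doses quoted above, the arithmetic for a given patient weight can be sketched as follows. The function and the 80 kg example patient are our illustration only, not a dosing reference; confirm all doses against institutional resources.

```python
def pretreatment_doses(weight_kg):
    """Illustrative weight-based doses from the text (not clinical guidance):
    lidocaine 1.5 mg/kg, fentanyl 2-3 mcg/kg, esmolol 1-2 mg/kg."""
    return {
        "lidocaine_mg": 1.5 * weight_kg,
        "fentanyl_mcg": (2 * weight_kg, 3 * weight_kg),  # dose range
        "esmolol_mg": (1 * weight_kg, 2 * weight_kg),    # dose range
    }

doses = pretreatment_doses(80)  # a hypothetical 80 kg patient
print(doses["lidocaine_mg"])    # 120.0 (mg, given ~3 minutes pre-intubation)
```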

As outlined by ENLS, hyperventilation is one of a series of steps taken to acutely lower ICP and prevent infarction of neuronal tissue (Seder, 2012). Lowering the PCO2 causes alkalosis of the cerebrospinal fluid, which in turn leads to cerebral vasoconstriction. The typical goal PCO2 is 28-35 mmHg (achieved with roughly 20 breaths per minute), and end-tidal CO2 monitoring is recommended (Swadron, 2012). It is extremely important to note that hyperventilation is meant only as a bridge to more definitive ICP-lowering therapy, as it reduces cerebral blood flow and can lead to additional ischemia. Furthermore, with prolonged hyperventilation, the local pH normalizes through buffering mechanisms; once this occurs, vasodilation is triggered, which can worsen cerebral edema. Thus, except in the event of acute brain herniation, the PCO2 goal should be 35-45 mmHg (or an end-tidal CO2 of 30-40 mmHg) (Seder, 2012).

Hyperosmolar therapy with either mannitol or hypertonic saline (HTS) is another important step in the management of intracranial hypertension. Mannitol (20% solution, 0.25-1 g/kg via rapid IV infusion) works by two mechanisms (Bratton, 2007). The first, which occurs within minutes, is plasma volume expansion, which lowers blood viscosity and improves cerebral blood flow and oxygenation. The second, and perhaps better known, takes 15-30 minutes: the creation of an osmotic gradient that drives water out of neuronal cells and into the plasma, followed by rapid diuresis. This diuresis is important to anticipate, as it may precipitate hypotension in the absence of concomitant IV fluid administration; most experts also recommend Foley placement for careful monitoring of volume status. HTS (in various concentrations, but often 3%, 150 mL IV over 10 minutes), in contrast, is believed not to produce rapid hypotension, which may explain its increasing popularity in recent years. It likewise creates a higher osmolality in the vasculature and draws fluid out of the cerebrum (Bratton, 2007). While there are proponents for the selective use of either agent, there are no head-to-head trials evaluating the relative efficacy of mannitol and HTS, and both are considered appropriate therapies (Swadron, 2012).

Although much of the above literature derives from ICH as the result of trauma, the management of ICH in the non-traumatic setting, at least acutely, is generally the same.

2. In which patients with ICH do you push for invasive neurosurgical intervention?

Following ICH, the decision of whether or not to pursue surgical intervention relies on the patient’s neurological exam as well as head CT findings. The most widely accepted evidence-based recommendations are the Brain Trauma Foundation Guidelines for the Surgical Management of Traumatic Brain Injury, which rely on several criteria (Bullock, 2006a; Bullock, 2006b; Bullock, 2006c; Bullock, 2006d). While there are many niche indications for operative intervention, practically speaking, since neurosurgery is likely to be consulted in any case of ICH, it is helpful to remember the following clear indications for surgery:

  • GCS ≤ 8 + large mass lesion
  • Any GCS + extra-axial hematoma (epidural or subdural) ≥ 1 cm thick
  • Any GCS + extra-axial hematoma (epidural or subdural) with midline shift ≥ 5 mm
  • Intracranial hematoma > 3 cm in diameter (especially with mass effect)

When there is any form of deterioration on repeat examinations, a focal neurologic deficit, or pupillary changes such as anisocoria or a fixed, dilated pupil, surgery should be expedited (Swadron, 2012). Nearly all neurosurgeons agree that intervention is prudent for posterior fossa lesions, given the confined space relative to supratentorial lesions. Beyond this, there is still considerable worldwide variation in surgical intervention. The STICH II trial randomized patients with spontaneous, superficial supratentorial ICH to early surgery or early medical management (with possible surgery after 12 hours). The investigators found no increase in death or disability at six months in the early surgery group, and a small survival advantage (Mendelow, 2013). A similar trial in traumatic ICH is ongoing. Importantly, while severe coagulopathy is a relative contraindication to surgery, it can be corrected intraoperatively and should not delay the patient’s course to the operating room.

In intracranial hemorrhage, much of the damage occurs through secondary injury over time. The development of intracranial hypertension is associated with an increase in mortality (Bratton, 2007). As such, the 2007 Brain Trauma Foundation Guidelines recommend (level II) ICP monitoring in the following settings:

  • GCS ≤ 8 (but salvageable) + abnormal head CT*
  • GCS ≤ 8 (but salvageable) + normal head CT + 2 of the following:
    • Age > 40 years
    • SBP < 90 mmHg
    • Motor posturing

*hematomas, contusions, swelling, herniation, compressed basal cisterns
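For illustration, the BTF criteria above can be expressed as a simple decision sketch. This is a toy encoding of the criteria as listed, not clinical software; the "salvageable" judgment and the interpretation of the head CT remain with the clinician.

```python
def icp_monitoring_indicated(gcs: int, ct_abnormal: bool,
                             age: int, sbp: int,
                             motor_posturing: bool) -> bool:
    """Toy encoding of the BTF 2007 level II ICP-monitoring criteria above.

    Assumes the patient has already been deemed salvageable; that judgment
    (and what counts as an 'abnormal' CT) is left to the clinician.
    """
    if gcs > 8:
        return False          # criteria apply only at GCS <= 8
    if ct_abnormal:
        return True           # GCS <= 8 + abnormal head CT suffices
    # Normal CT: need at least 2 of age > 40, SBP < 90 mmHg, motor posturing
    risk_factors = sum([age > 40, sbp < 90, motor_posturing])
    return risk_factors >= 2
```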

Part of the reason for this recommendation is that diagnosing intracranial hypertension on clinical exam alone is challenging. Furthermore, ventriculostomy (placement of an external ventricular drain, EVD) has not only diagnostic but also therapeutic potential through CSF drainage. Very recently, however, the utility of ICP monitoring has been called into question. The first randomized controlled trial of TBI patients with and without ICP monitors showed no difference in six-month clinical outcomes between the two groups (Chesnut, 2012). Importantly, intracranial hypertension (identified either via ICP monitoring or via clinical exam and imaging) was acted upon in both groups, so this study does not address whether interventions targeting ICP lead to outcome differences.

3. What interventions do you initiate in patients with ICH on antiplatelet medications?

Practice varies widely in this setting, as outcome data are conflicting. Theoretically, antiplatelet therapy (e.g. aspirin, clopidogrel) leads to hematoma expansion and increased mortality, as has been found in several observational studies (Roquer, 2005; Saloheimo, 2006; Naidech, 2009; Toyoda, 2005). However, numerous studies have failed to show a clinical outcome difference between ICH patients taking antiplatelet agents and those who are not (Caso, 2007; Foerch, 2006; Sansing, 2009). In light of these conflicting results, some believe it is important to pursue antiplatelet reversal until more definitive data emerge (Campbell, 2010). However, even if it is accepted that antiplatelet therapy leads to hematoma expansion and worsened clinical outcomes, it cannot be presumed that platelet transfusion, the most common antiplatelet reversal strategy, is beneficial: none of the observational studies in patients taking aspirin or clopidogrel has shown a favorable impact. Further, platelet transfusion carries the risks of infection, transfusion-related acute lung injury, and allergic reactions.

Another option is desmopressin (DDAVP, 0.3 mcg/kg IV), which triggers the release of von Willebrand factor and factor VIII. It has been shown to reverse uremic as well as aspirin- and clopidogrel-induced platelet dysfunction (Flordal, 1993; Reiter, 2003; Leithauser, 2008). It is a popular alternative or adjunct to platelet transfusion and is considered to have a favorable side effect profile – particularly in comparison to the associated risks of platelet transfusion described above.

Overall, pending further investigation, it appears that the use of platelets and/or DDAVP at this stage is largely dependent on institutional practices.

4. What agent(s) do you use for warfarin reversal in the setting of ICH? What about other oral anticoagulants?

In the setting of ICH, the four agents considered for warfarin reversal include vitamin K, fresh frozen plasma (FFP), prothrombin complex concentrates (PCC), and recombinant activated factor VII (rFVIIa). There is also no clear evidence on the most appropriate target INRs, though many groups aim for INRs of 1.2-1.5.

In a review of the literature, Goodnough and Shander demonstrated that among guidelines for anticoagulant reversal in ICH, consensus is strongest for the use of vitamin K, which promotes hepatic synthesis of clotting factors II, VII, IX, and X (Goodnough, 2011). Its onset of action is 2-6 hours, but it requires up to 24 hours for full effect. It is typically given in doses of 5-10 mg IV, and because it is slow to act, it is often given alongside faster-acting agents.

FFP contains all of the coagulation factors and is the most common method of factor replacement in the United States (Dentali, 2006). However, because the amount of vitamin K-dependent factors per unit of FFP is variable, it is often difficult to predict the degree of INR correction a given amount of FFP will produce. A rough estimate involves calculating the difference in factor activity (%) between the goal INR and the current INR (readily available in chart format) and noting that each unit of FFP raises factor activity by roughly 2.5%. Practically, for patients on warfarin in the therapeutic range (INR 2-3), 2-4 units (10-12 mL/kg) of FFP are often needed. While FFP is commonly used for warfarin reversal in ICH, difficulty arises in patients with cardiac, renal, or hepatic disease who cannot tolerate large volume loads. Additionally, the INR of FFP itself is around 1.5, which limits the ultimate reversal nadir.
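As a worked example of the estimate just described, the sketch below pairs the ~2.5%-per-unit rule from the text with a purely illustrative INR-to-factor-activity table; real conversion charts vary by institution, and this is not a transfusion calculator.

```python
import math

# Hypothetical INR -> factor activity (%) lookup, for illustration only;
# real institutional charts vary.
INR_TO_ACTIVITY = {1.5: 40, 2.0: 33, 2.5: 30, 3.0: 25}

def estimate_ffp_units(current_inr: float, goal_inr: float,
                       pct_per_unit: float = 2.5) -> int:
    """Units of FFP ~ factor-activity deficit / ~2.5% gained per unit."""
    deficit = INR_TO_ACTIVITY[goal_inr] - INR_TO_ACTIVITY[current_inr]
    return math.ceil(deficit / pct_per_unit)

# With these illustrative numbers, a patient at INR 2.5 targeting 1.5 needs
# (40 - 30) / 2.5 = 4 units, in line with the 2-4 units quoted in the text.
```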

In such situations, there may be a role for PCC, which contains the four vitamin K-dependent factors in higher concentrations than FFP and requires a much smaller volume to achieve coagulopathy reversal. A September 2013 trial published in Circulation found 4-factor PCC to be similarly effective (based on clinical and laboratory endpoints) and as safe as FFP (Sarode, 2013). Another reason to use PCC may be speed: in a number of small prospective and retrospective studies, PCC demonstrated significantly more rapid reversal of coagulopathy in ICH than FFP (Cartmill, 2000; Huttner, 2006). Interestingly, while previous guidelines from the American College of Chest Physicians recommended the use of vitamin K with any of the more rapidly acting agents (PCC, FFP, or rFVIIa), the newest guidelines specifically recommend 4-factor PCC over FFP to accompany vitamin K (Ansell, 2008; Holbrook, 2012). Some institutions give PCC as a weight-based dose of 25-50 international units/kg, while others use INR-based dosing (Andrews, 2012). Cost also deserves mention: reversing an INR of 3.0 in a patient with ICH has been estimated at $2,000 with PCC, versus $200-400 with FFP (Steiner, 2006). Finally, while studies have shown rapid correction of the INR with PCC administration, no study has demonstrated decreased mortality.

rFVIIa promotes factor X activation and thrombin generation on platelets at sites of injury. In recent years, it has garnered much attention for its widespread off-label use as a hemostatic agent. One major criticism, however, is that while rFVIIa rapidly corrects supratherapeutic INRs, it may not confer a true clinical benefit (Nishijima, 2010; Mayer, 2008; Ilyas, 2008). Additionally, given its short, 4-hour half-life, vitamin K and FFP are often given concurrently. One of the greatest concerns with rFVIIa is its higher risk of arterial thromboembolic events (myocardial infarction, cerebral infarction), which was demonstrated in a randomized trial (Mayer, 2008). Like PCC, rFVIIa carries a significant price tag: for reversal of an ICH patient with an INR of 3.0, the approximate cost is $5,000 to $15,000 (Steiner, 2006). At the time of this writing, trials are underway to determine whether rFVIIa changes outcomes in patients with ICH and active extravasation on CT angiography.

In terms of the newer anticoagulants on the market, dabigatran is a direct thrombin inhibitor that is used to prevent arterial and venous thromboembolism. In many cases of minor bleeding, simply holding the next dose and providing supportive care is adequate, as the half-life is between 14 and 17 hours with normal renal function. However, trouble arises in the setting of ICH because there is no accepted monitoring strategy or reversal agent (Watanabe, 2012). While dabigatran does have a prolonging effect on the PT and PTT, these are only valuable as rough guides to the degree of anticoagulant activity. It has been suggested that PCC and rFVIIa may be used for reversal but their benefit has yet to be demonstrated clinically (Alberts, 2012). Hemodialysis remains another option, although it may be difficult to rapidly initiate and has shown limited benefit in case reports only.

Rivaroxaban, another new agent, is a factor Xa inhibitor commonly employed for stroke prevention in patients with atrial fibrillation. As with dabigatran, there is no specific antidote, but its half-life is also short (five to nine hours). Reversal with rFVIIa or PCC is a theoretical possibility, though it has not been demonstrated clinically. Presently, the best human data come from a study of non-bleeding volunteers taking rivaroxaban whose prothrombin times improved after receiving 4-factor PCC (Eerenberg, 2011).

Thank you to Drs. Natalie Kreitzer and Opeolu Adeoye of the University of Cincinnati Department of Emergency Medicine and the Neurosciences ICU for their expert advice on these “answers.” Please also see their excellent, recent publication on the topic “An update on surgical and medical management strategies for intracerebral hemorrhage.”


Intracranial Hemorrhage, Questions

1. What immediate steps in management do you take when a patient with intracranial hemorrhage (ICH) exhibits signs of elevated intracranial pressure (ICP)?

2. In which patients with ICH do you push for invasive neurosurgical intervention?

3. What interventions do you initiate in patients with ICH on antiplatelet medications?

4. What agent(s) do you use for warfarin reversal in the setting of ICH? What about other oral anticoagulants?



Medication Comparisons, “Answers”


Check out our own Dr. Anand Swaminathan discussing this topic and more on ER Cast here!

1. Acetaminophen vs Ibuprofen. Which do you prefer for analgesia? For fever reduction?

Pain and fever are among the most common chief complaints in the ED. Acetaminophen and ibuprofen are two of the most widely consumed medications on the market today. The relevance of this debate cannot be overstated, and yet it is rarely discussed. As this question is especially frequent in the pediatric population, we will start there.

One of the most comprehensive studies in the pediatric literature is a 2004 meta-analysis that summarized the findings of 17 randomized controlled trials comparing the two drugs in children <18 years of age. Three studies involved pain, 10 involved fever, and all 17 involved safety. The authors found no difference in the pain relief provided by ibuprofen (4-10 mg/kg) and acetaminophen (7-15 mg/kg); however, ibuprofen (5-10 mg/kg) was superior to acetaminophen (10-15 mg/kg) as an antipyretic. This was true at 2 hours and even more pronounced at 4 and 6 hours; at these later time points, 15% more children were likely to have reduced fever with ibuprofen than with acetaminophen. When selecting only the studies using the 10 mg/kg dose of ibuprofen, the effect size in favor of ibuprofen doubled. As for safety, there was no evidence that one drug was less safe than the other or than placebo, though the authors deemed these data inconclusive and called for larger studies to identify small differences in safety (Perrott, 2004).

In 2010 an updated meta-analysis was published. The authors noted that no such meta-analysis had been conducted in adults, and therefore also sought to examine studies in this population. The article reported data from 85 studies (54 pain, 35 fever, 66 safety). Qualitative review revealed that ibuprofen was more effective than acetaminophen for pain and fever reduction, and that the two were equally safe. From the studies that provided sufficient quantitative data, the authors calculated standardized mean differences or odds ratios then averaged these data points. Here they found that for pain, ibuprofen was superior in children and adults; meanwhile, for fever, ibuprofen was superior in children, but conclusions could not be made for adults due to insufficient data. For safety, ibuprofen was favored, but there was no statistically significant difference (Pierce, 2010).

What about combining or alternating acetaminophen and ibuprofen? Despite a lack of consensus guidelines endorsing this practice, it is commonly employed by providers and caregivers for the treatment of fever in children. This is likely heavily influenced by “fever phobia,” a term originally coined to describe caregivers’ fear of perceived dangerous sequelae when a child is febrile (Schmitt, 1980). Regardless of the motives, these strategies pose two questions: does the combination actually reduce fever more effectively, and is it safe?

There are a limited number of efficacy studies, with widely differing methodologies that make systematic analysis difficult. In addition, many of these studies have design flaws, such as improper administration schedules and dosing or too-short durations of follow-up. A 2013 article in the Annals of Emergency Medicine identified 4 studies that the author deemed high-quality and relevant to emergency practitioners. Three of the four found that the combination was more effective at reducing fever than either drug alone (Malya, 2013). However, even these higher-quality studies should be interpreted with caution, as they too have limitations.

Safety data for combination or alternating therapy are even more limited, and the safety concern is somewhat theoretical. Dosing errors are not infrequent in the administration of acetaminophen and ibuprofen; particularly for the former, these can easily lead to dangerous outcomes, and combining the two medications could magnify the potential for serious toxicity. Furthermore, alternating the medications can be confusing, given the recommended pediatric dosing of acetaminophen every 4 hours and ibuprofen every 6 hours (Mayoral, 2000; Sarrell, 2006). One study of alternating regimens over 24 hours found that 6-13% of parents exceeded the maximum number of recommended doses (Hay, 2008). Mechanisms have been suggested by which the two drugs could act synergistically to cause renal tubular injury; however, acetaminophen and ibuprofen are metabolized by different pathways, and adverse effects in patients taking both have been described only in rare case reports (Mayoral, 2000; Smith, 2012).

As a final piece in this question, is it acceptable to prescribe ibuprofen for pain relief in patients with fractures? While combination medications containing an opiate will often be necessary for patients with fractures, ibuprofen has anti-inflammatory properties that other medications lack, and its use may reduce the need for opiates. But your orthopedic surgery consult may recommend avoiding NSAIDs in fractures because they could suppress healing. What is the evidence?

The theory stems from the fact that, as cyclooxygenase (COX) inhibitors, NSAIDs suppress production of prostaglandins, which are important mediators of bone repair. Theoretically this makes sense, but supporting studies have been conducted only in animal models. A number of human studies have suggested that NSAID use in patients with long bone fractures is associated with nonunion; however, these are largely uncontrolled retrospective studies that fail to demonstrate causality. The authors of a recent review in the orthopedic literature state, “We found no robust evidence to attest to a significant and appreciable patient detriment resulting from the short-term use of NSAIDs following a fracture” (Kurmis, 2012).

2. PPIs vs H2 Blockers. Which is your first line choice for gastritis/GERD?

Gastritis and gastroesophageal reflux disease (GERD) are pervasive medical problems.  Treatment of these diseases was revolutionized in 1979 by the introduction of the first H2 receptor antagonist (H2RA), cimetidine, then again in 1989 by the introduction of the first proton pump inhibitor (PPI), omeprazole (Sachs, 2010). These two drug classes now form the cornerstone of treatment of gastritis and GERD. But which one works better? More specifically, which will provide symptomatic relief more quickly in the ED setting, and which one should we prescribe patients upon discharge?

Answering these questions requires a brief review of the pharmacodynamics of these drugs. PPIs suppress acid secretion by binding to the H+/K+-ATPase in the parietal cells of the stomach. There are a few important aspects to this process that affect the onset and duration of action of PPIs. First, PPIs are prodrugs. Before they are able to bind to the proton pump they must diffuse into the parietal cell and be protonated to form the active drug. As a result, PPIs have a somewhat delayed onset of action. Second, binding to the proton pump is irreversible. Therefore the duration of effect is not related to the plasma drug concentration but rather the turnover rate of the proton pump. So, despite a short half-life, PPIs can be effective for up to 3-5 days. Finally, because PPIs inhibit the last step in acid production, their effect is independent of any downstream factors.

H2RAs work as competitive inhibitors at the histamine receptor, preventing histamine from binding and stimulating acid production. Onset of action is rapid (<1 hour), with peak serum concentrations reached in 1-3 hours. Unlike PPIs, binding is reversible, and the duration of action is much shorter, approximately 12 hours. Because H2RAs do not block the final secretory step, some acid is still produced; they are less potent than PPIs, reducing daily acid production by about 70%, compared to 80-95% for PPIs. One other important property of H2RAs is tachyphylaxis: tolerance may develop within 3 days of use (Wallace in Goodman & Gilman, 2011).

Gastritis describes a spectrum of pathologies ranging from known peptic ulcer disease to functional (endoscopy-negative) dyspepsia. Regardless of the exact entity, most disease is attributable to H. pylori, aspirin/NSAID use, or alcohol. Treatment for H. pylori is well-established and includes PPIs, as these agents have been shown to heal ulcers faster than H2RAs and also to contribute to the eradication of H. pylori. For patients on chronic NSAID therapy, PPIs have also been shown to be more effective than H2RAs in healing ulcers (Boparai, 2008). One study demonstrated 8-week healing rates of 80% for 20mg omeprazole daily versus 63% for 150mg ranitidine twice daily (Yeomans, 1998). A similar trial substituting esomeprazole showed healing rates of 88% and 74%, respectively (Goldstein, 2005).

In 2005 the Agency for Healthcare Research and Quality wrote a Comparative Effectiveness Review for the management of GERD. They identified 3 well-conducted meta-analyses of PPIs and H2RAs, and concluded that PPIs were superior to ranitidine for symptom resolution at 4wks (Ip, 2005). One of these meta-analyses examined 11 randomized controlled trials (1575 patients total) comparing a PPI to ranitidine.  At 8 weeks each of the 4 PPIs included had a higher rate of healing than ranitidine. For omeprazole, healing was 1.6 times more likely than for ranitidine (Caro, 2001). In 2011 the AHRQ updated their review. They analyzed 39 additional primary studies and did not alter their previous conclusions (Ip, 2011). One of the largest of these studies (1902 patients) found that esomeprazole 20mg once daily “significantly improved all symptoms” in 80% of patients, compared to 47% in those taking ranitidine 150mg twice daily (Hansen, 2006). A recent Cochrane Review based on 7 trials similarly found that PPIs were significantly more effective than H2RAs for remission of symptoms (RR 0.66) (van Pinxteren, 2010).

Current practice guidelines in the gastroenterology literature advocate that the therapy of choice for GERD is an 8-week course of PPIs, initiated once daily before the first meal of the day. In incomplete responders, dosing may be increased to twice daily, or, in the absence of erosive disease, H2RAs may be substituted or added at bedtime for nocturnal breakthrough symptoms (Katz, 2013).

Unfortunately, there is a lack of comparative studies or guidelines for treating gastritis or GERD in the ED setting. However, based on the studies above and the known pharmacologic properties of these drugs (onset, duration, and tolerance), it is fair to say that PPIs are the preferable first-line treatment long-term, while in the acute setting H2RAs may provide symptomatic relief more quickly.

The last question to ask is whether there are any adverse effects that may be pertinent when choosing an agent. Both drug classes are very safe with few side effects. PPIs are metabolized by the liver whereas H2RAs are excreted by the kidneys; therefore, care should be taken when prescribing these medications for patients with hepatic or renal insufficiency, respectively. Additionally, PPIs are metabolized by cytochrome P450 enzymes and thus have the potential to interfere with the elimination of other drugs, such as warfarin and clopidogrel, that are cleared by the same pathway.

It has been suggested that chronic PPI use may be associated with increased risk of fractures and certain infections, such as pneumonia and Clostridium difficile. To address these concerns, the most recent consensus guidelines in the American Journal of Gastroenterology state the following:

  • PPIs may be prescribed for patients with osteoporosis and should not be a concern unless a patient has other risk factors for fracture;
  • PPI use can be a risk factor for Clostridium difficile infection, and PPIs should be used with caution in patients at risk;
  • Short-term PPI use may be associated with increased risk of community-acquired pneumonia, but this is not seen with long-term use;
  • PPIs can be continued in patients taking clopidogrel (Katz, 2013)

3. Meclizine vs Benzodiazepines. Which do you prescribe for vertigo?

For peripheral vertigo (labyrinthitis, vestibular neuritis, and benign paroxysmal positional vertigo (BPPV)), vestibular suppressants, and to a lesser extent antiemetics, comprise the arsenal of pharmacologic treatment. These drugs apply mostly to labyrinthitis and vestibular neuritis, as BPPV is short-lived and can often be corrected by positioning maneuvers.

Vestibular suppressants include 3 major classes (Hain, 2003):

  • Antihistamines (meclizine, diphenhydramine, dimenhydrinate)
  • Benzodiazepines (diazepam, lorazepam)
  • Anticholinergics (scopolamine)

There have been very few studies examining the efficacy of these drugs, and even fewer head-to-head trials; moreover, most of these limited studies are decades old. In 1972, Cohen and deJong demonstrated that meclizine was superior to placebo in reducing vertigo symptoms and the frequency and severity of attacks; however, their sample size was only 31. Conversely, in 1980, McClure and Willet found that benzodiazepines were not superior to placebo; again the sample size was small, with 25 patients randomized to diazepam, lorazepam, or placebo.

In a somewhat more recent study from the EM literature, dimenhydrinate was compared to IV lorazepam for the treatment of vertigo in the ED. At 2 hours, dimenhydrinate was more effective in relieving symptoms and less sedating than lorazepam. This study had a sample size of 74 (Marill, 2000).

Whether based on the few studies or on anecdotal evidence, most sources seem to have a slight preference for antihistamines over benzodiazepines. But further review of the literature brings to light a more important question: whether there is a place for medication at all in the treatment of vertigo.

In 2008, the American Academy of Neurology and the American Academy of Otolaryngology both published evidence-based practice guidelines for the treatment of BPPV. The neurology recommendations state: “There is no evidence to support a recommendation of any medication in the routine treatment for BPPV” (Fife, 2008).

The ENT guidelines state, “Vestibular suppressant medications are not recommended for the treatment of BPPV, other than for the short-term management of vegetative symptoms such as nausea or vomiting in a severely symptomatic patient.” The authors justify their recommendation based on the lack of evidence for these medications, but also the potential harm associated with them. Side effects of the vestibular suppressants include drowsiness, cognitive impairment, gastrointestinal motility disorders, urinary retention, dry mouth, and visual disturbances. Some of these medications are alone significant risk factors for falls and other accidents, but become even more dangerous in patients already experiencing dizziness (Bhattacharyya, 2008).

Both academies advocate the use of particle repositioning maneuvers (PRMs), such as the Epley and Semont maneuvers, as opposed to medications. This is supported by a number of studies, one of which showed improvement rates of 79-93% for medication plus PRMs, versus 31% for medication alone (Itaya, 1997).

By definition, BPPV consists of brief episodes of vertigo triggered by movement; therefore it makes sense that medications would not be a useful management strategy. They will not prevent episodes and they should not be needed to abort very short-lived symptoms. Bhattacharyya points out that although some studies have shown improvement after vestibular suppressants, patients are followed for a duration in which symptoms should be expected to resolve spontaneously.

Many sources also point out another downside to vestibular suppressants: they delay central compensation. Compensation is an adaptive response to any vestibular stimulus, whether related to normal motion or to disease. This process is key to the recovery from vestibular diseases in the sub-acute phase. All the vestibular suppressants are thought to slow compensation, although the support for this claim comes primarily from animal studies. Nonetheless, most authors consider this further evidence for use of these medications only in the acute period and not after the initial 48 hours (Hain, 2003).

Finally, it is important to note that in the ED setting it is often difficult to diagnose BPPV with certainty. A 2009 article in the EM literature highlights the frequency with which providers misdiagnose and mistreat BPPV and acute peripheral vertigo (APV), an umbrella term for labyrinthitis and vestibular neuritis/neuronitis. The authors caution that while BPPV is more prevalent in the general population, APV is actually the more common disorder among patients presenting to EDs. They go on to explain that the distinction is important because BPPV and APV have different treatments: despite the pervasive use of meclizine to treat BPPV, it is not indicated for this diagnosis, while APV should be treated with steroids (Newman-Toker, 2009).

In summary, for the undifferentiated patient with symptoms of vertigo, vestibular suppressants can be used for the acute management of severe symptoms, regardless of the patient’s diagnosis, and meclizine is generally the preferred drug. However, it is important to attempt to make the correct diagnosis and guide further treatment accordingly, whether it is with PRMs, steroids, or another strategy.

4. Calcium Channel Blockers vs Beta Blockers. Which is your first-line choice for rate control in atrial fibrillation?

Atrial fibrillation (AF) is the most common dysrhythmia seen in the ED. For years, management of this disease has been rife with controversy: whether to anticoagulate, whether to pursue rate control versus rhythm control, and which agents to use in each of these strategies. This discussion, however, is limited to the most common medications used for rate control: non-dihydropyridine calcium channel blockers (CCBs) and beta blockers (BBs). Please see our previous topic on recent-onset atrial fibrillation for discussion of the other controversies.

CCBs block voltage-gated calcium channels in the heart and blood vessels. BBs competitively inhibit catecholamine binding at beta-1 and beta-2 receptors in the heart and vascular smooth muscle. Both slow conduction through the AV node and lengthen its refractory period during high rates of conduction (Demircan, 2005).

Digoxin was previously the mainstay of treatment for stable, rapid AF until the introduction and FDA approval of IV diltiazem for AF in 1992 (Schreck, 1997). A recent survey study investigated prescribing preferences in new-onset AF among nearly 2000 emergency physicians in multiple English-speaking countries. They found that in the U.S. and Canada, IV diltiazem was the most commonly preferred drug for rate control (95% and 65% of respondents, respectively). In the U.K. and Australasia, IV metoprolol was most commonly preferred (68% and 66% of respondents, respectively) (Rogenstein, 2012).

Possibly the most famous study conducted on AF is the AFFIRM trial. One arm of the analysis focused on approaches to rate control. The investigators found that 59% of patients randomized to a BB alone achieved rate control, versus 38% of those randomized to a CCB alone. They also found that more patients were switched from a CCB to a BB than vice versa (Olshansky, 2004). Of note, any alteration in regimen was at the discretion of the treating cardiologist. Additionally, average follow-up in the study was 3.5 years, making the results much less applicable to the ED setting. While this study is often cited as providing the crux of the available data on CCBs versus BBs for AF, there have actually been some additional small studies, including a number conducted in ED patients (!).

A Turkish study of 40 ED patients compared the efficacy of IV diltiazem versus IV metoprolol. At 20 minutes, rate control was achieved in 90% of patients randomized to diltiazem versus 80% of those randomized to metoprolol (Demircan, 2005). Another study of 52 ED patients found that diltiazem was more likely to achieve rate control at 30 minutes (Fromm, 2011). A recent, larger study compared not only the efficacy but also the safety of CCBs and BBs. The primary outcome was the proportion of patients requiring hospital admission: 31% in the CCB group versus 27% in the BB group. There were no significant differences in the secondary outcomes of ED length of stay, adverse events, 7- and 30-day ED revisits, stroke, and death. The authors concluded that while diltiazem has been observed to reduce heart rates more quickly than metoprolol, the two drugs are associated with similar overall outcomes (Scheuermeyer, 2013).

Are there any practice guidelines for rate control of rapid AF?  ACEP has not published any, but the AHA has. They state: “In the absence of preexcitation, intravenous administration of beta blockers (esmolol, metoprolol, or propranolol) or nondihydropyridine calcium channel antagonists (verapamil, diltiazem) is recommended to slow the ventricular response to AF in the acute setting, exercising caution in patients with hypotension or heart failure” (Fuster, 2006).

The mention of heart failure is notable. All of the above drugs should be avoided or used with caution in patients with decompensated heart failure; however, in patients with compensated heart failure or left ventricular dysfunction, BBs are the drug of choice. The same is true for patients with acute coronary syndromes or thyrotoxicosis. Conversely, CCBs are preferred in patients with obstructive pulmonary disease, which is a relative contraindication to BBs (Oishi, 2013).
