Abscess, Questions

1.) Do you routinely pack abscesses after incision and drainage (I & D)? If so, what is your endpoint (i.e. when do you stop re-packing, and when do you stop ED follow-up)?

2.) Do you ever use primary closure after abscess I & D? What about loop drainage?

3.) Which patients do you treat with antibiotics after I & D?

4.) Which antibiotics do you select for treatment of abscesses after I & D? When do you consider sending wound cultures?



Intracranial Hemorrhage, “Answers”

1. What immediate steps in management do you take when a patient with intracranial hemorrhage (ICH) exhibits signs of elevated intracranial pressure (ICP)?

The immediate steps in the management of intracranial hypertension (ICP >20 mmHg for 5 minutes) in the setting of ICH follow the mantra of emergency medicine and include an evaluation and intervention upon airway, breathing, and circulation. In some instances (e.g. major traumas, initial GCS ≤ 8, etc.), patients found to have ICH on head CT will have previously been intubated. However, in many situations this will not be the case. In such patients with rapidly declining neurological status, intubation is crucial to protect the airway and maintain adequate oxygenation and ventilation. When possible, a moment of pause should be taken at this point to perform a rapid (1 to 2 minutes) but detailed pre-sedation/pre-intubation neurologic exam, as outlined by the Emergency Neurological Life Support (ENLS) protocol on airway, ventilation, and sedation (Seder, 2012). While we sometimes forget this step, as it may not affect our management as emergency physicians it may critically influence later neurosurgical decision-making.

As a brief review, the four classic indications for intubation include failure of maintenance of airway protection, failure of oxygenation, failure of ventilation, and anticipated clinical deterioration. The latter is commonly the reason for intubation in ICH.

The chosen method of airway protection in the setting of intracranial hypertension is rapid sequence intubation (RSI), as it offers protection against reflex responses to laryngoscopy that raise ICP (Sagarin, 2005; Li, 1999; Sakles, 1998; Walls, 1993). Importantly, ENLS recommends the administration of the appropriate pretreatment and induction agents even in the presence of presumed coma, as laryngoscopy may still stimulate reflexes that raise ICP (Bedford, 1980). In terms of pretreatment medications, perhaps none is more controversial than lidocaine. Proponents of its use often highlight its safety profile and its ability to blunt the direct laryngeal reflex, which otherwise raises ICP (Salhi, 2007). Detractors, on the other hand, point out that there are no human trials showing benefit, that only one trial has evaluated its effect on ICP at the time of intubation, and that this study was in brain tumor patients rather than traumatic brain injury (TBI) patients (Vaillancourt, 2007). The debate is likely to continue, as it would be logistically very difficult to design an outcome study. Nonetheless, if chosen, the pretreatment lidocaine dose is 1.5 mg/kg three minutes before intubation. Other options include fentanyl 2-3 mcg/kg and esmolol 1-2 mg/kg, both of which blunt the reflex sympathetic response (increase in heart rate and blood pressure); however, caution is advised in hypotension. In terms of induction agents, etomidate has minimal hemodynamic effects. Propofol is also popular, although, through its vasodilatory effects, it can cause hypotension. Ketamine, on the other hand, despite previously being avoided, is gaining recognition, particularly for its hemodynamic profile. When weighing all of these options, it would be prudent to remember that in head trauma patients, a single systolic blood pressure (SBP) below 90 mmHg is associated with a 150% increase in mortality (Chesnut, 1993). The choice between depolarizing (e.g. succinylcholine) and non-depolarizing (e.g. rocuronium) neuromuscular blocking agents may be similarly difficult. Succinylcholine has a rapid onset and short duration of action, allowing for a more rapid full neurological reevaluation following intubation. These benefits must be weighed against the risk of hyperkalemia in patients with immobility and chronic motor deficits. Additionally, if more than one intubation attempt is made, this may require a delay for succinylcholine re-dosing. In contrast, rocuronium has a longer duration of action, which affects the timing of repeat neurological exams; however, re-dosing is not required on repeat intubation attempts, and it avoids the risk of hyperkalemia. Once intubation is achieved, the head of the bed (HOB) should be raised to 30 degrees to improve venous drainage and aid in the reduction of ICP (Winters, 2011; Feldman, 1992; Ng, 2004; Winkelman, 2000; Moraine, 2000).
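For quick reference, the weight-based doses quoted above can be computed directly. This is only an illustrative sketch: the function name and the 70 kg example weight are ours, not from the source, and the dose ranges are exactly those stated in the text.

```python
# Illustrative sketch only: computes the weight-based dose ranges quoted above
# (lidocaine 1.5 mg/kg, fentanyl 2-3 mcg/kg, esmolol 1-2 mg/kg).
# Function name and example weight are hypothetical, not from the text.

def dose_range(weight_kg, low_per_kg, high_per_kg):
    """Return the (low, high) total dose for a given per-kg dose range."""
    return (low_per_kg * weight_kg, high_per_kg * weight_kg)

# For a hypothetical 70 kg patient:
lidocaine_mg = dose_range(70, 1.5, 1.5)  # (105.0, 105.0)
fentanyl_mcg = dose_range(70, 2, 3)      # (140, 210)
esmolol_mg = dose_range(70, 1, 2)        # (70, 140)
```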

As outlined by ENLS, hyperventilation is one of a series of steps taken to acutely lower ICP and prevent infarction of neuronal tissue (Seder, 2012). Lowering the PCO2 causes alkalosis of the cerebrospinal fluid, which in turn leads to cerebral vasoconstriction. The typical goal PCO2 is 28-35 mmHg (roughly 20 breaths per minute), and end-tidal CO2 monitoring is recommended (Swadron, 2012). It is extremely important to note that hyperventilation is meant only as a bridge to more definitive ICP-lowering therapy, as it reduces cerebral blood flow and can lead to additional ischemia. Furthermore, following prolonged hyperventilation, the local pH normalizes through buffering mechanisms; once this occurs, vasodilation is triggered, which can worsen cerebral edema. In practice, except in the event of acute brain herniation, the PCO2 goal should be 35-45 mmHg (or end-tidal CO2 30-40 mmHg) (Seder, 2012).

Hyperosmolar therapy with either mannitol or hypertonic saline (HTS) is another important step in the management of intracranial hypertension. Mannitol (20% solution, 0.25-1 g/kg via rapid IV infusion) works by two mechanisms (Bratton, 2007). The first, which occurs within minutes, is plasma volume expansion, which lowers blood viscosity and improves cerebral blood flow and oxygenation. The second, and perhaps better known, takes 15-30 minutes and is the creation of an osmotic gradient that drives water out of neuronal cells and into the plasma, followed by rapid diuresis. This latter effect is critical, as it may precipitate hypotension in the absence of concomitant IV fluid administration. Most experts also recommend Foley placement for careful monitoring of volume status. HTS (in various concentrations, but often 3%, 150 mL IV over 10 minutes), in contrast, is believed not to produce rapid hypotension, which may be a reason for its increasing popularity in recent years. It also creates a higher osmolality in the vasculature and draws fluid out of the cerebrum (Bratton, 2007). While there are proponents for the selective use of either agent, there are no head-to-head trials evaluating the relative efficacy of mannitol and HTS, and both are considered appropriate therapies (Swadron, 2012).
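As a back-of-the-envelope check on the mannitol dose above: a 20% solution contains 0.2 g/mL, i.e. 5 mL per gram, so the bolus volume scales directly with weight. A minimal sketch (function name and example weight are illustrative, not from the source):

```python
# Sketch: convert the weight-based mannitol dose above (0.25-1 g/kg of a 20%
# solution) into an infusion volume. A 20% solution is 0.2 g/mL, i.e. 5 mL/g.
# Function name and example weight are hypothetical.

def mannitol_volume_ml(weight_kg, dose_g_per_kg):
    grams = weight_kg * dose_g_per_kg
    return grams * 5  # 5 mL of 20% solution per gram of mannitol

print(mannitol_volume_ml(70, 0.5))  # 175.0 mL for a 70 kg patient at 0.5 g/kg
```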

Although much of the above literature derives from ICH as the result of trauma, the management of ICH in the non-traumatic setting, at least acutely, is generally the same.

2. In which patients with ICH do you push for invasive neurosurgical intervention?

Following ICH, the decision of whether or not to pursue surgical intervention is reliant upon the patient’s neurological exam as well as head CT findings. The most widely accepted evidence-based recommendations are the Brain Trauma Foundation Guidelines for the Surgical Management of Traumatic Brain Injury, and there are several criteria upon which they rely (Bullock, 2006a; Bullock, 2006b; Bullock, 2006c; Bullock, 2006d). While there are many niche indications for operative intervention, practically speaking, since neurosurgery is likely to be consulted in any case of ICH, it would be helpful to remember the following clear indications for surgery:

  • GCS ≤ 8 + large mass lesion
  • Any GCS + extra-axial hematoma (epidural or subdural) ≥ 1cm thick
  • Any GCS + extra-axial hematoma (epidural or subdural) with ≥ 5mm midline shift
  • Intracranial hematomas >3cm in diameter (especially with mass effect)

When there is any form of deterioration on repeat examinations, a focal neurologic deficit, or pupillary changes such as anisocoria or fixation/dilation, surgery should be expedited (Swadron, 2012). Nearly all neurosurgeons agree that intervention is prudent in posterior fossa lesions, given the confined space compared to supratentorial lesions. Beyond this, there is still considerable variation worldwide in surgical intervention. The STICH II trial randomized patients with spontaneous, superficial supratentorial ICH to early surgery or early medical management (with possible surgery after 12 hours). The investigators found no increase in death or disability at six months in the early surgery group, and a small survival advantage (Mendelow, 2013). A similar trial in traumatic ICH is ongoing. Importantly, while severe coagulopathy is a relative contraindication to surgery, it can be corrected intraoperatively and should not delay the patient’s course to the operating room.

In intracranial hemorrhage, much of the damage occurs through secondary injury over time. The development of intracranial hypertension is associated with an increase in mortality (Bratton, 2007). As such, the 2007 Brain Trauma Foundation Guidelines recommend (level II) ICP monitoring in the following settings:

  • GCS ≤ 8 (but salvageable) + abnormal head CT*
  • GCS ≤ 8 (but salvageable) + normal head CT + 2 of the following:
    • Age > 40 years
    • SBP < 90 mmHg
    • Motor posturing

*hematomas, contusions, swelling, herniation, compressed basal cisterns

Part of the reason this has been suggested is that diagnosing intracranial hypertension based on clinical exam alone is challenging. Furthermore, ventriculostomy (placement of an external ventricular drain, EVD) has not only diagnostic but also therapeutic potential through CSF drainage. Very recently, however, the utility of ICP monitoring has been put into question. The first randomized, controlled trial of TBI patients with and without ICP monitors was published, showing no difference in six-month clinical outcomes between the two groups (Chesnut, 2012). Importantly, intracranial hypertension (either via ICP monitoring or clinical exam and imaging) was acted upon in both groups, so this study does not address whether interventions targeting ICP lead to outcome differences.

3. What interventions do you initiate in patients with ICH on antiplatelet medications?

There are varied practices in this setting, as outcome data are highly variable. Theoretically, the use of antiplatelet therapy (e.g. aspirin, clopidogrel) leads to hematoma expansion and increased mortality, as has been found in several observational studies (Roquer, 2005; Saloheimo, 2006; Naidech, 2009; Toyoda, 2005). However, despite this being a logical conclusion, numerous studies have failed to show a clinical outcome difference between ICH patients taking antiplatelet agents and those who are not (Caso, 2007; Foerch, 2006; Sansing, 2009). In light of these conflicting results, some believe it is important to pursue antiplatelet reversal until more definitive data emerge (Campbell, 2010). However, even if it is accepted that antiplatelet therapy leads to hematoma expansion and worsened clinical outcomes, it cannot be presumed that platelet transfusion, the most common antiplatelet reversal strategy, is beneficial. None of the observational studies in patients taking aspirin or clopidogrel has shown a favorable impact of transfusion. Further, platelet transfusion carries with it the risk of infection, transfusion-related acute lung injury, and allergic reactions.

Another option is desmopressin (DDAVP, 0.3 mcg/kg IV), which triggers the release of von Willebrand factor and factor VIII. It has been shown to reverse uremic as well as aspirin- and clopidogrel-induced platelet dysfunction (Flordal, 1993; Reiter, 2003; Leithauser, 2008). It is a popular alternative or adjunct to platelet transfusion and is considered to have a favorable side effect profile – particularly in comparison to the associated risks of platelet transfusion described above.

Overall, pending further investigation, it appears that the use of platelets and/or DDAVP at this stage is largely dependent on institutional practices.

4. What agent(s) do you use for warfarin reversal in the setting of ICH? What about other oral anticoagulants?

In the setting of ICH, the four agents considered for warfarin reversal are vitamin K, fresh frozen plasma (FFP), prothrombin complex concentrates (PCC), and recombinant activated factor VII (rFVIIa). There is no clear evidence on the most appropriate target INR, though many groups aim for an INR of 1.2-1.5.

In a 2011 review of the literature, Goodnough and Shander demonstrated that among guidelines for anticoagulant reversal in ICH, consensus is strongest for the use of vitamin K, which promotes hepatic synthesis of clotting factors II, VII, IX, and X (Goodnough, 2011). Its onset of action is 2-6 hours, but it requires up to 24 hours to reach full effect. As seen in the table below, it is typically given in doses of 5-10 mg IV. Vitamin K is often given alongside faster-acting agents, whose effects are shorter-lived.

FFP contains all of the coagulation factors and is the most common method of factor replacement in the United States (Dentali, 2006). However, as the amount of vitamin K-dependent factors per unit of FFP is variable, it is often difficult to predict the degree of INR correction that will accompany a given amount of FFP. A rough estimation for the amount of FFP required to correct a coagulopathy involves calculating the difference in the factor activity (%) between the goal INR and the current INR (readily available in chart format) and noting that each unit of FFP roughly increases the factor activity by 2.5%. Practically, for patients taking warfarin in the therapeutic range (INR 2-3), 2-4 units (10-12 ml/kg) of FFP are often needed. While FFP is commonly used for warfarin reversal in ICH, difficulty arises in patients with cardiac, renal, and hepatic disease who cannot tolerate large fluid loads. Additionally, the INR of FFP is around 1.5, which limits the ultimate reversal nadir.
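The rough estimate described above can be written out explicitly. The sketch below assumes only the rule stated in the text (each unit of FFP raises factor activity by about 2.5%); the factor-activity percentages themselves would come from the INR chart and are supplied by the caller, and the function name is illustrative:

```python
import math

# Sketch of the rough FFP estimate described above: units needed to raise
# factor activity from the current level (per the INR chart) to the goal
# level, assuming each unit of FFP raises factor activity by ~2.5%.
# Function name and example values are hypothetical.

def ffp_units(current_activity_pct, goal_activity_pct):
    deficit = goal_activity_pct - current_activity_pct
    return math.ceil(deficit / 2.5)  # round up to whole units

# e.g. raising factor activity from 30% to 40% would take about 4 units:
print(ffp_units(30, 40))  # 4
```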

In such situations, there may be a role for PCC, which contains the vitamin K-dependent factors in higher concentrations than FFP and requires a much smaller volume to achieve coagulopathy reversal. A September 2013 trial published in Circulation found 4-factor PCC to be similarly effective (based on clinical and laboratory endpoints) and as safe as FFP (Sarode, 2013). Another reason for using PCC may be speed: in a number of small prospective and retrospective studies, PCC demonstrated significantly more rapid reversal of coagulopathy in ICH than FFP (Cartmill, 2000; Huttner, 2006). Interestingly, while previous guidelines from the American College of Chest Physicians recommended the use of vitamin K with any of the more rapidly acting agents (PCC, FFP, or rFVIIa), the newest guidelines specifically recommend 4-factor PCC over FFP to accompany vitamin K (Ansell, 2008; Holbrook, 2012). At some institutions, PCC is given as a weight-based dose of 25-50 international units/kg, while at others the dose is INR-based (Andrews, 2012). A word should also be said about the cost of PCC: it has been estimated at $2,000 to reverse an INR of 3.0 in a patient with ICH, compared with $200-400 for FFP (Steiner, 2006). Finally, while studies have shown rapid correction of the INR with PCC administration, no study has demonstrated decreased mortality.

rFVIIa promotes factor X activation and thrombin generation on platelets at sites of injury. In recent years, it has garnered much attention for its widespread off-label use as a hemostatic agent. One major criticism, however, is that while rFVIIa has been shown to rapidly correct supratherapeutic INRs, it may not confer a true clinical benefit (Nishijima, 2010; Mayer, 2008; Ilyas, 2008). Additionally, given its short, 4-hour half-life, vitamin K and FFP are often administered concurrently. One of the greatest concerns with rFVIIa is its higher risk of arterial thromboembolic events (myocardial infarction, cerebral infarction), which was demonstrated in a randomized trial (Mayer, 2008). Like PCC, rFVIIa carries a significant price tag: for reversal of an ICH patient with an INR of 3.0, its approximate cost is $5,000 to $15,000 (Steiner, 2006). At the time of this writing, trials are underway to determine whether rFVIIa changes outcomes in patients with ICH and active extravasation on CT angiography.

In terms of the newer anticoagulants on the market, dabigatran is a direct thrombin inhibitor that is used to prevent arterial and venous thromboembolism. In many cases of minor bleeding, simply holding the next dose and providing supportive care is adequate, as the half-life is between 14 and 17 hours with normal renal function. However, trouble arises in the setting of ICH because there is no accepted monitoring strategy or reversal agent (Watanabe, 2012). While dabigatran does have a prolonging effect on the PT and PTT, these are only valuable as rough guides to the degree of anticoagulant activity. It has been suggested that PCC and rFVIIa may be used for reversal but their benefit has yet to be demonstrated clinically (Alberts, 2012). Hemodialysis remains another option, although it may be difficult to rapidly initiate and has shown limited benefit in case reports only.

Rivaroxaban, another new agent, is a factor Xa inhibitor commonly employed for stroke prevention in patients with atrial fibrillation. As with dabigatran, there is no specific antidote, but its half-life is also short (five to nine hours). The theoretical possibility of reversal with rFVIIa and PCC exists, though this has not been demonstrated. Presently, the best human data come from non-bleeding volunteers taking rivaroxaban, who showed improved PTs after receiving 4-factor PCC (Eerenberg, 2011).

Thank you to Drs. Natalie Kreitzer and Opeolu Adeoye of the University of Cincinnati Department of Emergency Medicine and the Neurosciences ICU for their expert advice on these “answers.” Please also see their excellent, recent publication on the topic “An update on surgical and medical management strategies for intracerebral hemorrhage.”


Intracranial Hemorrhage, Questions

1. What immediate steps in management do you take when a patient with intracranial hemorrhage (ICH) exhibits signs of elevated intracranial pressure (ICP)?

2. In which patients with ICH do you push for invasive neurosurgical intervention?

3. What interventions do you initiate in patients with ICH on antiplatelet medications?

4. What agent(s) do you use for warfarin reversal in the setting of ICH? What about other oral anticoagulants?



Medication Comparisons, “Answers”


Check out our own Dr. Anand Swaminathan discussing this topic and more on ER Cast here!

1. Acetaminophen vs Ibuprofen. Which do you prefer for analgesia? For fever reduction?

Pain and fever are among the most common chief complaints in the ED. Acetaminophen and ibuprofen are two of the most widely consumed medications on the market today. The relevance of this debate cannot be overstated, and yet it is rarely discussed. As this question is especially frequent in the pediatric population, we will start there.

One of the most comprehensive studies in the pediatric literature is a 2004 meta-analysis that summarized the findings from 17 randomized, controlled trials comparing the two drugs in children <18 years of age. Three studies involved pain, 10 involved fever, and all 17 involved safety. The authors found no difference in pain relief provided by ibuprofen (4-10mg/kg) and acetaminophen (7-15mg/kg); however, ibuprofen (5-10mg/kg) was superior to acetaminophen (10-15mg/kg) as an antipyretic. This was true at 2 hours, and even more pronounced at 4 and 6 hours. At these later time points, 15% more children were likely to have reduced fever with ibuprofen compared to acetaminophen. When selecting only studies using the 10mg/kg dose of ibuprofen, the effect size in support of ibuprofen doubled. As for safety, there was no evidence that one drug was less safe than the other or than placebo. The authors determined that these data were inconclusive and that more large studies would be needed to identify small differences in safety (Perrott, 2004).
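As a concrete illustration of the per-kilogram doses compared in the meta-analysis, total doses scale with weight. This is a sketch only; the function name and the 20 kg example weight are ours, not the study's:

```python
# Illustrative sketch: total-dose ranges for the per-kg antipyretic doses
# compared above (ibuprofen 5-10 mg/kg, acetaminophen 10-15 mg/kg).
# Function name and example weight are hypothetical.

RANGES_MG_PER_KG = {"ibuprofen": (5, 10), "acetaminophen": (10, 15)}

def antipyretic_dose_range(weight_kg, drug):
    low, high = RANGES_MG_PER_KG[drug]
    return (low * weight_kg, high * weight_kg)

print(antipyretic_dose_range(20, "ibuprofen"))  # (100, 200) mg for a 20 kg child
```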

In 2010 an updated meta-analysis was published. The authors noted that no such meta-analysis had been conducted in adults and therefore also sought to examine studies in this population. The article reported data from 85 studies (54 pain, 35 fever, 66 safety). Qualitative review revealed that ibuprofen was more effective than acetaminophen for pain and fever reduction, and that the two were equally safe. From the studies that provided sufficient quantitative data, the authors calculated standardized mean differences or odds ratios and then averaged these values. Here they found that for pain, ibuprofen was superior in children and adults; for fever, ibuprofen was superior in children, but conclusions could not be made for adults due to insufficient data. For safety, ibuprofen was favored, but the difference was not statistically significant (Pierce, 2010).

What about combining or alternating acetaminophen and ibuprofen? Despite a lack of consensus guidelines endorsing this practice, it is commonly employed by providers and caregivers for the treatment of fever in children. This is likely heavily influenced by “fever phobia,” a concept originally coined to describe the fear that caregivers have for perceived dangerous sequelae when a child is febrile (Schmitt, 1980). Regardless of the motives, these strategies pose two questions: does the combination actually reduce fever more effectively, and, is it safe?

There are a limited number of efficacy studies, with widely differing methodologies that make systematic analysis difficult. In addition, many of these studies have design flaws such as improper administration schedules and dosing, or too-short durations of follow-up. A 2013 article in the Annals of Emergency Medicine identified 4 studies that the author deemed high-quality and relevant to emergency practitioners. Three of the four found that the combination was more effective at reducing fever than either drug alone (Malya, 2013). However, even these higher-quality studies should be interpreted with caution, as they also have limitations.

Safety data for combination or alternating therapy are even more limited, and the concern for safety is somewhat theoretical. Dosing errors are not infrequent in the administration of acetaminophen and ibuprofen. Particularly for the former, such errors can easily lead to dangerous outcomes, and combining the two medications could magnify the potential for serious toxicity. Furthermore, alternating the medications can be confusing given the recommended dosing of acetaminophen every 4 hours and ibuprofen every 6 hours in pediatric patients (Mayoral, 2000; Sarrell, 2006). One study that looked at alternating regimens over 24 hours found that 6-13% of parents exceeded the maximum number of recommended doses (Hay, 2008). Mechanisms have been suggested by which the two drugs could act synergistically to cause renal tubular injury; however, acetaminophen and ibuprofen have different pathways of metabolism, and adverse effects in patients taking both have been described only in rare case reports (Mayoral, 2000; Smith, 2012).
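The scheduling confusion is easy to see if the two intervals are laid side by side. A minimal sketch (the five-dose daily cap for pediatric acetaminophen is a common label recommendation, stated here as an assumption rather than something from the text):

```python
# Sketch: scheduled dose times over 24 h for the intervals above
# (acetaminophen q4h, ibuprofen q6h). Strict q4h dosing yields 6
# opportunities per day, while pediatric acetaminophen labels commonly
# cap dosing at 5 doses/day (an assumption, not from the text); this is
# one way caregivers can inadvertently exceed the maximum.

def dose_times(interval_h, hours=24):
    return list(range(0, hours, interval_h))

print(dose_times(4))  # [0, 4, 8, 12, 16, 20] -> 6 acetaminophen doses
print(dose_times(6))  # [0, 6, 12, 18] -> 4 ibuprofen doses
```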

As a final piece in this question, is it acceptable to prescribe ibuprofen for pain relief in patients with fractures? While combination medications containing an opiate will often be necessary for patients with fractures, ibuprofen has anti-inflammatory properties that other medications lack, and its use may reduce the need for opiates. But your orthopedic surgery consult may recommend avoiding NSAIDs in fractures because they could suppress healing. What is the evidence?

The theory stems from the fact that, as cyclooxygenase (COX) inhibitors, NSAIDs suppress production of prostaglandins, which are important mediators in bone repair. Theoretically this makes sense, but supporting studies have been conducted only in animal models. A number of human studies have suggested that NSAID use in patients with long bone fractures is associated with nonunion; however, these are largely uncontrolled retrospective studies that fail to demonstrate causality. The authors of a recent review in the orthopedic literature state, “We found no robust evidence to attest to a significant and appreciable patient detriment resulting from the short-term use of NSAIDs following a fracture” (Kurmis, 2012).

2. PPIs vs H2 Blockers. Which is your first line choice for gastritis/GERD?

Gastritis and gastroesophageal reflux disease (GERD) are pervasive medical problems.  Treatment of these diseases was revolutionized in 1979 by the introduction of the first H2 receptor antagonist (H2RA), cimetidine, then again in 1989 by the introduction of the first proton pump inhibitor (PPI), omeprazole (Sachs, 2010). These two drug classes now form the cornerstone of treatment of gastritis and GERD. But which one works better? More specifically, which will provide symptomatic relief more quickly in the ED setting, and which one should we prescribe patients upon discharge?

Answering these questions requires a brief review of the pharmacodynamics of these drugs. PPIs suppress acid secretion by binding to the H+/K+-ATPase in the parietal cells of the stomach. There are a few important aspects to this process that affect the onset and duration of action of PPIs. First, PPIs are prodrugs. Before they are able to bind to the proton pump they must diffuse into the parietal cell and be protonated to form the active drug. As a result, PPIs have a somewhat delayed onset of action. Second, binding to the proton pump is irreversible. Therefore the duration of effect is not related to the plasma drug concentration but rather the turnover rate of the proton pump. So, despite a short half-life, PPIs can be effective for up to 3-5 days. Finally, because PPIs inhibit the last step in acid production, their effect is independent of any downstream factors.

H2RAs work as competitive inhibitors at the histamine receptor, preventing histamine from binding and stimulating acid production. Onset of action is rapid (<1hr), with peak serum concentrations reached in 1-3hrs. Unlike the PPIs, binding is reversible, and the duration of action is much shorter, approximately 12hrs. Because H2RAs do not block the final secretory step, some acid is still produced. H2RAs are less potent than PPIs, reducing daily acid production by about 70%, compared to 80-95% for PPIs. One other important property of H2RAs is tachyphylaxis: tolerance may develop within 3 days of use (Wallace in Goodman & Gilman, 2011).

Gastritis describes a spectrum of pathologies ranging from known peptic ulcer disease to functional (endoscopy-negative) dyspepsia. Regardless of the exact entity, most disease is attributable to H. pylori, aspirin/NSAID use, or alcohol. Treatment for H. pylori is well-established and includes PPIs, as these agents have been shown to heal ulcers faster than H2RAs and also to contribute to the eradication of H. pylori. For patients on chronic NSAID therapy, PPIs have also been shown to be more effective than H2RAs in healing ulcers (Boparai, 2008). One study demonstrated 8-week healing rates of 80% for 20mg omeprazole daily versus 63% for 150mg ranitidine twice daily (Yeomans, 1998). A similar trial substituting esomeprazole showed healing rates of 88% and 74%, respectively (Goldstein, 2005).

In 2005 the Agency for Healthcare Research and Quality wrote a Comparative Effectiveness Review for the management of GERD. They identified 3 well-conducted meta-analyses of PPIs and H2RAs, and concluded that PPIs were superior to ranitidine for symptom resolution at 4wks (Ip, 2005). One of these meta-analyses examined 11 randomized controlled trials (1575 patients total) comparing a PPI to ranitidine.  At 8 weeks each of the 4 PPIs included had a higher rate of healing than ranitidine. For omeprazole, healing was 1.6 times more likely than for ranitidine (Caro, 2001). In 2011 the AHRQ updated their review. They analyzed 39 additional primary studies and did not alter their previous conclusions (Ip, 2011). One of the largest of these studies (1902 patients) found that esomeprazole 20mg once daily “significantly improved all symptoms” in 80% of patients, compared to 47% in those taking ranitidine 150mg twice daily (Hansen, 2006). A recent Cochrane Review based on 7 trials similarly found that PPIs were significantly more effective than H2RAs for remission of symptoms (RR 0.66) (van Pinxteren, 2010).

Current practice guidelines in the gastroenterology literature advocate that the therapy of choice for GERD is an 8-week course of PPIs, initiated once daily before the first meal of the day. In incomplete responders, dosing may be increased to twice daily, or, in the absence of erosive disease, H2RAs may be substituted or added at bedtime for nocturnal breakthrough symptoms (Katz, 2013).

Unfortunately, there is a lack of comparative studies or guidelines for treating gastritis or GERD in the ED setting. However, based on other studies and the known pharmacologic properties of these drugs (onset, duration, and tolerance), it is fair to say that PPIs are the preferable first-line treatment for all patients long-term, while in the acute setting H2RAs may provide symptomatic relief more quickly.

The last question to ask is whether there are any adverse effects that may be pertinent when choosing an agent. Both drugs are very safe with few side effects. PPIs are metabolized by the liver whereas H2RAs are excreted by the kidneys, therefore care should be taken when prescribing these medications for patients with hepatic or renal insufficiency, respectively. Additionally, PPIs are metabolized by cytochrome P450 enzymes. As a result, they have the potential to interfere with elimination of other drugs such as warfarin and clopidogrel that are cleared by the same pathway.

It has been suggested that chronic PPI use may be associated with increased risk of fractures and certain infections, such as pneumonia and clostridium difficile. To address these concerns, the most recent consensus guidelines in the American Journal of Gastroenterology state the following:

  • PPIs may be prescribed for patients with osteoporosis and should not be a concern unless a patient has other risk factors for fracture;
  • PPI use can be a risk factor for clostridium difficile infection, and PPIs should be used with caution in patients at risk;
  • Short-term PPI use may be associated with increased risk of community-acquired pneumonia, but this is not seen with long-term use;
  • PPIs can be continued in patients taking clopidogrel (Katz, 2013)

3. Meclizine vs Benzodiazepines. Which do you prescribe for vertigo?

For peripheral vertigo (labyrinthitis, vestibular neuritis, and benign paroxysmal positional vertigo (BPPV)), vestibular suppressants and, to a lesser extent, antiemetics comprise the arsenal of pharmacologic treatment. The use of these drugs applies mostly to labyrinthitis and vestibular neuritis, as BPPV is short-lived and can often be corrected by positioning maneuvers.

Vestibular suppressants include 3 major classes (Hain, 2003):

  • Antihistamines (meclizine, diphenhydramine, dimenhydrinate)
  • Benzodiazepines (diazepam, lorazepam)
  • Anticholinergics (scopolamine)

There have been very few studies examining the efficacy of these drugs, and even fewer head-to-head trials. Moreover, the majority of these limited studies are decades old. In 1972, Cohen and deJong demonstrated that meclizine was superior to placebo in reducing vertigo symptoms and the frequency and severity of attacks; however, their sample size was only 31. Conversely, in 1980, McClure and Willet found that benzodiazepines were not superior to placebo. Again, the sample size was small, with 25 patients randomized to diazepam, lorazepam, or placebo.

In a somewhat more recent study from the EM literature, dimenhydrinate was compared to IV lorazepam for the treatment of vertigo in the ED. At 2 hours, dimenhydrinate was more effective in relieving symptoms and less sedating than lorazepam. This study had a sample size of 74 (Marill, 2000).

Whether based on the few studies or on anecdotal evidence, most sources seem to have a slight preference for antihistamines over benzodiazepines. But further review of the literature brings to light a more important question: whether there is a place for medication at all in the treatment of vertigo.

In 2008, the American Academy of Neurology and the American Academy of Otolaryngology both published evidence-based practice guidelines for the treatment of BPPV. The neurology recommendations state: “There is no evidence to support a recommendation of any medication in the routine treatment for BPPV” (Fife, 2008).

The ENT guidelines state, “Vestibular suppressant medications are not recommended for the treatment of BPPV, other than for the short-term management of vegetative symptoms such as nausea or vomiting in a severely symptomatic patient.” The authors justify their recommendation based on the lack of evidence for these medications, but also on the potential harm associated with them. Side effects of the vestibular suppressants include drowsiness, cognitive impairment, gastrointestinal motility disorders, urinary retention, dry mouth, and visual disturbances. Some of these medications are, on their own, significant risk factors for falls and other accidents, and they become even more dangerous in patients already experiencing dizziness (Bhattacharyya, 2008).

Both academies advocate the use of particle repositioning maneuvers (PRMs), such as the Epley and Semont, as opposed to medications. This is supported by a number of studies, one of which showed improvement rates of 79-93% for medication plus PRMs, versus 31% for medication alone (Itaya, 1997).

By definition, BPPV consists of brief episodes of vertigo triggered by movement; therefore it makes sense that medications would not be a useful management strategy. They will not prevent episodes and they should not be needed to abort very short-lived symptoms. Bhattacharyya points out that although some studies have shown improvement after vestibular suppressants, patients are followed for a duration in which symptoms should be expected to resolve spontaneously.

Many sources also point out another downside to vestibular suppressants: they delay central compensation. Compensation is an adaptive response to any vestibular stimulus, whether related to normal motion or to disease. This process is key to the recovery from vestibular diseases in the sub-acute phase. All the vestibular suppressants are thought to slow compensation, although the support for this claim comes primarily from animal studies. Nonetheless, most authors consider this further evidence for use of these medications only in the acute period and not after the initial 48 hours (Hain, 2003).

Finally, it is important to note that in the ED setting it is often difficult to diagnose BPPV with certainty. A 2009 article in the EM literature highlights the frequency with which providers misdiagnose and mistreat BPPV and acute peripheral vertigo (APV), an umbrella term describing labyrinthitis and vestibular neuritis/neuronitis. The authors caution that while BPPV is more prevalent in the general population, APV is actually the more common disorder among patients presenting to EDs. They go on to explain that the distinction is important because BPPV and APV have different treatments. Despite the pervasive use of meclizine to treat BPPV, it is not indicated for this diagnosis. Meanwhile, APV should be treated with steroids (Newman-Toker, 2009).

In summary, for the undifferentiated patient with symptoms of vertigo, vestibular suppressants can be used for the acute management of severe symptoms, regardless of the patient’s diagnosis, and meclizine is generally the preferred drug. However, it is important to attempt to make the correct diagnosis and guide further treatment accordingly, whether it is with PRMs, steroids, or another strategy.

4. Calcium Channel Blockers vs Beta Blockers. Which is your first-line choice for rate control in atrial fibrillation?

Atrial fibrillation (AF) is the most common dysrhythmia seen in the ED. For years, management of this disease has been rife with controversy, such as whether to anticoagulate, whether to rate control versus rhythm control, and which agents to use in each of these management strategies. This discussion, however, is limited to the most common medications used for rate control: non-dihydropyridine calcium channel blockers (CCBs) and beta blockers (BBs). Please see our previous topic on recent-onset atrial fibrillation for discussion of the other controversies: http://emlyceum.com/2011/08/29/acute-onset-atrial-fibrillation-answers/

CCBs block voltage-gated calcium channels in the heart and blood vessels. BBs competitively inhibit catecholamine binding at beta-1 and beta-2 receptors in the heart and vascular smooth muscle. Both slow conduction through the AV node and lengthen its refractory period during high rates of conduction (Demircan, 2005).

Digoxin was previously the mainstay of treatment for stable, rapid AF until the introduction and FDA approval of IV diltiazem for AF in 1992 (Schreck, 1997). A recent survey study investigated prescribing preferences in new-onset AF among nearly 2000 emergency physicians in multiple English-speaking countries. They found that in the U.S. and Canada, IV diltiazem was the most commonly preferred drug for rate control (95% and 65% of respondents, respectively). In the U.K. and Australasia, IV metoprolol was most commonly preferred (68% and 66% of respondents, respectively) (Rogenstein, 2012).

Possibly the most famous study conducted on AF is the AFFIRM trial. One arm of the analysis focused on approaches to rate control. The investigators found that 59% of patients randomized to a BB alone achieved rate control, versus 38% of those randomized to a CCB alone. They also found that more patients were switched from a CCB to a BB than vice versa (Olshansky, 2004). Of note, alteration in regimen was at the discretion of the treating cardiologist. Additionally, average follow-up in the study was 3.5 years, making the results much less applicable to the ED setting. While this study is often cited as providing the crux of available data on CCBs versus BBs for AF, there actually have been some additional small studies, including a number conducted in ED patients (!).
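The AFFIRM rate-control percentages above can be expressed in more familiar trial terms. The sketch below is our own illustrative arithmetic (not a calculation reported in the study), converting the proportions achieving rate control into an absolute difference and a number needed to treat (NNT):

```python
# Illustrative arithmetic only (our calculation, not from Olshansky, 2004):
# NNT = 1 / absolute risk difference, conventionally rounded up to the
# next whole patient.
import math

def nnt(rate_a: float, rate_b: float) -> int:
    """Number needed to treat for one additional success with strategy A."""
    return math.ceil(1 / (rate_a - rate_b))

bb, ccb = 0.59, 0.38  # proportion achieving rate control: BB alone vs. CCB alone
print(nnt(bb, ccb))   # absolute difference of 21 points -> NNT of 5
```

In other words, taken at face value, roughly one additional patient achieved rate control for every five managed with a beta blocker rather than a calcium channel blocker, though the caveats about cardiologist-directed crossover and the 3.5-year follow-up apply.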

A Turkish study of 40 ED patients compared the efficacy of IV diltiazem versus IV metoprolol. At 20 minutes, rate control was achieved in 90% of patients randomized to diltiazem versus 80% of those randomized to metoprolol (Demircan, 2005). Another study of 52 ED patients found that diltiazem was more likely to achieve rate control at 30 minutes (Fromm, 2011). A recent, larger study compared not only efficacy but also the safety of CCBs and BBs. The primary outcome was proportion of patients requiring hospital admission: 31% for the CCB group versus 27% for the BB group. There were no significant differences in the secondary outcomes of ED length of stay, adverse events, 7- and 30-day ED revisits, stroke, and death. The authors concluded that while diltiazem has been observed to reduce heart rates more quickly than metoprolol, the two drugs are associated with similar overall outcomes (Scheuermeyer, 2013).

Are there any practice guidelines for rate control of rapid AF?  ACEP has not published any, but the AHA has. They state: “In the absence of preexcitation, intravenous administration of beta blockers (esmolol, metoprolol, or propranolol) or nondihydropyridine calcium channel antagonists (verapamil, diltiazem) is recommended to slow the ventricular response to AF in the acute setting, exercising caution in patients with hypotension or heart failure” (Fuster, 2006).

The mention of heart failure is notable. All of the above drugs should be avoided or used with caution in patients with decompensated heart failure; however, in patients with compensated heart failure or left ventricular dysfunction, BBs are the drug of choice. The same is true for patients with acute coronary syndromes or thyrotoxicosis. Conversely, CCBs are preferred in patients with obstructive pulmonary disease, which is a relative contraindication to BBs (Oishi, 2013).


Medication Comparisons, Questions

1. Acetaminophen vs. ibuprofen: Which do you prefer for analgesia? For fever reduction?

2. PPIs vs. H2 Blockers: Which is your first choice for GERD/gastritis?

3. Meclizine vs. benzodiazepines: Which do you prescribe for vertigo?

4. Calcium channel blockers vs. beta blockers: Which is your first choice for rate control in atrial fibrillation?



Epistaxis, “Answers”

1. How do you differentiate an anterior from a posterior nosebleed?

Whether you’re sitting high in the bleachers at a baseball game or treating a patient in the Emergency Department, nosebleeds can be an extremely common, frustrating, and sometimes even life-threatening problem. Nosebleeds have been reported to occur in over 60% of the general population at some point in their lifetime (Pollice, 1997). One of the key aspects of the evaluation of a nosebleed, although very difficult, is differentiating between an anterior and a posterior source. Both rely on the same treatment principle of tamponade, but they differ in the equipment and materials required, prognosis, and disposition.

The history and physical exam may help distinguish between an anterior and a posterior nosebleed. Anterior nosebleeds often derive from Kiesselbach’s plexus, an anastomosis of various small arteries in the anterior portion of the nose. Bleeds in this plexus are the cause of 90% of nosebleeds (Schlosser, 2009). Dry air, nose-picking, rhinitis, and cocaine use tend to cause anterior rather than posterior nosebleeds. Posterior nosebleeds often arise from the more posterior sphenopalatine arteries and, rarely, the internal carotid artery. Delayed (5 days to 9 weeks), massive epistaxis after head and neck surgery or trauma may point more toward a posterior nosebleed from an internal carotid artery pseudoaneurysm (Chen, 1998). All other causes of nosebleeds, including anti-platelet and anti-coagulant agents, hereditary telangiectasia, platelet disorders, nasal neoplasms, and hypertension, may cause either an anterior or a posterior nosebleed.

If history does not provide the clear answer, proper inspection with insertion of a nasal speculum can often help to define the origin when a bleeding vessel is visualized.

If both history and physical examination still leave you in the dust, anterior nosebleed treatment failure may be the only method of differentiating the two.  Although there is a paucity of evidence on this topic, it is reasonable to conclude that if brisk bleeding continues, especially into the oropharynx, despite placement of bilateral anterior nasal packing, a posterior source of bleeding is likely.

2.  What type of nasal packing do you prefer if direct pressure and silver nitrate cautery fail?  Gauze ribbons? Nasal tampons (i.e. Merocel©, Rhino Rocket©)?  Nasal balloon catheters (i.e. Rapid Rhino©)?

In the fifth century BC, Hippocrates documented the use of intranasal packing as one of the primary treatment methods for epistaxis (John, 1987).  Nowadays a variety of nasal packing options exist at the disposal of the emergency practitioner including gauze ribbons, nasal tampons, and nasal balloon catheters.

Gauze ribbons impregnated with an antiseptic are one of the oldest forms of nasal packing still used today: practitioners insert the gauze in layers, accordion fashion, to fully pack the anterior portion of the nose. The advantages include relatively low cost and packing that effectively fills the irregular contours of the nose. The downside, however, is that specialty equipment, skill, and extra time are often required (Corbridge, 1995). A study of 50 patients by Corbridge, et al., compared BIPP-impregnated gauze ribbons (similar to Xeroform) to a Merocel© nasal tampon and showed no statistically significant differences in bleeding control, patient discomfort, or follow-up complication rates. The nasal tampon was found to be easier to insert (Corbridge, 1995).

Nasal tampons, often made of a foam-based polymer (i.e. Merocel©, Rhino Rocket©), as well as nasal balloon catheters (i.e. Rapid Rhino©), have the advantage of easier and more rapid placement, as they may be inserted blindly. Their convenience and speed come at a significantly higher cost per patient. A prospective randomized controlled trial of 40 patients by Singer, et al., compared the nasal tampon Rhino Rocket© to the nasal balloon catheter Rapid Rhino© and found that both had similar rates of successful tamponade. The Rapid Rhino© balloon catheter, however, had a significantly lower pain score with insertion, with a mean difference of 18 (95% CI 1 to 35) on a 100-point visual analog scale, as well as a significantly lower pain score with removal, with a mean difference of 12 (95% CI 1 to 25). The Rapid Rhino© balloon catheter was also found to be easier for the practitioner to insert (Singer, 2005). A similar prospective, randomized, controlled trial by Badran, et al., showed comparable results (Badran, 2005).

Although not commonly done in the United States, a recent article supports using the injectable form of tranexamic acid topically, via saturated pledgets, to control anterior bleeding (Zahed, 2013).

Bottom line: Gauze ribbons, nasal tampons, and nasal balloon catheters all appear to be equally effective in controlling epistaxis; however, nasal tampons and balloon catheters are less time-consuming and easier to insert. The Rapid Rhino© nasal balloon catheter appears to be the easiest to insert and the least painful for patients.

3.  How long do you tell patients to keep nasal packing in before following up with ENT? Which patients with nasal packing need antibiotics?

Complications of anterior nasal packing, although rare, include septal hematomas, abscesses, sinusitis, neurogenic syncope, and pressure necrosis (Kucik, 2005). One of the most feared complications of prolonged anterior nasal packing, however, is toxic shock syndrome, secondary to dissemination of the S. aureus exotoxin TSST-1 from the nasal cavity into the bloodstream (Tan, 1999). S. aureus can be isolated in 33% of these patients, of whom 30% harbor strains producing the exotoxin (Breda, 1987). Because of these possible complications, most physicians urge patients discharged with nasal packing to follow up with an ENT physician (otolaryngologist) within approximately 48-72 hours, although no firm evidence supports this time frame. At the follow-up appointment, the packing can be removed and a more thorough examination of the nares completed.

It is well accepted that patients with posterior nasal packing need prophylactic antibiotics; however, do patients discharged with anterior nasal packing require antibiotic prophylaxis as well? Much of the evidence suggests that prophylactic antibiotics are likely unnecessary in these patients.

In a study by Biswas, et al., researchers examined the growth of bacteria on anterior nasal packing in those given prophylactic antibiotics and those not given prophylactic antibiotics. No significant difference was found (Biswas, 2009).  A study by Biggs, et al., in the UK, examined the effect of new antibiotic guidelines that did not require physicians to prescribe antibiotics for routine anterior nasal packing. They found that after a 6 week follow-up period with the new guidelines there were no statistically significant differences between patients under the old and new guidelines in terms of symptom scores or readmission rates (Biggs, 2013).  Furthermore, Pepper, et al., conducted a prospective study of 149 patients requiring anterior nasal packing over a 6-month period. Half the group received prophylactic antibiotics and the other half did not. Neither group developed any complications from the nasal packing except for mild otalgia (Pepper, 2012).

In addition to the lack of evidence supporting a benefit, antibiotics may cause significant side effects, including diarrhea and allergic reactions, and may breed further antibiotic resistance.

Although the evidence against prophylactic antibiotics for routine anterior nasal packing seems strong, an overwhelming number of ENT physicians still prescribe antibiotics for the duration of the packing. While no direct evidence exists on this question, some argue that it may still be prudent to provide antibiotics to special populations, including those with packing in place for more than 48 hours, the immunosuppressed, diabetics, and the elderly.

Bottom line: Most patients discharged with nasal packing should follow up with an ENT physician within 48-72 hours to reduce potential complications. Most patients with anterior nasal packing do not require antibiotic prophylaxis, as the incidence of toxic shock syndrome is very low.

4. Which patients do you admit to the hospital?

It is generally accepted that most patients with posterior bleeds will be admitted to the hospital because of the risk of airway obstruction and subsequent hypoxemia and dysrhythmia.  Supplemental oxygen should be administered to all these patients on admission.  Rapid and profuse bleeding from a posterior site may require operative management, which occurs in about 30% of cases (Brinjikji, 2010).  Definitive treatment as an inpatient often consists of endoscopic cauterization/ligation of vessels, or angiographic embolization.

It may also be reasonable to admit patients with anterior bleeds that have not responded to nasal packing and other non-invasive measures. Admission should also be considered for patients who are elderly or have COPD (due to lack of pulmonary reserve), as well as those with hemodynamic instability, potential airway compromise, hypovolemia from acute blood loss, anemia, coagulopathy, or myocardial infarction.


Epistaxis, Questions

1. How do you differentiate an anterior from a posterior nosebleed?

2. What type of nasal packing do you prefer if direct pressure and silver nitrate cautery fail? Gauze ribbons? Nasal foam tampons (i.e., Merocel, Rhino Rocket)? Nasal balloon tampons (i.e., Rapid Rhino)?

3. How long do you tell patients to keep nasal packing in before following up with ENT? Which patients with nasal packing need antibiotics?

4. Which patients do you admit to the hospital?



tPA in Ischemic Stroke, “Answers”


Check out our own Dr. Anand Swaminathan discussing this topic and more on ischemic stroke on ER Cast here and here!

1. How do you control blood pressure (BP) in patients who will be or are receiving tPA?

For patients not receiving tPA for acute ischemic stroke, allowing autoregulation of blood pressure has long been the norm. tPA has somewhat muddied the waters for blood pressure management in acute ischemic stroke. In the pilot NINDS study of tPA in acute stroke (Haley, 1993), and in previous studies of tPA in myocardial infarction, there was a higher association with intracranial hemorrhage (ICH) in patients with a blood pressure greater than 185 mm Hg systolic or 110 mm Hg diastolic, or who underwent aggressive treatment to reach these levels. Thus, in the randomized NINDS tPA Stroke Study (NINDS, 1995), patients were excluded if their blood pressure could not be brought below these thresholds. The term “aggressive treatment” was not defined prospectively in the NINDS protocol; by expert consensus, however, it has been taken to mean intravenous nitroprusside, repeated doses of IV labetalol, enalaprilat, or nifedipine.


Specifically, if patients have a diastolic blood pressure >140 mm Hg on two readings, they should be started on a continuous IV infusion of an antihypertensive agent, and they are not candidates for tPA therapy (Broderick, 1996). Patients who require more than two doses of labetalol or other antihypertensive agents to decrease blood pressure to <185 mm Hg systolic or <110 mm Hg diastolic are typically not appropriate for thrombolytic therapy; this is a relative contraindication (Broderick, 1996). Some stroke physicians, however, will still treat an acute ischemic stroke with tPA after two labetalol doses have been given and a nicardipine drip has been started, despite the lack of significant evidence for this practice.

Agents used to treat blood pressure in ischemic stroke should be easily titratable, have a quick onset of action, and carry limited risk of an excessive or precipitous drop in pressure.

In patients whose blood pressure must be lowered into an appropriate range for thrombolytic therapy, IV labetalol is a popular first-line agent. Labetalol is easily titratable and is commonly started at 10 mg IV over 1-2 minutes; this dose can be repeated or doubled every 10 to 20 minutes. Another choice is enalaprilat, which can be given in 1.25 mg increments. Nicardipine is commonly used as a titratable continuous infusion. Agents such as nitroglycerin or sublingual nifedipine may have more unpredictable effects, such as rapid drops in pressure with reflex tachycardia, so they are considered second line and are rarely used. Any patient who receives antihypertensives for ischemic stroke requires serial neurologic exams to look for signs of deterioration (Broderick, 1996).

2. Do you treat patients with tPA up to 4.5 hours after onset of symptoms? If so, which ones?

In February 2013, the American College of Emergency Physicians (ACEP) released a clinical policy statement on acute ischemic stroke, giving treatment between 3 and 4.5 hours a Class B recommendation (ACEP, 2013). However, at this time, the use of tPA in stroke patients between 3 and 4.5 hours is not yet FDA approved, and it remains widely and passionately debated.

Evidence for thrombolytics between 3 and 4.5 hours largely comes from the ECASS III randomized controlled trial, which directly tested the benefit of tPA in this window (Hacke, 2008). ECASS III used the same dosing and the same inclusion/exclusion criteria as the NINDS trial, but additionally excluded patients older than 80, those with a baseline NIH stroke scale (NIHSS) score of 25 or greater, those taking any oral anticoagulant, and those with the combination of a previous stroke and diabetes mellitus. The number needed to treat in ECASS III was 14 patients, more modest than in NINDS, where the NNT was 8. Although the overall ICH rate in ECASS III was 27% in treated patients compared to 17% in untreated patients, there was no significant difference in overall mortality at 90 days. The incidence of symptomatic intracranial hemorrhage was 2.4% in treated patients compared to 0.2% in untreated patients. Furthermore, the rate of symptomatic ICH was not higher in ECASS III than in the NINDS trial (Hacke, 2008). These trials defined a symptomatic ICH as a new ICH found on head CT in a patient with clinical deterioration following an acute ischemic stroke (NINDS, 1995).
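To put the ECASS III hemorrhage rates in the same terms as the NNT quoted above, the symptomatic ICH rates can be converted into a number needed to harm (NNH). This is our own back-of-the-envelope arithmetic, not a figure reported in the trial:

```python
# Back-of-the-envelope arithmetic (ours, not a figure from Hacke, 2008):
# NNH is the reciprocal of the absolute risk increase in symptomatic ICH.
sich_tpa, sich_placebo = 0.024, 0.002  # symptomatic ICH rates in ECASS III
nnh = 1 / (sich_tpa - sich_placebo)    # 1 / 0.022
print(round(nnh))                      # ~45: one extra sICH per ~45 treated
```

Weighed against an NNT of 14 for a favorable outcome, this is one way to frame the risk-benefit conversation, keeping in mind that overall 90-day mortality did not differ between groups.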

In 2009, a meta-analysis was published to specifically examine the efficacy and safety of tPA in the 3 to 4.5 hour time frame (Landsberg, 2009). The authors pooled data from patients in this window from four fairly homogeneous studies: ECASS-I (n = 234), ECASS-II (n = 265), ECASS-III (n = 821), and the Alteplase Thrombolysis for Acute Noninterventional Therapy in Ischemic Stroke (ATLANTIS) trial (n = 302). They concluded that patients treated in this time window have an increased rate of favorable outcome without an adverse effect on mortality (Landsberg, 2009).

One concern raised about extending tPA administration to 4.5 hours is that patients and treating physicians might feel as though they had more time to treat acute ischemic strokes, which could lead to a decreased benefit of the drug. Another controversy was that the ECASS III study used a slightly different definition of symptomatic ICH (sICH) than the NINDS trial. However, even when the more conservative NINDS definition of sICH was applied to the ECASS III data, sICH rates remained comparable between the two studies.

Based on these studies, tPA may be safe when administered 3 to 4.5 hours after symptom onset, as long as the specific safety criteria from the ECASS III and NINDS trials are met. However, patients still have the best outcomes when tPA is administered as early as possible. As the final line of the ECASS III publication states: “Having more time does not mean we should be allowed to take more time.”

3. How do you determine if an acute ischemic stroke is improving enough to not give tPA to a patient?

One of the most common reasons to withhold tPA in ischemic stroke is the presence of mild or rapidly improving stroke symptoms (Nedeltchev, 2007). For example, the American Heart Association stroke guidelines state that eligibility for tPA requires that “neurological signs should not be minor and isolated” (Adams, 2007). The rationale is that patients with rapidly improving symptoms are likely having a TIA rather than a CVA.

The NINDS recombinant tPA Stroke Trial (NINDS, 1995) included rapidly improving stroke symptoms (RISS) as an exclusion criterion, to avoid treating transient ischemic attacks that would have recovered completely without treatment. The trial included only a very small number of patients with RISS, defined as an NIHSS score of 5 or less (58 patients with RISS were included, while 2,971 were excluded due to mild symptoms). Thus, conclusions about this subgroup of stroke patients cannot be drawn from the NINDS study (Khatri, 2010).

When the FDA approved tPA in 1996, all eligibility criteria from the NINDS recombinant tPA Stroke Study were adopted (NINDS, 1995). The package insert gave contraindications and warnings directly from the study protocol, including the exclusion of patients with RISS. The TREAT task force attempted to clarify this exclusion criterion (TREAT, 2013), holding an in-person “RISS Summit” to obtain a better understanding of the phenomenon.

The TREAT task force concluded that, in the absence of other contraindications, patients who experience improvement of any degree but have a persisting, potentially disabling neurologic deficit should be treated with IV tPA (TREAT, 2013). Improvement should be monitored for the time needed to prepare and administer the IV tPA. There was also consensus that all neurologic deficits present at the time of the treatment decision, as well as the patient’s baseline functional status, should be weighed in the individual assessment of risk and benefit (TREAT, 2013).

Many studies, however, have suggested that the outcome of patients with mild or rapidly improving symptoms who do not receive tPA is not always benign. A large study from Canada found that 32% of patients considered “too good to treat” were dependent at hospital discharge or had died (Barber, 2001). A separate study from Massachusetts General Hospital reported that patients with a high initial NIHSS but with RISS had a four times greater chance of neurologic worsening than patients presenting with initially mild symptoms (Adams, 2007). A third study, from UCLA, demonstrated that 10% of patients excluded from thrombolysis solely because of their RISS status showed early neurological deterioration, and twenty percent had a poor outcome at discharge, as defined by a modified Rankin score of 3 or greater (Rajajee, 2006).

Nedeltchev, et al., also found that patients with persisting proximal vessel occlusions and RISS were 7 times more likely (95% CI: 1.1 to 45.5; P = 0.038) to have an unfavorable outcome at three months (2007). They defined proximal occlusions as those of the internal carotid artery, the M1 and M2 segments of the middle cerebral artery, the A1 segment of the anterior cerebral artery, the V4 segment of the vertebral artery, the basilar artery, and the P1 segment of the posterior cerebral artery. They also found that rapidly improving but still severe symptoms (NIHSS of 10 points or greater on admission) increased the odds of an unfavorable outcome 17-fold (95% CI: 1.8 to 159.5; P = 0.013). These findings suggest that patients with persistent large-vessel occlusions and those with an NIHSS score of 10 or greater at symptom onset might benefit from thrombolysis despite mild or rapidly improving symptoms at presentation (Nedeltchev, 2007). Notably, 75% of untreated patients with mild or rapidly improving symptoms in this study had a favorable outcome at 3 months, defined by a modified Rankin score of 0 or 1.

4. Do you use a specific age cutoff when determining whether or not a patient should or should not receive tPA?

Elderly patients with acute ischemic stroke have historically been challenging for neurologists and other stroke physicians to treat. Physicians have typically feared a higher incidence of symptomatic ICH in this group of patients. They are often excluded from trials on tPA and ischemic stroke for this reason. Thus, little data exists on the safety and efficacy of treating elderly patients with tPA for acute ischemic stroke.  This is an important topic to study, since 30% of strokes occur in patients over the age of 80 (Mishra, 2010).

In all the ECASS studies, the age restriction was set at 80. In the NINDS trial, only 44 patients older than 80 were randomized. There had been an initial age limit of 80 years, but it was removed, so some patients 80 and older were ultimately included; their outcomes at three months were not significantly improved compared to those who did not receive tPA (NINDS, 1995). In a later subgroup analysis of this population, it was found that 25 of the 44 patients older than 80 had been given tPA. This group was 2.87 times more likely than its younger counterparts in the study to experience a symptomatic ICH (Longstreth, 2009).

However, other studies have demonstrated more positive results of treating elderly patients with tPA. In fact, a meta-analysis of the SITS-ISTR and VISTA data (n = 29,228) revealed that increasing age is associated with a poorer outcome in general in acute ischemic stroke, but that this association was found regardless of whether patients were treated with tPA (Mishra, 2010). This study compared outcomes at 90 days in patients who received tPA and controls. Specifically, the authors examined the association of thrombolysis treatment with outcome across various age groups, with 3,439 patients aged over 80. The number needed to treat for a favorable outcome (score of 0-2 on a modified Rankin scale) was 8.2 patients. They stated that poorer outcomes in the elderly are more likely due to other comorbidities than to an increase in symptomatic ICH (Mishra, 2010). Furthermore, the tPA stroke survey experience, published in Stroke, concluded that there was no evidence to withhold tPA in patients older than 80, as long as they were appropriately selected (Tanne, 2000). A third study, of stroke patients in three German stroke centers, found similar results in 228 patients, 38 of whom were 80 years or older. This study found a higher mortality in older patients (21.1% versus 5.3% at 90 days), but no difference in the rate of ICH between younger and older patients, with the authors also concluding that there is no evidence to exclude ischemic stroke patients from thrombolysis based on a predefined age threshold (Berrouschot, 2005).

At EM Lyceum we love debate, and know this is an area of particular controversy for EPs. Although our aim this month is not to rehash the controversies, we hope to add some more data to your thinking about this topic.  Even amongst our group of writers and editors we differ greatly in how we approach these questions.  We would love to hear your thoughts.

Thanks to Dr. William Knight of the University of Cincinnati for his expert thoughts on this topic.




tPA in ischemic stroke, Questions

1. How do you control blood pressure in patients who will be/are receiving tPA?

2. Do you treat patients with tPA up to 4.5 hours from onset of symptoms, and if so which ones?

3. How do you determine if an acute ischemic stroke is improving enough to not give tPA to a patient?

4. Do you use a specific age cutoff when determining whether or not a patient should or should not receive tPA?



Fluid Responsiveness, “Answers”

1. How do you assess fluid responsiveness in the ED? Do you use IVC collapsibility in spontaneously breathing patients?

Although fluid resuscitation is paramount in the treatment of sepsis, volume overloading critically ill patients has been shown to worsen outcomes including length of intensive care unit (ICU) stay, days on a ventilator, and mortality (Rosenberg, 2009). Methods of assessing volume status (preload) and hemodynamic response to fluid challenges (volume responsiveness) are thus very important when managing these patients. Until recently, central venous pressure (CVP) monitoring dominated clinician guidance of fluid management and was used regularly by over 90% of intensivists (McIntyre, 2007). CVP represents the right atrial pressure and has erroneously been extrapolated to estimate left ventricular preload and thus fluid responsiveness. A recent meta-analysis found no relationship between CVP and circulating blood volume, left or right ventricular preload, or fluid responsiveness (also known as the “seven mares” article, Marik, 2008). Alternative methods of determining volume status and fluid responsiveness have subsequently been sought with greater fervor.


Pulmonary artery occlusion pressure (PAOP) measured via a pulmonary artery catheter, like CVP, fails to reflect preload or volume responsiveness (Marik, 2010). Other static indices including left ventricular end-diastolic area (LVEDA) measured by transesophageal echocardiography and global end-diastolic volume measured through a cardiac output monitor (PiCCO), although predictive of preload, also fail to accurately predict fluid responsiveness (Marik, 2010).

Dynamic measurements perform better in predicting fluid responsiveness but generally require mechanical ventilation to control for the substantial variation in respiratory cycle volumes and intrathoracic pressures characteristic of spontaneous breathing. Pulse pressure variation (PPV), measured by arterial waveform, and stroke volume variation (SVV), measured by arterial or pulse oximeter plethysmographic waveform, have been shown to correlate very well with volume responsiveness. The sensitivity and specificity of PPV have been documented at 89% and 88% respectively, and those of SVV at 82% and 86% (Marik, 2010). Accurate measurements do require tidal volumes of 8-10 mL/kg and specialized analysis devices. Inferior vena cava diameter distensibility (dIVC) with respiration, although criticized by some as having limitations similar to CVP (Marik, 2010), has been studied repeatedly in mechanically ventilated patients and appears to be a valid option for predicting volume responsiveness. Barbier showed that a dIVC >18% predicts volume responsiveness with a sensitivity and specificity of 90% (Barbier, 2004). Other studies, though small and observational, show a similar correlation (Machare-Delgado, 2011; Moretti, 2010).
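As a point of reference, PPV is conventionally calculated as the difference between the maximal and minimal pulse pressures over one respiratory cycle, divided by their mean. A minimal sketch of that arithmetic, using hypothetical pressures:

```python
# Conventional pulse pressure variation (PPV) calculation:
# PPV (%) = 100 * (PPmax - PPmin) / mean(PPmax, PPmin),
# taken over a single respiratory cycle. Example values are hypothetical.
def pulse_pressure_variation(pp_max, pp_min):
    """Return PPV as a percentage."""
    mean_pp = (pp_max + pp_min) / 2
    return 100 * (pp_max - pp_min) / mean_pp

# Example: maximal pulse pressure 48 mmHg, minimal 40 mmHg
ppv = pulse_pressure_variation(48, 40)
print(f"PPV = {ppv:.1f}%")  # PPV = 18.2%
```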

Unfortunately, all the aforementioned techniques possess limitations that will often preclude their application in the Emergency Department. In the ED, the ideal method for measuring fluid responsiveness must be technically easy, fast, non-invasive and, importantly, reliable in spontaneously breathing patients. SVV measured by arterial waveform has been shown to predict volume responsiveness in spontaneously breathing patients at a threshold of 17% (PPV 100%, NPV 82%, p=0.03) in at least one study (Lanspa, 2013). This technology, however, requires placement of an arterial line and specialized equipment not available in most EDs. Similarly, passive leg raise predicts fluid responsiveness reliably but requires invasive monitoring like an a-line or specialized equipment (Benomar, 2010). IVC collapsibility, on the other hand, is technically easy and non-invasive, and recent studies suggest it may have a role in spontaneously breathing patients. In spontaneously breathing patients, the IVC collapses on inspiration and distends on expiration. Upon intubation, the patient’s physiology reverses from negative pressure to positive pressure; as a result, the IVC distends on inspiration and collapses on expiration. The best available IVC data consist of two observational studies, which ultimately offer cautious support for use of IVC collapsibility in spontaneously breathing patients. The first found IVC inspiratory variation greater than 40% to predict fluid responders with a sensitivity of 70% and specificity of 80%; values below 40%, however, could not be used to exclude fluid responders (Muller, 2012). The second found an IVC collapsibility index of less than 15% to have a 100% negative predictive value (p=0.03) for fluid responsiveness, whereas variation over 50% had a positive predictive value of 75% (p=0.09) (Lanspa, 2013). Both studies used subcostal windows to assess inferior vena cava diameter variation as it entered the right atrium. Though promising, these data should be interpreted carefully given the small size of the studies, the lack of statistical significance for some values, and the wide range of clinically indeterminate values of IVC collapsibility.
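The two IVC indices are defined slightly differently: distensibility in mechanically ventilated patients is conventionally indexed to the minimum diameter (as in Barbier, 2004), while the collapsibility (caval) index used in spontaneously breathing patients is indexed to the maximum. A sketch with hypothetical diameters:

```python
# Commonly used IVC respiratory-variation indices; the example diameters
# below are hypothetical, not taken from the cited studies.
def ivc_distensibility(d_max, d_min):
    """dIVC (%) in mechanically ventilated patients: (Dmax - Dmin) / Dmin."""
    return 100 * (d_max - d_min) / d_min

def ivc_collapsibility(d_max, d_min):
    """Caval index (%) in spontaneously breathing patients: (Dmax - Dmin) / Dmax."""
    return 100 * (d_max - d_min) / d_max

# Example: IVC diameter 2.1 cm at maximum, 1.4 cm at minimum
print(f"dIVC = {ivc_distensibility(2.1, 1.4):.0f}%")            # dIVC = 50%
print(f"Collapsibility = {ivc_collapsibility(2.1, 1.4):.0f}%")  # Collapsibility = 33%
```

Note that the same pair of diameters yields a larger number for distensibility than for collapsibility, so the published cutoffs (18% vs. 40-50%) are not interchangeable between ventilated and spontaneously breathing patients.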

It is important to remember that all of the cited studies apply to initial resuscitation in the ICU, often after aggressive fluid resuscitation in the ED. The need for a more cautious approach to fluid resuscitation during the initial management of critically ill, particularly septic, patients in the ED is less established.

2. Which crystalloid fluid do you use to resuscitate critically ill patients?

Normal saline (NS) is traditionally the first-line fluid for resuscitation of critically ill patients in the ED. NS first came into widespread use in the 1830s during the European cholera epidemic, saving countless lives. The actual electrolyte content of NS in its early days was likely more “normal” than it is today, with estimated sodium and chloride levels of 134 and 118 mmol/L respectively (Yunos, 2010). Today, NS is neither normal nor physiologic, containing 154 mmol/L of both sodium and chloride. Every liter of NS administered thus delivers supra-physiologic loads of these electrolytes, which play key roles in acid-base physiology. Alternative crystalloid solutions, including Hartmann’s solution/Lactated Ringer’s (LR) and balanced electrolyte solutions (BES) such as Plasma-Lyte, offer more physiologic concentrations of electrolytes and may have unique advantages for resuscitation in critical care. Small variations in electrolyte content can make clinically important differences when resuscitating with large volumes or when caring for patients over extended periods in the ICU. In the current era of hospital overcrowding and extended ED stays, this concern becomes particularly relevant to all ED physicians. See the table below for the electrolyte content of commonly used fluids (Table 1).

Table 1: Electrolyte Content of Common Crystalloid Solutions (mmol/L)

                Plasma   Normal Saline   Hartmann's/LR   Plasma-Lyte
Sodium             140             154             131           140
Potassium            5               0               5             5
Chloride           100             154             111            98
Bicarbonate         24               0               0             0
Calcium            2.2               0               2             0
Magnesium            1               0               1           1.5
Lactate              1               0              29             0
Acetate              0               0               0            27
Gluconate            0               0               0            23

Specifically, high chloride content has been targeted as a potential source of harm in large-volume crystalloid resuscitation. New understanding of complex acid-base physiology, namely the Stewart physicochemical approach, is the driving force behind the recent attention given to chloride. Briefly, under this approach, chloride is the predominant negative strong ion in plasma and a key component of the strong ion difference (SID), which directly influences hydrogen ion concentration and thus acid-base status (Yunos, 2010). NS resuscitation has been clearly linked to hyperchloremic metabolic acidosis (HMA), but debate exists regarding its clinical significance (Yunos, 2010; Heijden, 2012). Preclinical and healthy human volunteer data provide increasing evidence for chloride-associated hypotension, reductions in renal cortical perfusion, decreased glomerular filtration rate (GFR), and pro-inflammatory states (Chowdhury, 2012; Yunos, 2010; Wilcox, 1983; Kellum, 2004; Kellum, 2006). Recently, a prospective, open-label study examined chloride-liberal vs. chloride-restrictive fluid resuscitation of critically ill patients and its effect on acute kidney injury (AKI). Importantly, in this study of over 1500 patients, resuscitation with chloride-restrictive fluids was associated with a significantly smaller rise in serum creatinine levels and a lower incidence of AKI. Though a secondary outcome, patients receiving chloride-restrictive fluids also received less renal replacement therapy (Yunos, 2012). The combined existing evidence, now bolstered by a well-designed clinical trial, calls into question the routine use of potentially harmful chloride-rich fluids when alternative, equally effective options are available.
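As a rough worked example, an “apparent” SID can be approximated from the strong cations and chloride alone. The values below come from Table 1; this is an illustrative simplification of the Stewart approach that ignores calcium, magnesium, and organic anions such as lactate (which are metabolized in vivo), not a full physicochemical calculation:

```python
# Simplified apparent strong ion difference (mmol/L): SID ≈ [Na+] + [K+] - [Cl-].
# Electrolyte values are taken from Table 1 above.
def apparent_sid(na, k, cl):
    return na + k - cl

fluids = {
    "Plasma":        apparent_sid(140, 5, 100),  # ~45
    "Normal saline": apparent_sid(154, 0, 154),  # 0
    "Hartmann's/LR": apparent_sid(131, 5, 111),  # 25
    "Plasma-Lyte":   apparent_sid(140, 5, 98),   # 47
}
for name, sid in fluids.items():
    print(f"{name}: SID = {sid} mmol/L")
```

The point of the exercise: NS has an apparent SID of zero, so large volumes drag the plasma SID down toward zero, which under the Stewart approach drives hydrogen ion concentration up, i.e. hyperchloremic acidosis.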

Choice of crystalloid fluid may be particularly important in conditions with disarray of electrolytes and acid-base status such as diabetic ketoacidosis (DKA). Patients in DKA are profoundly volume depleted and require large volumes of NS for resuscitation. As a result, HMA commonly occurs during treatment and complicates the management of DKA (Morgan, 2002). A blinded, randomized controlled trial compared a balanced electrolyte solution (BES), Plasma-Lyte, to NS for prevention of HMA during resuscitation of patients with DKA. Patients receiving BES were found to have significantly lower levels of chloride and higher levels of bicarbonate, consistent with prevention of HMA (Mahler, 2010). A smaller, non-randomized study found similar results (Chua, 2012). Less evidence is available for LR in DKA. A randomized controlled trial compared NS to LR for resolution of acidosis; the study was small and terminated early due to poor enrollment, and there was a non-significant decrease in time to resolution of acidosis in the group receiving LR (Van Zyl, 2011). As mentioned previously, the clinical significance of HMA is still debated, but mounting evidence suggests avoiding HMA may be beneficial to the patient.

3. Do you ever use hypertonic saline in patients with septic shock?

Through multiple inflammatory mechanisms, sepsis creates a pathophysiologic state of vasodilation and increased endothelial permeability with resultant maldistribution of blood flow. Rapid, high-volume fluid resuscitation is a key element to counter this effect and to adequately deliver oxygen to tissues in patients with septic shock. Hypertonic fluids may offer unique benefits over other crystalloids. Hypertonic saline osmotically pulls fluid from intracellular spaces into the vasculature, resulting in rapid plasma expansion that far exceeds the actual volume infused. This effect permits the use of smaller fluid volumes, decreasing the risk of edema and further improving tissue oxygenation. Preclinical data support the use of hypertonics in sepsis, with cardiovascular benefits ranging from improved volume expansion to increased cardiac contractility and better splanchnic perfusion (Garrido, 2006; Ing, 1994; Oi, 2000). Additionally, enhanced immunomodulatory effects, including reduced bacterial colony counts and enhanced bacterial killing, have been demonstrated with hypertonics (Shields, 2003).
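To put “hypertonic” in perspective, the approximate osmolarity of a NaCl solution can be estimated from its weight/volume concentration. A back-of-the-envelope sketch (assuming complete dissociation and taking the molar mass of NaCl as ~58.44 g/mol):

```python
# Approximate osmolarity of a w/v % NaCl solution, assuming full
# dissociation into Na+ and Cl- (each ion contributes to osmolarity).
NACL_MOLAR_MASS = 58.44  # g/mol

def nacl_osmolarity(percent_wv):
    """Return approximate osmolarity in mOsm/L for a given % w/v NaCl."""
    grams_per_liter = percent_wv * 10                      # 0.9% -> 9 g/L
    mmol_per_liter = grams_per_liter / NACL_MOLAR_MASS * 1000
    return 2 * mmol_per_liter                              # two osmoles per NaCl

print(round(nacl_osmolarity(0.9)))   # ~308 mOsm/L (roughly isotonic)
print(round(nacl_osmolarity(7.5)))   # ~2567 mOsm/L (markedly hypertonic)
```

The roughly eight-fold difference in tonicity is what drives the osmotic shift of intracellular water into the vasculature described above.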

Good clinical data on hypertonics and sepsis, however, is limited and further studies are needed. Two small, randomized controlled trials evaluated an initial bolus of hypertonic saline with colloid compared to colloid or NS alone and found improved cardiac function with hypertonics (Oliveira, 2002; van Haren, 2012). In Oliveira’s study, the group receiving 7.5% saline/dextran was found to have significant increases in cardiac index, pulmonary artery occlusion pressure and stroke volume index without significant side effects (Oliveira, 2002). Van Haren found the 7.2% hypertonic/hydroxyethyl starch (HES) group to have increased cardiac contractility and a decreased need for further fluid resuscitation in the following 24 hours. Although these studies were randomized, both were extremely small thus preventing the evaluation of clinically important measures including mortality and potential risks including hypernatremia and acid-base effects.


4. What is your threshold for giving blood transfusions? Does this change in patients with cardiac disease or GI bleeds?

In states of high metabolic demand accompanying critical illness, oxygen requirements can outpace supply, creating an oxygen debt at the tissue level. Allogeneic red blood cell (RBC) transfusions have long been a cornerstone of critical care management to counter this imbalance and augment oxygen delivery to tissues. Prior to the TRICC trial in 1999, a hemoglobin (Hgb) transfusion threshold of 10 g/dL was standard practice. Growing concern over the complications of RBC transfusions, including immunosuppression, inflammation, infection, and transfusion reactions, particularly in the critically ill, prompted the landmark TRICC trial. This was a randomized controlled trial comparing restrictive versus liberal (7.0 g/dL vs. 10.0 g/dL) Hgb transfusion thresholds. Actively bleeding patients and those with acute coronary syndrome (ACS) were excluded; patients with cardiac disease were included. The TRICC trial showed no difference in 30-day mortality for a restrictive compared to liberal transfusion threshold (18.7% vs. 23.3%, p=0.11). Additionally, fewer cardiac adverse events and smaller changes in multi-organ system dysfunction scores were seen in the restrictive group (Hebert, 1999). This trial firmly established a threshold of 7.0 g/dL as an acceptable Hgb transfusion strategy in the critically ill. Supporting this conclusion, a 2012 Cochrane review found restrictive strategies to result in a 39% reduction in blood transfused, an overall reduction in in-hospital mortality, and no difference in mortality at 30 days (Carson, 2012).

In a subgroup analysis of the TRICC trial, the restrictive arm showed no difference in 30- and 60-day mortality for patients with cardiovascular disease (20.5% vs. 22.9%, p=0.69). This finding differed significantly from preexisting observational data, which showed increased mortality with a restrictive strategy (Carson, 1996). Complicating the picture, when confirmed ischemic heart disease, severe peripheral vascular disease, and severe comorbid cardiac disease were isolated from all cardiac disease (i.e. the most clinically relevant cardiac disease), a non-significant trend toward increased mortality was seen in the restrictive group (p=0.3) (Hebert, 2001).

To address this discrepancy, the FOCUS trial compared a liberal (Hgb <10 g/dL) vs. restrictive (Hgb <8 g/dL or symptomatic) transfusion threshold in patients with CAD or CAD risk factors undergoing hip surgery. Using a composite endpoint of death and inability to walk independently, the restrictive strategy was found to be no different from the liberal one. No difference was found in the secondary outcomes of adverse cardiovascular events (Carson, 2011). This study was billed as the definitive trial for restrictive transfusion thresholds in patients with CAD, but it has received significant criticism. Use of a composite endpoint whose components differ greatly in clinical significance (walking independently and death) can cloud results and lead to misleading interpretations. Although mortality was lower in the restrictive arm (6.6% vs. 7.6%), a much larger sample size would be required to draw significant conclusions (Meybohm, 2012). The American Association of Blood Banks’ (AABB) clinical practice guideline offers a weak recommendation for transfusion of hemodynamically stable patients with cardiovascular disease at Hgb concentrations of 8 g/dL or for symptoms (Carson, Grossman, 2012).

To date, no randomized controlled trial of transfusion strategies in patients with active ACS has been undertaken. A review of existing studies consisting primarily of observational data concluded that in patients admitted for ACS, transfusions at Hgb >11 g/dL increased mortality but at Hgb <8 g/dL, transfusions decreased mortality or did no harm. Given the observational nature of the studies, however, conclusions cannot be drawn (Garfinkle, 2013). The AABB does not make a recommendation for transfusion thresholds in patients with ACS, citing absence of quality data (Carson, Grossman, 2012).

A restrictive transfusion strategy appears to be safe in patients with CAD, but importantly, none of the above trials included actively bleeding patients. In 2013, Villanueva published a landmark paper in the New England Journal addressing transfusion thresholds in patients with acute upper GI bleeds. In this trial patients were randomized to transfusion Hgb thresholds of 7 g/dL vs. 9 g/dL. Patients in the restrictive group had significantly decreased bleeding, fewer adverse events and increased survival at 6 weeks. With this evidence, patients with active upper GI bleeds can now be considered prime candidates for restrictive transfusion thresholds, which may not only be safe, but beneficial (Villanueva, 2013).
