Spinal Cord Injury, Questions

1. What imaging do you use for patients with possible acute, traumatic spinal cord injury?

2. How do you treat neurogenic shock?

3. What is your management and disposition for elderly patients with vertebral compression fractures?

4. How do you clear a C-spine after a negative CT in a trauma patient who is awake, neuro intact, wearing a collar?


Infectious Diseases, “Answers”

1. Which patients with neutropenic fever do you consider for outpatient management?

Neutropenic fever is a common presentation to the Emergency Department, especially in tertiary hospitals where many oncology patients are undergoing chemotherapy. According to the Infectious Diseases Society of America (IDSA), fever in neutropenic patients is defined as a single oral temperature of >38.3°C (101°F) or a temperature of >38.0°C (100.4°F) sustained for >1 hour. Rectal temperature measurements (and rectal exams) are discouraged by the IDSA because they risk introducing colonizing gut organisms into the surrounding mucosa and soft tissues (Freifeld, 2011). The definition of neutropenia varies from institution to institution; the IDSA defines it as an absolute neutrophil count (ANC) <500 cells/microL or an ANC that is expected to decrease to <500 cells/microL over the next 48 hours. Profound or severe neutropenia occurs when the ANC is <100 cells/microL. The National Cancer Institute defines neutropenia as an ANC <1000 cells/microL (HHS 2010).
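
For readers who like to see definitions as explicit logic, the thresholds above reduce to a small decision rule. The sketch below (Python; the function names and structure are my own, not from the IDSA document) simply encodes the cut-offs quoted in the preceding paragraph:

```python
def is_neutropenic_fever(temp_c, fever_sustained_1hr, anc, anc_expected_below_500=False):
    """Apply the IDSA threshold definitions quoted above (Freifeld, 2011).

    temp_c: single oral temperature in degrees Celsius
    fever_sustained_1hr: True if temperature >38.0 C sustained for >1 hour
    anc: absolute neutrophil count in cells/microL
    anc_expected_below_500: clinician judgment that the ANC will fall
        below 500 cells/microL over the next 48 hours
    """
    fever = temp_c > 38.3 or fever_sustained_1hr
    neutropenia = anc < 500 or anc_expected_below_500
    return fever and neutropenia

def neutropenia_severity(anc):
    """Label neutropenia severity using the cut-offs in the text."""
    if anc < 100:
        return "profound/severe"
    if anc < 500:
        return "neutropenic (IDSA definition)"
    if anc < 1000:
        return "neutropenic (NCI definition)"
    return "not neutropenic"

# Example: oral temp 38.5 C with an ANC of 300 cells/microL meets the definition.
print(is_neutropenic_fever(38.5, False, 300))   # True
print(neutropenia_severity(300))                # "neutropenic (IDSA definition)"
```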

Patients with neutropenic fever are usually started on broad-spectrum IV antibiotics and admitted to the hospital; however, there is a subgroup of patients who can be safely managed as outpatients. The official wording from the IDSA guidelines is that “Carefully selected low-risk patients may be candidates for oral and/or outpatient empirical antibiotic therapy (B-I)”. Grade B is defined as moderate evidence to support a recommendation for use, and Level I is evidence from ≥1 properly randomized, controlled trial. The data from which these recommendations were derived include one large series of patients, in which oral outpatient treatment for low-risk fever and neutropenia was deemed successful in 80% of patients, with 20% of patients requiring readmission. Factors predicting readmission included age >70, grade of mucositis >2, poor performance status, and ANC <100 cells/microL at onset of fever (Escalante 2006). Klastersky et al studied 178 low-risk patients who were treated with oral antibiotics. Only 3 patients were readmitted, for a reported 96% success rate (Klastersky 2006).

The IDSA formally risk stratifies patients using the Multinational Association for Supportive Care in Cancer (MASCC) scoring system. The adult guidelines from Australia, the European Society for Medical Oncology (ESMO), and the American Society of Clinical Oncology (ASCO) also recommend the use of the MASCC index (Gea-Banacloche 2013). Low-risk patients have a MASCC score ≥ 21.

[Table: MASCC risk-index scoring system]

The index has been validated in multiple settings and performs well, although it may function better in solid tumors than in hematologic malignancies (Klastersky 2013). An issue with the major criterion of “burden of febrile neutropenia” is that it has no standardized definition, making uniform application of the MASCC confusing (Kern 2006).
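
The MASCC index itself is a simple additive score. The sketch below encodes the commonly published component weights from the index's derivation; these weights are not listed in this post, so treat them as my summary and verify against the primary source. The ≥21 low-risk threshold is the one cited above:

```python
def mascc_score(burden_of_illness,      # "none/mild" (5 pts), "moderate" (3), or "severe" (0)
                no_hypotension,          # systolic BP >= 90 mmHg
                no_copd,
                solid_tumor_or_no_prior_fungal_infection,
                no_dehydration_requiring_iv_fluids,
                outpatient_at_fever_onset,
                age_under_60):
    """Sum the MASCC risk-index points (maximum 26); >= 21 is considered low risk."""
    score = {"none/mild": 5, "moderate": 3, "severe": 0}[burden_of_illness]
    score += 5 if no_hypotension else 0
    score += 4 if no_copd else 0
    score += 4 if solid_tumor_or_no_prior_fungal_infection else 0
    score += 3 if no_dehydration_requiring_iv_fluids else 0
    score += 3 if outpatient_at_fever_onset else 0
    score += 2 if age_under_60 else 0
    return score

# Example: an otherwise well outpatient with mild symptoms scores the maximum of 26.
score = mascc_score("none/mild", True, True, True, True, True, True)
print(score, "low risk" if score >= 21 else "high risk")   # 26 low risk
```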

Based on the best available data prior to the 2010 guidelines release, the IDSA developed an algorithm to risk stratify patients with neutropenic fever and guide their management:

[Figure: IDSA algorithm for risk stratification and management of neutropenic fever]

Ciprofloxacin plus amoxicillin-clavulanate is recommended for oral empirical treatment (Freifeld 2010). Other oral regimens, including levofloxacin or ciprofloxacin monotherapy or ciprofloxacin plus clindamycin, are less well studied but are commonly used. In low-risk patients, the risk of invasive fungal infection is low, and therefore routine use of empirical antifungal therapy is not recommended. Respiratory virus testing and chest radiography are indicated for patients with upper respiratory symptoms and/or cough.

A systematic review and meta-analysis of 14 RCTs was published in 2011 and was not, therefore, included in the 2010 guidelines (Teuffel 2011). The meta-analysis concluded that inpatient versus outpatient management was not significantly associated with treatment failure; mortality did not differ between the two groups; and outpatient oral versus outpatient parenteral antibiotics were similarly efficacious, with no association between route of administration and treatment failure.

It must be emphasized that patients treated as outpatients need easily accessible and very close follow-up with their oncologists. They should be vigilantly examined for a source of infection, including a thorough skin, mucosal, and neurologic exam, as any obvious focal infection may necessitate inpatient treatment.

Bottom Line: Based on the IDSA guidelines and other meta-analyses, it is reasonable to treat a carefully selected subset of low-risk febrile neutropenic patients with oral antibiotics as outpatients, provided they have good follow-up.

2. Which patients with community-acquired pneumonia do you admit?

Community-acquired pneumonia (CAP) is defined as an acute infection of the pulmonary parenchyma in a patient who has acquired the infection in the community, as distinguished from hospital-acquired (nosocomial) pneumonia, which occurs 48 hours or more after hospital admission and is not present at time of admission. A third category of pneumonia, designated “healthcare-associated pneumonia,” is acquired in other healthcare facilities such as nursing homes, dialysis centers, and outpatient clinics or within 90 days of discharge from an acute or chronic care facility. The most common validated prediction rules for prognosis in community-acquired pneumonia include the Pneumonia Severity Index, CURB, and CURB-65 severity scores.


According to the Pneumonia Severity Index (PSI), patients in risk classes I-III are defined as low risk for short-term mortality and are considered for outpatient treatment. In the original derivation study of the PSI, mortality ranged from 0.1 to 0.4 percent for class I patients, from 0.6 to 0.7 percent for class II, and from 0.9 to 2.8 percent for class III (Fine 1997). This older but well-validated rule can be difficult to remember and apply; as a result, the relatively simpler CURB score, with a total point score ranging from 0-4, was described and externally validated (Ewig 2004). A modified version, the CURB-65, which added age ≥65 as another positive risk factor, was internally validated to stratify short-term mortality for patients with CAP (Lim 2003). Patients scoring CURB <1 and CURB-65 <2 are considered low risk and candidates for outpatient treatment. The CURB and CURB-65 scores are easier to remember and apply; however, they do require laboratory data (BUN), whereas the lowest PSI risk class (class I) can be assigned without a blood draw, a benefit appreciated in outpatient settings.
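
To illustrate how easily the bedside rules apply, here is a minimal CURB-65 sketch. The individual criteria (confusion, urea >7 mmol/L, respiratory rate ≥30, SBP <90 or DBP ≤60, age ≥65) are my summary of the standard rule rather than something spelled out in the paragraph above; the <2 low-risk cut-off is the one used in this post:

```python
def curb65(confusion, urea_mmol_per_l, resp_rate, sbp, dbp, age):
    """Return the CURB-65 score (0-5); this post treats a score < 2 as low risk."""
    score = 0
    score += 1 if confusion else 0
    score += 1 if urea_mmol_per_l > 7 else 0      # roughly BUN > 19 mg/dL
    score += 1 if resp_rate >= 30 else 0
    score += 1 if sbp < 90 or dbp <= 60 else 0
    score += 1 if age >= 65 else 0
    return score

# Example: a 70-year-old with normal mentation, labs, and vitals scores 1
# (age alone) and would be low risk (< 2) by the threshold used above.
print(curb65(False, 5.0, 18, 120, 80, 70))   # 1
```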

The three prediction rules were pitted against each other in a prospective study of over 3000 patients with community-acquired pneumonia from 32 hospital EDs to see which one was better at predicting 30-day mortality (Aujesky, 2005). Inclusion criteria included age ≥ 18 with a clinical diagnosis of pneumonia and new radiographic pulmonary infiltrate. Exclusion criteria included hospital acquired pneumonia, immunosuppression, psychosocial problems incompatible with outpatient treatment, or pregnancy.

The PSI classified a significantly greater proportion of patients as low risk (68%) than the CURB (51%) and the CURB-65 (61%). Among patients classified as low risk by the PSI (classes I-III), aggregate 30-day mortality was 1.4%, lower than the 1.7% mortality in the low-risk groups of both the CURB (score <1) and the CURB-65 (score <2). High-risk patients based on the PSI (classes IV-V) had a higher mortality of 11.1% compared with high-risk CURB (≥1) and high-risk CURB-65 (≥2) patients, with respective mortality rates of 7.6% and 9.1%.

The PSI had a slightly higher sensitivity and negative predictive value across each risk cut-off point compared to the CURB and CURB-65. In addition, by comparing the areas under the receiver operating characteristic (ROC) curves, the PSI had a statistically significantly greater discriminatory power to predict 30-day mortality. CURB-65 showed a higher overall discriminatory power than the original CURB score.

Based on an estimated 4 million annual cases of community-acquired pneumonia in the USA (DeFrances, 2007) and average costs of inpatient versus outpatient care of $7500 vs $264, using the PSI would identify an additional 650,000 low-risk patients compared with the CURB and 250,000 compared with the CURB-65, saving a significant amount of healthcare dollars while still maintaining a low 30-day mortality rate (Aujesky, 2005).
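
The cost argument can be reproduced as back-of-the-envelope arithmetic from the figures quoted above; this is only a rough illustration and assumes every reclassified low-risk patient is actually treated as an outpatient:

```python
# Rough reproduction of the cost argument using the figures quoted above.
inpatient_cost = 7500      # average cost of inpatient care, USD
outpatient_cost = 264      # average cost of outpatient care, USD
saving_per_patient = inpatient_cost - outpatient_cost   # $7,236

additional_low_risk_vs_curb = 650_000     # extra patients classified low risk by PSI vs CURB
additional_low_risk_vs_curb65 = 250_000   # extra patients classified low risk by PSI vs CURB-65

print(f"vs CURB:    ~${saving_per_patient * additional_low_risk_vs_curb / 1e9:.1f} billion/year")
print(f"vs CURB-65: ~${saving_per_patient * additional_low_risk_vs_curb65 / 1e9:.1f} billion/year")
# ~4.7 billion and ~1.8 billion per year respectively, under the assumption above.
```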

The 2007 (most current) Infectious Diseases Society of America (IDSA) recommendations on managing CAP revolve around the initial assessment of severity. “Severity-of-illness scores such as the CURB-65 criteria, or prognostic models, such as the Pneumonia Severity Index (PSI), can be used to identify patients with CAP who may be candidates for outpatient treatment. (Strong recommendation; level I evidence).” In addition to objective data, the IDSA also recommends supplementing with subjective factors, including the ability to safely and reliably take oral medications and the availability of outpatient support resources (Strong recommendation; level II evidence). (Mandell, 2007).

Bottom Line: Based on the validated prognostic scoring systems and recommendations from the IDSA, a subgroup of low-risk patients with community acquired pneumonia can safely be managed as outpatients. Their prognosis can be reliably predicted by 3 scoring systems, where the PSI performs slightly better but is more complex to apply than the CURB and CURB-65.

3. Which patients with influenza do you treat with oseltamivir?

Neuraminidase inhibitors, specifically oseltamivir, have been increasingly used in the last 5 years for the treatment of patients with symptoms of influenza. In fact, between May and December of 2009, around 18.3 million prescriptions were written for the drug in seven countries (Australia, Canada, France, Germany, Japan, UK and USA) (Muthuri 2014). Additionally, hundreds of millions of doses have been stockpiled in various countries as a safeguard against pandemic influenza, and the World Health Organization (WHO) lists oseltamivir as an essential drug. Physicians have been encouraged to prescribe oseltamivir by drug companies, professional society recommendations, hospitals, and patient pressure. Despite widespread use, the benefit of oseltamivir has never rested on firm evidence-based ground. Much of this stems from the reluctance of the pharmaceutical giant Roche to release all relevant study data on the drug. This changed in 2013, when all of the data were made available for analysis.

Before we delve into the recently released data surrounding oseltamivir, let’s look at the prior recommendations and their basis. In 2009, the BMJ published a review of a number of observational studies looking at the effect of oseltamivir in the treatment of influenza (Freemantle 2009). This publication examined the randomized controlled studies provided by Roche at that time. The limited available evidence supported a role for oseltamivir in reducing the rate of post-influenza pneumonia in otherwise healthy adults. There was no evidence of a mortality benefit and limited safety data (Freemantle 2014). Additionally, the available data supported the idea of using oseltamivir for chemoprophylaxis in patients who were at risk of exposure. A Cochrane group review in 2010 echoed these results but also stated that there was extensive bias present in the available studies and that, without full disclosure of all the research, no strong recommendation could be made (Jefferson 2014). In spite of the limited evidence, broad recommendations were made, including treatment of patients with multiple comorbidities, pregnant patients, and the immunocompromised, as well as chemoprophylaxis for close contacts (Harper 2009).

In 2013, Roche released all of the study data. The Cochrane respiratory group subsequently published an updated systematic review of all of the randomized controlled trials (Jefferson 2014) as well as a summary statement in the BMJ (Jefferson 2014). A number of statements from the prior review stand: there are minimal studies on efficacy and safety in pregnant patients, and no mortality benefit was seen. The reviewers did not find a reduction in post-influenza pneumonia and posited that this prior finding was likely due to publication bias. The major outcomes of the 2014 Cochrane systematic review are summarized below:

Alleviation of symptoms: shortened by 16.8 hours with oseltamivir
Admission to hospital: no difference
Reduction in confirmed pneumonia: no difference
Other complications: no difference
Transmission in prophylaxis group: no reduction

Additionally, the group reported on a number of common side effects:

Nausea: increased (NNH 28)
Vomiting: increased (NNH 22)
Psychiatric events: increased (NNH 94)
Headache: increased (NNH 32)
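
For readers unfamiliar with the metric, number needed to harm (NNH) is simply the reciprocal of the absolute risk increase. The sketch below uses made-up event rates for illustration only; they are not the Cochrane data:

```python
def nnh(rate_treatment, rate_control):
    """Number needed to harm = 1 / absolute risk increase."""
    absolute_risk_increase = rate_treatment - rate_control
    if absolute_risk_increase <= 0:
        raise ValueError("no excess harm in the treatment group")
    return 1 / absolute_risk_increase

# Hypothetical example: nausea in 9.6% of treated patients vs 6.0% of controls
# gives an absolute risk increase of 3.6% and an NNH of about 28.
print(round(nnh(0.096, 0.060)))   # 28
```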

Overall, we see a mild shortening in the duration of symptoms with no reduction in admissions, confirmed post-influenza pneumonia, or other complications. The same findings were seen in pediatric patients. There is minimal evidence regarding efficacy or safety in pregnancy, as pregnancy was an exclusion criterion in most of the studies. Side effects were common. Also, chemoprophylaxis did not reduce transmission of the disease. These results call into question the utility of oseltamivir for the treatment of influenza in any patient.

In June 2014, the PRIDE Consortium Investigators published a study challenging the Cochrane group findings (Muthuri 2014). In this large observational cohort (n = 29,234 patients), Muthuri et al found an association with decreased mortality (adjusted OR = 0.81) and an additional benefit to early (<2 days) treatment versus later treatment (adjusted OR = 0.48). This study, however, has major flaws and biases that call the validity of its conclusions into question. Only 19% of the centers that were contacted agreed to contribute data to the Consortium, creating a high potential for bias. Additionally, the researchers did not assess the quality of the studies included in their meta-analysis (Antes 2014). Regardless, observational data should not trump the RCT data included in the Cochrane review.

Bottom Line: The best available evidence demonstrates that oseltamivir leads to a mild reduction in the duration of symptoms of influenza. There is no proven benefit for mortality, hospital admissions, or confirmed influenza-related complications, including pneumonia. The frequency of side effects may outweigh the mild symptom-reduction benefit of the drug. The results of the 2014 Cochrane meta-analysis should be used to update the current CDC recommendations.

4. Which adult patients getting worked up for a urinary tract infection do you send a urine culture on?

A urinary tract infection (UTI) is a condition in which bacteriuria is present with evidence of host invasion (dysuria, frequency, flank pain, or fever). The “gold standard” for defining significant bacteriuria is the detection of any microorganisms by suprapubic aspiration. Since this method is not typically employed, many sources use a threshold of more than 10⁵ cfu/mL on a midstream urine culture to indicate true infection.

The American College of Emergency Physicians has recently kicked off the “Choosing Wisely” campaign (ACEP 2013) in an attempt to limit unnecessary testing in the Emergency Department. Although urine culture was not one of the five tests that was addressed, it is one of the most commonly sent laboratory tests in the ED making it a potential place to curb costs. In 1997, UTIs accounted for one million ED visits in the United States (Foxman 2002). Numerous publications and practice guidelines have recommended against the use of routine urine cultures in uncomplicated UTIs. Despite this, a 1999 survey of 269 EM physicians showed that 24% of them would order a urine culture on a 30-year-old non-pregnant woman with an uncomplicated UTI with dysuria of recent onset (Wigton 1999).

Werman et al in 1986 tackled the question of the “Utility of Urine Cultures in the Emergency Department” (Werman 1986). They concluded that urine cultures should only be obtained in patients at high risk for pyelonephritis or bacteremia/urosepsis, as well as in those expected to have uncommon or resistant organisms. This paper cited studies showing that routine urine cultures in nonpregnant women with acute cystitis do not affect management. Morrow et al showed that treatment of women seen in the ED with suspected cystitis proceeded with little attention to urine culture results, despite the fact that cultures were obtained routinely (Morrow 1976). Winickoff et al demonstrated that patients who did not have follow-up urine cultures after a UTI had no greater risk for reinfection or complications than did patients in whom follow-up cultures were obtained (Winickoff 1981). Additionally, a positive culture does not necessarily indicate the absolute need for antibiotics. In a study of 53 women with culture-proven UTIs, no patient progressed to pyelonephritis or bacteremia in spite of the fact that all were treated with placebo (Mabeck 1972).

In 2011, Johnson et al. addressed the question “Do Urine Cultures for Urinary Tract Infections Decrease Follow-up Visits?” (Johnson 2011). This retrospective cohort study looked at 779 female patients aged 18-65 diagnosed with a UTI or acute cystitis and treated in a family medicine clinic (exclusion criteria: pregnancy, diabetes, UTI or antibiotic use in the preceding 6 weeks, or another medical condition making the UTI complicated). The follow-up rate for patients without urine cultures was 8.4%, which was not statistically different from the 8.7% follow-up rate for patients with urine cultures. Ordering a urine culture was not associated with a decreased rate of follow-up visits (adjusted OR 1.11 [CI 0.65-1.90]). Of all 447 urine cultures ordered, only 1 grew bacteria resistant to nitrofurantoin, a common antibiotic used in the ED for uncomplicated cystitis. A 2006 UK study found that 23 women would need urine cultures to prevent one follow-up visit for resistance-based treatment failure; empiric treatment without a urine culture was therefore recommended (McNulty 2006).

Bottom Line: Although there are no prospective randomized controlled trials looking specifically at ED patients with UTI symptoms, it is safe to say that a urine culture in healthy adult non-pregnant females with new onset urinary symptoms without concern for pyelonephritis or bacteremia is unlikely to change management or outcome.


Infectious Diseases, Questions

 

1. Which patients with neutropenic fever do you consider for outpatient management?

2. Which patients with community-acquired pneumonia do you admit?

3. Which patients with influenza do you treat with oseltamivir?

4. Which adult patients getting worked up for a urinary tract infection do you send a urine culture on?


Airway and Sedation, “Answers”

Question #1: Do you reach for video laryngoscopy or direct laryngoscopy first for intubations?

Tracheal intubation is a fundamental skill for EM providers to master. Historically, direct laryngoscopy (DL) has been the modality of choice for endotracheal intubation, with a proven high success rate in the ED. However, video laryngoscopy (VL) devices have become increasingly popular and prevalent. These devices have a number of potential advantages, including improved laryngeal exposure and visualization, as well as allowing more experienced practitioners to observe the procedure during training (Levitan, 2011). As VL devices gain wider use, some have called for their establishment as standard care. It is important to note that not all VL devices are equivalent. Some devices use standard geometry blades (which allow both direct and video laryngoscopy) while others have hyperangulated geometry blades, which do not allow for direct laryngoscopy. Many devices interchangeably accept standard and hyperangulated blades. A full description of all of these devices is beyond the scope of this post.

Much of the early literature comparing VL to DL comes from observational studies. One prospective study at a level 1 trauma center enrolled all adult patients intubated in the ED over an 18-month span (Platts-Mills, 2009). Data collected included intubation indication, device used, and resident post-graduate year. The authors found no statistically significant difference in the primary outcome of first-attempt success, but noted that VL intubation required significantly more time to complete (42 vs 30 s). Another prospective study evaluating all ED intubations over a 2-year period found a statistically significant increase in first-attempt success with VL (78% vs 68%, adjusted OR 2.2), a result that was more pronounced in a patient subgroup with pre-defined difficult airway predictors (OR 3.07) (Mosier, 2011).

Randomized controlled trial data are sparse. In 2013, Yeatts et al published an RCT among trauma patients at a single level 1 trauma center (Yeatts 2013). Patients requiring emergent intubation were randomized to DL or VL performed by an emergency medicine or anesthesia resident with at least one year of intubation experience. The authors found no significant difference in mortality, the primary outcome, but did observe an increased median duration of intubation with VL versus DL (56 s vs 40 s) with an associated increased incidence of hypoxia (50% vs 24%). The study had a number of inherent flaws, including the fact that providers could selectively exclude patients at their discretion. Larger systematic reviews and meta-analyses have been limited by significant heterogeneity and have provided similarly murky results, but suggest that VL may be the superior modality. One meta-analysis including only studies of ICU intubations found that VL reduced the risk of difficult intubation, Cormack-Lehane grade 3 and 4 views, and esophageal intubations, and increased the likelihood of first-attempt success (De Jong, 2014).

Another meta-analysis included 17 trials and 1,998 patients to compare outcomes of VL vs DL (Griesdale, 2012). The authors found no significant difference in successful first-attempt intubation or time to intubation between the GlideScope® and DL. Interestingly, they did note that successful first-attempt intubation and time to intubation were improved with the GlideScope® in two studies specifically examining “non-expert” intubators, suggesting a valuable role for VL in less-experienced hands.

Further examining the potential increased efficacy of VL for less-experienced intubators, a prospective randomized controlled trial examined 40 fresh PGY-1s across varying disciplines (Ambrosio, 2014). None of the soon-to-be residents had yet begun clinical duties, and each had individually performed no more than 5 live intubations in their training. After receiving training in both DL and VL, the participants were divided into groups and observed while intubating a difficult-airway manikin. The group using DL had significantly fewer successful intubations within 2 minutes (47% vs 100%) and an increased overall mean time to intubation (69 vs 23 s).

The skills required to use standard geometry blades with video are close to those of traditional direct laryngoscopy, whereas the more hyperangulated the blade, the easier glottic visualization becomes but the more challenging tube delivery becomes. Using hyperangulated blades is a somewhat different procedure, requiring a different skillset, than direct laryngoscopy or video laryngoscopy with a standard geometry blade. Many other forms of VL exist, and ultimately experience with one device is not guaranteed to translate to another (Sakles & Brown, 2012). For training purposes, a number of experts, including Richard Levitan and Reuben Strayer, support the use of standard geometry blades with video, as they offer the benefits of video laryngoscopy while allowing training in direct laryngoscopy.

Bottom Line: Evidence suggests VL provides superior visualization in comparison to DL but improved outcomes have yet to be shown. The vast majority of airway experts support extensive training with both modalities.

Question #2: Do you use cricoid pressure during induction and paralysis?

Cricoid pressure (CP) refers to the application of firm pressure to the cricoid ring after positioning the patient’s neck in the fully extended position. It is important to note that CP is different from external laryngeal manipulation, which acts to improve the laryngeal view during direct laryngoscopy. The pressure required to occlude the esophageal lumen is 30-44 newtons (Wraight 1993). The goal of CP is to occlude the esophageal lumen in order to prevent regurgitation and gastric insufflation during intubation, and particularly during bag-mask ventilation (BMV). This maneuver is widely embraced in the anesthesiology world as standard care during induction. However, the practice of routine CP has been questioned for over a decade, and its application in the Emergency Department setting is variable.

Although CP may have been used as far back as the 1770s, the first published descriptions are from Sellick in 1961. Sellick applied CP during induction of anesthesia in 26 patients who were considered to be at high risk for aspiration. In 3 of the patients, regurgitation occurred immediately after CP was removed (Sellick 1961). Sellick published a second article recounting a single case of a patient with CP applied whose esophagus was distended with saline solution via an esophageal tube. This patient did not regurgitate after distension (Sellick 1962). This report also contained Sellick’s personal account of 100 high-risk cases without regurgitation when CP was applied, but six patients who regurgitated after CP was removed. These studies are severely flawed: there were no comparison groups, the technique’s proponent (Sellick) was the only physician studied, and it is unclear which patients had BMV prior to induction and intubation. Despite these shortcomings, CP was widely adopted after publication of Sellick’s studies.

Over the intervening decades, a significant amount of literature has emerged challenging the routine use of CP. There are four major issues with CP that should be addressed:

1) CP doesn’t occlude the esophagus as purported.

2) CP reduces airway patency.

3) CP obstructs the view of the airway.

4) CP has never been shown to prevent aspiration.

Let’s tackle each of these issues.

1) CP does not occlude the esophagus. This is the physiologic underpinning for the application of CP, but it was only demonstrated by Sellick in a select few cases. Subsequent literature has called this concept into question. MRI of healthy volunteers was performed with CP applied in order to better visualize the relationship of the cricoid cartilage to the esophagus (Smith 2003, Boet 2012). Both of these studies demonstrated that in many people the esophagus naturally lies lateral to the cricoid cartilage. Additionally, even in those in whom the esophagus is not lateral, CP does not occlude the esophagus but rather displaces it laterally. Rice and colleagues, however, concluded that the location and movement of the esophagus are irrelevant to the efficacy of CP. They argue that the hypopharynx and cricoid move as a unit and that the esophagus becomes compressed against the longus colli muscle. Even if this is true, compression against a muscle is more likely to be overcome by the increased pressure that occurs during vomiting. In their MRI study of 24 healthy volunteers, they state that 35% of patients had obliteration of the esophageal lumen when CP was applied (Rice 2009). However, they show no data to support this claim.

Finally, ultrasound has been used in children to demonstrate that the anatomical effect of CP makes its utility questionable. Ultrasound was applied to 55 pediatric patients with and without application of CP. At baseline, the esophagus was lateral to the airway in 61% of patients, and upon application of CP, all patients had displacement of the esophagus (Tsung 2012).

It is also important to note that the application of CP reduces esophageal sphincter tone allowing for gastric insufflation. This helps to explain why Sellick witnessed regurgitation after removal of CP. Overall, CP does not appear to cause compression of the esophagus but rather lateral displacement.

2) CP reduces airway patency and 3) CP obstructs the view of the airway. Anesthesia studies in the operating room have demonstrated the effect of CP on airway patency. Allman studied 50 patients mechanically ventilated in the OR and measured expired tidal volume and peak inspiratory pressure (PIP) before and after application of CP, finding significant changes in both measures reflecting increased airway obstruction (Allman 1995). Palmer and Ball went a step further: they endoscopically assessed 30 anesthetized patients for airway patency with and without variable forces applied to the cricoid cartilage. They found that as force increased, there was greater cricoid deformation, an increasing likelihood of vocal cord closure, and an increasing likelihood of difficult ventilation (Palmer 2000). At the recommended 44 N of pressure, 86% of men and 100% of women experienced difficulty with ventilation. Additionally, at this force, 26.6% of men and 78.5% of women had 100% cricoid deformation. CP additionally worsens the laryngoscopic view and compromises ideal intubating conditions (Haslam 2005). In a study of 33 OR patients, full vocal cord visualization was reduced from 91% to 67% with application of CP, and CP compressed the vocal cords in 27% of patients and impeded tracheal tube placement in 15% (Smith 2002). Finally, CP has also been shown to result in a worse glottic view during video laryngoscopy (Oh 2013). Overall, CP interferes with “all aspects of airway management” (Priebe 2012).

4) CP has never been shown to prevent aspiration. There are numerous cases reported in the literature of patients with CP in place who have aspirated. Perhaps the best literature on this comes from a retrospective, observational study in 2009 out of Africa, which looked at 5000 patients undergoing C-sections. Of these patients, 61% had CP applied and 24 vomited during induction. Overall, there were 11 deaths attributed to aspiration, with 10 of these coming from the CP group (Fenton 2009).

CP doesn’t do what it’s supposed to. It doesn’t occlude the esophagus to prevent aspiration, but rather simply displaces the esophagus laterally. Its application makes ventilation more difficult because it collapses the airway, and the view of the cords is compromised. Intubating conditions are worsened by CP. Some have suggested applying CP initially and removing it if the laryngoscopic view is poor or BMV is difficult. However, lower esophageal sphincter relaxation and gastric insufflation during CP application increase the risk of regurgitation after removal of CP, as witnessed by Sellick.

Bottom Line: In spite of over 50 years of application, there is minimal evidence for either the pathophysiologic basis or the clinical utility of CP. CP also appears to decrease the likelihood of first-pass success. CP should not be performed routinely. External laryngeal manipulation, either by the operator or an assistant, may improve an otherwise suboptimal laryngeal view.

Question #3: How long do you keep patients NPO prior to procedural sedation?

Procedural sedation (PS) describes the use of a sedative or dissociative anesthetic to elicit a depressed level of consciousness that allows an unpleasant medical procedure to be performed with minimal patient reaction or memory. Unlike general anesthesia, PS agents and doses are chosen to maintain cardiorespiratory function and avoid endotracheal tube placement or other advanced airway adjuncts (Tintinalli, 2011). As the airway is not definitively protected, aspiration, or the inhalation of gastric contents into the respiratory tract, during the procedure is a potential adverse outcome with significant associated morbidity. Guidance on how to reduce aspiration risk has centered on pre-procedural fasting, though the optimal prescribed fasting time differs between sources. Many Emergency Physicians question whether pre-procedural fasting actually provides any increased protection (Strayer, 2014).

Additionally, there are significant harms to delaying a procedure for fasting. Fractures and dislocations place the neurovascular supply at increased risk. Procedures may become more difficult to perform. Finally, prolonged fasting times increase ED length of stay. While fasting’s potential harms have been less studied than its efficacy, they should be kept in mind as the literature is examined (Godwin, 2014).

Much of the historical evidence regarding procedure-related aspiration comes from the Anesthesia and Surgery literature (Green, 2002). One of the earliest reported potential cases of aspiration of gastric contents as a complication of general anesthesia comes from 1848, when a 15-year-old girl died 2 minutes after beginning to inhale chloroform in preparation for the removal of a toenail. The patient was sitting upright in an operating chair and was not observed vomiting, but as the autopsy revealed a food-distended stomach, it was surmised that aspiration was a potential cause of death (Maltby, 1990). Later, animal experiments involving the direct introduction of gastric aspirate into the trachea (Mendelson, 1946) suggested the danger of aspiration, and the concept of pre-procedural fasting gained acceptance.

Recent Anesthesia guidelines for preoperative fasting recommend a minimum fasting period of 2 hours following ingestion of clear liquids, 4 hours following breast milk, and 6 hours following infant formula or a light meal (Apfelbaum, 2011). This recommendation is noted to apply to healthy patients undergoing elective procedures. It is important to note that adhering to the recommended fasting times does not guarantee an empty stomach: underlying comorbid conditions, pain, and a number of other factors are associated with delayed gastric emptying. As procedural sedation has become a common occurrence in the Emergency Department (ED), the question has arisen of how to translate anesthesia guidelines into Emergency Medicine practice.
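
For orientation, the quoted ASA minimums amount to a small lookup table. The sketch below simply encodes the times listed above for healthy patients undergoing elective procedures; it is not an ED decision rule:

```python
# Minimum pre-procedural fasting times (hours) quoted from the ASA guideline above;
# the guideline applies to healthy patients undergoing elective procedures.
ASA_MIN_FASTING_HOURS = {
    "clear liquids": 2,
    "breast milk": 4,
    "infant formula": 6,
    "light meal": 6,
}

def meets_asa_fasting(intake_type, hours_since_intake):
    """True if the elapsed fast meets the ASA minimum for that intake type."""
    return hours_since_intake >= ASA_MIN_FASTING_HOURS[intake_type]

print(meets_asa_fasting("light meal", 4))      # False
print(meets_asa_fasting("clear liquids", 3))   # True
```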

Recent Emergency Medicine recommendations prescribed that maximal sedation depth be based on risk stratification of the type of liquid or food intake, the urgency of the procedure, and the risk of aspiration (Green, 2007). These authors acknowledged that their consensus recommendations stemmed in part from the general anesthesia literature. General anesthesia practice involves scenarios at higher risk for aspiration than ED PS, yet aspiration incidence remains low. Previously, Green et al suggested several reasons why ED PS is potentially safer than general anesthesia, including 1) not routinely placing an endotracheal tube, 2) maintenance of protective airway reflexes, and 3) not using pro-emetic inhalational anesthetics. In their 2007 recommendations they suggest responsible, case-by-case consideration of aspiration risk when deciding on pre-procedural fasting, though they ultimately note a paucity of literature suggesting more than a theoretical aspiration risk in ED PS.

Multiple studies in the Emergency Medicine literature have not supported a relationship between fasting state and procedural sedation-related aspiration. Agrawal et al conducted a prospective case series enrolling all consecutive patients in a children’s hospital ED who underwent PS, recording pre-procedural fasting state and adverse events (Agrawal, 2003). Of the 905 patients with available data, 509 (56%) did not meet established fasting guidelines. Thirty-five (6.9%) of these 509 patients had minor adverse effects, compared with 32 (8.1%) of the 396 patients who did meet fasting guidelines. No significant difference in median fasting duration was found between patients with and without adverse events.

Three trials involving pediatric patients undergoing procedural sedation with varying sedation agents examined fasting time and adverse effects (Roback, 2004; Treston, 2004; Babl, 2005). No statistically significant relationship was found between the incidence of emesis or adverse effects and fasting time (Roback, Treston) or whether fasting guidelines were met (Babl). No episodes of aspiration were reported in any of the three studies.

Bell et al conducted a prospective observational series of 400 adult and pediatric patients undergoing procedural sedation with propofol, measured the percentage of patients who met ASA fasting guidelines, and looked at adverse outcomes (Bell, 2007). They found that 70.5% of those enrolled did not meet ASA fasting guidelines. There was no statistically significant association between fasting status and adverse events (emesis, respiratory interventions). Additionally, there were no aspiration events in either group.

In 2014, an ACEP clinical policy committee reviewed these studies and ultimately questioned the utility of pre-procedural fasting (Godwin, 2014). In a Level B evidence-based recommendation, they advised against delaying procedural sedation in the ED based on fasting time, as “preprocedural fasting for any duration has not demonstrated a reduction in the risk of emesis or aspiration when administering procedural sedation and analgesia.” The clinical policy's conclusions also recognized a dearth of study on the potential harms of delayed procedural sedation, including pediatric hypoglycemia and worsening pathology.

Bottom Line: There is no evidence supporting delay of procedural sedation and analgesia based on fasting state in order to reduce the risk of vomiting and aspiration. The potential risk of aspiration involves multiple patient factors, should be considered on a case-by-case basis, and should be weighed against the harms associated with delaying the sedation and procedure.

Question #4: When using ketamine for procedural sedation do you pretreat with benzodiazepines or anticholinergics?

Ketamine is a dissociative sedative-analgesic commonly used for painful or emotionally stressful procedures. When used at its dissociative dose of 1-2 mg/kg IV (or 3-4 mg/kg IM), it is thought to exert its effects by effectively disconnecting the limbic and thalamocortical systems, leaving patients unaware of and unresponsive to external stimuli. Unlike other procedural sedation medications, respiratory status is maintained, making it a critical medication in the pediatric and adult Emergency Department (Green, 2011).
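
The weight-based arithmetic for the dissociative dose ranges quoted above is trivial, but a sketch makes the IV/IM difference explicit (illustrative only, not dosing guidance):

```python
def ketamine_dissociative_dose_mg(weight_kg, route="IV"):
    """Return the (low, high) dissociative dose range in mg, per the ranges quoted above."""
    ranges_mg_per_kg = {"IV": (1, 2), "IM": (3, 4)}
    low, high = ranges_mg_per_kg[route]
    return (low * weight_kg, high * weight_kg)

# Example: a 20 kg child -> 20-40 mg IV, or 60-80 mg IM.
print(ketamine_dissociative_dose_mg(20, "IV"))   # (20, 40)
print(ketamine_dissociative_dose_mg(20, "IM"))   # (60, 80)
```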

As with any medication, ketamine is not without potential complications. Increased salivation and post-procedure emergence reactions are two concerning potential adverse outcomes, and anticholinergics and benzodiazepines, respectively, have been used as pre-treatment to blunt or prevent these effects (Haas, 1992; Strayer, 2008). Though the pharmacologic reasoning for each medication is sound, and both have been shown to work as treatment once patients become symptomatic, their utility as routine pre-treatment is questionable.

Atropine and glycopyrrolate have commonly been administered to prevent hypersalivation and resulting adverse airway events, though their use by physicians has proven inconsistent. One prospective observational study (Brown, 2008) in a pediatric ER tracked the frequency of atropine pre-treatment and associated hypersalivation in 1,080 ketamine sedations over a 3-year period. Most (87%) of the patients in the study were not pretreated with an anticholinergic. Of the patients who received no pre-treatment, 92% were described as having no excess salivation. The authors concluded that atropine was not routinely required for prophylaxis.

A secondary analysis (Green, 2010) seemed to confirm these findings. Examining 8,282 ED ketamine sedations in pediatric patients from 32 previous series, this study found no statistically significant reduction in the number of adverse respiratory or airway events based on whether patients received atropine versus no anticholinergic drug. Interestingly, patients who received glycopyrrolate were actually found to have a significantly increased number of airway and respiratory events as defined by authors of the original studies. Taking these and other studies into account, a recent ACEP Clinical Policy on ketamine did not recommend the routine use of anticholinergics as pretreatment in adults or children. (Green, 2011)

Benzodiazepine pretreatment for the prevention of emergence reactions has been commonly recommended but erratically applied. A meta-analysis of 32 ED studies involving ketamine in pediatric patients (Green, 2009) was conducted to determine which clinical variables predict recovery agitation. The authors found that 7.6% of patients experienced an emergence reaction, though only 1.4% were judged to have “clinically significant” agitation. No apparent benefit or harm from pre-administered benzodiazepines was found.

It has been suggested that emergence reactions are more frequent in adults than in children, and thus pre-treatment with benzodiazepines would prove more useful in this population. A double-blind randomized controlled trial pretreated 182 adult subjects receiving varying doses of ketamine with 0.03 mg/kg IV midazolam vs placebo (Sener, 2011). Though the authors did not specify the intensity of the reactions that were experienced, they did find a significant decrease in recovery agitation with midazolam. An alternative to benzodiazepine prophylaxis is either pre-emergence or PRN benzodiazepine use (Strayer 2008). The current ACEP clinical policy recommends against the routine use of benzodiazepines in children but leaves the recommendation ambiguous for adults.

Bottom Line: Anticholinergics are not routinely needed for premedication in ketamine sedations. Benzodiazepines can be administered to adults but are not recommended routinely for children. Both medications should be available to use as PRN treatment.

References

Sellick BA. Cricoid pressure to control regurgitation of stomach contents during induction of anaesthesia. Lancet 1961; 404-6.

Sellick BA. The prevention of regurgitation during induction of anaesthesia. First Eur Congress Anaesthesiol. 1962;89:1-4.

Smith et al. Cricoid pressure displaces the esophagus: an observational study using magnetic resonance imaging. Anes 2003; 99(1): 60-4.

Boet S et al. Cricoid pressure provides incomplete esophageal occlusion associated with lateral deviation: an MRI study. JEM 2012; 42(5): 606-11.

Rice et al. Cricoid pressure results in compression of the postcricoid hypopharynx: the esophageal position is irrelevant. Anesth Analg 2009; 109(5): 1546-52.

Tsung WJ et al. Dynamic anatomic relationship of the esophagus and trachea on sonography: Implications for endotracheal tube confirmation in children. J Ultrasound Med 2012; 31: 1365-70.

Allman KG. The effect of cricoid pressure application on airway patency. J Clin Anes 1995; 7: 197-9.

Palmer JH, Ball DR. The effect of cricoid pressure on the cricoid cartilage and vocal cords: an endoscopic study in anaesthetized patients. Anaesthesia 2000;55:253–8

Haslam N, Parker L, Duggan JE. Effect of cricoid pressure on the view at laryngoscopy. Anaesthesia 2005;60:41-7.

Smith CE, Boyer D. Cricoid pressure decreases ease of tracheal intubation using fiberoptic laryngoscopy. Can J Anesth 2002; 49(6): 614-9.

Oh J et al. Videographic analysis of glottic view with increasing cricoid pressure. Ann of EM 2013; 61: 407-13.

Priebe HJ. Use of cricoid pressure during rapid sequence induction: Facts and fiction. Trends in Anaesthesia and Critical Care 2012: 123-7.

Fenton PM, Reynolds F. Life-saving or ineffective? An observational study of the use of cricoid pressure and maternal outcome in an African setting. Int J Obstet Anesth 2009; 18: 106-110.

Agrawal D, Manzi SF, Gupta R, Krauss B. Preprocedural Fasting State and Adverse Events in Children Undergoing Procedural Sedation and Analgesia in a Pediatric Emergency Department. Annals of Emergency Medicine. 2003; 42(5): 636-646.

Apfelbaum, J.I., Caplan, R.A., Connis, R.T. et al. Practice guidelines for preoperative fasting and the use of pharmacologic agents to reduce the risk of pulmonary aspiration: application to healthy patients undergoing elective procedures. An updated report by the American Society of Anesthesiologists Committee on Standards and Practice Parameters. Anesthesiology. 2011; 114: 495–511

Babl FE, Puspitadewi A, Barnett P, et al. Preprocedural fasting state and adverse events in children receiving nitrous oxide for procedural sedation and analgesia. Pediatr Emerg Care. 2005;21:736-743.

Bell A, Treston G, McNabb C, et al. Profiling adverse respiratory events and vomiting when using propofol for emergency department procedural sedation. Emerg Med Australas. 2007;19:405-410.

Godwin SA, Burton JH, Gerardo CJ, et al: Clinical policy: procedural sedation and analgesia in the emergency department. Ann Emerg Med 2014 Feb; 63(2): 247-58

Green SM, Krauss Baruch. Pulmonary aspiration risk during emergency department procedural sedation–an examination of the role of fasting and sedation depth. Acad Emerg Med. 2002 Jan;9(1):35–42

Green SM, Roback MG, Miner JR, et al: Fasting and emergency department procedural sedation and analgesia: A consensus-based clinical practice advisory. Ann Emerg Med 49: 454, 2007

Maltby JR. Early reports of pulmonary aspiration during general anesthesia [letter]. Anesthesiology. 1990; 73:792–3.

Mendelson CL. The aspiration of stomach contents into the lungs during obstetric anesthesia. Am J Obstet Gynecol. 1946; 52:191 – 204.

Miner JR. Chapter 41. Procedural Sedation and Analgesia. In: Tintinalli JE, Stapczynski J, Ma O, Cline DM, Cydulka RK, Meckler GD, T. eds. Tintinalli’s Emergency Medicine: A Comprehensive Study Guide, 7e. New York, NY: McGraw-Hill; 2011

Roback MG, Bajaj L, Wathen JE, et al. Preprocedural fasting and adverse events in procedural sedation and analgesia in a pediatric emergency department: are they related? Ann Emerg Med. 2004;44:454-459.

Strayer, Reuben. “The Harms of Fasting.” EM Updates. 16 May 2014. <http://emupdates.com/2014/05/16/the-harms-of-fasting/>

Treston G. Prolonged pre-procedure fasting time is unnecessary when using titrated intravenous ketamine for paediatric procedural sedation. Emerg Med Australas. 2004;16:145-150.

Brown L, Christian-Kopp S, Sherwin TS, et al. Adjunctive atropine is unnecessary during ketamine sedation in children. Acad Emerg Med. 2008;15:314-318.

Green SM, Roback MG, Krauss B, et al. Predictors of emesis and recovery agitation with emergency department ketamine sedation: an individual-patient data meta-analysis of 8,282 children. Ann Emerg Med. 2009;54:171-180

Green SM, Roback MG, Krauss B. Anticholinergics and ketamine sedation in children: a secondary analysis of atropine versus glycopyrrolate. Acad Emerg Med. 2010;17:157-162.

Green SM, Roback MG, Kennedy RM, Krauss B (2011) Clinical practice guideline for emergency department ketamine dissociative sedation: 2011 update. Ann Emerg Med 57: 449–461

Haas DA, Harper DG. Ketamine: a review of its pharmacologic properties and use in ambulatory anesthesia. Anesth Prog 1992;39:61-8.

Sener S, Eken C, Schultz CH, et al. Ketamine with and without midazolam for emergency department sedation in adults: a randomized controlled trial. Ann Emerg Med. 2011;57:109-114.

Strayer RJ, Nelson LS. Adverse events associated with ketamine for procedural sedation in adults. AM J Emerg Med. 2008;26(9):985–1028


Questions, Airway and Sedation 2014

1. Do you reach for video laryngoscopy or direct laryngoscopy first for intubations?

2. Do you use cricoid pressure during induction and paralysis?

3. How long do you keep patients NPO prior to procedural sedation?

4. When using ketamine for procedural sedation do you pretreat with benzodiazepines or anticholinergics?


Seizure, “Answers”

1. Which benzodiazepine do you prefer for the treatment of status epilepticus (SE)? Which do you prefer for pediatric patients?

An epileptic seizure (ES) is defined as an abrupt disruption in brain function secondary to abnormal neuronal firing, characterized by changes in sensory perception and/or motor activity. The clinical manifestations of seizures are vast, encompassing focal or generalized motor activity, sensory or autonomic dysfunction, and mental status changes. Numerous types of seizures exist, broadly classified as simple versus complex, partial versus generalized, and convulsive versus non-convulsive. All can progress to status epilepticus (SE); this discussion pertains to convulsive SE.

SE was historically defined as any seizure activity lasting longer than thirty minutes, but is now more conservatively defined as a seizure lasting longer than five minutes, or consecutive seizures without a return to baseline in between seizures. It is important for emergency physicians to rapidly recognize and treat SE as studies estimate an associated mortality of 10-40%, depending on the etiology. (Shearer 2006) Initial interventions include evaluation of the airway, IV access, cardiac monitoring, and the administration of supplemental oxygen and antiepileptic agents. The goal is to terminate all seizure activity within sixty seconds.

Benzodiazepines potentiate GABA activity, thus decreasing neuronal firing, and are widely accepted as the preferred first-line treatment for SE. (Alldredge 2001, Leppik 1998, Treiman 1998, Brophy 2012, Shearer 2006) In addition to a favorable safety profile, benzodiazepines have the advantage of multiple routes of administration, including intravenous (IV), intramuscular (IM), and various mucosal routes. The IV route is generally preferred for speed of onset of action, however environmental circumstances and patient variables can complicate IV access, particularly in children. (Shearer 2006, Berg 2009)

In children (neonatal seizures are not discussed here), seizures are commonly treated via mucosal administration of benzodiazepines, particularly by parents and EMS in the pre-hospital setting. Mucosal routes include rectal diazepam and buccal or intranasal midazolam. Rectal diazepam has long been the favored drug in this setting and is FDA approved for such use. (Berg 2009) Rectal diazepam, however, has several limitations, including social stigma, short duration of action, and risk of expulsion due to seizure-induced fecal incontinence. As a consequence, administration of a second agent or repetitive diazepam dosing is often required, leading to increased risk of side effects and potential harm. Rectal diazepam has been proven to be more efficacious than placebo, but has only recently been compared to other mucosal routes of administration. (Dreifuss 1998) In 2005, a randomized controlled trial (RCT) demonstrated the superiority of buccal midazolam over rectal diazepam for seizure termination without increasing the risk of respiratory depression. (McIntyre 2005) Several RCTs comparing intranasal midazolam to rectal diazepam show superiority for intranasal midazolam in time to seizure cessation. (Fisgin 2002, Bhattacharyya 2006, Holsti 2007, Holsti 2010) Given the disadvantages of rectal diazepam combined with the above evidence, buccal and intranasal midazolam should be considered viable alternatives for treatment of pediatric seizures. Regarding administration of IV benzodiazepines to children, IV lorazepam appears to be as effective as, and safer than, IV diazepam. (Appleton 1995, Appleton 2008) Further studies are needed to compare non-IV and IV routes of administration in the pediatric population, particularly with comparisons to IV lorazepam. If difficult IV access is anticipated, buccal and intranasal routes should be considered. (Ulgey 2012)

Similar to the pediatric population, in adults diazepam was historically the benzodiazepine of choice for the treatment of SE. After years of research, however, lorazepam has emerged as the preferred agent due to its extended duration of anticonvulsant activity and its ability to be administered via the IM route. (Treiman 1998, Leppik 1998, Walker 1979) Large RCTs comparing benzodiazepines head-to-head, however, are limited. A 2005 Cochrane review of RCTs established IV lorazepam as superior to IV diazepam for cessation of SE, evaluating three studies including 289 patients. The relative risk (RR) of non-cessation of seizures for lorazepam compared to diazepam was 0.64. The comparison to midazolam, however, was less clear. A single study found IV midazolam, when compared to IV lorazepam, to have an RR of 0.20 for non-cessation of seizures. The authors concluded there was a non-significant trend favoring IV midazolam over IV lorazepam. Unfortunately, much of the pediatric data is based on single studies and is not conclusive. (Prasad 2005)

In addition to a trend towards improved efficacy in SE, midazolam does not require refrigeration, lending it another advantage over lorazepam in the pre-hospital setting. To further compare these two agents in the pre-hospital environment, the RAMPART (Rapid Anticonvulsant Medication Prior to Arrival) study was completed in 2012. It compared 10 mg IM midazolam to 4 mg IV lorazepam in a double-blinded RCT. Children over 13 kg were included in this analysis. Upon arrival to the ED, seizures were absent in 73.4% of patients in the midazolam treatment group and in 63.4% of patients in the lorazepam group (p<0.001, primary outcome). Admission rates were also significantly lower in the midazolam treatment group (p<0.001). Although the time to administration of the drug was shorter in the midazolam group, the onset of action was shorter in the lorazepam group. This study showed IM midazolam to be non-inferior to IV lorazepam when given by EMS providers prior to ED arrival. (Silbergleit 2012) Important limitations of this study include the use of an autoinjector for midazolam as opposed to standard IM injection, and the study's occurrence in the pre-hospital environment, where IV access is often more difficult to obtain. While extrapolation of this study to ED patients should be limited, IM midazolam for SE appears to be a viable option.

What do the experts say? In 2012, the Neurocritical Care Society published guidelines for the treatment of SE based on limited available evidence and consensus opinion. They recommend lorazepam as the preferred agent for IV administration, midazolam for the IM route and diazepam for the rectal route. Lorazepam, midazolam and diazepam all carry Level A recommendations for emergent treatment of SE. (Brophy 2012)

2. Which second-line agents do you use for treatment of SE?

Unless the underlying cause of SE is known and reversible by another means (e.g., a metabolic derangement or toxic ingestion), the initial benzodiazepine is immediately followed by a second antiepileptic agent. If the seizure has already been successfully terminated, the goal of this second agent is to prevent recurrence through rapid achievement of therapeutic levels of an antiepileptic drug (AED). However, if the benzodiazepine has failed, the goal is to rapidly stop all seizure activity. While the use of benzodiazepines as the first-line treatment for SE is widely accepted, there remains significant debate over what this second-line agent should be.

Phenobarbital, a long-acting barbiturate that potentiates GABA activity, is the oldest AED still in use today. Historically a first-line agent, it has fallen out of favor due to its significant adverse event profile, namely hypotension and respiratory depression. Currently, it is typically reserved for refractory SE. (Shearer 2006)

Phenytoin has emerged as the preferred second-line agent after benzodiazepines for the treatment of SE. Phenytoin prolongs inactivation of voltage-activated sodium channels, thus inhibiting repetitive neuronal firing. Although it is possible to rapidly achieve therapeutic levels of phenytoin, the drug is limited by side effects including ataxia, hypotension, cardiac dysrhythmias and tissue necrosis secondary to extravasation. (Shearer 2006) Fosphenytoin, a precursor of phenytoin, allows for IM administration with preserved bioavailability, but can have similar hemodynamic side effects. The combination of benzodiazepines and phenytoin is only effective in approximately 60% of patients, leaving a substantial group in SE. (Treiman 1998, Knake 2009) This, combined with its side effect profile, has led to a search for alternative second agents for the treatment of SE.

Valproic acid is an established AED used to treat many forms of seizures and has been available for IV administration since 1996. Like phenytoin, it acts by prolonging the recovery of voltage-activated sodium channels. The efficacy of valproic acid in treating SE has been quoted as ranging from 40-80%. Its primary side effect is hepatotoxicity, either from chronic use over the first six months or as an idiosyncratic reaction. Compared to phenytoin's risks of extravasation injury and significant hypotension, valproic acid is a potentially safer option for some patients. (Shearer 2006) A 2012 meta-analysis sought to compare valproic acid to other available AEDs for SE. Unfortunately, heterogeneity in defining SE and variability within the data limited its conclusions. Despite this, the authors deemed valproic acid to be as effective as phenytoin in treating SE based on three randomized studies including 256 patients. (Misra 2006, Agarwal 2007, Gilad 2008, Liu 2012) An Italian meta-analysis that same year found no difference in time to seizure cessation between valproic acid and phenytoin, with a trend towards fewer side effects with valproic acid. The authors warn, however, against over-interpretation of these data given their inherent limitations and suggest waiting for larger RCTs before changing one's clinical practice. (Brigo 2012)

Levetiracetam is a comparatively newer medication, with an IV formulation available only since 2006. Its exact mechanism of action is unknown, but it has fewer side effects and limited drug-drug interactions compared to the older AEDs. (Shearer 2006) Given this favorable safety profile, many have heralded levetiracetam as an ideal second-line agent for SE. In 2012, a review by Zelano et al. compared ten studies of levetiracetam for the treatment of SE, including one prospective randomized study and a total of 334 patients. The authors found that levetiracetam had an efficacy ranging from 44% to 94% and was not associated with any significant adverse events. Overall, however, efficacy was significantly higher in the retrospective studies, raising concern that bias influenced the positive results. The single randomized study reported an efficacy of 76%, but this group received levetiracetam as primary therapy, and it is unclear whether these patients were "less sick" and would have responded to initial benzodiazepines. (Misra 2012) Furthermore, in many of the studies, levetiracetam was used because phenytoin was contraindicated, creating another source of bias. Zelano's review concluded that, despite its favorable safety profile, there is scarce evidence to support levetiracetam as a second-line agent in the treatment of SE. (Zelano 2012) More studies are needed.

Currently, using the limited data available, the Neurocritical Care Society recommends fosphenytoin as the preferred second-line agent for the treatment of SE. They do allow for consideration of other agents on a case-by-case basis; for example, in patients with known epilepsy, valproic acid may be preferred, and an IV bolus dose of the patient's maintenance AED is also recommended in such cases. (Brophy 2012)

If SE has not resolved after administration of the second agent, the patient is considered to have refractory SE (RSE), and should receive additional treatment immediately. Continuous infusion of an AED, typically propofol, midazolam, phenobarbital, valproic acid or high dose phenytoin, is recommended. (ACEP 2014) Bolus doses of the infusion AED can also be given for breakthrough seizures. Available data do not support the use of one agent over another. (Brophy 2012)

Other agents may soon be available for the treatment of SE and RSE. Animal data showing decreased GABA receptor density in the setting of SE have sparked interest in targeting the NMDA receptor: if the inhibitory GABA system cannot be potentiated, perhaps antagonizing the excitatory NMDA system could terminate seizure activity. Ketamine, an NMDA antagonist, has been discussed as a potential future direction in the treatment of RSE. (Kramer 2012)

3. In which adult patients with first-time seizure do you obtain emergent imaging?

Seizure is a common presentation, accounting for 1-2% of ED visits. Although the presentation is often similar, the etiology of seizure is incredibly broad, including trauma, hemorrhage, metabolic derangements, toxic exposures, infection, and congenital abnormalities. For adult patients with new-onset seizures, the evaluation can be tailored to the history provided by the patient. Laboratory investigation in particular should be tailored to the specific patient, as multiple studies have shown that the history and physical exam predict laboratory abnormalities (Shearer 2006). Serum glucose and sodium tests, however, are recommended (Level B) in all patients who have returned to baseline, and a pregnancy test is recommended in all women of childbearing age. (ACEP 2014)

When it comes to neuroimaging for first-time seizure, the best course of action is less clear. Although it is established that all patients presenting with first-time seizure should receive neuroimaging, the timing and modality of that imaging are controversial. Neurologists prefer brain magnetic resonance imaging (MRI) for the seizure work up, but it is rarely available in the ED setting. Computed tomography (CT) is the predominant test available to ED providers; however, it is inferior to MRI for the evaluation of seizure, with the exception of detecting acute hemorrhage. (Jagoda 2011)

Who, then, needs a screening CT in the ED prior to discharge, and who can wait for the definitive MRI? Experts suggest dividing patients into two groups: those with persistent neurologic deficits, an abnormal mental status, or evidence of medical illness, and those who have returned to baseline with a non-focal exam. The first group is clearly high risk and warrants an extensive work up including an emergent head CT; abnormal head CTs have been documented in 81% of patients with neurologic deficits on exam. (Tardy 1995) The second group is more nuanced, and the utility of emergent head CT is less defined. Even in patients with non-focal neurologic exams, the rate of CT abnormalities ranges from 17-22%. (Tardy 1995, Sempere 1992, Jagoda 2011) The clinical significance of a nonspecific abnormal head CT, the definition of which often includes simple atrophy, is uncertain in a neurologically intact patient. Furthermore, elements of the presentation or history may place the patient at higher risk. Several studies have noted advanced age (Tardy 1995), HIV (Jagoda 2011, Harden 2007), and chronic alcohol abuse to be associated with an increased risk of abnormal head CT in the setting of seizure, despite a normal exam. (Tardy 1995, Harden 2007, Jagoda 2011, Earnest 1988)

In 2007, a multidisciplinary committee including emergency physicians, in association with the American Academy of Neurology (AAN), updated guidelines on neuroimaging for the emergency patient with seizure. The authors specifically sought evidence for emergent neuroimaging that would change ED management in order to offer a clinically relevant guideline. Based on a nearly forty-year literature review, they offer a weak recommendation (Level C) for emergent CT in adults with first-time seizure, noting that CT changed acute management in 9-17% of cases. They offer a stronger recommendation (Level B) for a subset of patients more likely to have significant findings on CT. In addition to patients with an abnormal neurologic exam, this subset includes those with focal seizures and those with a predisposing history such as trauma, neurocutaneous disorders, malignancy, or a shunt. (Harden 2007)

Due to limited data, the above recommendations and summary of evidence ultimately fail to provide a clear, universal algorithm for all cases. An abnormal mental status, focal neurologic exam, predisposing history, trauma, immunocompromised state, or focal seizure should prompt emergent imaging in the ED. Increased age and inability to obtain reliable follow up should also tip the scales in favor of obtaining a CT prior to discharge. A patient without a concerning history, at his/her baseline, with a normal neurologic exam will need an outpatient MRI and EEG for definitive diagnosis; whether that work up includes a CT in the ED is at the discretion of the provider. The ACEP clinical policy guideline, last updated in 2004, offers the following Level B recommendations: 1. When feasible, perform neuroimaging of the brain in the ED on patients with a first-time seizure. 2. Deferred outpatient neuroimaging may be used when reliable follow up is available. (ACEP 2014) Neither ACEP nor the AAN is able to comment on the use of MRI in the ED, based on insufficient evidence.
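To make the discussion above concrete, here is a minimal, purely illustrative Python sketch that gathers the features this summary suggests should push the clinician toward an emergent head CT. The feature names and the "any feature present" logic are assumptions made for the example; this is not a validated decision rule and does not replace clinical judgment.

# Illustrative sketch of the features discussed above that favor emergent
# head CT after a first-time seizure; field names and the any-feature
# trigger are assumptions, not a validated rule.
HIGH_RISK_FEATURES = [
    "abnormal_mental_status",
    "focal_neurologic_deficit",
    "focal_seizure",
    "head_trauma",
    "immunocompromised",
    "malignancy_or_shunt",
    "advanced_age",
    "unreliable_follow_up",
]

def favors_emergent_ct(findings: set) -> bool:
    """Return True if any of the discussed high-risk features is present."""
    return any(feature in findings for feature in HIGH_RISK_FEATURES)

print(favors_emergent_ct({"focal_seizure"}))         # True
print(favors_emergent_ct({"returned_to_baseline"}))  # False -> consider outpatient MRI/EEG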

4. How do you diagnose pseudoseizure?

Pseudoseizures, formally known as psychogenic nonepileptic seizures (PNES), are characterized by motor, sensory, autonomic, or cognitive behavior similar to epileptic seizures (ES) but without abnormal neuronal firing. PNES is often misunderstood, and patients are perceived as malingering or "faking it". PNES, however, is a defined psychoneurologic condition falling under the same umbrella as conversion and somatoform disorders. Interestingly, epilepsy and PNES frequently coexist in the same patient: it has been estimated that up to 60% of patients with PNES have another seizure disorder, though more conservative studies place the estimate closer to 10%. (Benbadis 2000, Benbadis 2001, Shearer 2006) PNES is found across cultures and occurs more frequently in women in the third and fourth decades of life. (Reuber 2003, Lesser 1996)

It can be extremely difficult to distinguish PNES from ES in the ED. Video EEG is the gold standard for the diagnosis of ES, but it is not typically available in the ED setting. There is utility, however, in differentiating PNES from ES, as antiepileptic treatment is not benign and creates potential for iatrogenic harm. (Reuber 2003) Many have attempted to clarify PNES semiology in studies of variable quality, including many case reports and uncontrolled studies. In 2010, Avbersek reviewed rigorous studies that included EEG in order to establish clinical signs distinguishing PNES from ES. A sign was considered well supported for PNES if it had positive findings in two controlled studies and the remaining studies were also supportive. Based on these findings, clinical signs suggestive of PNES that are applicable to the ED setting include:

  1. Duration of event >2 minutes
  2. Fluctuating course
  3. Asynchronous movement of limbs
  4. Pelvic thrusting
  5. Side to side head or body movement
  6. Closed eyes
  7. Ictal crying
  8. Recall of event
  9. Absence of postictal confusion
  10. Absence of postictal stertorous breathing

Flailing or thrashing movements and the absence of tongue biting or urinary incontinence are frequently cited as suggestive of PNES; however, this review did not find sufficient evidence to support that distinction. (Avbersek 2010) It is important to remember that many of these findings apply to generalized seizures only and cannot be used to separate PNES from partial seizures. Frontal lobe seizures, for example, often demonstrate bizarre movements and emotional displays easily mistaken for PNES. (Reuber 2003) When applying this information to ED patients, one must take the entire history and exam into account, never relying on a single sign to rule out ES. Ultimately, PNES is not a diagnosis to make in the ED, as it requires video EEG monitoring along with the assessment of experienced epileptologists.

In addition to clinical signs, physiologic parameters including cortisol, prolactin, white blood cell count, creatine kinase, and neuron-specific enolase have been investigated in PNES. Though most have met significant limitations, prolactin, a hormone secreted by the anterior pituitary, has emerged as the most promising serum marker. (Willert 2004, LaFrance 2010) In 1978, Trimble first demonstrated prolactin elevation in ES, and many subsequent studies have replicated this finding while showing no prolactin elevation in PNES. (Trimble 1978, Mehta 1994, Fisher 1991, Mishra 1990)

Serum prolactin is known to peak fifteen to twenty minutes after a seizure, returning to baseline at one hour. (Trimble 1978) Interestingly, however, prolactin levels do not rise consistently in all types of seizures. On average, prolactin is elevated in 88% of generalized tonic-clonic (GTC) seizures, 64% of complex partial seizures (CPS), and 12% of simple partial seizures. (LaFrance 2013) In one study, patients with PNES also demonstrated a statistically significant increase in prolactin from baseline; notably, the elevation in PNES was much smaller than that in ES. Nevertheless, this study raises questions about the specificity of prolactin elevation for the diagnosis of ES. (Alving 1998) To further complicate interpretation, prolactin levels are subject to significant variation: fluctuations of up to 100% are seen prior to awakening from sleep, levels differ between women and men, and baseline levels are elevated in those with epilepsy. (Chen 2005) These factors, combined with variability in seizure classification and in the definition of prolactin elevation, have made interpretation of the limited data difficult. Despite these limitations, the American Academy of Neurology Therapeutics and Technology Assessment Subcommittee reviewed the available high-quality data and determined elevated prolactin to have a specificity of 96% for the detection of ES. They conclude that a twice-normal rise in serum prolactin, drawn ten to twenty minutes after an ictal event and compared to a baseline prolactin, is useful in differentiating GTC seizures and CPS from PNES. The pooled sensitivity in these data was poor, however, averaging 53% for all types of ES. (Chen 2005) Another review, including less rigorous data, reported an average sensitivity of 89%. (Cragar 2002) Both agree that the absence of an elevated prolactin level should not be used to rule out ES. Additionally, baseline prolactin levels are often not available, further limiting the utility of this test.
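The AAN criterion described above is essentially a timed ratio, which the following minimal Python sketch illustrates: a level drawn ten to twenty minutes after the event that is at least twice the patient's baseline supports ES (GTC or CPS) over PNES, while a normal level rules nothing out. The function name and units are assumptions for the example only.

# Minimal sketch of the prolactin criterion discussed above (Chen 2005).
# A negative result does NOT rule out ES (pooled sensitivity ~53%).
def prolactin_supports_epileptic_seizure(postictal_ng_ml: float,
                                         baseline_ng_ml: float,
                                         minutes_after_event: float) -> bool:
    """True if the post-ictal level, drawn 10-20 min after the event,
    is at least twice the patient's baseline."""
    if not (10 <= minutes_after_event <= 20):
        raise ValueError("Level should be drawn 10-20 minutes after the event")
    return postictal_ng_ml >= 2 * baseline_ng_ml

print(prolactin_supports_epileptic_seizure(48.0, 12.0, 15))  # True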


Seizure, Questions

1.  Which benzodiazepine do you prefer for the treatment of status epilepticus (SE)? Which do you prefer for pediatric patients?

2. Which second-line agents do you use for treatment of SE?

3. In which adult patients with first-time seizure do you obtain emergent imaging?

4. How do you diagnose pseudoseizure?


Trauma, “Answers”

1. When do you use tranexamic acid in trauma?

Tranexamic acid (TXA) is a synthetic derivative of the amino acid lysine. It was discovered in the 1950s and has traditionally been employed in surgery to minimize blood loss. TXA works by inhibiting lysine binding sites on plasminogen, thereby preventing its conversion to plasmin and reducing fibrinolysis and clot breakdown.

Trauma is consistently among the top ten leading causes of death worldwide (WHO, 2013), and TXA has been studied to see whether it can improve morbidity and mortality. The major randomized controlled trial (RCT) is the CRASH-2 trial, which randomly assigned over 20,000 adult trauma patients in 40 countries with, or at risk of, significant bleeding to either TXA or placebo (Shakur, 2010). The TXA protocol entailed giving a loading dose of 1 g over 10 minutes followed by an infusion of 1 g over eight hours. The primary outcome was death in hospital within four weeks of injury, and the results favored TXA. All-cause mortality was significantly reduced by 1.5% (14.5% TXA vs. 16.0% placebo; RR 0.91, 95% CI 0.85-0.97; p=0.0035). Risk of death due to bleeding, a secondary outcome, was also significantly reduced, by 0.8%, with TXA (RR 0.85, p=0.0077). The trial was large enough for subgroup analyses, which found that the group benefiting most from TXA received it less than three hours from injury (RR 0.87, 99% CI 0.75-1.00). The study also showed no significant difference in deaths from vascular occlusion (MI, CVA, PE), multiorgan failure, or head injury between TXA and placebo. The strength of this trial lies in the large sample from multiple settings and countries, the double-blinded randomization, similar baseline characteristics in both groups, and minimal loss to follow up. One weakness mentioned by the authors is that the diagnosis of traumatic hemorrhage can be difficult, and some included patients might not have been bleeding at the time of randomization, which could reduce the power of the trial. However, using broad clinical inclusion criteria (hypotension, tachycardia, physician judgment) rather than depending on lab results or imaging also makes this study more applicable and generalizable. In addition, the study found no difference in RBC transfusion between the groups. This lack of difference may be secondary to transfusion decisions made prior to completion of TXA administration; since there were more survivors in the TXA group, they also had greater opportunity to receive RBCs.
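For illustration only, the following minimal Python sketch encodes the CRASH-2 regimen described above (1 g loading dose over 10 minutes, then 1 g over eight hours) and treats the three-hour window as a hard cutoff for the example; the function name and return structure are assumptions, and this is not a dosing reference.

# Illustrative sketch of the CRASH-2 TXA regimen described above.
def txa_plan(hours_since_injury: float) -> dict:
    """Return an example TXA plan based on time from injury."""
    if hours_since_injury > 3:
        # Administration beyond three hours was associated with increased
        # deaths from bleeding in the re-analyses discussed below.
        return {"give_txa": False, "reason": ">3 hours from injury"}
    return {
        "give_txa": True,
        "loading_dose": "1 g IV over 10 minutes",
        "infusion": "1 g IV over 8 hours",
    }

print(txa_plan(1.5))  # within the window: loading dose plus 8-hour infusion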

The CRASH-2 data were subsequently reanalyzed in other studies. One such study looked at four predefined risk-of-mortality groups (<6%, 6-20%, 21-50%, >50%) and showed that TXA was beneficial in terms of all-cause mortality and deaths from bleeding regardless of baseline risk of death. The implication is that TXA should be considered in all comers with traumatic hemorrhage within three hours of injury (Roberts, 2012). Subsequent analyses found the greatest benefit of TXA when given in the first hour, whereas administration more than three hours after injury was associated with an increase in deaths from bleeding (Roberts, 2011).

In 2012 the MATTERs study, a retrospective, observational study of combat injuries in Afghanistan, was published (Morrison, 2012). The study looked at the non-randomized use of TXA in hemorrhagic trauma patients that received at least one unit of RBCs. They found decreased mortality in the patients who were given TXA (17.4% vs. 23.9%) and a more marked mortality reduction in the group receiving massive transfusion (14.4% vs. 28.1%). This study has a number of limitations including external validity (most of us aren’t treating high velocity rifle injuries from combat) and the lack of randomization and blinding.

Although TXA is not yet standard of care in traumatic hemorrhage, it appears to be safe in terms of thrombotic complications, and if given within three hours of injury, also beneficial in decreasing bleeding and mortality. The use of TXA in traumatic hemorrhage should be considered in future pre-hospital and ED trauma resuscitation protocols.

2. When you can’t get peripheral access in a trauma patient, do you prefer a subclavian, femoral or intraosseous (IO)?

Establishing IV access is a vital early step in the ATLS algorithm. The sickest patients need access the fastest, yet they are often the most difficult to access, whether because intravascular depletion causes venous constriction or because severe trauma limits access to the extremities, so emergency physicians should always be ready to obtain central venous access. Most trauma patients arrive via EMS in a c-collar, making internal jugular access impractical and unsafe. The remaining options are subclavian, femoral, or IO.

A recent prospective, observational study investigated first attempt success rates and procedure times of IO access vs. central venous catheterization (CVC) in adult resuscitation patients with inaccessible peripheral veins (Leidel, 2012). In a fairly small sample of 40 consecutive patients (73% trauma), each received IO access (55% humeral site) and a CVC (83% subclavian) simultaneously. There was a significantly higher first attempt success rate for IO [85% vs. 60% for landmark-based CVC (p=0.024)], and faster median procedure time [IO 2.0 min vs. CVC 8.0 min (p<0.001)]. The authors stated that relevant complications (infection, extravasation, compartment syndrome, cannula dislodgement, bleeding, arterial puncture, hemo/pneumothorax, venous thrombosis or vascular access related infection) were not observed. Although there are no RCTs comparing in-hospital IO vs. CVC, there are several case series and observational studies supporting higher first attempt success rates and faster access times for IOs (Valdes, 1977; Iserson, 1989; Iwama, 1996; Cooper, 2007; Ngo, 2009; Paxton, 2009; Ong, 2009). A 1996 study (Iwama, 1996) also showed similar IO (clavicular) flow rates compared to CVC (subclavian). There is also an RCT studying out-of-hospital cardiac arrests, which found tibial IO access to have the highest first-attempt success rate and the fastest time to vascular access compared to peripheral IV and humeral IO access (Reades, 2011).

When it comes to central venous access, the complication that is studied most often is catheter-related bloodstream infections (CRBI). In 2011, the CDC released a class 1A recommendation to avoid using the femoral vein for central access in adult patients (O’Grady, 2011), a view also shared by the Infectious Diseases Society of America (Marschall, 2008). Typically, a class 1A recommendation is based on multiple high quality studies. This recommendation, however, was based on a single study in Critical Care Medicine (Lorente, 2005). This was a prospective, observational study that found significant differences in CRBI between femoral (8.34%), IJ (2.99%) and subclavian (0.97%) lines. In a 2012 meta-analysis encompassing two RCTs and eight cohort studies, including over 3000 subclavian lines, 10,000 IJ lines, and 3100 femoral lines, the data against femoral access became less clear (Marik, 2012). After the authors excluded two studies that were statistical outliers (Lorente, 2005; Nagashima, 2006), they found no significant difference in the risk of CRBI between femoral and IJ routes (RR 1.35; 95% CI 0.84-2.19, p=0.2, I2=0%) or femoral and subclavian routes (RR 1.02; 95% CI 0.64-1.65, p=0.92, I2=0%). The meta-analysis also found no statistical difference in DVT complications between femoral access and the other routes combined (Marik, 2012), although a previous RCT showed an increased DVT rate in the femoral site compared to subclavian alone (Merrer, 2001). The authors comment that infection rates have decreased across the board over the last 10-15 years, likely due to the increased focus on sterile placement of lines. They recommend that physicians choose the site that they are most comfortable with and that is appropriate for the patient. Whether the results from this meta-analysis are applicable to the crashing trauma patient without venous access is debatable.

There are no RCTs to make a head to head comparison of these three access points in the trauma setting. At this point, a rational approach in resuscitating a sick trauma patient is to go for the quickest and easiest route, which appears to be IO, especially in EDs staffed by only one physician. There are no limitations to the medications or blood products that can be infused through an IO. At the same time, if additional personnel are available, central access, whether femoral, subclavian, or IJ can be obtained simultaneously. Practically speaking, this increases the chances of getting access quickly, and more access points may be beneficial for giving high volume and speedy fluid/blood infusions. The ultimate goal is to stabilize the patient; infection risk is not the primary concern and the lines can and should be changed in a more sterile environment.

3. Which trauma patients do you give PCC to over FFP?

It is commonly accepted that hypothermia, acidosis, and coagulopathy form a lethal triad in worsening traumatic hemorrhage. Fresh frozen plasma (FFP) is widely used to correct coagulopathies in traumatic bleeding and is an integral part of any massive transfusion protocol. With the availability of prothrombin complex concentrate (PCC) in most trauma centers, studies have arisen to determine its place in coagulopathy reversal. PCC contains coagulation factors II, VII, IX, and X. Products available in the U.S. are Kcentra™ (aka Beriplex™, Prothrombin Complex Concentrate), Profilnine SD™ (Coagulation Factor IX complex), and Bebulin VH™ (Factor IX Complex). The former contains all four factors, while the latter two contain mostly factor IX, but also factors II and X, and very low levels of factor VII. The advantage of PCC is that it can be quickly reconstituted and administered in a low volume IV bolus. FFP involves type-specific matching, thawing, longer administration times, and a larger overall volume of delivery.

Studies investigating the role of PCC in trauma have focused on reversal of an elevated INR, both in patients on warfarin and in those not on anticoagulants. Kalina, et al. put forth a protocol at Christiana Care Hospital in Delaware to give PCC to trauma patients with an INR >1.5, a history of warfarin use, and a head CT showing intracranial hemorrhage (Kalina, 2008). Clinicians had the option to use the PCC protocol (54.3%) or FFP with vitamin K (35.4%). Protocol patients had a shorter time to INR normalization (331.3 vs. 737.8 minutes, p=0.048), a higher rate of coagulopathy reversal (73.2% vs. 50.9%, p=0.026), and a shorter time to operative intervention (222.6 vs. 351.3 minutes, p=0.045). There was no difference in ICU days, hospital days, or mortality. INR reversal, however, is not a patient-oriented outcome; the ability of PCC to rapidly correct the INR does not equate to an improvement in patient care, which is reflected in the lack of a mortality difference. In another study, Safaoui, et al. performed a retrospective chart review of patients who presented to the ED with possible brain injury and a history of warfarin use and who received FIX complex (three-factor PCC) (Safaoui, 2009). Of the 28 patients who met inclusion criteria, a PCC dose of 2000 units reduced the admission INR on average from 5.1 to 1.9 (p=0.008), with a mean time to correction of 116 minutes. Eleven patients who had a repeat INR drawn within 30 minutes of PCC had a mean time to INR correction of 13.5 minutes. Limitations of this study include the lack of a defined target INR, heterogeneity in the times at which INRs were obtained, variation in PCC dosing, and variation in obtaining timely INR redraws post-treatment.

There are also a number of small, retrospective studies looking at the use of PCC in general trauma patients on warfarin. In 2011, a chart review of 31 anticoagulated trauma patients (13 receiving three-factor PCC (Profilnine SD™) and 18 receiving FFP) showed faster reversal of INR with PCC (16:59 hours vs. 30:03 hours) (Chapman, 2011). There was, however, a difference in mortality: the only patient deaths were in the PCC group.

A recent prospective cohort study out of Austria compared patients receiving fibrinogen concentrate (FC) and/or PCC alone with those additionally receiving FFP among 144 patients with major blunt trauma (Injury Severity Score (ISS) ≥15) (Innerhofer, 2013). Patients treated with coagulation factor concentrates alone showed sufficient hemostasis and required fewer RBCs and platelets than those also receiving FFP, and they had significantly lower rates of complications such as multiorgan failure and sepsis. The limitations of this study are that it used fibrinogen concentrate (in addition to PCC) and measured hemostasis with rotational thromboelastometry, which may not be practical or obtainable in everyday ED settings. In addition, the study did not compare PCC directly to FFP.

There has not been a large meta-analysis comparing PCC to FFP, and such a study may be difficult given the heterogeneity of the existing literature. Differences in variables such as drug dosing, coagulation factor content, baseline patient coagulopathy, and outcome measurements make it difficult to formulate overarching conclusions about PCC use. At this point, it is reasonable to treat patients with traumatic ICH on warfarin with PCC, as rapid reversal is necessary to prevent mass effect and herniation. There are no RCTs at this time to conclusively recommend the use of PCC in trauma simply for an elevated INR. Recently, data collection has been completed for a study entitled “A Randomized, Open Label, Efficacy and Safety Study of OCTAPLEX and Fresh Frozen Plasma (FFP) in Patients Under Vitamin K Antagonist Therapy With the Need for Urgent Surgery or Invasive Procedures” (OCTAPLEX, 2013). This study pits Octaplex, a four-factor PCC, head-to-head against FFP. It will be interesting to see what dose of each drug the investigators use, as PCC can be thought of as a very concentrated version of FFP, making it easier and faster to administer. The limitation, however, is that PCC is a newer product and considerably more expensive than FFP.

4. In blunt abdominal/flank trauma, do you send a urinalysis or simply look for gross hematuria?

Urinalysis (UA) is traditionally performed in blunt trauma as a screening test to diagnose urogenital injuries. The most commonly injured genitourinary (GU) structure is the kidney, and the proportion of trauma patients with renal injuries ranges from 1.4-3.3% (Santucci, 2004). A retrospective observational cohort study of 1815 patients was recently undertaken to investigate whether the routine performance of UA in patients with blunt trauma is still valuable (Olthof, 2013). The main outcome measures were the presence of GU (bladder, kidney, ureter, or urethral) injury and whether the findings on the urine specimen and/or imaging led to clinical consequences (additional imaging, intervention, admission for observation, or outpatient follow-up). Microscopic hematuria was defined as greater than three erythrocytes per high-powered field; macroscopic hematuria was defined as blood visible to the naked eye.

The presence of macroscopic/gross hematuria (n=16) led to clinical consequences in 73% of patients, regardless of findings on imaging. Bypassing UA and going straight to imaging resulted in clinical consequences in 1.5% (4/268) of patients, whereas performing both a UA and imaging only resulted in a 2% (22/1031) rate of clinical consequences. The authors state that the 0.5% difference in clinical consequence mostly consisted of additional imaging and outpatient follow up, indicating little added value to the initial screening UA. Limitations of this study include the retrospective design, as it was not possible to determine whether the physician performed imaging based on the UA results or independent of it. In addition, the definition of macroscopic/gross hematuria was subject to the physician’s interpretation and could be influenced by certain foods, medications, or menstruation.

An older study, from 1989, prospectively looked at 1146 consecutive patients with either blunt (1007) or penetrating (139) renal trauma (Mee, 1989). Of the 812 patients with blunt trauma and microscopic hematuria without shock (SBP >90), there were no significant injuries (significant = grade 2-5 renal injury). A related study from the same group, but using more data, found that in 1588 blunt trauma patients with microscopic hematuria and no shock, 3 out of 584 (0.5%) who had imaging had significant injuries (Miller, 1995). Of the 1004 that did not get imaging, 51% were followed up and had no significant complications. These studies support the premise that microscopic hematuria rarely picks up significant renal injuries. Of note, in the 436 patients who had gross hematuria, or microscopic hematuria plus shock, 78 significant renal injuries were identified (Miller, 1995).
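As a purely illustrative summary of the screening approach suggested by the Mee and Miller data above, the minimal Python sketch below flags patients for renal imaging when there is gross hematuria, or microscopic hematuria with shock (SBP < 90 mmHg). The parameter names and threshold handling are assumptions for the example; this is not a validated imaging rule.

# Illustrative sketch of the blunt-trauma renal imaging approach discussed
# above (Mee 1989, Miller 1995); parameter names are assumptions.
def needs_renal_imaging(gross_hematuria: bool,
                        microscopic_hematuria: bool,
                        sbp_mmhg: float) -> bool:
    """Image for renal injury if gross hematuria, or microscopic
    hematuria accompanied by shock (SBP < 90 mmHg)."""
    shock = sbp_mmhg < 90
    return gross_hematuria or (microscopic_hematuria and shock)

print(needs_renal_imaging(False, True, 120))  # False - observe / follow up
print(needs_renal_imaging(False, True, 80))   # True  - image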

In the setting of blunt trauma and hemodynamic stability, it appears reasonable to avoid screening UAs and only look for gross hematuria. The practical benefit is that one can make a disposition decision without having to wait for microscopic UA results. In addition, making decisions based on a UA can be falsely reassuring, as bleeding in the kidney parenchyma may not cause hematuria.

 


Trauma, Questions

1.  When do you use tranexamic acid in trauma?

2.  When you can’t get peripheral access in a trauma patient, do you prefer subclavian, femoral, or IO?

3.  Which trauma patients do you give PCC to over FFP?

4.  In blunt abdominal/flank trauma, do you send a urinalysis or simply look for gross hematuria?



DKA, “Answers”

1. When you are suspicious for DKA, do you obtain a VBG or an ABG? How good is a VBG for determining acid/base status?

Diabetic ketoacidosis (DKA) is defined by five findings: acidosis (pH < 7.30), serum bicarbonate (HCO3) < 18 mEq/L, the presence of ketonuria or ketonemia, an anion gap > 10 mEq/L, and a plasma glucose concentration > 250 mg/dL. It is one of the most serious complications of diabetes seen in the emergency department, with a mortality rate among hospitalized DKA patients estimated between 2-10% (Lebovitz, 1995). Prompt recognition is therefore vital to improving outcomes, and emergency physicians have long relied on the combination of hyperglycemia and an anion gap metabolic acidosis to point them in the correct diagnostic direction.
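The five criteria above lend themselves to a simple worked example. The following minimal Python sketch checks them against a set of labs, using the standard anion gap calculation (Na - Cl - HCO3); the function names and example values are illustrative assumptions, not a clinical tool.

# Minimal sketch of the five diagnostic findings listed above.
def anion_gap(na: float, cl: float, hco3: float) -> float:
    """Standard anion gap: Na - Cl - HCO3 (mEq/L)."""
    return na - cl - hco3

def meets_dka_criteria(ph: float, hco3: float, glucose_mg_dl: float,
                       na: float, cl: float, ketones_present: bool) -> bool:
    return (
        ph < 7.30
        and hco3 < 18
        and ketones_present
        and anion_gap(na, cl, hco3) > 10
        and glucose_mg_dl > 250
    )

# Example: pH 7.12, HCO3 10, glucose 480, Na 134, Cl 98, ketonemia present
# -> anion gap = 134 - 98 - 10 = 26, so all five criteria are met.
print(meets_dka_criteria(7.12, 10, 480, 134, 98, True))  # True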

In the assessment of the degree of acidosis in a DKA patient, an arterial blood gas (ABG) has long been thought of as much more accurate than a venous blood gas (VBG) and thus necessary for evaluating a DKA patient’s pH and HCO3 level, two values often used to direct treatment decisions. An ABG is more painful and often more time-consuming and labor-intensive, as it may involve multiple attempts. In addition, ABGs can be complicated by radial artery aneurysms, radial nerve injury, and compromised blood supply in patients with peripheral vascular disease or inadequate ulnar circulation. A VBG is less painful and can be obtained at the time of IV placement, making it less time-consuming. But is it good enough to estimate acid/base status in these patients?

Brandenburg, et al. compared arterial and venous blood gas samples drawn simultaneously from DKA patients prior to treatment and found a mean difference in pH between the arterial and venous samples of only 0.03, with a Pearson’s correlation coefficient of 0.97 (Brandenburg, 1998). Gokel, et al. similarly demonstrated, in twenty-one DKA patients, a mean arterial-venous difference in pH of 0.05 ± 0.01 and in HCO3 of 1.88 ± 0.4 (Gokel, 2000). A study of 195 patients in 2003 showed similar correlation between arterial and venous pH, with a correlation coefficient of r = 0.951 (Ma, 2003). Further studies comparing ABG and VBG results in pathologically diverse groups of patients in both the ICU and the ED have achieved similar results (Malatesha, 2007; Middleton, 2006).

Ma, et al. went further and asked physicians to make diagnosis, treatment and disposition decisions without seeing the ABG results first. They found that the results affected diagnosis in only 1% of patients, and treatment in only 3.5% of patients (Ma, 2003).

As a result, the Joint British Diabetes Societies 2011 guidelines for the management of DKA advise using a VBG not only for the initial assessment of acid/base status, but also to monitor the progress of treatment (Savage, 2011). In summary, in patients presenting in DKA, a VBG is an adequate substitute for an ABG in determining pH and HCO3, with only a minor degree of inaccuracy that is not clinically significant enough to alter treatment decisions.

Bottom Line: A VBG is adequate for the diagnosis and ongoing management of patients with DKA. ABGs offer no added benefit and are associated with increased pain and complications.

2. Do you use serum or urine ketones to guide your diagnosis and treatment of DKA?

Although the presence of ketones is part of the DKA definition, many clinicians make the diagnosis based on acidosis, decreased serum HCO3, and the presence of an anion gap alone. The presence of ketones, however, is superior to HCO3 in making the diagnosis (Sheikh-Ali, 2008). Either serum or urine samples can be used to detect ketones, but urine testing is more rapid and thus more likely to be utilized. Unfortunately, urine testing may be misleading. In DKA, fatty acid breakdown results in the production of two major ketone bodies: acetoacetate and beta-hydroxybutyrate. Beta-hydroxybutyrate is the predominant ketone, but urinalysis detects only acetoacetate via the nitroprusside assay (Marliss, 1970). Thus, early in DKA, the urinalysis may be negative for ketones and falsely reassuring. This has prompted many clinicians to order serum ketone testing, which also offers a quantitative measure of ketones rather than the simple qualitative measure of a urine test (Foreback, 1997). However, serum beta-hydroxybutyrate testing is unavailable in many hospital systems and may not elucidate the entire clinical picture by itself (Fulop, 1999).

Additionally, as the patient is treated for DKA, beta-hydroxybutyrate is converted to acetoacetate. Appropriate treatment may therefore produce a more strongly positive nitroprusside assay, misleading the physician into thinking the patient is failing to improve or is worsening. In any case, following serum ketones to assess for DKA improvement has not been shown to be superior to clinical evaluation.

Where does this leave us? In patients presenting with clinical signs and symptoms of DKA, serum pH, HCO3, glucose, and the anion gap should be assessed. Urine should be checked for the presence of ketones; if positive, serum ketone testing in the emergency department is unnecessary. However, if urine ketones are absent and the diagnosis is unclear, the addition of serum ketones (specifically beta-hydroxybutyrate) seems reasonable. There is no evidence to suggest that following serum ketones during treatment is necessary.

Bottom Line: Patients with DKA may present with a weak or absent nitroprusside assay reaction on urinalysis for ketones as this test only checks for acetoacetate (the minor ketone body produced in DKA). Serum beta-hydroxybutyrate testing may be helpful in certain cases in making the diagnosis.

3. Do you use IV bicarbonate administration for the treatment of severe acidosis in DKA? If so, when?

The cornerstones of DKA treatment involve reversal of the effects of osmotic diuresis with fluids and electrolyte repletion as well as correcting the acidemia present in these patients. Treatment with sodium bicarbonate has frequently been recommended to assist in raising the pH to a “safer level.”

However, recent evidence shows that bicarbonate is not only ineffective in correcting acidemia but may be detrimental. Morris, et al. studied twenty-one patients with severe DKA (pH 6.9-7.14) and found no significant difference between those treated with and without bicarbonate in the rate of decline of glucose or ketone levels, the rate of increase in pH, or the time to reach a serum glucose of 250 mg/dL or a pH of 7.3 (Morris, 1986). In 2013, a study of 86 patients with DKA confirmed these findings: patients who received bicarbonate had no significant difference in time to resolution of acidosis or time to hospital discharge, but insulin and fluid requirements were higher in the bicarbonate group (Duhon, 2013). A pediatric study of severe DKA (pH < 7.15) found that 39% of patients were successfully treated without bicarbonate, with a comparable number of complications (Green, 1998).

In addition to its apparent lack of efficacy, numerous studies have pointed to potential deleterious effects. Okuda, et al. showed in seven patients with DKA that those assigned to receive bicarbonate had a 6-hour delay in the improvement of ketosis compared to the control group (Okuda, 1995). Bicarbonate has also been found to worsen hypokalemia and can cause paradoxical intracellular and central nervous system acidosis (Viallon, 1999). Additionally, a bicarbonate infusion shifts the oxyhemoglobin dissociation curve, decreasing tissue oxygen uptake, and has been associated with (although not shown to cause) cerebral edema in pediatric patients.

In spite of the lack of evidence, the American Diabetes Association continues to recommend the use of bicarbonate in patients with a serum pH < 7.0 (Kitabchi, 2006). However, in the face of mounting evidence and a lack of support in the literature, this recommendation should be readdressed. A systematic review of 44 studies, including three randomized clinical trials in adults, found no clinical efficacy for the use of bicarbonate in DKA (Chua, 2011). Of note, none of the trials cited in the ADA recommendations or in the systematic review included patients with an initial pH < 6.85, making it difficult for the clinician to know what to do in cases of such severe acidosis.

Bottom Line: There is no established role for administration of sodium bicarbonate to patients with DKA regardless of their pH. Sodium bicarbonate administration is associated with more complications including hypokalemia and cerebral edema.

4. When do you start an insulin infusion in patients with hypokalemia? Do you give a bolus followed by a drip?

Insulin administration is paramount to the successful treatment of the DKA patient, as it halts the mobilization of free fatty acids and the production of ketoacids and glucose, thereby reversing the acidosis and ketosis; prior to the isolation of insulin for medical use, the mortality of DKA approached 100%. DKA patients, however, often have profound potassium losses secondary to the osmotic diuresis that accompanies such a hyperglycemic state, and about 5-10% of patients with DKA will present with hypokalemia (Aurora, 2012). In addition to its other functions, insulin drives potassium from the serum into the cells. Thus it is vital to know the serum potassium level prior to starting insulin therapy in order to avoid a lethal hypokalemia-induced dysrhythmia. An EKG can also assist in detecting signs of hypo- or hyperkalemia. The American Diabetes Association recommends beginning insulin therapy only once the potassium level is repleted to > 3.3 mEq/L, and below a potassium level of 5.5 mEq/L, 20-30 mEq of KCl should be added to each liter of fluids to prevent hypokalemia from developing during insulin therapy (Kitabchi, 2006).

Traditional teaching in DKA treatment recommends an initial bolus of insulin followed by an infusion; the bolus was believed to rapidly activate insulin receptors and speed resolution of hyperglycemia, ketosis, and acidosis. Recent literature, however, has shown that this initial bolus is likely unnecessary and may cause harm by creating a greater risk of hypoglycemic events. A randomized trial in 2008 demonstrated that giving patients a bolus of insulin followed by a drip (at 0.07 units/kg/hr) resulted in a brief period of supranormal insulin levels followed by a plateau at subnormal levels, whereas an infusion at 0.14 units/kg/hr produced a serum insulin plateau more consistent with normal physiology (Kitabchi, 2008). Goyal, et al. compared 157 patients, roughly half treated with an insulin bolus plus drip and half with a drip alone, and found no statistically significant differences in the rate of change of glucose (both groups decreased by approximately 60 mg/dL/hr), change in anion gap, or length of stay in the ED or hospital (Goyal, 2007). Patients treated with a bolus plus infusion also had more side effects, including more episodes of hypoglycemia and higher potassium requirements, although these were trends in a small observational study and neither reached statistical significance.

Most current guidelines state that an initial insulin infusion rate of 0.1 units/kg/hr is acceptable. If the infusion does not cause the serum glucose to fall by 50-70 mg/dL in the first hour, the rate may be doubled until a steady decrease is achieved.
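Pulling the potassium precautions and infusion guidance above into one worked example, the following minimal Python sketch is purely illustrative: the thresholds come from the ADA recommendations and rates quoted above, but the function names, return structure, and the use of the lower bound of the 50-70 mg/dL/hr target as the titration trigger are assumptions for the example, not orders.

# Illustrative sketch of the insulin-start and titration logic discussed
# above (potassium gate, no bolus, 0.1 units/kg/hr, double if glucose is
# not falling adequately). Not a treatment protocol.
def insulin_infusion_plan(weight_kg: float, k_meq_l: float) -> dict:
    if k_meq_l < 3.3:
        return {"start_insulin": False, "action": "replete potassium first"}
    plan = {
        "start_insulin": True,
        "bolus": None,  # no initial bolus (Goyal 2007, Kitabchi 2008)
        "infusion_units_per_hr": round(0.1 * weight_kg, 1),
    }
    if k_meq_l < 5.5:
        plan["fluids"] = "add 20-30 mEq KCl per liter"
    return plan

def titrate(rate_units_hr: float, glucose_drop_mg_dl_hr: float) -> float:
    """Double the rate if glucose is not falling by at least 50 mg/dL/hr."""
    return rate_units_hr * 2 if glucose_drop_mg_dl_hr < 50 else rate_units_hr

print(insulin_infusion_plan(70, 4.0))  # start at 7.0 units/hr, add KCl to fluids
print(titrate(7.0, 30))                # inadequate fall -> 14.0 units/hr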

Bottom Line: Insulin should not be started in patients with DKA until the serum potassium level is confirmed to be > 3.3 mEq/L. An insulin bolus prior to infusion has not been shown to improve any patient-centered outcomes or surrogate markers and is associated with an increased rate of hypoglycemic episodes.
