Syncope, Answers

  1. In which patients with syncope do you get a NCHCT?

Syncope is defined as a transient loss of consciousness and postural tone. It has a rapid onset, short duration, and spontaneous recovery, and is due to transient global cerebral hypoperfusion. It may have a prodromal phase. In the Emergency Department, about 3% of patients present with a chief complaint of syncope. As part of the work-up, a non-contrast head CT scan (NCHCT) is often ordered. The question is whether such a test is necessary and, if so, which patients should get it.

A 2007 prospective observational study looked at 293 adult ED patients with syncope, of which 113 (39%) underwent head CT and of those, 5 patients (5%) had an abnormal head CT (Grossman 2007). These abnormal findings included 2 subarachnoid hemorrhages, 2 intracranial hemorrhages, and 1 stroke. Each of these patients either had a focal neurologic finding, headache, or signs of trauma. Of the patients who did not have a head CT, none were found to have a new neurologic disease during hospitalization or 30-day follow-up. The results from this study suggested that limiting head CT to patients with neurologic signs or symptoms, trauma above the clavicle, or use of warfarin would potentially reduce scans by over 50%.
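The selective-imaging approach suggested by Grossman et al. can be sketched as a simple predicate. This is an illustration only, not a validated decision instrument; the parameter names are assumptions:

```python
def needs_head_ct(neuro_signs_or_symptoms: bool,
                  trauma_above_clavicle: bool,
                  on_warfarin: bool) -> bool:
    """Sketch of the selective head-CT criteria suggested by
    Grossman 2007: scan only when at least one high-risk
    feature is present."""
    return any([neuro_signs_or_symptoms,
                trauma_above_clavicle,
                on_warfarin])
```

Under this rule, a syncope patient with none of the three features would not be scanned, which the study estimated would reduce head CT use by over 50%.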


Several other small retrospective studies have shown a low yield for obtaining a head CT in syncope patients. In a 2005 study in Emergency Radiology, 128 patients presented to a community hospital with syncope, of whom 44 received a head CT scan (Giglio 2005). Only 1 CT (2%) showed acute evidence of a posterior circulation infarction. In another retrospective review of patients who got a head CT for syncope alone, none of the 117 patients had CT findings that were clinically related to the syncopal event (Goyal 2006). The authors concluded that a head CT in the absence of focal neurologic findings may not be necessary. It is important to note that the reason for obtaining head CTs in these retrospective studies is unclear. Additionally, syncope patients who did not get a head CT were not analyzed.

A more recent prospective study looked at 254 patients who underwent head CT (out of 292 with syncope) and classified them into four groups: 1) normal CT / normal neuro exam; 2) abnormal CT / abnormal neuro exam; 3) abnormal CT not related to the syncope presentation; 4) abnormal CT / normal neuro exam. The last group (abnormal head CT with normal neuro exam), the one we are most interested in capturing in the ED, included only 2 patients (0.7%) (Al-Nsoor 2010).

Bottom Line: For patients presenting to the Emergency Department with a chief complaint of syncope, a NCHCT is of low yield and should only be considered in patients with focal neurologic deficits, complaints of headache, or signs of head trauma. This is consistent with the ACEP clinical policy for syncope, which states that no test should be routinely used in the absence of specific findings on physical exam or history (ACEP 2007).

  2. In which patients with syncope do you get a troponin?

Cardiovascular etiologies are at the top of the differential for dangerous causes of syncope. Identifying those at risk for adverse cardiac outcome after syncope is challenging.  Cardiac markers such as troponins are often sent on patients with syncope as part of a standard work up, presumably as a general screen for cardiac etiology such as acute myocardial infarction (AMI).  As with any test, its results are only relevant if interpreted correctly. To that end, the diagnostic and prognostic utility of troponin in syncope patients is examined here.

AMI is a relatively rare cause of syncope, accounting for approximately 1.4-3.5% of cases (Link 2001, Grossman 2003, Hing 2005).  The diagnosis of AMI in patients without EKG changes on presentation is even less common, likely because a significant infarct is required to induce either a non-perfusing dysrhythmia or cardiac output impaired severely enough to manifest as syncope. The utility of ED troponins for diagnosis of AMI is quite limited, especially in the patient without EKG changes.  One prospective cohort study evaluating troponin-I at 12 hours post-syncope for diagnosis of AMI diagnosed four AMIs, 1.5% of its 289 patients (Reed 2010).  All four had ischemic changes on their presenting EKG.

A larger prospective cohort study of 1474 patients found a 3.1% incidence of AMI within 30 days of a syncopal event (McDermott 2009).  Of these patients, 80% had abnormal EKGs on presentation (any change from baseline or, if no comparison was available, any abnormality, overtly ischemic or not). A normal EKG had a negative predictive value of 99%.  Only 50% of these AMI patients had positive troponins obtained in the ED.  Along with an abnormal EKG, male gender and a history of coronary artery disease (CAD) were both significantly more sensitive than a positive troponin in detecting AMI.

Although troponin has limited utility for diagnosing AMI in syncope patients with normal EKGs, it may be useful for risk stratification in patients with alternate causes of syncope. Pulmonary embolism, type A aortic dissection, and intracranial hemorrhage can all cause syncope and can all cause type II (i.e. supply/demand) ischemia.  Additionally, an elevated troponin in the setting of syncope has been associated with worse outcomes. In one study, 50% of patients with a positive troponin (at 12 hours post-syncope) had a serious outcome (not including AMI) or all-cause death at 1 month, as opposed to only 6% of patients without the positive biomarker (Reed 2010).

A similar study design performed by the same research group used a lower cutoff for a diagnostically positive troponin-I and examined the outcomes in patients with any detectable troponin level (Reed 2012).  They found the majority of syncope patients (77%) to have detectable values and 20% to have troponin levels above the new diagnostic threshold (although only 2.9% diagnosed with AMI).  At both one month and one year, patients with any detectable troponin level were at higher risk of adverse outcomes and mortality.  This risk increased with higher troponin values.

Another prospective study similarly showed a positive troponin-T (taken at least 4 hours post-syncope) to strongly predict adverse cardiac outcome (Hing 2005). However, a positive troponin proved to have no added value in predicting adverse cardiac outcomes over the OESIL* score (Colivicchi 2003).  The study also reported troponin to have a sensitivity of only 13% and a low negative predictive value.

*The OESIL score is a risk stratification score to predict recurrent syncope or adverse outcome and includes the following risk factors:

  1. Age >65years old
  2. History of cardiovascular disease (CAD, CHF, cerebrovascular or peripheral vascular disease)
  3. Syncope without prodrome
  4. Abnormal EKG
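Since each OESIL item contributes one point, the score is simply a count of the risk factors present (0-4), with higher scores predicting higher risk (Colivicchi 2003). A minimal sketch; the parameter names are my own:

```python
def oesil_score(age_over_65: bool,
                cardiovascular_history: bool,
                no_prodrome: bool,
                abnormal_ekg: bool) -> int:
    """OESIL risk score: one point per risk factor present.
    Returns an integer from 0 to 4; higher scores predict
    higher risk of adverse outcome."""
    return sum([age_over_65, cardiovascular_history,
                no_prodrome, abnormal_ekg])
```

For example, a 70-year-old with known CAD and an abnormal EKG whose syncope had a clear prodrome would score 3.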

The benefit of a troponin’s prognostic ability in syncope patients has not been clearly determined.  A recent study reports on the significantly improved clinical outcomes associated with using troponin-I at a lower threshold to detect positive values in patients with suspected acute coronary syndrome (Mills 2011).  While this is a different subset of patients with a clearly defined disease, the results raise the question of whether the test’s prognostic value may translate into improved outcomes in syncope patients.

Bottom Line

As a diagnostic screening test for AMI in syncope patients without chest pain or EKG changes, a single troponin is inadequate and does not appear to be helpful in risk stratification.  Admitting syncope patients for serial troponins, or ‘rule-out AMI,’ is also low-yield and should be considered only in light of the patient’s symptoms and significant risk factors such as known CAD or CHF, older age, or syncope preceded by palpitations or occurring without prodrome.  However, the value of a positive troponin is not limited to the diagnosis of AMI.  A troponin’s ability to predict adverse outcome may have utility for an inpatient team, and potentially in the ED as high-sensitivity troponins become more ubiquitous.  Whether obtaining this prognostic data significantly improves outcomes is not clear.

  3. Do you get orthostatic measurements in patients with syncope and how do you use them?

Volume depletion (e.g. dehydration) and blood loss are two of the myriad reasons patients present to the Emergency Department with syncope. While it is not difficult to determine that the patient with an active upper gastrointestinal bleed has significant volume loss, this determination can be more challenging in other patients (e.g. the elderly patient with a urinary tract infection). Clinicians would therefore benefit from an easy bedside test of volume status, particularly one that improves our ability to pick up patients with moderate volume depletion or blood loss.

Orthostatic blood pressure measurements have historically been taught to be useful in determining volume status. A positive test is defined as either:

  1. A drop in systolic blood pressure (SBP) > 20 mm Hg OR
  2. Increase in heart rate (HR) by > 30 beats per minute (bpm)

when a patient stands from a supine position (McGee 1999). It is unclear from the available literature how these numbers were originally derived, but they are likely based on consensus rather than empirical data. Even available consensus statements differentiate the entities of symptomatic and asymptomatic orthostatic hypotension, bringing the overall utility of the test into question (Kaufmann 1996).
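The classic thresholds above can be expressed as a simple check. A worked illustration with hypothetical readings; the function and parameter names are my own, not from the cited literature:

```python
def orthostatic_positive(supine_sbp: float, standing_sbp: float,
                         supine_hr: float, standing_hr: float) -> bool:
    """Classic threshold definition (McGee 1999): positive if SBP
    falls by more than 20 mm Hg OR heart rate rises by more than
    30 bpm on standing from supine."""
    sbp_drop = supine_sbp - standing_sbp
    hr_rise = standing_hr - supine_hr
    return sbp_drop > 20 or hr_rise > 30

# Hypothetical patient: SBP falls 130 -> 105 (drop of 25 mm Hg),
# so the test is positive even though the HR rise is modest.
print(orthostatic_positive(130, 105, 72, 80))  # True
```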

Despite the traditional teaching, orthostatic measurements have little if any proven utility. There are two major criticisms:

  1. Many patients without signs or symptoms of intravascular volume depletion will demonstrate orthostatic vital signs when measured.
  2. Many patients with clear evidence of intravascular volume depletion will not exhibit orthostatic vital signs.

How prevalent are positive orthostatic vital signs among asymptomatic patients? A number of studies have investigated elderly patients living in nursing facilities, with inconsistent results. Mader et al and Aronow et al found relatively low prevalences (6.4% and 8%, respectively) of orthostatic vital signs in the absence of symptoms (Mader 1987, Aronow 1988). These studies, however, were small and excluded elderly patients on medications that may cause hypotension, reducing their generalizability to the general elderly population. More recent, larger studies of unselected elderly patients showed higher rates, ranging from 28-50% (Raiha 1995, Ooi 1997). Studies in adolescents show similarly poor numbers, with approximately 44% of patients exhibiting orthostatic changes (Stewart 2002).

Witting et al attempted to define vital sign thresholds that would decrease false positives. They performed tilt-table testing in healthy volunteers after blood donation (moderate blood loss). In patients < 65 years of age, a change in pulse > 20 bpm or a change in SBP > 20 mm Hg had a sensitivity/specificity of 47%/84% (Witting 1994). This yields a (+) LR = 2.94 and a (-) LR = 0.63. Sensitivity and specificity were similar in patients > 65 (41%/86%), with similarly poor (+) LR = 2.93 and (-) LR = 0.69. McGee et al performed a systematic review in 1999 showing similarly dismal sensitivity for moderate blood or fluid loss (McGee 1999):
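The likelihood ratios quoted above follow directly from the standard formulas LR+ = sensitivity / (1 − specificity) and LR− = (1 − sensitivity) / specificity. A quick check against the Witting numbers:

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """Positive and negative likelihood ratios from test
    characteristics (values as fractions, e.g. 0.47 for 47%)."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Witting 1994, patients < 65: sensitivity 47%, specificity 84%
lr_pos, lr_neg = likelihood_ratios(0.47, 0.84)
print(round(lr_pos, 2), round(lr_neg, 2))  # 2.94 0.63
```

An LR+ below roughly 3 and an LR− above roughly 0.6 shift the post-test probability only marginally, which is why the text calls these numbers poor.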

| Finding | Sensitivity | Specificity |
| --- | --- | --- |
| Blood Loss – Pulse Change (> 30 bpm) | 22% | NA |
| Blood Loss – SBP Change (> 20 mm Hg) | 7-27% | NA |
| Fluid Loss – Pulse Change (> 30 bpm) | 43% | 75% |
| Fluid Loss – SBP Change (> 20 mm Hg) | 29% | 81% |

Mendu et al performed a retrospective study that stands as one of the few supporting the use of orthostatic blood pressure measurement in patients with syncope (Mendu 2009). The researchers found that orthostatics affected the final diagnosis in 18% of patients and affected management in 25%. However, the study is deeply flawed. The utility of the measurements was determined by the clinician, with no gold standard diagnosis for comparison. Additionally, 55% of patients in whom orthostatics were measured had abnormal results, but far fewer of these findings were thought to be relevant. Finally, the average age in this study was near 80 years, the exact population in which the studies discussed above have shown poor sensitivity and specificity for these measurements (Raiha 1995, Ooi 1997).

Based on the available literature, orthostatic vital signs appear to be neither sensitive nor specific for detecting moderate blood or fluid loss.

Bottom Line: Many asymptomatic patients will have positive orthostatic vital signs, and many patients with moderate volume loss will not. This makes checking orthostatic vital signs of questionable utility. The patient’s symptoms are more important: if the patient feels lightheaded or dizzy when moving from lying supine to sitting or from sitting to standing, they have symptomatic orthostasis and this should be addressed.

  4. Do you manage patients with near-syncope differently than those with syncope?

Pre-syncope is a chief complaint commonly evaluated in the Emergency Department (ED). Defined as the sense of impending loss of consciousness, its symptoms can include lightheadedness, weakness, visual disturbances, “feeling faint”, and other nonspecific complaints. While there have been several attempts to derive clinical prediction tools for syncope, most have been unsuccessful due to poor sensitivity and specificity and large performance variability (Constantino 2014, Birnbaum 2008, Serrano 2010). If validating prediction rules for the objective finding of syncope is fraught with difficulty, deriving them for the more subjective and vague symptoms of pre-syncope is an even more arduous task. As such, there is a lack of guidance when it comes to management and disposition decisions for patients who present to the ED with pre-syncope.

Though classically pre-syncope was thought to be benign with many of these patients being discharged from the ED, this may not be the case. In a study of approximately 200 patients presenting to the ED with nonspecific complaints (including weakness, dizziness, and feeling unwell) and Emergency Severity Index (ESI) scores of 2 or 3 with normal vital signs, 59% had a serious condition diagnosed within 30 days, and 30-day mortality was 6% (Nemec 2010). The median age of the study’s cohort was 82 years, and most had co-morbidities. While this study did not specifically assess patients with pre-syncope, given the overlap of the included symptoms with the symptoms seen in pre-syncope, it suggests that pre-syncope may similarly be a harbinger of serious disease.

Early syncope studies often excluded pre-syncope because it is poorly defined. However, recent literature corroborates its potential severity. Some experts posit that the pathophysiologic mechanism of pre-syncope is the same as that of syncope, except that the global cerebral hypoperfusion is not profound enough to cause complete loss of consciousness (Quinn 2014). A prospective observational pilot study of 244 patients with pre-syncope and 293 patients with syncope found similar hospitalization and 30-day adverse outcome rates in the two groups: 23% and 20%, respectively (Grossman 2012). One reason the rates were so high may be the broad and inclusive definition of adverse outcome (which included, among other conditions, cortical stroke, carotid stenosis and endarterectomy, and changes in antidysrhythmic medications).

A larger prospective cohort study in 2014 found a significant number of adverse outcomes in pre-syncope patients. Of 881 adult patients with pre-syncope (which constituted 0.5% of total ED visits), 5.1% had serious outcomes at 30-day follow-up (Thiruganasambandamoorthy 2014). Furthermore, physicians were not accurate in predicting which patients were at high risk for serious outcomes after their ED visit, with an area under the receiver operating characteristic (ROC) curve of 0.58, only slightly better than a coin flip.

Should we be managing patients with pre-syncope similarly to those with syncope? Despite the paucity of literature on outcomes for ED patients presenting with pre-syncope, it appears as though the potential severity of pre-syncope has been under-appreciated. Once thought to be low-risk, recent literature challenges this dogma and suggests that a significant proportion of patients with pre-syncope suffer adverse outcomes similar to those who present with syncope. Intuitively it makes sense that true pre-syncope, syncope, and cardiac arrest exist on the same spectrum, differentiated by severity and duration of hypoperfusion, and thus should be risk stratified and managed similarly. However, to date no evidence exists on whether managing pre-syncope patients the same as syncope patients improves outcomes. As such, future studies are needed to further explore which patients with pre-syncope are at higher risk for adverse outcomes, with the ultimate goal to derive and validate a clinical decision rule for this patient population.

Bottom Line:

Although pre-syncope was previously thought to be a benign diagnosis, recent literature suggests that, as with syncope, a substantial proportion of patients with pre-syncope suffer serious adverse outcomes. Further studies are needed to determine which patients with pre-syncope are at higher risk for adverse outcomes, as we currently have no clinical decision rules to guide management in this population.


Grossman SA, Fisher C, Bar JL, Lipsitz LA, Mottley L, Sands K, Thompson S, Zimetbaum P, Shapiro NI. The yield of head CT in syncope: a pilot study. Intern Emerg Med. Mar 31 2007. PMID: 17551685

Giglio P, Bednarczyk EM, Weiss K, Bakshi R. Syncope and head CT scans in the emergency department. Emergency Radiology. Dec 12 2005. PMID: 16292675

Goyal N, Donnino MW, Vachhani R, Bajwa R, Ahmed T, Otero R. The utility of head computed tomography in the emergency department evaluation of syncope. Intern Emerg Med. 2006. PMID: 17111790

Al-Nsoor NM, Marat AS. Brain computed tomography in patients with syncope. Neurosciences (Riyadh). 2010 April 15. PMID: 20672498

ACEP Clinical Policy Subcommittee on Syncope. Clinical Policy: Critical Issues in the Evaluation and Management of Adult Patients Presenting to the Emergency Department with Syncope. Annals of Emerg Med. April 2007. PMID: 18035161

Link MS, Lauer EP, Homoud MK et al. Low yield of rule-out myocardial infarction protocol in patients presenting with syncope. Am J Cardiol 2001;88:706-7. PMID: 11566406

Grossman SA, Van Epp S, Arnold R et al. The value of cardiac enzymes in elderly patients presenting to the emergency department with syncope. J Gerontol A Biol Sci Med Sci 2003;58:1055–8. PMID: 14630890

Hing R, Harris R. Relative utility of serum troponin and the OESIL score in syncope. Emerg Med Australas 2005;17:31-8. PMID: 15675902

Reed MJ, Newby ED, Coull AJ, et al. Diagnostic and prognostic utility of troponin estimation in patients presenting with syncope: a prospective cohort study. Emerg Med J. 2010;27:272-276. PMID: 20385677

Colivicchi F, Ammirati F, Melina D, et al. Development and prospective validation of a risk stratification system for patients with syncope in the emergency department: the OESIL risk score.  Eur Heart J. 2003 May;24(9):811-9. PMID: 12727148

McDermott D, Quinn JV, Murphy CE. Acute myocardial infarction in patients with syncope. CJEM. 2009 Mar;11(2):156-60. PMID: 19272217

Reed MJ, Mills NL, Weir CJ. Sensitive troponin assay predicts outcome in syncope. Emerg Med J. 2012;29:1001-1003. PMID: 22962048

Mills NL, Churchouse AM, Lee KK, et al. Implementation of a sensitive troponin I assay and risk of recurrent myocardial infarction and death in patients with suspected acute coronary syndrome. JAMA. 2011 Mar 23;305(12):1210-6. PMID: 21427373

McGee S, Abernethy WB, Simel DL. The rational clinical examination. Is this patient hypovolemic. JAMA 1999; 281(11): 1022-9. PMID: 10086438

Kaufmann H. Consensus statement on the definition of orthostatic hypotension, pure autonomic failure and multiple system atrophy. Clin Auto res 1996; 6: 125-6. PMID: 8726100

Witting MD, Wears RL, Li S. Defining the positive tilt test: a study of healthy adults with moderate acute blood loss. Ann Emerg Med 1994; 23(6): 1320-3. PMID: 8198307

Mader SL, Josephson KR, Rubenstein LZ. Low prevalence of postural hypotension among community-dwelling elderly. JAMA 1987; 258: 1511-14. PMID: 3625952

Aronow WS, Lee NH, Sales FF, Etienne F. Prevalence of postural hypotension in elderly patients in a long-term health care facility. Am J Cardiology 1988; 62(4): 336. PMID: 3135742

Raiha I, Luutonen S, Piha J, Seppanen A et al. Prevalence, predisposing factors and prognostic importance of postural hypotension. Arch Intern Med 1995; 155: 930-935. PMID: 7726701

Ooi WL, Barrett S, Hossain M, Kelley-Gagnon M, Lipsitz LA. Patterns of orthostatic blood pressure change and the clinical correlates in a frail, elderly population. JAMA 1997; 277: 1299-1304. PMID: 9109468

Stewart JM. Transient orthostatic hypotension is common in adolescents. J Pediatr 2002; 140: 418-24. PMID: 12006955

Mendu ML et al. Yield of diagnostic tests in evaluating syncopal episodes in older patients. Arch Intern Med 2009; 169(15): 1299-1305. PMID: 19636031

Birnbaum A, et al. Failure to Validate the San Francisco Syncope Rule in an Independent Emergency Department Population. Ann Emerg Med. 2008 Aug;52(2):151-9. PMID: 18282636

Constantino G, et al. Syncope Risk Stratification Tools vs Clinical Judgement: An Individual Patient Data Meta-analysis. Am J Med. 2014 Nov;127(11):1126.e13-25. PMID: 24862309

Grossman SA, et al. Do Outcomes of Near Syncope Parallel Syncope? Am J Emerg Med. 2012 Jan;30(1):203-6. PMID: 21185670

Nemec M et al. Patients Presenting to the Emergency Department with Non-specific Complaints: The Basel Non-specific Complaints (BANC) Study. Acad Emerg Med. 2010 Mar;17(3):284-92. PMID: 20370761

Quinn, JV. Syncope and Presyncope: Same Mechanism, Causes, and Concern. Ann Emerg Med. 2014 Oct 31. pii: S0196-0644(14)01257-8. PMID: 25441246

Serrano LA, et al. Accuracy and Quality of Clinical Decision Rules for Syncope in the Emergency Department: A Systematic Review and Meta-analysis. Ann Emerg Med. 2010 Oct;56(4):362-373. PMCID: PMC2946941

Thiruganasambandamoorthy V, et al. Outcomes in Presyncope Patients: A Prospective Cohort Study. Ann Emerg Med. 2014 Aug 30. pii: S0196-0644(14)01115-9. PMID: 25182542



Syncope, Questions

1. In which patients presenting with syncope do you get a Non-Contrast Head CT (NCHCT)?

2. In which patients presenting with syncope do you get a troponin?

3. Do you get orthostatic vital sign measurements in patients presenting with syncope? How do you use them?

4. Do you manage patients presenting with near-syncope differently than those presenting with syncope?


Spinal Cord Injury, “Answers”

1. What imaging do you use for patients with possible acute, traumatic spinal cord injury?

Patients who meet the NEXUS or Canadian C-spine low-risk criteria should be cleared clinically. However, those with moderate to high risk of a cervical spine (C-spine) injury should have cross-sectional imaging, a recommendation supported by a substantial body of data. The Eastern Association for the Surgery of Trauma (EAST) referenced 52 articles to construct guidelines recommending against plain radiographs in the assessment of potential C-spine injuries (Como, 2009). Although the C-spine is injured in only approximately 3% of all major trauma patients (Crim, 2001), these tend to be some of the most disabling injuries. There is less data regarding the preferred imaging for patients with possible thoracic and lumbar spine injuries.

In patients whose C-spine cannot be cleared clinically and in whom an acute, traumatic spinal cord injury is suspected, computed tomography (CT) is the most appropriate initial imaging. There is a growing body of literature showing that plain films miss many clinically significant injuries and have little to no role in evaluating spinal fractures, particularly those of the C-spine.

The counterargument is that some patients may be appropriate for plain film imaging of the C-spine: those with a low-risk mechanism, younger age, and in whom good views can be obtained. Some fractures will still be missed, but these have a low chance of being clinically significant, and in a patient with a low pre-test probability this may be appropriate utilization of plain films. The most compelling data come from the original NEXUS study. Of 34,069 patients with blunt trauma, 1,496 had C-spine injuries. Plain films missed 564 injuries in 320 patients. However, for 436 of these injuries (0.80% of all patients), the plain films were interpreted as either inadequate (and thus nondiagnostic) or abnormal. In only 23 patients (0.07% of all patients) were injuries missed on plain films that were read as negative; three of these patients had unstable C-spine injuries (Mower, 2001). In a retrospective review of a trauma database of 3,018 patients, 116 (9.5%) had a C-spine fracture. The injury was seen on plain film in only 75 of these patients. In the remaining 41 patients (3.2%), the injury was detected on CT scan, and in all cases these injuries required treatment. It is important to recognize that the mean Glasgow Coma Scale (GCS) score of this population was 13; thus, it was likely a high-acuity population overall. These authors concluded that there was no role for plain imaging in Emergency Department patients in whom a C-spine injury is suspected (Griffen, 2003).  Most recently, Mathen et al. performed a prospective cohort study of 667 patients who required C-spine imaging and found that plain radiography missed 15 of 27 (55.5%) clinically significant C-spine injuries (Mathen, 2007).

In a cost-effectiveness analysis, CT was found to be the preferred imaging modality in moderate to high risk patients, given that missed C-spine fractures and resultant paralysis may be devastating to both the patient and society (Blackmore, 1999).

Unlike for the C-spine, we have no clinical decision rules to guide care of patients with potential thoracolumbar fractures, and there is considerably less data on these injuries. According to a literature review published in the Journal of the American College of Radiology, which examined studies of thoracolumbar spine imaging comprising several thousand patients, those who should be evaluated for thoracic or lumbar spine injuries are patients with a high-force mechanism and any of the following findings: back pain or midline tenderness, local signs of thoracolumbar injury, abnormal neurologic signs, C-spine fracture, GCS < 15, major distracting injury, or drug or alcohol intoxication (Daffner, 2007).
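The screening criteria from Daffner's review combine a mechanism requirement with any one of the listed findings. A sketch under the assumption that findings are recorded as labels; the names are my own:

```python
# Findings from Daffner 2007 that, after a high-force mechanism,
# warrant thoracolumbar imaging (labels are illustrative).
TL_FINDINGS = {
    "back pain or midline tenderness",
    "local signs of thoracolumbar injury",
    "abnormal neurologic signs",
    "c-spine fracture",
    "gcs < 15",
    "major distracting injury",
    "drug or alcohol intoxication",
}

def needs_tl_imaging(high_force_mechanism: bool, findings: set) -> bool:
    """Image the thoracolumbar spine when a high-force mechanism
    is accompanied by at least one listed finding."""
    return high_force_mechanism and bool(findings & TL_FINDINGS)
```

For example, an intoxicated patient after a high-speed crash would meet criteria even with a normal exam, while a low-force fall without findings would not.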

There is evidence that many injuries are missed by plain films of the thoracolumbar region. One study of seventy intubated trauma patients found that thin-slice CT discovered 100% of unstable fractures, compared with 56-80% (depending on spinal level) seen by conventional radiographs (Herzog, 2004). A prospective evaluation of 1,915 trauma patients presenting to a Level I trauma center compared the sensitivity of CT versus plain film in detecting the 78 thoracic or lumbar spine fractures sustained by the group: CT sensitivity was 97% and 95% for thoracic and lumbar injuries, respectively, versus 62% and 82% for plain films (Sheridan, 2003). Yet another study in high-acuity trauma patients found the sensitivity of CT to be 97% and that of plain films to be an abysmal 33.3% (Wintermark, 2003). In a retrospective review of 3,537 patients, the only fractures missed by CT scan were a cervical compression fracture identified on MRI and a thoracic compression fracture identified by plain films; this study concluded that plain films of the spine are unnecessary in the evaluation of blunt trauma patients. Furthermore, after a panel review of the literature, the American College of Radiology Appropriateness Criteria recommended that patients with potential thoracic or lumbar spine injury undergo CT scan rather than plain films (Daffner, 2007). The ACR grades on a scale of 1-9, with 1-3 usually not appropriate, 4-6 possibly appropriate, and 7-9 usually appropriate; its recommendation for CT in this indication is a 9.

Given these data, most patients should undergo CT imaging for possible spinal trauma. Only the lowest-risk patients who have adequate plain films should be cleared without CT imaging.

2. How do you treat neurogenic shock?

Neurogenic shock is a form of distributive shock unique to patients with spinal cord injuries. Fewer than 20% of patients with a cervical cord injury have the classic picture of neurogenic shock upon arrival to the emergency department, and it is a relatively uncommon form of shock overall (Guly, 2008). Patients with injuries at T4 or higher are most likely to be affected (Wing, 2008). It is caused by loss of sympathetic tone, leaving vagal tone unopposed (Stein, 2012). The terms “spinal shock” and “neurogenic shock” are often used interchangeably, although they are separate entities. Spinal shock is the loss of sensation and motor function immediately following a spinal cord injury, during which reflexes are depressed or absent distal to the site of the injury; it may last from several hours to several weeks post injury (Nacimiento, 1999).

Neurogenic shock presents with bradycardia and hypotension (Grigorean, 2009). Bradycardia is typically absent in other forms of shock and may provide a clue that a patient has sustained a spinal cord injury. However, emergency physicians must first rule out hemorrhagic shock, even in patients with bradycardia, as many patients with hemorrhagic shock are not tachycardic (Stein, 2012). Cardiac dysfunction is another feature of neurogenic shock, and patients may present with dysrhythmias following injury to the spinal cord (Grigorean, 2009).

The American Spinal Injury Association (ASIA) classifies injuries based on motor and sensory findings at the time of injury. ASIA A and B injuries are the most severe: A is complete motor and sensory loss with no preserved function in the sacral segments S4-S5, while B denotes sacral sparing, meaning preserved S4-S5 function (Marino, 2003). Although neurogenic shock is rarely encountered in the emergency department, it is important to recognize that almost 100% of patients who sustain complete motor cervical ASIA A or B injuries develop bradycardia, and thirty-five percent ultimately require vasopressors, so management of neurogenic shock is imperative for emergency physicians (McKinley, 2006). There is no conclusive data regarding the optimal time to start vasopressors; however, it is important to maintain appropriate hemodynamic goals in patients with spinal cord injuries.

Hemodynamic goals in patients with spinal cord injuries are unique. A systolic blood pressure < 90 mm Hg must be corrected immediately (Muzevich, 2009). The American Association of Neurological Surgeons and the Congress of Neurological Surgeons Guidelines for the Acute Management of Spinal Cord Injuries both recommend maintaining a MAP of 85 to 90 mm Hg for the first seven days following a spinal cord injury, based on observational descriptions of hemodynamics in spinal cord injured patients (Levi, 1993; Licina, 2005).
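The guideline targets refer to mean arterial pressure, which at the bedside is commonly estimated from cuff pressures as MAP ≈ DBP + (SBP − DBP)/3. A quick sketch with illustrative values:

```python
def mean_arterial_pressure(sbp: float, dbp: float) -> float:
    """Common bedside estimate: MAP is approximately the diastolic
    pressure plus one third of the pulse pressure."""
    return dbp + (sbp - dbp) / 3.0

# A spinal cord injury patient at 100/70 has an estimated MAP of
# 80 mm Hg, below the 85-90 mm Hg goal, so pressure augmentation
# would be considered.
print(mean_arterial_pressure(100, 70))  # 80.0
```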

Patients who are suspected of being in neurogenic shock should receive adequate fluid resuscitation prior to initiating vasopressors (Wing, 2008). However, there are no current recommendations regarding the first line vasopressor for neurogenic shock (Stein, 2012). Depending on a patient’s hemodynamics, this vasopressor will likely be norepinephrine, phenylephrine, or dopamine.

Norepinephrine is an excellent first-line vasopressor in neurogenic shock: its combination of alpha and some beta activity improves both blood pressure and heart rate (Stein, 2012). Phenylephrine is another common choice because it is easy to titrate and can be given through a peripheral line. A disadvantage of phenylephrine is that it can lead to reflex bradycardia due to its lack of beta agonism, so it may be most appropriate in patients who are not bradycardic (Wing, 2008). Dopamine is another option; however, it may lead to diuresis and ultimately worsened hypovolemia (Stein, 2012). Because it does have beta agonism, dopamine may be favored over phenylephrine in bradycardic patients (Muzevich, 2009), but it is unlikely to be tolerated in patients who are experiencing dysrhythmias.

3. What is your management and disposition for elderly patients with vertebral compression fractures?

Vertebral compression fractures of the thoracic and lumbar vertebrae are extremely common in the elderly population, with an incidence of 1.5 million vertebral compression fractures per year (Barr, 2000). They are most commonly seen in patients with osteoporosis, although they may be seen in younger patients, particularly those with malignancy. The majority of patients are treated non-surgically, usually with bed rest and hyperextension bracing (Gardner, 2006). Pain is typically the presenting symptom, and neurologic deficits are rare unless there is retropulsion of bone into the vertebral canal. This presentation is rare in compression fractures, but it does constitute a surgical emergency (Kavanagh, 2013).

Although some patients may experience only mild symptoms related to a vertebral compression fracture, many will have a significant degree of pain and decreased quality of life associated with their fracture (Adachi, 2002). Patients with very mild symptoms and a normal neurologic exam may be discharged home with adequate pain control and established spine surgery follow-up.

Therapy should be tailored toward avoiding prolonged bed rest and ensuring adequate pain control (Wong, 2013). Prolonged immobilization may lead to poor pulmonary toilet, venous thromboembolism, and deconditioning, especially in elderly patients. Non-steroidal anti-inflammatory drugs (NSAIDs) are first-line therapy since they are non-sedating, but they may be poorly tolerated in certain groups of patients, such as the elderly or those with underlying peptic ulcer disease (Wong, 2013). Opiates and muscle relaxants may be necessary for pain control, but should be used with caution in the geriatric population, especially those at risk of falls.

Although admission of elderly patients with vertebral compression fractures may not result in surgical management, it may provide other avenues of therapy that are unavailable or difficult to arrange in the ED setting. One of these is physical therapy, which may help patients regain early mobility if introduced appropriately. Physical therapy is also helpful in training patients to strengthen supporting muscles, particularly the spine extensors (Wong, 2013). Several trials have demonstrated the effectiveness of physical therapy in patients with vertebral compression fractures. Malmros et al. evaluated a 10-week physical therapy program in a placebo-controlled, randomized, single-blinded study that demonstrated improved quality of life and reductions in pain and analgesic use (Malmros, 1998). Papaioannou et al. conducted a randomized controlled trial of a 6-month home exercise program and found that patients in the physical therapy arm had significantly improved quality of life scores and improved balance at one year (Papaioannou, 2003).

The decision of when to use a thoracolumbosacral orthosis (TLSO) brace in a patient with a vertebral compression fracture is somewhat controversial. Pfeifer et al. demonstrated in a randomized trial that the use of a brace increased trunk muscle strength and was associated with improved quality of life, decreased pain, and improved daily functioning in patients with compression fractures (Pfeifer, 2004). However, an electromyography study demonstrated increased muscle spasm in patients with brace placement (Lantz, 1986). Furthermore, braces may contribute to skin breakdown, especially in geriatric patients. If a TLSO brace is given to a patient, it should be done in consultation with a spine surgeon.

Surgical management options for vertebral compression fractures include kyphoplasty and vertebroplasty. Both procedures are minimally invasive, but are traditionally performed only if patients remain in pain several weeks after diagnosis of a compression fracture (Wong, 2013).

Patients with compression fractures need to be able to ambulate and perform activities of daily living prior to discharge from the ED. If pain limits these activities, they will likely require admission for pain control, physical therapy, and potentially rehabilitation.

4. How do you clear a C-spine after a negative CT in a trauma patient who is awake, neuro intact, wearing a collar?

According to EAST guidelines, there are multiple appropriate options in patients who are awake, neurologically intact, and still have midline tenderness after a negative CT (Como, 2009). Although CT scans will pick up the majority of injuries, it is well documented that they specifically may miss ligamentous injuries, subluxations, and dislocations (Woodring, 1992).

The first option is to obtain an MRI within 72 hours post-injury. Very little data exist in the literature regarding this option. Schuster et al. evaluated prospectively collected registry data for 2,854 blunt trauma patients, 93 of whom had a normal neurologic exam at admission, a negative CT result, and persistent C-spine pain. These patients all had an MRI, and in all 93 the MRI was negative for clinically significant injury. However, the argument could also be made that since no clinically significant injury was detected by MRI, there was no need for any further imaging (Schuster, 2005).

The second option is to continue the C-spine immobilization until there is no midline tenderness and the patient has been followed up as an outpatient. This is not ideal in centers where there is no trauma team to assist in outpatient management of these patients. Furthermore, the collar itself poses a risk of skin breakdown and decubitus ulcers when worn for a prolonged period of time. This option may work best for patients who can rapidly be seen in a trauma clinic.

The third option is to obtain flexion-extension films in patients with a negative CT of the C-spine. Although studies have evaluated the utility of flexion-extension films in patients with negative plain films of the C-spine, no study has fully evaluated flexion-extension films following a negative CT of the C-spine. Insko et al. reviewed 106 patients with negative plain films, or negative CT imaging in areas not visualized by plain films. This study demonstrated a false negative rate of zero for diagnosing C-spine fractures when flexion-extension films were performed in patients who were persistently tender (Insko, 2002).

Ultimately, more information is needed to determine the best course of action to take in a patient with persistent pain following negative CT imaging of the C-spine (Como, 2009). However, at this time, there are three potential options in ruling out a C-spine injury in these patients. The decision may largely depend on local practice patterns, clinical suspicion for injury, as well as a patient’s ability to follow up.


Spinal Cord Injury, Questions

1. What imaging do you use for patients with possible acute, traumatic spinal cord injury?

2. How do you treat neurogenic shock?

3. What is your management and disposition for elderly patients with vertebral compression fractures?

4. How do you clear a C-spine after a negative CT in a trauma patient who is awake, neuro intact, wearing a collar?


Infectious Diseases, “Answers”

1. Which patients with neutropenic fever do you consider for outpatient management?

Neutropenic fever is a common presentation to the Emergency Department, especially at tertiary hospitals where many oncology patients are undergoing chemotherapy. According to the Infectious Diseases Society of America (IDSA), fever in neutropenic patients is defined as a single oral temperature of >38.3°C (101°F) or a temperature of >38.0°C (100.4°F) sustained for >1 hour. Rectal temperature measurements (and rectal exams) are not recommended by the IDSA, to prevent colonizing gut organisms from entering the surrounding mucosa and soft tissues (Freifeld, 2011). The definition of neutropenia varies from institution to institution; the IDSA defines it as an absolute neutrophil count (ANC) <500 cells/microL, or an ANC that is expected to decrease to <500 cells/microL over the next 48 hours. Profound or severe neutropenia occurs when the ANC is <100 cells/microL. The National Cancer Institute defines neutropenia as an ANC <1000 cells/microL (HHS 2010).
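These threshold definitions translate directly into a simple classifier. A sketch (hypothetical helper names; temperature in °C, ANC in cells/microL; the "expected to fall below 500 within 48 hours" clause is not modeled):

```python
def is_neutropenic_fever(temp_c: float, sustained_1hr: bool, anc: int) -> bool:
    """IDSA definitions quoted above: fever = single oral temp > 38.3 C,
    or > 38.0 C sustained for more than 1 hour; neutropenia = ANC < 500."""
    fever = temp_c > 38.3 or (temp_c > 38.0 and sustained_1hr)
    return fever and anc < 500

def neutropenia_severity(anc: int) -> str:
    if anc < 100:
        return "profound/severe"
    if anc < 500:
        return "neutropenic (IDSA)"
    if anc < 1000:
        return "neutropenic (NCI definition)"
    return "not neutropenic"

print(is_neutropenic_fever(38.5, False, 300))  # -> True
```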

Patients with neutropenic fever are usually started on broad-spectrum IV antibiotics and admitted to the hospital; however, there is a subgroup of patients who can be safely managed as outpatients. The official wording from the IDSA guidelines is that “Carefully selected low-risk patients may be candidates for oral and/or outpatient empirical antibiotic therapy (B-I)”. Grade B is defined as moderate evidence to support a recommendation for use, and Level I is evidence from ≥1 properly randomized, controlled trial. The data from which these recommendations were derived include one large series of patients in which oral outpatient treatment for low-risk fever and neutropenia was deemed successful in 80% of patients, with 20% of patients requiring readmission. Factors predicting readmission included age >70, mucositis grade >2, poor performance status, and ANC <100 cells/microL at onset of fever (Escalante 2006). Klastersky et al. studied 178 low-risk patients who were treated with oral antibiotics; only 3 patients were readmitted, resulting in a 96% success rate (Klastersky 2006).

The IDSA formally risk stratifies patients using the Multinational Association for Supportive Care in Cancer (MASCC) scoring system. The adult guidelines from Australia, the European Society for Medical Oncology (ESMO), and the American Society of Clinical Oncology (ASCO) also recommend use of the MASCC index (Gea-Banacloche 2013). Low-risk patients have a MASCC score ≥21.


The index has been validated in multiple settings and performs well, although it may function better in solid tumors than in hematologic malignancies (Klastersky 2013). One issue is that the major criterion “burden of febrile neutropenia” has no standardized definition, making uniform application of the MASCC confusing (Kern 2006).
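For reference, the MASCC items and weights as commonly tabulated (maximum score 26; treat the exact weights as something to verify against the original MASCC derivation, and note the function name and argument names are mine) can be scored as follows:

```python
def mascc_score(burden: str, no_hypotension: bool, no_copd: bool,
                solid_tumor_or_no_prior_fungal: bool, no_dehydration: bool,
                outpatient_at_onset: bool, age_under_60: bool) -> int:
    """Weights as commonly tabulated; 'burden of febrile neutropenia'
    scores 5 for no/mild symptoms, 3 for moderate, 0 for severe."""
    score = {"none_or_mild": 5, "moderate": 3, "severe": 0}[burden]
    score += 5 * no_hypotension                    # SBP > 90 mmHg
    score += 4 * no_copd
    score += 4 * solid_tumor_or_no_prior_fungal    # or heme malignancy w/o prior fungal infection
    score += 3 * no_dehydration                    # none requiring parenteral fluids
    score += 3 * outpatient_at_onset
    score += 2 * age_under_60
    return score

s = mascc_score("none_or_mild", True, True, True, True, True, True)
print(s, "low risk" if s >= 21 else "high risk")  # -> 26 low risk
```

The ≥21 cutoff is the low-risk threshold cited above.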

Based on the best available data prior to the 2010 guidelines release, the IDSA developed an algorithm to risk stratify neutropenic fever patients and determine their appropriate management:



Ciprofloxacin plus amoxicillin-clavulanate is recommended for oral empirical treatment (Freifeld 2010). Other oral regimens, including levofloxacin or ciprofloxacin monotherapy or ciprofloxacin plus clindamycin, are less well studied but are commonly used. In low-risk patients, the risk of invasive fungal infection is low, and therefore routine use of empirical antifungal therapy is not recommended. Respiratory virus testing and chest radiography are indicated for patients with upper respiratory symptoms and/or cough.

A systematic review and meta-analysis of 14 RCTs was published in 2011 and was not, therefore, included in the 2010 guidelines (Teuffel 2011). The meta-analysis concluded that inpatient versus outpatient management was not significantly associated with treatment failure, that mortality did not differ between the two groups, and that outpatient oral and outpatient parenteral antibiotics were similarly efficacious, with no association between route of administration and treatment failure.

It must be emphasized that patients treated as outpatients need easily accessible and very close follow-up with their oncologists. They should be vigilantly examined for a source of infection, including a thorough skin, mucosal, and neurologic exam, as any obvious focal infection may necessitate inpatient treatment.

Bottom Line: Based on the IDSA guidelines and other meta-analyses, it is reasonable to treat a certain subset of low risk febrile neutropenic patients with oral antibiotics as an outpatient with good follow up.

2. Which patients with community-acquired pneumonia do you admit?

Community-acquired pneumonia (CAP) is defined as an acute infection of the pulmonary parenchyma in a patient who has acquired the infection in the community, as distinguished from hospital-acquired (nosocomial) pneumonia, which occurs 48 hours or more after hospital admission and is not present at time of admission. A third category of pneumonia, designated “healthcare-associated pneumonia,” is acquired in other healthcare facilities such as nursing homes, dialysis centers, and outpatient clinics or within 90 days of discharge from an acute or chronic care facility. The most common validated prediction rules for prognosis in community-acquired pneumonia include the Pneumonia Severity Index, CURB, and CURB-65 severity scores.



According to the Pneumonia Severity Index (PSI), patients in risk classes I-III are defined as low risk for short-term mortality and are considered for outpatient treatment. In the original derivation study of the PSI, mortality ranged from 0.1 to 0.4 percent for class I patients, from 0.6 to 0.7 percent for class II, and from 0.9 to 2.8 percent for class III (Fine 1997). This older but well validated rule can be difficult to remember and apply; as a result, the relatively simpler CURB score, with a total ranging from 0-4, was described and externally validated (Ewig 2004). A modified version, the CURB-65, which added age ≥65 as an additional risk factor, was internally validated to stratify short-term mortality for patients with CAP (Lim 2003). Patients scoring CURB <1 and CURB-65 <2 are considered low risk and candidates for outpatient treatment. The CURB and CURB-65 scores are easier to remember and apply; however, they require laboratory data (BUN), whereas the lowest PSI risk class (class I) can be assigned without blood draws, a benefit appreciated in outpatient settings.
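The CURB-65 criteria are simple enough to compute in a few lines. A sketch using the usual thresholds (Confusion, urea >7 mmol/L ≈ BUN >19 mg/dL, Respiratory rate ≥30, SBP <90 or DBP ≤60, age ≥65; one point each, with <2 as the low-risk cutoff discussed above — the function name is mine):

```python
def curb65(confusion: bool, bun_mg_dl: float, resp_rate: int,
           sbp: int, dbp: int, age: int) -> int:
    """One point per criterion; a score < 2 is the low-risk cutoff."""
    return (int(confusion)
            + int(bun_mg_dl > 19)          # urea > 7 mmol/L
            + int(resp_rate >= 30)
            + int(sbp < 90 or dbp <= 60)
            + int(age >= 65))

score = curb65(False, 12, 18, 128, 76, 45)
print(score, "outpatient candidate" if score < 2 else "consider admission")
# -> 0 outpatient candidate
```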

The three prediction rules were pitted against each other in a prospective study of over 3000 patients with community-acquired pneumonia from 32 hospital EDs to see which one was better at predicting 30-day mortality (Aujesky, 2005). Inclusion criteria included age ≥ 18 with a clinical diagnosis of pneumonia and new radiographic pulmonary infiltrate. Exclusion criteria included hospital acquired pneumonia, immunosuppression, psychosocial problems incompatible with outpatient treatment, or pregnancy.

The PSI classified a significantly greater proportion of patients as low risk (68%) than the CURB (51%) or the CURB-65 (61%). Among patients classified as low risk by the PSI (class I-III), aggregate 30-day mortality was 1.4%, lower than the 1.7% mortality in both the CURB (score <1) and CURB-65 (score <2) low-risk groups. High-risk patients based on the PSI (class IV-V) had a higher mortality of 11.1%, compared with 7.6% for high-risk CURB (≥1) and 9.1% for high-risk CURB-65 (≥2).

The PSI had a slightly higher sensitivity and negative predictive value across each risk cut-off point compared to the CURB and CURB-65. In addition, by comparing the areas under the receiver operating characteristic (ROC) curves, the PSI had a statistically significantly greater discriminatory power to predict 30-day mortality. CURB-65 showed a higher overall discriminatory power than the original CURB score.

Based on an estimated 4 million annual cases of community-acquired pneumonia in the USA (DeFrances, 2007) and average inpatient versus outpatient care costs of $7500 vs $264, using the PSI would identify an additional 650,000 low-risk patients vs. the CURB and 250,000 vs. the CURB-65, saving a significant amount of healthcare dollars while still maintaining a low 30-day mortality rate (Aujesky, 2005).
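The size of that savings is straightforward arithmetic from the figures above (a back-of-the-envelope sketch, assuming every additional low-risk patient is actually treated as an outpatient):

```python
inpatient_cost, outpatient_cost = 7500, 264
savings_per_patient = inpatient_cost - outpatient_cost  # $7,236

extra_low_risk_vs_curb = 650_000    # additional low-risk patients, PSI vs CURB
extra_low_risk_vs_curb65 = 250_000  # additional low-risk patients, PSI vs CURB-65

print(f"vs CURB:    ${savings_per_patient * extra_low_risk_vs_curb:,}")
print(f"vs CURB-65: ${savings_per_patient * extra_low_risk_vs_curb65:,}")
# roughly $4.7 billion vs CURB and $1.8 billion vs CURB-65
```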

The 2007 (most current) Infectious Diseases Society of America (IDSA) recommendations on managing CAP revolve around the initial assessment of severity. “Severity-of-illness scores such as the CURB-65 criteria, or prognostic models, such as the Pneumonia Severity Index (PSI), can be used to identify patients with CAP who may be candidates for outpatient treatment. (Strong recommendation; level I evidence).” In addition to objective data, the IDSA also recommends supplementing with subjective factors, including the ability to safely and reliably take oral medications and the availability of outpatient support resources (Strong recommendation; level II evidence). (Mandell, 2007).

Bottom Line: Based on the validated prognostic scoring systems and recommendations from the IDSA, a subgroup of low-risk patients with community acquired pneumonia can safely be managed as outpatients. Their prognosis can be reliably predicted by 3 scoring systems, where the PSI performs slightly better but is more complex to apply than the CURB and CURB-65.

3. Which patients with influenza do you treat with oseltamivir?

Neuraminidase inhibitors, specifically oseltamivir, have been increasingly used in the last 5 years for the treatment of patients with symptoms of influenza. In fact, between May and December of 2009, around 18.3 million prescriptions were written for the drug in seven countries (Australia, Canada, France, Germany, Japan, UK and USA) (Muthuri 2014). Additionally, hundreds of millions of doses have been stockpiled in various countries as a safeguard against pandemic influenza, and the World Health Organization (WHO) lists oseltamivir as an essential drug. Physicians have been encouraged to prescribe oseltamivir by drug companies, professional society recommendations, hospitals, and patient pressure. Despite widespread use, the benefit of oseltamivir has never rested on firm evidence-based grounds, largely because of the reluctance of the pharmaceutical giant Roche to release all relevant study data on the drug. This changed in 2013, when all of the data were made available for analysis.

Before we delve into the recently released data surrounding oseltamivir, let’s look at the prior recommendations and their basis. In 2009, the BMJ published a review of a number of observational studies looking at the effect of oseltamivir in the treatment of influenza (Freemantle 2009). This publication examined the randomized controlled studies provided by Roche at that time. The limited available evidence supported a role for oseltamivir in reducing the rate of post-influenza pneumonia in otherwise healthy adults; there was no evidence of a mortality benefit, and safety data were limited (Freemantle 2014). Additionally, the available data supported using oseltamivir for chemoprophylaxis in patients at risk of exposure. A Cochrane group review in 2010 echoed these results but also stated that there was extensive bias in the available studies, and that without full disclosure of all the research no strong recommendation could be made (Jefferson 2014). In spite of the limited evidence, broad recommendations were made, including treatment of patients with multiple comorbidities, pregnant patients, and the immunocompromised, as well as chemoprophylaxis for close contacts (Harper 2009).

In 2013, Roche released all of the study data. The Cochrane Respiratory group subsequently published an updated systematic review of all of the randomized controlled trials (Jefferson 2014), as well as a summary statement in the BMJ (Jefferson 2014). A number of statements from the prior review stand: there are minimal data on efficacy and safety in pregnant patients, and no mortality benefit was seen. The group did not find a reduction in post-influenza pneumonia and posited that the prior finding was likely due to publication bias. The table below summarizes the major outcomes of the 2014 Cochrane systematic review:

Outcome Measure | Finding
Alleviation of Symptoms | Shortened by 16.8 hrs with oseltamivir
Admission to Hospital | No difference
Reduction in Confirmed Pneumonia | No difference
Other Complications | No difference
Transmission in Prophylaxis Group | No reduction

Additionally, the group reported on a number of common side effects:

Side Effect | Result
Nausea | Increased (NNH 28)
Vomiting | Increased (NNH 22)
Psychiatric Events | Increased (NNH 94)
Headache | Increased (NNH 32)
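Number needed to harm (NNH) is simply the reciprocal of the absolute risk increase between treatment and control arms. A quick sketch with made-up illustrative event rates (not the trial data):

```python
def nnh(risk_treated: float, risk_control: float) -> float:
    """NNH = 1 / absolute risk increase. An absolute increase of about
    3.6 percentage points corresponds to an NNH of roughly 28."""
    return 1 / (risk_treated - risk_control)

# Hypothetical rates for illustration only:
print(round(nnh(0.136, 0.10)))  # -> 28
```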

Overall, we see a mild shortening in the duration of symptoms with no reduction in admissions, confirmed post-influenza pneumonia, or other complications. The same findings were seen in pediatric patients. There is minimal evidence regarding efficacy or safety in pregnancy, as pregnancy was an exclusion criterion in most of the studies. Side effects were common, and chemoprophylaxis did not reduce transmission of the disease. These results call into question the utility of oseltamivir for the treatment of influenza in any patient.

In June 2014, the PRIDE Consortium Investigators published a study challenging the Cochrane group’s findings (Muthuri 2014). In this large observational cohort (n = 29,234 patients), Muthuri et al. found an association with decreased mortality (adjusted OR = 0.81) and an additional benefit to early (<2 days) treatment versus later treatment (adjusted OR = 0.48). This study, however, has major flaws and biases that call the validity of its conclusions into question. Only 19% of the centers that were contacted agreed to contribute data to the Consortium, creating a high potential for selection bias. Additionally, the researchers did not assess the quality of the studies included in their meta-analysis (Antes 2014). Regardless, observational data should not trump the RCT data included in the Cochrane review.

Bottom Line: The best available evidence demonstrates that oseltamivir leads to a mild reduction in the duration of symptoms of influenza. There is no proven benefit for mortality, hospital admissions or confirmed influenza related complications including pneumonia. The frequency of side effects may outweigh the mild symptom reduction benefit of the drug. The results of the 2014 Cochrane meta-analysis should be used to update the current CDC recommendations.

4. Which adult patients getting worked up for a urinary tract infection do you send a urine culture on?

A urinary tract infection (UTI) is a condition in which bacteriuria is present with evidence of host invasion (dysuria, frequency, flank pain, or fever). The “gold standard” for defining significant bacteriuria is the detection of any microorganisms by suprapubic aspiration. Since this method is not typically employed, many sources use a threshold of more than 10⁵ cfu/mL on a midstream urine culture to indicate true infection.

The American College of Emergency Physicians recently kicked off the “Choosing Wisely” campaign (ACEP 2013) in an attempt to limit unnecessary testing in the Emergency Department. Although the urine culture was not one of the five tests addressed, it is one of the most commonly sent laboratory tests in the ED, making it a potential place to curb costs. In 1997, UTIs accounted for one million ED visits in the United States (Foxman 2002). Numerous publications and practice guidelines have recommended against routine urine cultures in uncomplicated UTIs. Despite this, in a 1999 survey of 269 EM physicians, 24% said they would order a urine culture on a 30-year-old non-pregnant woman with an uncomplicated UTI and dysuria of recent onset (Wigton 1999).

Werman et al. tackled the question of the “Utility of Urine Cultures in the Emergency Department” in 1986 (Werman 1986). They concluded that urine cultures should be obtained only in patients at high risk for pyelonephritis or bacteremia/urosepsis, as well as in those expected to have uncommon or resistant organisms. This paper cited studies showing that routine urine cultures in non-pregnant women with acute cystitis do not affect management. Morrow et al. showed that treatment of women seen in the ED with suspected cystitis proceeded with little attention to urine culture results, despite the fact that cultures were obtained routinely (Morrow 1976). Winickoff et al. demonstrated that patients who did not have follow-up urine cultures after a UTI had no greater risk for reinfection or complications than patients in whom follow-up cultures were obtained (Winickoff 1981). Additionally, a positive culture does not necessarily indicate an absolute need for antibiotics: in a study of 53 women with culture-proven UTIs, no patient progressed to pyelonephritis or bacteremia despite the fact that all were treated with placebo (Mabeck 1972).

In 2011, Johnson et al. addressed the question “Do Urine Cultures for Urinary Tract Infections Decrease Follow-up Visits?” (Johnson 2011). This retrospective cohort study looked at 779 female patients aged 18-65 diagnosed with a UTI or acute cystitis and treated in a family medicine clinic (exclusion criteria: pregnancy, diabetes, UTI or antibiotic use in the preceding 6 weeks, or another medical condition making the UTI complicated). The follow-up rate for patients without urine cultures was 8.4%, not statistically different from the 8.7% follow-up rate for patients with urine cultures; ordering a urine culture was not associated with a decreased rate of follow-up visits (adjusted OR 1.11 [CI 0.65-1.90]). Of all 447 urine cultures ordered, only 1 grew bacteria resistant to nitrofurantoin, a common antibiotic used in the ED for uncomplicated cystitis. A 2006 UK study found that 23 women would need urine cultures to prevent one follow-up visit from resistance-based treatment failure; thus, empiric treatment with no urine culture was recommended (McNulty 2006).

Bottom Line: Although there are no prospective randomized controlled trials looking specifically at ED patients with UTI symptoms, it is safe to say that a urine culture in healthy adult non-pregnant females with new onset urinary symptoms without concern for pyelonephritis or bacteremia is unlikely to change management or outcome.


Infectious Diseases, Questions


1. Which patients with neutropenic fever do you consider for outpatient management?

2. Which patients with community-acquired pneumonia do you admit?

3. Which patients with influenza do you treat with oseltamivir?

4. Which adult patients getting worked up for a urinary tract infection do you send a urine culture on?


Airway and Sedation, “Answers”

Question #1: Do you reach for video laryngoscopy or direct laryngoscopy first for intubations?

Tracheal intubation is a fundamental skill for EM providers to master. Historically, direct laryngoscopy (DL) has been the modality of choice for endotracheal intubation, with a proven high success rate in the ED. However, video laryngoscopy (VL) devices have become increasingly popular and prevalent. These devices have a number of potential advantages, including improved laryngeal exposure and visualization, as well as allowing more experienced practitioners to observe the procedure during training (Levitan, 2011). As VL devices gain wider use, some have called for their establishment as standard care. It is important to note that not all VL devices are equivalent: some use standard geometry blades (which allow both direct and video laryngoscopy), while others have hyperangulated blades that do not allow for direct laryngoscopy, and many devices accept standard and hyperangulated blades interchangeably. A full description of all of these devices is beyond the scope of this post.

Much of the early literature comparing VL to DL comes from observational studies. One prospective study at a level 1 trauma center enrolled all adult patients intubated in the ED over an 18-month span (Platts-Mills, 2009). Data collected included intubation indication, device used, and resident post-graduate year. The authors found no statistically significant difference in the primary outcome of first-attempt success, but noted that VL intubation required significantly more time to complete (42 vs 30 s). Another prospective study evaluating all ED intubations over a 2-year period found a statistically significant increase in first-attempt success for VL (78% vs 68%, adjusted OR 2.2), a result that was more pronounced in the subgroup of patients with pre-defined difficult airway predictors (OR 3.07) (Mosier, 2011).

Randomized controlled trial data are sparse. In 2013, Yeatts et al. published an RCT among trauma patients at a single level 1 trauma center (Yeatts 2013). Patients requiring emergent intubation were randomized to DL or VL performed by an emergency medicine or anesthesia resident with at least one year of intubation experience. The authors found no significant difference in mortality, the primary outcome, but did observe an increased median duration of intubation with VL vs DL (56 s vs 40 s), with an associated increased incidence of hypoxia (50% vs 24%). The study had a number of inherent flaws, including the fact that providers could selectively exclude patients at their discretion. Larger systematic reviews and meta-analyses have been limited by significant heterogeneity and provide similarly murky results, but suggest that VL may be the superior modality. One meta-analysis including only studies of ICU intubations found that VL reduced the risk of difficult intubation, Cormack-Lehane grade 3 and 4 views, and esophageal intubations, and increased the likelihood of first-attempt success (De Jong, 2014).

Another meta-analysis included 17 trials and 1,998 patients to compare outcomes of VL vs DL (Griesdale, 2012). The authors found no significant difference in successful first-attempt intubation or time to intubation between the GlideScope® and DL. Interestingly, they did note that successful first-attempt intubation and time to intubation were improved using the GlideScope® in two studies specifically examining “non-expert” intubators, suggesting a valuable role for VL in less-experienced hands.

Further examining the potential increased efficacy of VL for less-experienced intubators, a prospective randomized controlled trial examined 40 fresh PGY-1s across varying disciplines (Ambrosio, 2014). None of the soon-to-be residents had yet begun clinical duties, and each had performed no more than 5 live intubations in their training. After receiving training in both DL and VL, the participants were divided into groups and observed while intubating a difficult-airway manikin. The group using DL had significantly fewer successful intubations within 2 minutes (47% vs 100%) and an increased overall mean time to intubation (69 vs 23 s).

The skills required to use standard geometry blades with video closely mirror those of traditional direct laryngoscopy. The more hyperangulated the blade, the easier glottic visualization becomes but the more challenging tube delivery becomes; using a hyperangulated blade is a somewhat different procedure, requiring a different skillset, than direct laryngoscopy or video laryngoscopy with a standard geometry blade. Many other forms of VL exist, and ultimately experience with one device is not guaranteed to translate to another (Sakles & Brown, 2012). For training purposes, a number of experts including Richard Levitan and Reuben Strayer support the use of standard geometry blades with video, as they offer the benefits of video laryngoscopy while allowing training in direct laryngoscopy.

Bottom Line: Evidence suggests VL provides superior visualization compared to DL, but improved outcomes have yet to be shown. The vast majority of airway experts support extensive training with both modalities.

Question #2: Do you use cricoid pressure during induction and paralysis?

Cricoid pressure (CP) refers to the application of firm pressure to the cricoid ring after positioning the patient’s neck in the fully extended position. It’s important to note that CP is different from external laryngeal manipulation, which acts to improve the laryngeal view during direct laryngoscopy. The pressure required to occlude the esophageal lumen is 30-44 Newtons (Wraight 1993). The goal of CP is to occlude the esophageal lumen in order to prevent regurgitation and gastric insufflation during intubation, and particularly during bag mask ventilation. This maneuver is widely embraced in the anesthesiology world as standard care during induction. However, the practice of routine CP has been questioned for over a decade, and its application in the Emergency Department setting is variable.

Although CP may have been used as far back as the 1770s, the first published descriptions are from Sellick in 1961. Sellick applied CP during induction of anesthesia in 26 patients who were considered high risk for aspiration. In 3 of the patients, regurgitation occurred immediately after CP was removed (Sellick 1961). Sellick published a second article recounting a single case of a patient with CP applied whose esophagus was distended with saline solution via an esophageal tube; this patient did not regurgitate after distension (Sellick 1962). This report also contained Sellick’s personal account of 100 high-risk cases without regurgitation when CP was applied, but six patients who regurgitated after CP was removed. These studies are severely flawed: there were no comparison groups, the technique’s proponent (Sellick) was the sole studied physician, and it is unclear which patients had BMV prior to induction and intubation. Despite these shortcomings, CP was widely adopted after publication of Sellick’s studies.

Over the intervening decades, a significant amount of literature has emerged challenging the routine use of CP. There are four major issues with CP that should be addressed:

1) CP doesn’t occlude the esophagus as purported.

2) CP reduces airway patency.

3) CP obstructs the view of the airway.

4) CP has never been shown to prevent aspiration.

Let’s tackle each of these issues.

1) CP does not occlude the esophagus. This is the physiologic underpinning for the application of CP, but it was demonstrated by Sellick in only a select few cases. Subsequent literature has called this concept into question. MRI of healthy volunteers was performed with CP applied in order to better visualize the relationship of the cricoid cartilage and the esophagus (Smith 2003, Boet 2012). Both of these studies demonstrated that in many people the esophagus naturally lies lateral to the cricoid cartilage. Additionally, even in those in whom the esophagus is not lateral, CP does not occlude the esophagus but rather displaces it laterally. Rice and colleagues, however, concluded that the location and movement of the esophagus are irrelevant to the efficacy of CP. They argue that the hypopharynx and cricoid move as a unit and that the esophagus becomes compressed against the longus colli muscle. Even if this is true, compression against a muscle is more likely to be overcome by the increased pressure that occurs during vomiting. In their MRI study of 24 healthy volunteers, they state that 35% of patients had obliteration of the esophageal lumen when CP was applied (Rice 2009). However, they present no data to support this claim.

Finally, ultrasound has been used in children to demonstrate that the anatomical effect of CP makes its utility questionable. Ultrasound was applied to 55 pediatric patients with and without application of CP. At baseline, the esophagus was lateral to the airway in 61% of patients, and upon application of CP, all patients had displacement of the esophagus (Tsung 2012).

It is also important to note that the application of CP reduces lower esophageal sphincter tone, allowing for gastric insufflation. This helps to explain why Sellick witnessed regurgitation after removal of CP. Overall, CP does not appear to cause compression of the esophagus but rather lateral displacement.

2) CP reduces airway patency and 3) CP obstructs the view of the airway. Anesthesia studies in the operating room have demonstrated the effect of CP on airway patency. Allman studied 50 patients mechanically ventilated in the OR and measured expired tidal volume and peak inspiratory pressure (PIP) before and after application of CP, finding that both measures significantly worsened after CP, reflecting increased airway obstruction (Allman 1995). Palmer and Ball went a step further: they endoscopically assessed 30 anesthetized patients for airway patency with and without variable forces applied to the cricoid cartilage. They found that as force increased, there was greater cricoid deformation and an increasing likelihood of vocal cord closure and difficult ventilation (MacG Palmer 1999). At the recommended 44 N of pressure, 86% of men and 100% of women experienced difficulty with ventilation. Additionally, at this force, 26.6% of men and 78.5% of women had 100% cricoid deformation. CP also worsens the laryngoscopic view and compromises ideal intubating conditions (Haslam 2005). In a study of 33 OR patients, full vocal cord visualization was reduced from 91% to 67% with application of CP; CP compressed the vocal cords in 27% of patients and impeded tracheal tube placement in 15% (Smith 2002). Finally, CP has also been shown to result in a worse glottic view during video laryngoscopy (Oh 2013). Overall, CP interferes with “all aspects of airway management” (Priebe 2012).

4) CP has never been shown to prevent aspiration. There are numerous cases reported in the literature of patients who aspirated with CP in place. Perhaps the best literature on this comes from a 2009 retrospective, observational study from Africa. This study looked at 5,000 patients undergoing C-sections; 61% of these patients had CP applied, and 24 vomited during induction. Overall, there were 11 deaths attributed to aspiration, 10 of which occurred in the CP group (Fenton 2009).

CP doesn’t do what it’s supposed to do. It does not occlude the esophagus to prevent aspiration, but simply displaces the esophagus laterally. Application makes ventilation more difficult because it collapses the airway, the view of the cords is compromised, and intubating conditions are worsened. Some have suggested applying CP initially and removing it if the laryngoscopic view is poor or BMV is difficult. However, lower esophageal sphincter relaxation and gastric insufflation during CP application increase the risk of regurgitation after removal of CP, as witnessed by Sellick.

Bottom Line: In spite of over 50 years of application, there is minimal evidence supporting either the pathophysiologic basis or the clinical utility of CP. CP also appears to decrease the likelihood of first-pass success. CP should not be performed routinely. External laryngeal manipulation, either by the operator or an assistant, may improve an otherwise suboptimal laryngeal view.

Question #3: How long do you keep patients NPO prior to procedural sedation?

Procedural sedation (PS) describes the use of a sedative or dissociative anesthetic to elicit a depressed level of consciousness that allows an unpleasant medical procedure to be performed with minimal patient reaction or memory. Unlike general anesthesia, PS agents and doses are chosen to maintain cardiorespiratory function and avoid endotracheal tube placement or other advanced airway adjuncts (Tintinalli, 2011). As the airway is not definitively protected, aspiration, or the inhalation of gastric contents into the respiratory tract, during the procedure is a potential adverse outcome with significant associated morbidity. Guidance on how to reduce aspiration risk has centered on pre-procedural fasting, though the optimal prescribed fasting times differ. Many Emergency Physicians question whether pre-procedural fasting actually provides any increased protection (Strayer, 2014).

Additionally, there are significant harms to delaying a procedure for fasting. Fractures and dislocations place the neurovascular supply at increased risk, and procedures may become more difficult to perform. Finally, prolonged fasting times increase ED length of stay. While fasting’s potential harms have been less studied than its efficacy, they should be kept in mind as the literature is examined (Godwin, 2014).

Much of the historical evidence regarding intra-procedural aspiration has come from the Anesthesia and Surgery literature (Green, 2002). One of the earliest reported cases of aspiration of gastric contents as a complication of general anesthesia comes from 1848, when a 15-year-old girl died 2 minutes after beginning to inhale chloroform in preparation for the removal of a toenail. The patient was sitting upright in an operating chair and was not observed vomiting, but because the autopsy revealed a food-distended stomach, aspiration was surmised to be a potential cause of death (Maltby, 1990). Later, animal experiments involving the direct introduction of gastric aspirate into tracheas (Mendelson, 1946) suggested the danger of aspiration, and the concept of pre-procedural fasting gained acceptance.

Recent Anesthesia guidelines for preoperative fasting recommend a minimum fasting period of 2 hours following ingestion of clear liquids, 4 hours following breast milk, and 6 hours following infant formula or a light meal (Apfelbaum, 2011). This recommendation is noted to apply to healthy patients undergoing elective procedures. It is important to note that adhering to the recommended fasting times does not guarantee an empty stomach; underlying co-morbid conditions, pain, and a number of other factors affect gastric emptying. As procedural sedation has become a common occurrence in the Emergency Department (ED), the question has arisen of how to translate anesthesia guidelines into Emergency Medicine practice.

Recent Emergency Medicine recommendations propose that maximal sedation depth be based on risk stratification by the type of liquid or food intake, the urgency of the procedure, and the risk of aspiration (Green, 2007). These authors acknowledged that their consensus recommendations stemmed in part from the general anesthesia literature. General anesthesia practice involves scenarios at higher risk for aspiration than ED PS, yet aspiration incidence remains low. Previously, Green et al suggested several reasons why ED PS is potentially safer than general anesthesia, including 1) not routinely placing an endotracheal tube, 2) maintenance of protective airway reflexes, and 3) not using pro-emetic inhalation anesthetics. In their 2007 recommendations, they suggest responsible consideration of the risks and benefits of pre-procedural fasting, though they ultimately note a paucity of literature suggesting more than a theoretical aspiration risk in ED PS.

Multiple studies in the Emergency Medicine literature have not supported a relationship between fasting state and procedural sedation-related aspiration. Agrawal et al conducted a prospective case series enrolling all consecutive patients in a children’s hospital ED who underwent PS, recording pre-procedural fasting state and adverse events (Agrawal, 2003). Of the 905 patients with available data, 509 (56%) did not meet established fasting guidelines. Thirty-five (6.9%) of these 509 patients had minor adverse effects, compared to 32 (8.1%) of the 396 patients who did meet fasting guidelines. No significant difference was found in median fasting duration between the two patient groups.

Three trials involving pediatric patients undergoing procedural sedation with varying sedation agents examined fasting time and adverse effects (Roback, 2004; Treston, 2004; Babl, 2005). No statistically significant relationship was found between incidence of emesis or adverse effects and fasting time (Roback, Treston) or whether fasting guidelines were met (Babl). No episodes of aspiration were reported in any of the three studies.

Bell et al conducted a prospective observational series of 400 adult and pediatric patients undergoing procedural sedation with propofol, measuring the percentage of patients who met ASA fasting guidelines and examining adverse outcomes (Bell, 2007). They found that 70.5% of those enrolled did not meet ASA fasting guidelines. There was no statistically significant association between fasting status and adverse events (emesis, respiratory interventions). Additionally, there were no aspiration events in either group.

In 2014, an ACEP Clinical Policy committee reviewed these studies and ultimately questioned the utility of fasting prior to procedural sedation (Godwin, 2014). In a Level B evidence-based recommendation, they advised against delaying procedural sedation in the ED based on fasting time, as “preprocedural fasting for any duration has not demonstrated a reduction in the risk of emesis or aspiration when administering procedural sedation and analgesia.” The Clinical Policy also recognized a dearth of study on the potential harms of delayed procedural sedation, including pediatric hypoglycemia and worsening pathology.

Bottom Line: There is no evidence supporting delay of procedural sedation and analgesia based on fasting state in order to reduce the risk of vomiting and aspiration. The potential risk of aspiration involves multiple patient factors and should be considered on a case-by-case basis, weighed against the harms associated with delaying the sedation and procedure.

Question #4: When using ketamine for procedural sedation do you pretreat with benzodiazepines or anticholinergics?

Ketamine is a dissociative sedative-analgesic commonly used for painful or emotionally stressful procedures. When used at its dissociative dose of 1-2mg/kg IV (or 3-4 mg/kg IM), it is thought to exert its effects by effectively disconnecting the limbic and thalamocortical systems, leaving patients unaware of and unresponsive to external stimuli. Unlike other procedural sedation medications, respiratory status is maintained, making it a critical medication in the pediatric and adult Emergency Department. (Green, 2011)

As with any medication, ketamine is not without potential complications. Increased salivation and post-procedure emergence reactions are two concerning potential adverse outcomes, and anticholinergics and benzodiazepines, respectively, have been used as pre-treatment to blunt or prevent these effects (Haas, 1992; Strayer, 2008). Though the pharmacologic reasoning is sound, and each medication has been shown to work as treatment once patients become symptomatic, their utility as routine pre-treatment is questionable.

Atropine and glycopyrrolate have commonly been administered to prevent hypersalivation and resulting adverse airway events, though their use by physicians has proven inconsistent. One prospective observational study (Brown, 2008) in a pediatric ER tracked the frequency of atropine pre-treatment and associated hypersalivation in 1,080 ketamine sedations over a 3-year period. Most (87%) of the patients in the study were not pretreated with an anticholinergic. Of the patients who received no pre-treatment, 92% were described as having no excess salivation. The authors concluded that atropine was not routinely required for prophylaxis.

A secondary analysis (Green, 2010) seemed to confirm these findings. Examining 8,282 ED ketamine sedations in pediatric patients from 32 previous series, this study found no statistically significant reduction in the number of adverse respiratory or airway events based on whether patients received atropine versus no anticholinergic drug. Interestingly, patients who received glycopyrrolate were actually found to have a significantly increased number of airway and respiratory events as defined by authors of the original studies. Taking these and other studies into account, a recent ACEP Clinical Policy on ketamine did not recommend the routine use of anticholinergics as pretreatment in adults or children. (Green, 2011)

Benzodiazepine pretreatment for the prevention of emergence reactions has been commonly recommended but erratically applied. A meta-analysis of 32 ED studies involving ketamine in pediatric patients (Green, 2009) was conducted to determine which clinical variables predict recovery agitation. The authors found that 7.6% of patients experienced an emergence reaction, though only 1.4% were judged to have “clinically significant” agitation. No apparent benefit or harm from pre-administered benzodiazepines was found.

It has been suggested that emergence reactions are more frequent in adults than in children, and thus pre-treatment with benzodiazepines would prove more useful in this population. A double-blind randomized controlled trial pretreated 182 adult subjects receiving varying doses of ketamine with 0.03 mg/kg IV midazolam vs placebo (Sener, 2011).

Though the authors did not specify the intensity of the reactions experienced, they did find a significant decrease in recovery agitation with midazolam. An alternative to benzodiazepine prophylaxis is pre-emergence or PRN benzodiazepine use (Strayer 2008). The current ACEP Clinical Policy recommends against the routine use of benzodiazepines in children but leaves the recommendation ambiguous for adults.

Bottom Line: Anticholinergics are not routinely needed for premedication in ketamine sedations. Benzodiazepines can be administered to adults but are not recommended routinely for children. Both medications should be available to use as PRN treatment.


Sellick BA. Cricoid pressure to control regurgitation of stomach contents during induction of anaesthesia. Lancet 1961; 404-6.

Sellick BA. The prevention of regurgitation during induction of anaesthesia. First Eur Congress Anaesthesiol. 1962;89:1-4.

Smith et al. Cricoid pressure displaces the esophagus: an observational study using magnetic resonance imaging. Anes 2003; 99(1): 60-4.

Boet S et al. Cricoid pressure provides incomplete esophageal occlusion associated with lateral deviation: an MRI study. JEM 2012; 42(5): 606-11.

Rice et al. Cricoid pressure results in compression of the postcricoid hypopharynx: the esophageal position is irrelevant. Anesth Analg 2009; 109(5): 1546-52.

Tsung WJ et al. Dynamic anatomic relationship of the esophagus and trachea on sonography: Implications for endotracheal tube confirmation in children. J Ultrasound Med 2012; 31: 1365-70.

Allman KG. The effect of cricoid pressure application on airway patency. J Clin Anes 1995; 7: 197-9.

Palmer JH, Ball DR. The effect of cricoid pressure on the cricoid cartilage and vocal cords: an endoscopic study in anaesthetized patients. Anaesthesia 2000;55:253–8

Haslam N, Parker L, Duggan JE. Effect of cricoid pressure on the view at laryngoscopy. Anaesthesia 2005; 60: 41-7.

Smith CE, Boyer D. Cricoid pressure decreases ease of tracheal intubation using fiberoptic laryngoscopy. Can J Anesth 2002; 49(6): 614-9.

Oh J et al. Videographic analysis of glottic view with increasing cricoid pressure. Ann of EM 2013; 61: 407-13.

Priebe HJ. Use of cricoid pressure during rapid sequence induction: Facts and fiction. Trends in Anes Crit Care 2012: 123-7.

Fenton PM, Reynolds F. Life-saving or ineffective? An observational study of the use of cricoid pressure and maternal outcome in an African setting. Int J Obstet Anes 2009; 18: 106-110.

Agrawal D, Manzi SF, Gupta R, Krauss B. Preprocedural fasting state and adverse events in children undergoing procedural sedation and analgesia in a pediatric emergency department. Ann Emerg Med. 2003; 42(5): 636-646.

Apfelbaum, J.I., Caplan, R.A., Connis, R.T. et al. Practice guidelines for preoperative fasting and the use of pharmacologic agents to reduce the risk of pulmonary aspiration: application to healthy patients undergoing elective procedures. An updated report by the American Society of Anesthesiologists Committee on Standards and Practice Parameters. Anesthesiology. 2011; 114: 495–511

Babl FE, Puspitadewi A, Barnett P, et al. Preprocedural fasting state and adverse events in children receiving nitrous oxide for procedural sedation and analgesia. Pediatr Emerg Care. 2005;21:736-743.

Bell A, Treston G, McNabb C, et al. Profiling adverse respiratory events and vomiting when using propofol for emergency department procedural sedation. Emerg Med Australas. 2007;19:405-410.

Godwin SA, Burton JH, Gerardo CJ, et al: Clinical policy: procedural sedation and analgesia in the emergency department. Ann Emerg Med 2014 Feb; 63(2): 247-58

Green SM, Krauss Baruch. Pulmonary aspiration risk during emergency department procedural sedation–an examination of the role of fasting and sedation depth. Acad Emerg Med. 2002 Jan;9(1):35–42

Green SM, Roback MG, Miner JR, et al: Fasting and emergency department procedural sedation and analgesia: A consensus-based clinical practice advisory. Ann Emerg Med 49: 454, 2007

Maltby JR. Early reports of pulmonary aspiration during general anesthesia [letter]. Anesthesiology. 1990; 73:792–3.

Mendelson CL. The aspiration of stomach contents into the lungs during obstetric anesthesia. Am J Obstet Gynecol. 1946; 52:191 – 204.

Miner JR. Chapter 41. Procedural Sedation and Analgesia. In: Tintinalli JE, Stapczynski J, Ma O, Cline DM, Cydulka RK, Meckler GD, T. eds. Tintinalli’s Emergency Medicine: A Comprehensive Study Guide, 7e. New York, NY: McGraw-Hill; 2011

Roback MG, Bajaj L, Wathen JE, et al. Preprocedural fasting and adverse events in procedural sedation and analgesia in a pediatric emergency department: are they related? Ann Emerg Med. 2004;44:454-459.

Strayer, Reuben. “The Harms of Fasting.” EM Updates. 16 May 2014.

Treston G. Prolonged pre-procedure fasting time is unnecessary when using titrated intravenous ketamine for paediatric procedural sedation. Emerg Med Australas. 2004;16:145-150.

Brown L, Christian-Kopp S, Sherwin TS, et al. Adjunctive atropine is unnecessary during ketamine sedation in children. Acad Emerg Med. 2008;15:314-318.

Green SM, Roback MG, Krauss B, et al. Predictors of emesis and recovery agitation with emergency department ketamine sedation: an individual-patient data meta-analysis of 8,282 children. Ann Emerg Med. 2009;54:171-180

Green SM, Roback MG, Krauss B. Anticholinergics and ketamine sedation in children: a secondary analysis of atropine versus glycopyrrolate. Acad Emerg Med. 2010;17:157-162.

Green SM, Roback MG, Kennedy RM, Krauss B (2011) Clinical practice guideline for emergency department ketamine dissociative sedation: 2011 update. Ann Emerg Med 57: 449–461

Haas DA, Harper DG. Ketamine: a review of its pharmacologic pro- perties and use in ambulatory anesthesia. Anesth Prog 1992;39:61-8.

Sener S, Eken C, Schultz CH, et al. Ketamine with and without midazolam for emergency department sedation in adults: a randomized controlled trial. Ann Emerg Med. 2011;57:109-114.

Strayer RJ, Nelson LS. Adverse events associated with ketamine for procedural sedation in adults. Am J Emerg Med. 2008; 26(9): 985-1028.


Questions, Airway and Sedation 2014

1. Do you reach for video laryngoscopy or direct laryngoscopy first for intubations?

2. Do you use cricoid pressure during induction and paralysis?

3. How long do you keep patients NPO prior to procedural sedation?

4. When using ketamine for procedural sedation do you pretreat with benzodiazepines or anticholinergics?


Seizure, “Answers”

1. Which benzodiazepine do you prefer for the treatment of status epilepticus (SE)? Which do you prefer for pediatric patients?

An epileptic seizure (ES) is defined as an abrupt disruption in brain function secondary to abnormal neuronal firing, and is characterized by changes in sensory perception and/or motor activity. The clinical manifestations of seizures are diverse, encompassing focal or generalized motor activity, sensory or autonomic dysfunction, and mental status changes. Numerous types of seizures exist, broadly classified as simple versus complex, partial versus generalized, and convulsive versus non-convulsive. All can progress to status epilepticus (SE); this discussion pertains to convulsive SE.


SE was historically defined as any seizure activity lasting longer than thirty minutes, but is now more conservatively defined as a seizure lasting longer than five minutes, or consecutive seizures without a return to baseline in between seizures. It is important for emergency physicians to rapidly recognize and treat SE as studies estimate an associated mortality of 10-40%, depending on the etiology. (Shearer 2006) Initial interventions include evaluation of the airway, IV access, cardiac monitoring, and the administration of supplemental oxygen and antiepileptic agents. The goal is to terminate all seizure activity within sixty seconds.

Benzodiazepines potentiate GABA activity, thus decreasing neuronal firing, and are widely accepted as the preferred first-line treatment for SE. (Alldredge 2001, Leppik 1998, Treiman 1998, Brophy 2012, Shearer 2006) In addition to a favorable safety profile, benzodiazepines have the advantage of multiple routes of administration, including intravenous (IV), intramuscular (IM), and various mucosal routes. The IV route is generally preferred for speed of onset of action, however environmental circumstances and patient variables can complicate IV access, particularly in children. (Shearer 2006, Berg 2009)

In children (neonatal seizures not discussed), seizures are commonly treated via mucosal administration of benzodiazepines, particularly by parents and EMS in the pre-hospital setting. Mucosal routes include rectal diazepam and buccal or intranasal midazolam. Rectal diazepam has long been the favored drug in this setting and is FDA approved for such use. (Berg 2009) Rectal diazepam, however, has several limitations including social stigma, short duration of action, and risk of expulsion due to seizure-induced fecal incontinence. As a consequence, administration of a second agent or repeat diazepam dosing is often required, leading to increased risk of side effects and potential harm. Rectal diazepam has been proven more efficacious than placebo, but has only recently been compared to other mucosal routes of administration. (Dreifuss 1998) In 2005, a randomized controlled trial (RCT) demonstrated the superiority of buccal midazolam over rectal diazepam for seizure termination, without increasing the risk of respiratory depression. (McIntyre 2005) Several RCTs comparing intranasal midazolam to rectal diazepam show superiority for intranasal midazolam in time-to-seizure cessation. (Fisgin 2002, Bhattacharyya 2006, Holsti 2007, Holsti 2010) Given the disadvantages of rectal diazepam combined with the above evidence, buccal and intranasal midazolam should be considered viable alternatives for treatment of pediatric seizures. Regarding administration of IV benzodiazepines to children, IV lorazepam appears to be as effective as, and safer than, IV diazepam. (Appleton 1995, Appleton 2008) Further studies are needed to compare non-IV and IV routes of administration in the pediatric population, particularly with comparisons to IV lorazepam. If difficult IV access is anticipated, buccal and intranasal routes should be considered. (Ulgey 2012)

Similar to the pediatric population, diazepam was historically the benzodiazepine of choice for treatment of SE in adults. After years of research, however, lorazepam has emerged as the preferred agent due to its extended duration of anticonvulsant activity and its ability to be administered via the IM route. (Treiman 1998, Leppik 1998, Walker 1979) Large RCTs comparing benzodiazepines head-to-head, however, are limited. A 2005 Cochrane review of RCTs, evaluating three studies including 289 patients, established IV lorazepam as superior to IV diazepam for cessation of SE. The relative risk (RR) of non-cessation of seizures for lorazepam compared to diazepam was 0.64. The comparison to midazolam, however, was less clear. A single study found IV midazolam, when compared to IV lorazepam, to have an RR of 0.2 for non-cessation of seizures; the authors concluded there was a non-significant trend favoring IV midazolam over IV lorazepam. Unfortunately, much of the pediatric data is based on single studies and is not conclusive. (Prasad 2005)

In addition to a trend towards improved efficacy in SE, midazolam does not require refrigeration, lending it another advantage over lorazepam in the pre-hospital setting. To further compare these two antiepileptic agents in the pre-hospital environment, the RAMPART (Rapid Anticonvulsant Medication Prior to Arrival) study, completed in 2012, compared 10 mg IM midazolam to 4 mg IV lorazepam in a double-blinded RCT. Children over 13 kg were included in this analysis. Upon arrival to the ED, seizures were absent in 73.4% of patients in the midazolam group and in 63.4% of patients in the lorazepam group (p<0.001, primary outcome). Admission rates were also significantly lower in the midazolam group (p<0.001). Although the time-to-administration of the drug was shorter in the midazolam group, the onset of action was shorter in the lorazepam group. This study showed IM midazolam to be non-inferior to IV lorazepam when given by EMS providers prior to ED arrival. (Silbergleit 2012) Important limitations of this study include the use of an autoinjector for midazolam, as opposed to standard IM injection, and the study’s pre-hospital setting, where IV access is often more difficult to obtain. While extrapolation of this study to ED patients should be limited, IM midazolam for SE appears to be a viable option.

What do the experts say? In 2012, the Neurocritical Care Society published guidelines for the treatment of SE based on limited available evidence and consensus opinion. They recommend lorazepam as the preferred agent for IV administration, midazolam for the IM route and diazepam for the rectal route. Lorazepam, midazolam and diazepam all carry Level A recommendations for emergent treatment of SE. (Brophy 2012)

2. Which second-line agents do you use for treatment of SE?

Unless the underlying cause of SE is known and reversible by another means (e.g. metabolic derangement, toxic ingestion), the initial benzodiazepine is immediately followed by a second anti-epileptic agent. If the seizure has already been successfully terminated, the goal of this second agent is to prevent recurrence through rapid achievement of therapeutic levels of an antiepileptic drug (AED). However, if the benzodiazepine has failed, the goal is to rapidly stop all seizure activity. While the use of benzodiazepines as the first-line treatment for SE is widely accepted, there remains significant debate over what this second-line agent should be.

Phenobarbital, a long-acting barbiturate that potentiates GABA activity, is the oldest AED still in use today. Historically a first-line agent, it has fallen out of favor due to its significant adverse event profile, namely hypotension and respiratory depression. Currently, it is typically reserved for refractory SE. (Shearer 2006)

Phenytoin has emerged as the preferred second-line agent after benzodiazepines for the treatment of SE. Phenytoin prolongs inactivation of voltage-activated sodium channels, thus inhibiting repetitive neuronal firing. Although it is possible to rapidly achieve therapeutic levels of phenytoin, the drug is limited by side effects including ataxia, hypotension, cardiac dysrhythmias, and tissue necrosis secondary to extravasation. (Shearer 2006) Fosphenytoin, a prodrug of phenytoin, allows for IM administration with preserved bioavailability, but can have similar hemodynamic side effects. The combination of benzodiazepines and phenytoin is only effective in approximately 60% of patients, leaving a substantial group in SE. (Treiman 1998, Knake 2009) This, combined with its side effect profile, has led to a search for alternative second agents for the treatment of SE.

Valproic acid is an established AED used to treat many forms of seizures, and has been available for IV administration since 1996. Like phenytoin, it acts through prolonging the recovery of voltage-activated sodium channels. The efficacy of valproic acid in treating SE has been quoted as ranging from 40-80%. Its primary side effect is hepatotoxicity, either from chronic use over the first six months or as an idiosyncratic reaction. Compared to phenytoin's risk of local extravasation and significant hypotension, valproic acid is a potentially safer option for some patients. (Shearer 2006) A 2012 meta-analysis sought to compare valproic acid to other available AEDs for SE. Unfortunately, heterogeneity in defining SE and variability within the data limited its conclusions. Despite this, the authors deemed valproic acid to be as effective as phenytoin in treating SE based on three randomized studies including 256 patients. (Misra 2006, Agarwal 2007, Gilad 2008, Liu 2012) An Italian meta-analysis that same year found no difference in time to seizure cessation between valproic acid and phenytoin, with a trend towards fewer side effects with valproic acid. The authors warn, however, against over-interpretation of these data given their inherent limitations, and suggest waiting for larger RCTs before changing one's clinical practice. (Brigo 2012)

Levetiracetam is a comparatively newer medication, with an IV formulation only available since 2006. Its exact mechanism of action is unknown, but it has fewer side effects and limited drug-drug interactions when compared to the older AEDs. (Shearer 2006) Given this favorable safety profile, many have heralded levetiracetam as an ideal second agent for SE. In 2012, a review paper by Zelano et al. compared ten studies looking at levetiracetam for treatment of SE, including one prospective randomized study and a total of 334 patients. The authors found that levetiracetam had an efficacy ranging from 44% to 94%, and was not associated with any significant adverse events. Overall, however, the efficacy was significantly higher in the retrospective studies, raising concern over potential bias influencing the positive results. The single randomized study reported an efficacy of 76%; however, this group received levetiracetam as primary therapy, and it is unclear whether these patients were "less sick" and would have responded to initial benzodiazepines. (Misra 2012) Furthermore, in many of the studies, levetiracetam was used because phenytoin was contraindicated, creating another source of bias. Zelano's review concluded that despite its favorable safety profile, there is scarce evidence to support levetiracetam as a second-line agent in the treatment of SE. (Zelano 2012) More studies are needed.

Currently, using the limited data available, the Neurocritical Care Society recommends fosphenytoin as the preferred second-line agent for treatment of SE. They do allow for consideration of other agents on a case-by-case basis, including SE in patients with known epilepsy, in whom valproic acid may be preferred. An IV bolus dose of the patient's maintenance AED is also recommended in such cases. (Brophy 2012)

If SE has not resolved after administration of the second agent, the patient is considered to have refractory SE (RSE) and should receive additional treatment immediately. Continuous infusion of an AED, typically propofol, midazolam, phenobarbital, valproic acid, or high-dose phenytoin, is recommended. (ACEP 2014) Bolus doses of the infusion AED can also be given for breakthrough seizures. Available data do not support the use of one agent over another. (Brophy 2012)

Other agents may soon be available for the treatment of SE and RSE. Animal data reporting decreased GABA receptors in the setting of SE have sparked interest in targeting the NMDA receptor: if the inhibitory GABA system cannot be potentiated, perhaps antagonizing the excitatory NMDA system could be effective. Ketamine, an NMDA antagonist, has been discussed as a potential future direction in the treatment of RSE. (Kramer 2012)

3. In which adult patients with first-time seizure do you obtain emergent imaging?

Seizure is a common presentation to the ED, representing 1-2% of ED visits. Although the presentation is common, the etiology of seizure is incredibly broad, including trauma, hemorrhage, metabolic derangements, toxic exposures, infection, and congenital abnormalities. For adult patients with new-onset seizures, the evaluation can be tailored to the history provided by the patient. Laboratory investigation in particular should be directed by the specific presentation, as multiple studies have shown the history and physical exam to predict laboratory abnormalities (Shearer 2006). Serum glucose and sodium tests, however, are recommended (Level B) in all patients who have returned to baseline. A pregnancy test is recommended in all women of childbearing age. (ACEP 2014)

When it comes to neuroimaging for first-time seizure, the best course of action is less clear. Although it is established that all patients presenting with first-time seizure should receive neuroimaging, the timing and modality of that imaging are highly controversial. Neurologists prefer brain magnetic resonance imaging (MRI) for seizure work-up, but it is rarely available in the ED setting. Computed tomography (CT) is the predominant test available to ED providers; however, it is inferior to MRI for the evaluation of seizure, with the exception of detecting acute hemorrhage. (Jagoda 2011)

Who, then, needs a screening CT in the ED prior to discharge, and who can wait for the definitive MRI? Experts suggest dividing patients into two groups: those with persistent neurologic deficits, an abnormal mental status, or evidence of medical illness, and those who have returned to baseline with a non-focal exam. The first group is clearly high-risk and warrants an extensive work-up including an emergent head CT. In fact, abnormal head CTs have been documented in 81% of patients with neurologic deficits on exam. (Tardy 1995) The second group is more nuanced, and the utility of emergent head CT is less defined. Even in patients with non-focal neurologic exams, however, the rate of CT abnormalities ranges from 17-22%. (Tardy 1995, Sempere 1992, Jagoda 2011) The clinical significance of a nonspecific abnormal head CT, a definition that often includes simple atrophy, is uncertain in a neurologically intact patient. Furthermore, there may be elements of the presentation or history that place the patient at higher risk. Several studies have noted advanced age (Tardy 1995), HIV (Jagoda 2011, Harden 2007), and chronic alcohol abuse to be associated with an increased risk of abnormal head CT in the setting of seizure, despite a normal exam. (Tardy 1995, Harden 2007, Jagoda 2011, Earnest 1988)

In 2007, a multidisciplinary committee including ED physicians, in association with the American Academy of Neurology (AAN), updated guidelines on neuroimaging for the emergency patient with seizure. The authors specifically sought evidence for emergent neuroimaging that would change ED management in order to offer a clinically relevant guideline. Based on a nearly forty-year literature review, they offer a weak recommendation (Level C) for emergent CT in adults with first-time seizure, noting that CT changed acute management in 9-17% of cases. They offer a stronger recommendation (Level B) for a subset of patients more likely to have significant findings on CT. In addition to patients with an abnormal neurologic exam, this subset includes those with focal seizures and those with a predisposing history such as trauma, neurocutaneous disorders, malignancy, or shunt. (Harden 2007)

Due to limited data, the above recommendations and summary of evidence ultimately fail to provide a clear, universal algorithm for all cases. An abnormal mental status, focal neurologic exam, predisposing history, trauma, immunocompromised state, or focal seizures should prompt emergent imaging in the ED. Increased age and inability to obtain reliable follow-up should also tip the scales in favor of obtaining a CT prior to discharge. A patient without a concerning history, at his/her baseline with a normal neurologic exam, will need an outpatient MRI and EEG for definitive diagnosis. Whether that work-up includes a CT in the ED is up to the discretion of the provider. The ACEP clinical policy guideline, last updated in 2004, offers Level B recommendations as follows: 1. When feasible, perform neuroimaging of the brain in the ED on patients with a first-time seizure. 2. Deferred outpatient neuroimaging may be used when reliable follow-up is available. (ACEP 2014) Neither ACEP nor the AAN comments on the use of MRI in the ED, citing insufficient evidence.
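As a rough summary of the indications discussed above, the decision logic can be sketched as a simple checklist. This is an illustrative encoding of the guideline discussion only, not a validated clinical decision rule; the field names and structure are hypothetical:

```python
# Illustrative sketch only: encodes the emergent-CT indications summarized
# above (AAN 2007 / ACEP guidance). Not a validated decision instrument.
from dataclasses import dataclass

@dataclass
class FirstSeizurePatient:
    abnormal_mental_status: bool = False
    focal_neuro_exam: bool = False
    focal_seizure: bool = False
    predisposing_history: bool = False  # e.g. trauma, malignancy, shunt
    immunocompromised: bool = False     # e.g. HIV
    advanced_age: bool = False
    reliable_follow_up: bool = True

def needs_emergent_ct(p: FirstSeizurePatient) -> bool:
    """True if any of the listed indications for emergent head CT is present."""
    high_risk = (p.abnormal_mental_status or p.focal_neuro_exam
                 or p.focal_seizure or p.predisposing_history
                 or p.immunocompromised)
    # Age and unreliable follow-up "tip the scales" toward pre-discharge CT.
    return high_risk or p.advanced_age or not p.reliable_follow_up
```

A patient at baseline with a normal exam, no concerning history, and reliable follow-up evaluates to `False`, matching the text's suggestion that such patients can defer to outpatient MRI at provider discretion.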

4. How do you diagnose pseudoseizure?

Pseudoseizures, formally known as psychogenic nonepileptic seizures (PNES), are characterized by motor, sensory, autonomic, or cognitive behavior similar to epileptic seizures (ES) but without abnormal neuronal firing. PNES is often misunderstood, and patients are perceived as malingering or "faking it." PNES, however, is a defined psychoneurologic condition falling under the same umbrella as conversion and somatoform disorders. Interestingly, epilepsy and PNES frequently coexist in the same patient. It has been estimated that up to 60% of patients with PNES have another seizure disorder, though more conservative studies place the estimate closer to 10%. (Benbadis 2000, Benbadis 2001, Shearer 2006) PNES is found across cultures and occurs more frequently in women in the third and fourth decades of life. (Reuber 2003, Lesser 1996)

It can be extremely difficult to distinguish PNES from ES in the ED. Video EEG is the gold standard for diagnosis of ES, but it is not typically available in the ED setting. There is utility, however, in differentiating PNES from ES, as antiepileptic treatment is not benign and creates potential for iatrogenic harm. (Reuber 2003) Many have attempted to clarify PNES semiology in studies of variable quality, including many case reports and uncontrolled studies. In 2010, Avbersek reviewed rigorous studies that included EEG to establish clinical signs distinguishing PNES from ES. A sign was considered well supported for PNES if it had positive findings in two controlled studies and the remaining studies were also supportive. Based on these findings, clinical signs suggestive of PNES that are applicable to the ED setting include:

  1. Duration of event >2 minutes
  2. Fluctuating course
  3. Asynchronous movement of limbs
  4. Pelvic thrusting
  5. Side to side head or body movement
  6. Closed eyes
  7. Ictal crying
  8. Recall of event
  9. Absence of postictal confusion
  10. Absence of postictal stertorous breathing

Flailing or thrashing movements and the absence of tongue biting or urinary incontinence are frequently cited as suggestive of PNES; however, this study did not find sufficient evidence to support these distinctions. (Avbersek 2010) It is important to remember that many of these findings apply to generalized seizures only and cannot be used to separate PNES from partial seizures. Frontal seizures, for example, often demonstrate bizarre movements and emotional displays easily mistaken for PNES. (Reuber 2003) When applying this information to ED patients, one must take the entire history and exam into account, never relying upon a single sign to rule out ES. Ultimately, PNES is not a diagnosis to make in the ED, as it requires video EEG monitoring along with the assessment of experienced epileptologists.

In addition to clinical signs, physiologic parameters including cortisol, prolactin, white blood cell count, creatine kinase, and neuron-specific enolase have been investigated in PNES. Though most have met significant limitations, prolactin, a hormone secreted by the anterior pituitary, has emerged as the most promising serum marker. (Willert 2004, LaFrance 2010) In 1978, Trimble first demonstrated prolactin elevation in ES. Many subsequent studies have replicated similar findings while showing no prolactin elevation in PNES. (Trimble 1978, Mehta 1994, Fisher 1991, Mishra 1990)

Serum prolactin is known to peak fifteen to twenty minutes after seizure, returning to baseline at one hour. (Trimble 1978) Interestingly, however, prolactin levels do not consistently rise in all types of seizures. On average, prolactin is elevated in 88% of generalized tonic-clonic seizures, 64% of complex partial seizures, and 12% of simple partial seizures. (LaFrance 2013) In one study, patients with PNES also demonstrated a statistically significant increase in prolactin from baseline. Notably, the prolactin elevation in PNES was much smaller than that in ES. Nevertheless, this study raises questions over the specificity of prolactin elevation for diagnosis of ES. (Alving 1998) To further complicate interpretation, prolactin levels are subject to significant variation: fluctuations of up to 100% are seen prior to awakening from sleep, levels differ between women and men, and baseline prolactin levels are elevated in those with epilepsy. (Chen 2005) These factors, combined with variability in seizure classification and in the definition of prolactin elevation, have made interpretation of the limited data difficult. Despite these limitations, the American Academy of Neurology Therapeutics and Technology Assessment Subcommittee reviewed the available high-quality data. They determined elevated prolactin to have a specificity of 96% for detection of ES, and concluded that a twice-normal rise in serum prolactin, drawn ten to twenty minutes after an ictal event and compared to a baseline level, is useful in differentiating generalized tonic-clonic and complex partial seizures from PNES. The pooled sensitivity, however, was very poor, averaging 53% across all types of ES. (Chen 2005) Another review, including less rigorous data, reported an average sensitivity of 89%. (Cragar 2002) Both studies agree that the absence of an elevated prolactin level should not be used to rule out ES. Additionally, baseline prolactin levels are often not available, further limiting the utility of this test.
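The diagnostic weight of these pooled numbers is easier to appreciate as likelihood ratios. The calculation below is illustrative only: it uses the 53% sensitivity and 96% specificity quoted above (Chen 2005) together with an assumed, purely hypothetical 50% pretest probability of ES:

```python
# Illustrative post-test probability calculation from the pooled test
# characteristics quoted above; the 50% pretest probability is an
# assumption for demonstration, not a figure from the literature.
sens = 0.53   # pooled sensitivity of elevated prolactin for ES (Chen 2005)
spec = 0.96   # pooled specificity (Chen 2005)

lr_pos = sens / (1 - spec)   # positive likelihood ratio
lr_neg = (1 - sens) / spec   # negative likelihood ratio

def post_test_prob(pretest: float, lr: float) -> float:
    """Convert a pretest probability to a post-test probability via odds."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

pretest = 0.50  # assumed pretest probability of ES (hypothetical)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
print(f"Post-test probability, prolactin elevated:     {post_test_prob(pretest, lr_pos):.0%}")
print(f"Post-test probability, prolactin not elevated: {post_test_prob(pretest, lr_neg):.0%}")
```

Under these assumptions, an elevated prolactin substantially raises the probability of ES, while a normal level still leaves roughly a one-in-three probability of ES, which illustrates why the absence of a prolactin rise should not be used to rule out ES.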


Seizure, Questions

1.  Which benzodiazepine do you prefer for the treatment of status epilepticus (SE)? Which do you prefer for pediatric patients?

2. Which second-line agents do you use for treatment of SE?

3. In which adult patients with first-time seizure do you obtain emergent imaging?

4. How do you diagnose pseudoseizure?
