Questions, Airway and Sedation 2014

1. Do you reach for video laryngoscopy or direct laryngoscopy first for intubations?

2. Do you use cricoid pressure during induction and paralysis?

3. How long do you keep patients NPO prior to procedural sedation?

4. When using ketamine for procedural sedation do you pretreat with benzodiazepines or anticholinergics?


Seizure, “Answers”

1. Which benzodiazepine do you prefer for the treatment of status epilepticus (SE)? Which do you prefer for pediatric patients?

An epileptic seizure (ES) is defined as an abrupt disruption in brain function secondary to abnormal neuronal firing, and is characterized by changes in sensory perception and/or motor activity. The clinical manifestations of seizures are varied, encompassing focal or generalized motor activity, sensory or autonomic dysfunction, and mental status changes. Numerous types of seizures exist, broadly classified as simple versus complex, partial versus generalized, and convulsive versus non-convulsive. All can progress to status epilepticus (SE); this discussion pertains to convulsive SE.


SE was historically defined as any seizure activity lasting longer than thirty minutes, but is now defined more aggressively as a seizure lasting longer than five minutes, or consecutive seizures without a return to baseline in between. It is important for emergency physicians to rapidly recognize and treat SE, as studies estimate an associated mortality of 10-40%, depending on the etiology. (Shearer 2006) Initial interventions include evaluation of the airway, IV access, cardiac monitoring, and the administration of supplemental oxygen and antiepileptic agents. The goal is to terminate all seizure activity within sixty seconds.

Benzodiazepines potentiate GABA activity, thus decreasing neuronal firing, and are widely accepted as the preferred first-line treatment for SE. (Alldredge 2001, Leppik 1998, Treiman 1998, Brophy 2012, Shearer 2006) In addition to a favorable safety profile, benzodiazepines have the advantage of multiple routes of administration, including intravenous (IV), intramuscular (IM), and various mucosal routes. The IV route is generally preferred for speed of onset; however, environmental circumstances and patient variables can complicate IV access, particularly in children. (Shearer 2006, Berg 2009)

In children (neonatal seizures are not discussed here), seizures are commonly treated via mucosal administration of benzodiazepines, particularly by parents and EMS in the pre-hospital setting. Mucosal routes include rectal diazepam and buccal or intranasal midazolam. Rectal diazepam has long been the favored drug in this setting and is FDA approved for such use. (Berg 2009) Rectal diazepam, however, has several limitations, including social stigma, short duration of action, and risk of expulsion due to seizure-induced fecal incontinence. As a consequence, administration of a second agent or repeat diazepam dosing is often required, increasing the risk of side effects and potential harm. Rectal diazepam has been proven more efficacious than placebo, but has only recently been compared to other mucosal routes of administration. (Dreifuss 1998) In 2005, a randomized controlled trial (RCT) demonstrated the superiority of buccal midazolam over rectal diazepam for seizure termination without an increased risk of respiratory depression. (McIntyre 2005) Several RCTs comparing intranasal midazolam to rectal diazepam show shorter time to seizure cessation with intranasal midazolam. (Fisgin 2002, Bhattacharyya 2006, Holsti 2007, Holsti 2010) Given the disadvantages of rectal diazepam combined with the above evidence, buccal and intranasal midazolam should be considered viable alternatives for treatment of pediatric seizures. Regarding administration of IV benzodiazepines to children, IV lorazepam appears to be as effective as, and safer than, IV diazepam. (Appleton 1995, Appleton 2008) Further studies are needed to compare non-IV and IV routes of administration in the pediatric population, particularly with comparisons to IV lorazepam. If difficult IV access is anticipated, buccal and intranasal routes should be considered. (Ulgey 2012)

Similar to the pediatric population, diazepam was historically the benzodiazepine of choice for treatment of SE in adults. After years of research, however, lorazepam has emerged as the preferred agent due to its extended duration of anticonvulsant activity and its ability to be administered via the IM route. (Treiman 1998, Leppik 1998, Walker 1979) Large RCTs comparing benzodiazepines head-to-head, however, are limited. A 2005 Cochrane review of RCTs, evaluating three studies including 289 patients, established IV lorazepam as superior to IV diazepam for cessation of SE. The relative risk (RR) of non-cessation of seizures for lorazepam compared to diazepam was 0.64. The comparison to midazolam, however, was less clear. A single study found IV midazolam, when compared to IV lorazepam, to have an RR of 0.2 for non-cessation of seizures, and the authors concluded there was a non-significant trend favoring IV midazolam over IV lorazepam. Unfortunately, much of the pediatric data is based on single studies and is not conclusive. (Prasad 2005)
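
To make the relative risk figure concrete, the sketch below applies the Cochrane point estimate (RR 0.64 for non-cessation with lorazepam versus diazepam) to a hypothetical baseline risk. The 35% baseline risk of non-cessation with diazepam is an assumption chosen purely for illustration, not a number from the review.

```python
# Illustrative only: applying a relative risk to a hypothetical baseline.
# The RR of 0.64 is the Cochrane point estimate quoted above; the 35%
# baseline risk of non-cessation with diazepam is an assumed figure.

def risk_with_treatment(baseline_risk: float, relative_risk: float) -> float:
    """Absolute risk in the treatment arm implied by a relative risk."""
    return baseline_risk * relative_risk

baseline = 0.35          # assumed risk of seizure non-cessation with diazepam
rr_lorazepam = 0.64      # Prasad 2005 point estimate

treated = risk_with_treatment(baseline, rr_lorazepam)
arr = baseline - treated
print(f"Lorazepam risk: {treated:.1%}, absolute risk reduction: {arr:.1%}")
# -> Lorazepam risk: 22.4%, absolute risk reduction: 12.6%
```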

In addition to a trend towards improved efficacy in SE, midazolam does not require refrigeration, lending it another advantage over lorazepam in the pre-hospital setting. To further compare these two agents in the pre-hospital environment, the RAMPART (Rapid Anticonvulsant Medication Prior to Arrival) study was completed in 2012. It compared 10 mg IM midazolam to 4 mg IV lorazepam in a double-blinded RCT; children weighing more than 13 kg were also included. Upon arrival to the ED, seizures were absent in 73.4% of patients in the midazolam group and in 63.4% of patients in the lorazepam group (p<0.001, primary outcome). Admission rates were also significantly lower in the midazolam treatment group (p<0.001). Although the time to administration of the drug was shorter in the midazolam group, the onset of action was shorter in the lorazepam group. This study showed IM midazolam to be non-inferior to IV lorazepam when given by EMS providers prior to ED arrival. (Silbergleit 2012) Important limitations of this study include the use of an autoinjector for midazolam rather than a standard IM injection, and the pre-hospital setting, where IV access is often more difficult to obtain. While extrapolation of this study to ED patients should be limited, IM midazolam for SE appears to be a viable option.

What do the experts say? In 2012, the Neurocritical Care Society published guidelines for the treatment of SE based on limited available evidence and consensus opinion. They recommend lorazepam as the preferred agent for IV administration, midazolam for the IM route and diazepam for the rectal route. Lorazepam, midazolam and diazepam all carry Level A recommendations for emergent treatment of SE. (Brophy 2012)

2. Which second-line agents do you use for treatment of SE?

Unless the underlying cause of SE is known and reversible by another means (e.g., metabolic derangement, toxic ingestion), the initial benzodiazepine is immediately followed by a second antiepileptic agent. If the seizure has already been successfully terminated, the goal of this second agent is to prevent recurrence through rapid achievement of therapeutic levels of an antiepileptic drug (AED). However, if the benzodiazepine has failed, the goal is to rapidly stop all seizure activity. While the use of benzodiazepines as the first-line treatment for SE is widely accepted, there remains significant debate over what this second-line agent should be.

Phenobarbital, a long-acting barbiturate that potentiates GABA activity, is the oldest AED still in use today. Historically a first-line agent, it has fallen out of favor due to its significant adverse event profile, namely hypotension and respiratory depression. Currently, it is typically reserved for refractory SE. (Shearer 2006)

Phenytoin has emerged as the preferred second-line agent after benzodiazepines for the treatment of SE. Phenytoin prolongs inactivation of voltage-activated sodium channels, thus inhibiting repetitive neuronal firing. Although it is possible to rapidly achieve therapeutic levels of phenytoin, the drug is limited by side effects including ataxia, hypotension, cardiac dysrhythmias, and tissue necrosis secondary to extravasation. (Shearer 2006) Fosphenytoin, a prodrug of phenytoin, allows for IM administration with preserved bioavailability, but can have similar hemodynamic side effects. The combination of benzodiazepines and phenytoin is only effective in approximately 60% of patients, leaving a substantial group in SE. (Treiman 1998, Knake 2009) This, combined with its side effect profile, has led to a search for alternative second agents for the treatment of SE.

Valproic acid is an established AED used to treat many forms of seizures and has been available for IV administration since 1996. Like phenytoin, it acts by prolonging the recovery of voltage-activated sodium channels. The efficacy of valproic acid in treating SE has been reported to range from 40-80%. Its primary side effect is hepatotoxicity, either from chronic use over the first six months or as an idiosyncratic reaction. Compared to phenytoin's risk of local extravasation and significant hypotension, valproic acid is a potentially safer option for some patients. (Shearer 2006) A 2012 meta-analysis sought to compare valproic acid to other available AEDs for SE. Unfortunately, heterogeneity in defining SE and variability within the data limited its conclusions. Despite this, the authors deemed valproic acid to be as effective as phenytoin in treating SE based on three randomized studies including 256 patients. (Misra 2006, Agarwal 2007, Gilad 2008, Liu 2012) An Italian meta-analysis that same year found no difference in time to seizure cessation between valproic acid and phenytoin, with a trend towards fewer side effects with valproic acid. The authors warn, however, against over-interpretation of these data given their inherent limitations and suggest waiting for larger RCTs before changing one's clinical practice. (Brigo 2012)

Levetiracetam is a comparatively newer medication, with an IV formulation only available since 2006. Its exact mechanism of action is unknown, but it has fewer side effects and limited drug-drug interactions compared to the older AEDs. (Shearer 2006) Given this favorable safety profile, many have heralded levetiracetam as an ideal second agent for SE. In 2012, a review paper by Zelano et al. compared ten studies of levetiracetam for treatment of SE, including one prospective randomized study and a total of 334 patients. The authors found that levetiracetam had an efficacy ranging from 44% to 94% and was not associated with any significant adverse events. Overall, however, the efficacy was significantly higher in the retrospective studies, raising concern that bias influenced the positive results. The single randomized study reported an efficacy of 76%; however, this group received levetiracetam as primary therapy, and it is unclear whether these patients were "less sick" and would have responded to initial benzodiazepines. (Misra 2012) Furthermore, in many of the studies, levetiracetam was used because phenytoin was contraindicated, creating another source of bias. Zelano's review concluded that despite its favorable safety profile, there is scarce evidence to support levetiracetam as a second-line agent in the treatment of SE. (Zelano 2012) More studies are needed.

Currently, using the limited data available, the Neurocritical Care Society recommends fosphenytoin as the preferred second-line agent for treatment of SE. They do allow for consideration of other agents on a case-by-case basis, including SE in patients with known epilepsy, in whom valproic acid may be preferred. Additionally, an IV bolus dose of the patient's maintenance AED is also recommended in such cases. (Brophy 2012)

If SE has not resolved after administration of the second agent, the patient is considered to have refractory SE (RSE), and should receive additional treatment immediately. Continuous infusion of an AED, typically propofol, midazolam, phenobarbital, valproic acid or high dose phenytoin, is recommended. (ACEP 2014) Bolus doses of the infusion AED can also be given for breakthrough seizures. Available data do not support the use of one agent over another. (Brophy 2012)

Other agents may soon be available for the treatment of SE and RSE. Animal data reporting decreased GABA receptors in the setting of SE have sparked interest in targeting the NMDA receptor: if the inhibitory GABA system cannot be potentiated, perhaps antagonizing the excitatory NMDA system could terminate seizures. Ketamine, an NMDA antagonist, has been discussed as a potential future direction in the treatment of RSE. (Kramer 2012)

3. In which adult patients with first-time seizure do you obtain emergent imaging?

Seizure is a common presentation to the ED, accounting for 1-2% of ED visits. Although the presentation is common, the etiology of seizure is incredibly broad, including trauma, hemorrhage, metabolic derangements, toxic exposures, infection, and congenital abnormalities. For adult patients with new-onset seizures, the evaluation can be tailored to the history provided by the patient. Laboratory investigation in particular should be fitted to the specific patient, as multiple studies have shown that the history and physical exam predict laboratory abnormalities (Shearer 2006). Serum glucose and sodium tests, however, are recommended (Level B) in all patients who have returned to baseline. A pregnancy test is recommended in all women of childbearing age. (ACEP 2014)

When it comes to neuroimaging for first-time seizure, the best course of action is less clear. Although it is established that all patients presenting with first-time seizure should receive neuroimaging, the timing and modality of that imaging are highly controversial. Neurologists prefer brain magnetic resonance imaging (MRI) for seizure work up, but it is rarely available in the ED setting. Computed tomography (CT) is the predominant test available to ED providers; however, it is inferior to MRI for evaluation of seizure, with the exception of detection of acute hemorrhage. (Jagoda 2011)

Who, then, needs a screening CT in the ED prior to discharge, and who can wait for the definitive MRI? Experts suggest dividing patients into two groups. First are those with persistent neurologic deficits, an abnormal mental status, or evidence of medical illness, and second are those who have returned to baseline with a non-focal exam. The first group is clearly high-risk and warrants an extensive work up including an emergent head CT. In fact, abnormal head CTs have been documented in 81% of patients with neurologic deficits on exam. (Tardy 1995) The second group is more nuanced, and the utility of emergent head CT is less defined. Even in patients with non-focal neurologic exams, however, the rate of CT abnormalities ranges from 17-22%. (Tardy 1995, Sempere 1992, Jagoda 2011) The clinical significance of a nonspecific abnormal head CT, the definition of which often includes simple atrophy, is uncertain in a neurologically intact patient. Furthermore, there may be elements of the presentation or history that place the patient at higher risk. Several studies have noted advanced age (Tardy 1995), HIV (Jagoda 2011, Harden 2007) and chronic alcohol abuse to be associated with increased risk of abnormal head CTs in the setting of seizure, despite normal exams. (Tardy 1995, Harden 2007, Jagoda 2011, Earnest 1988)

In 2007, a multidisciplinary committee including ED physicians, in association with the American Academy of Neurology (AAN), updated guidelines on neuroimaging for the emergency patient with seizure. The authors specifically sought evidence for emergent neuroimaging that would change ED management in order to offer a clinically relevant guideline. Based on a nearly forty-year literature review, they offer a weak recommendation (Level C) for emergent CT in adults with first-time seizure, noting that CT changed acute management in 9-17% of cases. They offered a stronger recommendation (Level B) for a subset of patients more likely to have significant findings on CT. In addition to patients with an abnormal neurologic exam, this subset included those with focal seizures or a predisposing history such as trauma, neurocutaneous disorders, malignancy, or shunt. (Harden 2007)

Due to limited data, the above recommendations and summary of evidence ultimately fail to provide a clear, universal algorithm for all cases. An abnormal mental status, focal neurologic exam, predisposing history, trauma, immunocompromised state, or focal seizures should prompt emergent imaging in the ED. Increased age and inability to obtain reliable follow up should also tip the scales in favor of obtaining a CT prior to discharge. A patient without a concerning history, at his/her baseline, and with a normal neurologic exam will need an outpatient MRI and EEG for definitive diagnosis; whether that work up includes a CT in the ED is at the discretion of the provider. The ACEP clinical policy guideline offers the following Level B recommendations: 1. When feasible, perform neuroimaging of the brain in the ED on patients with a first-time seizure. 2. Deferred outpatient neuroimaging may be used when reliable follow up is available. (ACEP 2014) Neither ACEP nor the AAN makes a recommendation on the use of MRI in the ED, citing insufficient evidence.

4. How do you diagnose pseudoseizure?

Pseudoseizures, formally known as psychogenic nonepileptic seizures (PNES), are characterized by motor, sensory, autonomic or cognitive behavior similar to epileptic seizures (ES) but without abnormal neuronal firing. PNES is often misunderstood, and patients are perceived as malingering or "faking it". PNES, however, is a defined psychoneurologic condition falling under the same umbrella as conversion and somatoform disorders. Interestingly, epilepsy and PNES frequently coexist in the same patient. It has been estimated that up to 60% of patients with PNES have another seizure disorder; however, more conservative studies place the estimate closer to 10%. (Benbadis 2000, Benbadis 2001, Shearer 2006) PNES is found across cultures and occurs more frequently in women in the third and fourth decades of life. (Reuber 2003, Lesser 1996)

It can be extremely difficult to distinguish PNES from ES in the ED. Video EEG is the gold standard for diagnosis of ES, but it is not typically available in the ED setting. There is utility, however, in differentiating PNES from ES, as antiepileptic treatment is not benign and creates potential for iatrogenic harm. (Reuber 2003) Many have attempted to clarify PNES semiology in studies of variable quality, including many case reports and uncontrolled studies. In 2010, Avbersek reviewed rigorous studies that included EEG to establish clinical signs distinguishing PNES from ES. A sign was considered well supported for PNES if it had positive findings in two controlled studies and the remaining studies were also supportive. Based on these findings, clinical signs suggestive of PNES that are applicable to the ED setting included:

  1. Duration of event >2 minutes
  2. Fluctuating course
  3. Asynchronous movement of limbs
  4. Pelvic thrusting
  5. Side to side head or body movement
  6. Closed eyes
  7. Ictal crying
  8. Recall of event
  9. Absence of postictal confusion
  10. Absence of postictal stertorous breathing

Flailing or thrashing movements and the absence of tongue biting or urinary incontinence are frequently cited as suggestive of PNES; however, this review did not find sufficient evidence to support the distinction. (Avbersek 2010) It is important to remember that many of these findings apply to generalized seizures only and cannot be used to separate PNES from partial seizures. Frontal lobe seizures, for example, often demonstrate bizarre movements and emotional displays easily mistaken for PNES. (Reuber 2003) When applying this information to ED patients, one must take the entire history and exam into account, never relying upon a single sign to rule out ES. Ultimately, PNES is not a diagnosis to make in the ED, as it requires video EEG monitoring along with the assessment of experienced epileptologists.

In addition to clinical signs, physiologic parameters including cortisol, prolactin, white blood cell count, creatine kinase, and neuron-specific enolase have been investigated in PNES. Though most have significant limitations, prolactin, a hormone secreted by the anterior pituitary, has emerged as the most promising serum marker. (Willert 2004, LaFrance 2010) In 1978, Trimble first demonstrated prolactin elevation in ES. Many subsequent studies have replicated similar findings while showing no prolactin elevation in PNES. (Trimble 1978, Mehta 1994, Fisher 1991, Mishra 1990)

Serum prolactin is known to peak fifteen to twenty minutes after seizure, returning to baseline at one hour. (Trimble 1978) Interestingly, however, prolactin levels do not consistently rise in all types of seizures. On average, prolactin is elevated in 88% of generalized tonic-clonic (GTC) seizures, 64% of complex partial seizures (CPS), and 12% of simple partial seizures. (LaFrance 2013) In one study, patients with PNES also demonstrated a statistically significant increase in prolactin from baseline. Notably, the prolactin elevation in PNES was much smaller than that in ES. Nevertheless, this study raises questions over the specificity of prolactin elevation for the diagnosis of ES. (Alving 1998) To further complicate interpretation, prolactin levels are subject to significant variation: fluctuations of up to 100% are seen prior to awakening from sleep, levels differ between women and men, and baseline prolactin levels are elevated in those with epilepsy. (Chen 2005) These factors, combined with variability in seizure classification and in the definition of prolactin elevation, have made interpretation of the limited data difficult. Despite these limitations, the American Academy of Neurology Therapeutics and Technology Assessment Subcommittee reviewed the available high quality data. They determined elevated prolactin to have a specificity of 96% for detection of ES. They conclude that a twice-normal rise in serum prolactin, drawn ten to twenty minutes after an ictal event and compared to a baseline prolactin, is useful in differentiating GTC seizures and CPS from PNES. The pooled sensitivity for these data was very poor, however, averaging 53% for all types of ES. (Chen 2005) Another review, including less rigorous data, reported an average sensitivity of 89%. (Cragar 2002) Both studies agree that the absence of an elevated prolactin level should not be used to rule out ES. Additionally, baseline prolactin levels are often not available, further limiting the utility of this test.
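
To illustrate what a 53% sensitivity and 96% specificity imply at the bedside, the sketch below converts these pooled figures into likelihood ratios and a post-test probability. It is a back-of-the-envelope illustration only; the 50% pre-test probability is an arbitrary assumption, not a figure from the cited reviews.

```python
# Illustration using the pooled figures quoted above (sensitivity 53%,
# specificity 96%); the 50% pre-test probability is an assumption.

def post_test_probability(pretest: float, lr: float) -> float:
    """Convert a pre-test probability and a likelihood ratio into a post-test probability."""
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

sens, spec = 0.53, 0.96
lr_pos = sens / (1 - spec)     # ~13: an elevated prolactin strongly favors ES
lr_neg = (1 - sens) / spec     # ~0.49: a normal prolactin barely lowers the odds

pretest = 0.50
print(f"LR+ {lr_pos:.1f} -> post-test {post_test_probability(pretest, lr_pos):.0%}")
print(f"LR- {lr_neg:.2f} -> post-test {post_test_probability(pretest, lr_neg):.0%}")
# LR+ 13.3 -> post-test 93%; LR- 0.49 -> post-test 33%
```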


Seizure, Questions

1.  Which benzodiazepine do you prefer for the treatment of status epilepticus (SE)? Which do you prefer for pediatric patients?

2. Which second-line agents do you use for treatment of SE?

3. In which adult patients with first-time seizure do you obtain emergent imaging?

4. How do you diagnose pseudoseizure?


Trauma, “Answers”

1. When do you use tranexamic acid in trauma?

Tranexamic acid (TXA) is a synthetic derivative of the amino acid lysine. It was discovered in the 1950s and has traditionally been employed in surgery to minimize blood loss. TXA works by inhibiting lysine binding sites on plasminogen, thereby preventing its conversion to plasmin and reducing fibrinolysis and clot breakdown.

Trauma is consistently among the top ten leading causes of death worldwide (WHO, 2013). TXA has been studied to determine whether it can improve morbidity and mortality. The major randomized controlled trial (RCT) is CRASH-2, which randomly assigned over 20,000 adult trauma patients in 40 countries with, or at risk of, significant bleeding to either TXA or placebo (Shakur, 2010). The TXA protocol entailed a loading dose of 1 g over 10 minutes followed by an infusion of 1 g over eight hours. The primary outcome was death in hospital within four weeks of injury, and the results favored TXA. All-cause mortality was significantly reduced by 1.5% (14.5% TXA vs. 16.0% placebo; RR 0.91, 95% CI 0.85-0.97; p=0.0035). Risk of death due to bleeding, a secondary outcome, was also significantly reduced, by 0.8%, with TXA (RR 0.85, p=0.0077). The trial was large enough for subgroup analyses, which found that the group benefiting most from TXA received it within three hours of injury (RR 0.87, 99% CI 0.75-1.00). The study also showed no significant difference in deaths from vascular occlusion (MI, CVA, PE), multiorgan failure, or head injury between TXA and placebo. The strength of this trial lies in the large sample from multiple settings and countries, the double-blinded randomization, similar baseline factors in both groups, and minimal loss to follow up. One weakness mentioned by the authors is that the diagnosis of traumatic hemorrhage can be difficult, and some included patients might not have been bleeding at the time of randomization, which could reduce the power of the trial. However, the use of broad clinical inclusion criteria (hypotension, tachycardia, physician judgment) rather than lab results or imaging also makes the study more applicable and generalizable. In addition, the study found no difference in RBC transfusion between groups. The lack of difference may be secondary to transfusion decisions made prior to completion of TXA administration; since there were more survivors in the TXA group, they also had greater opportunity to receive RBCs.
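
To put the absolute numbers above in perspective, the sketch below converts the quoted all-cause mortality figures (14.5% with TXA vs. 16.0% with placebo) into an absolute risk reduction and a number needed to treat. This is a back-of-the-envelope calculation for illustration, not an analysis reported by the trial.

```python
# Back-of-the-envelope NNT from the CRASH-2 all-cause mortality figures quoted above.

def number_needed_to_treat(control_rate: float, treatment_rate: float) -> float:
    """NNT = 1 / absolute risk reduction."""
    arr = control_rate - treatment_rate
    return 1.0 / arr

placebo_mortality = 0.160   # all-cause mortality, placebo arm
txa_mortality = 0.145       # all-cause mortality, TXA arm

arr = placebo_mortality - txa_mortality
nnt = number_needed_to_treat(placebo_mortality, txa_mortality)
print(f"Absolute risk reduction: {arr:.1%}, NNT ~ {nnt:.0f}")
# -> Absolute risk reduction: 1.5%, NNT ~ 67
```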

The CRASH-2 data were subsequently reanalyzed in other studies. One such analysis looked at four predefined risk-of-mortality groups (<6%, 6-20%, 21-50%, >50%) and showed that TXA was beneficial in terms of all-cause mortality and deaths from bleeding regardless of baseline risk of death. The implication is that TXA should be considered in all comers with traumatic hemorrhage within three hours of injury (Roberts, 2012). Subsequent analyses found the greatest benefit when TXA was given in the first hour; when given more than three hours after injury, an increase in deaths from bleeding was observed (Roberts, 2011).

In 2012 the MATTERs study, a retrospective, observational study of combat injuries in Afghanistan, was published (Morrison, 2012). The study looked at the non-randomized use of TXA in hemorrhagic trauma patients who received at least one unit of RBCs. The authors found decreased mortality in the patients given TXA (17.4% vs. 23.9%) and a more marked mortality reduction in the group receiving massive transfusion (14.4% vs. 28.1%). This study has a number of limitations, including external validity (most of us aren't treating high-velocity rifle injuries from combat) and the lack of randomization and blinding.

Although TXA is not yet standard of care in traumatic hemorrhage, it appears to be safe in terms of thrombotic complications, and if given within three hours of injury, also beneficial in decreasing bleeding and mortality. The use of TXA in traumatic hemorrhage should be considered in future pre-hospital and ED trauma resuscitation protocols.

2. When you can’t get peripheral access in a trauma patient, do you prefer a subclavian, femoral or intraosseous (IO)?

Establishing IV access is a vital early step in the ATLS algorithm. The sickest patients need access the fastest, yet are often the most difficult to cannulate. Whether due to intravascular depletion causing venous constriction or severe trauma limiting access to the extremities, emergency physicians should always be ready to obtain central venous access. Most trauma patients arrive via EMS in a c-collar, making internal jugular (IJ) access impractical and unsafe. The remaining options are subclavian, femoral, or IO access.

A recent prospective, observational study investigated first attempt success rates and procedure times of IO access vs. central venous catheterization (CVC) in adult resuscitation patients with inaccessible peripheral veins (Leidel, 2012). In a fairly small sample of 40 consecutive patients (73% trauma), each received IO access (55% humeral site) and a CVC (83% subclavian) simultaneously. There was a significantly higher first attempt success rate for IO [85% vs. 60% for landmark-based CVC (p=0.024)], and faster median procedure time [IO 2.0 min vs. CVC 8.0 min (p<0.001)]. The authors stated that relevant complications (infection, extravasation, compartment syndrome, cannula dislodgement, bleeding, arterial puncture, hemo/pneumothorax, venous thrombosis or vascular access related infection) were not observed. Although there are no RCTs comparing in-hospital IO vs. CVC, there are several case series and observational studies supporting higher first attempt success rates and faster access times for IOs (Valdes, 1977; Iserson, 1989; Iwama, 1996; Cooper, 2007; Ngo, 2009; Paxton, 2009; Ong, 2009). A 1996 study (Iwama, 1996) also showed similar IO (clavicular) flow rates compared to CVC (subclavian). There is also an RCT studying out-of-hospital cardiac arrests, which found tibial IO access to have the highest first-attempt success rate and the fastest time to vascular access compared to peripheral IV and humeral IO access (Reades, 2011).

When it comes to central venous access, the most frequently studied complication is catheter-related bloodstream infection (CRBI). In 2011, the CDC released a class 1A recommendation to avoid using the femoral vein for central access in adult patients (O'Grady, 2011), a view also shared by the Infectious Diseases Society of America (Marschall, 2008). Typically, a class 1A recommendation is based on multiple high quality studies. This recommendation, however, was based on a single study in Critical Care Medicine (Lorente, 2005). This was a prospective, observational study that found significant differences in CRBI between femoral (8.34%), IJ (2.99%) and subclavian (0.97%) lines. In a 2012 meta-analysis encompassing two RCTs and eight cohort studies, including over 3000 subclavian lines, 10,000 IJ lines, and 3100 femoral lines, the data against femoral access became less clear (Marik, 2012). After the authors excluded two studies that were statistical outliers (Lorente, 2005; Nagashima, 2006), they found no significant difference in the risk of CRBI between femoral and IJ routes (RR 1.35; 95% CI 0.84-2.19, p=0.2, I2=0%) or femoral and subclavian routes (RR 1.02; 95% CI 0.64-1.65, p=0.92, I2=0%). The meta-analysis also found no statistical difference in DVT complications between femoral access and the other routes combined (Marik, 2012), although a previous RCT showed an increased DVT rate with the femoral site compared to the subclavian alone (Merrer, 2001). The authors comment that infection rates have decreased across the board over the last 10-15 years, likely due to the increased focus on sterile placement of lines. They recommend that physicians choose the site they are most comfortable with and that is appropriate for the patient. Whether the results from this meta-analysis are applicable to the crashing trauma patient without venous access is debatable.

There are no RCTs making a head-to-head comparison of these three access points in the trauma setting. At this point, a rational approach in resuscitating a sick trauma patient is to go for the quickest and easiest route, which appears to be IO, especially in EDs staffed by only one physician. There are no limitations on the medications or blood products that can be infused through an IO. At the same time, if additional personnel are available, central access, whether femoral, subclavian, or IJ, can be obtained simultaneously. Practically speaking, this increases the chances of getting access quickly, and more access points may be beneficial for rapid, high-volume fluid and blood infusions. The ultimate goal is to stabilize the patient; infection risk is not the primary concern, and the lines can and should be changed later in a more sterile environment.

3. Which trauma patients do you give PCC to over FFP?

It is commonly accepted that hypothermia, acidosis, and coagulopathy form a lethal triad in worsening traumatic hemorrhage. Fresh frozen plasma (FFP) is widely used to correct coagulopathies in traumatic bleeding and is an integral part of any massive transfusion protocol. With the availability of prothrombin complex concentrate (PCC) in most trauma centers, studies have arisen to determine its place in coagulopathy reversal. PCC contains coagulation factors II, VII, IX, and X. Products available in the U.S. are Kcentra™ (aka Beriplex™, Prothrombin Complex Concentrate), Profilnine SD™ (Coagulation Factor IX complex), and Bebulin VH™ (Factor IX Complex). The first contains all four factors, while the latter two contain mostly factor IX, along with factors II and X and very low levels of factor VII. The advantage of PCC is that it can be quickly reconstituted and administered in a low-volume IV bolus. FFP requires type-specific matching, thawing, longer administration times, and a larger overall volume of delivery.

Studies investigating the role of PCC in trauma have focused on reversal of elevated INR, both in patients on warfarin and in those not on anticoagulants. Kalina, et al. put forth a protocol at Christiana Care Hospital in Delaware to give PCC to trauma patients with an INR >1.5, a history of warfarin use, and a head CT showing intracranial hemorrhage (Kalina, 2008). Clinicians had the option to use the PCC protocol (54.3%) or FFP with vitamin K (35.4%). Protocol patients had shorter times to INR normalization (331.3 vs. 737.8 minutes, p=0.048), a higher rate of coagulopathy reversal (73.2% vs. 50.9%, p=0.026), and shorter time to operative intervention (222.6 vs. 351.3 minutes, p=0.045). There was no difference in ICU days, hospital days, or mortality. INR reversal, however, is not a patient-oriented outcome; the ability of PCC to rapidly correct the INR does not equate to an improvement in patient care, and this is reflected in the lack of mortality difference. In another study, Safaoui, et al. performed a retrospective chart review of patients who presented to the ED with possible brain injury and a history of warfarin use and received factor IX complex (three-factor PCC) (Safaoui, 2009). Of the 28 patients who met inclusion criteria, a PCC dose of 2000 units reduced admission INR on average from 5.1 to 1.9 (p=0.008), with a mean time to correction of 116 minutes. Eleven patients who had a repeat INR drawn within 30 minutes of PCC had a mean time to INR correction of 13.5 minutes. Limitations of this study include the lack of a defined target INR, heterogeneity in the times at which INRs were obtained, variation in PCC dosing, and variation in obtaining timely INR redraws post-treatment.

There are also a number of small, retrospective studies looking at the use of PCC in general trauma patients on warfarin. In 2011, a chart review of 31 patients on warfarin with trauma (13 receiving three-factor PCC (Profilnine SD™) and 18 receiving FFP) showed faster reversal of INR with PCC (16:59 hours vs. 30:03 hours) (Chapman, 2011). Notably, however, the only patient deaths occurred in the PCC group.

A recent prospective cohort study out of Austria compared patients with major blunt trauma (Injury Severity Score (ISS) ≥15) who received fibrinogen concentrate (CF) and/or PCC alone with those who additionally received FFP, in a total of 144 patients (Innerhofer, 2013). Patients treated with CF alone showed sufficient hemostasis and required fewer RBCs and platelets than those also receiving FFP. They also had significantly lower rates of complications such as multiorgan failure and sepsis. The limitations of this study are that it used fibrinogen concentrate (in addition to PCC) and measured hemostasis with rotational thromboelastometry, which may not be practical or obtainable in everyday ED settings. In addition, the study did not compare PCC directly to FFP.

There has not been a large meta-analysis comparing PCC to FFP and such a study may be difficult secondary to the heterogeneity in the existing studies. Differences in variables such as drug dosing, coagulation factor differences, baseline patient coagulopathy, and outcome measurements make it difficult to formulate overarching conclusive statements about PCC use. At this point, it is reasonable to treat patients with traumatic ICH on warfarin with PCC, as rapid reversal is necessary to prevent mass effect and herniation. There are no RCTs at this time to conclusively recommend the use of PCC in trauma simply for an elevated INR. Recently, data collection has been completed for a study entitled, “A Randomized, Open Label, Efficacy and Safety Study of OCTAPLEX and Fresh Frozen Plasma (FFP) in Patients Under Vitamin K Antagonist Therapy With the Need for Urgent Surgery or Invasive Procedures” (OCTAPLEX, 2013). This study is pitting Octaplex, a 4-factor PCC, head-to-head against FFP. It will be interesting to see what dose of each drug the investigators use, as PCC can be thought of as a very concentrated version of FFP, making it easier and faster to administer. The limitation, however, is that because PCC is a new drug, it is considerably more expensive than FFP.

4. In blunt abdominal/flank trauma, do you send a urinalysis or simply look for gross hematuria?

Urinalysis (UA) is traditionally performed in blunt trauma as a screening test to diagnose urogenital injuries. The most commonly injured genitourinary (GU) structure is the kidney, and the proportion of trauma patients with renal injuries ranges between 1.4-3.3% (Santucci, 2004). A retrospective observational cohort study of 1815 patients was recently undertaken to investigate whether the routine performance of UA in patients with blunt trauma is still valuable (Olthof, 2013). The main outcome measures were the presence of GU (bladder, kidney, ureter or urethral) injury, and whether the findings on urine specimen and/or imaging led to clinical consequences (additional imaging, intervention, admission for observation, or outpatient follow-up). Microscopic hematuria was defined as greater than three erythrocytes per high-powered field. Macroscopic hematuria was defined as blood visible to the naked eye.

The presence of macroscopic/gross hematuria (n=16) led to clinical consequences in 73% of patients, regardless of findings on imaging. Bypassing UA and going straight to imaging resulted in clinical consequences in 1.5% (4/268) of patients, whereas performing both a UA and imaging resulted in only a 2% (22/1031) rate of clinical consequences. The authors state that the 0.5% difference in clinical consequences mostly consisted of additional imaging and outpatient follow up, indicating little added value from the initial screening UA. Limitations of this study include the retrospective design, as it was not possible to determine whether physicians ordered imaging based on the UA results or independent of them. In addition, the definition of macroscopic/gross hematuria was subject to the physician's interpretation and could be influenced by certain foods, medications, or menstruation.

An older study, from 1989, prospectively looked at 1146 consecutive patients with either blunt (1007) or penetrating (139) renal trauma (Mee, 1989). Of the 812 patients with blunt trauma and microscopic hematuria without shock (SBP >90), there were no significant injuries (significant = grade 2-5 renal injury). A related study from the same group, but using more data, found that in 1588 blunt trauma patients with microscopic hematuria and no shock, 3 out of 584 (0.5%) who had imaging had significant injuries (Miller, 1995). Of the 1004 that did not get imaging, 51% were followed up and had no significant complications. These studies support the premise that microscopic hematuria rarely picks up significant renal injuries. Of note, in the 436 patients who had gross hematuria, or microscopic hematuria plus shock, 78 significant renal injuries were identified (Miller, 1995).

In the setting of blunt trauma and hemodynamic stability, it appears reasonable to avoid screening UAs and only look for gross hematuria. The practical benefit is that one can make a disposition decision without having to wait for microscopic UA results. In addition, making decisions based on a UA can be falsely reassuring, as bleeding in the kidney parenchyma may not cause hematuria.

 


Trauma, Questions

1.  When do you use tranexamic acid in trauma?

2.  When you can’t get peripheral access in a trauma patient, do you prefer subclavian, femoral, or IO?

3.  Which trauma patients do you give PCC to over FFP?

4.  In blunt abdominal/flank trauma, do you send a urinalysis or simply look for gross hematuria?



DKA, “Answers”

 1. When you are suspicious for DKA do you obtain a VBG or an ABG? How good is a VBG for determining acid/base status?

Diabetic ketoacidosis (DKA) is defined by five findings: acidosis (pH < 7.30), serum bicarbonate (HCO3) < 18 mEq/L, the presence of ketonuria or ketonemia, an anion gap > 10 mEq/L, and a plasma glucose concentration > 250 mg/dL. It is one of the most serious complications of diabetes seen in the emergency department, with an estimated mortality rate of 2-10% among hospitalized DKA patients (Lebovitz, 1995). Prompt recognition is therefore vital to improving outcomes, and emergency physicians have long relied on the combination of hyperglycemia and anion gap metabolic acidosis to point them in the correct diagnostic direction.
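
The five findings above amount to a simple checklist. The sketch below is one way to encode it, with the anion gap computed as Na − (Cl + HCO3); the thresholds are those quoted in the definition above, and the function names and patient values are purely illustrative.

```python
# A minimal checklist for the DKA definition quoted above.
# Thresholds come from that definition; the anion gap is Na - (Cl + HCO3).

def anion_gap(na: float, cl: float, hco3: float) -> float:
    """Serum anion gap in mEq/L."""
    return na - (cl + hco3)

def meets_dka_definition(ph: float, hco3: float, glucose: float,
                         na: float, cl: float, ketones_present: bool) -> bool:
    return (
        ph < 7.30 and
        hco3 < 18 and
        glucose > 250 and                 # mg/dL
        anion_gap(na, cl, hco3) > 10 and  # mEq/L
        ketones_present                   # ketonemia or ketonuria
    )

# Hypothetical patient values for illustration:
print(meets_dka_definition(ph=7.12, hco3=10, glucose=480,
                           na=134, cl=98, ketones_present=True))   # True
```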

In the assessment of the degree of acidosis in a DKA patient, an arterial blood gas (ABG) has long been thought of as much more accurate than a venous blood gas (VBG) and thus necessary for evaluating a DKA patient's pH and HCO3 level, two values often used to direct treatment decisions. An ABG is more painful and often time-consuming and labor intensive, as it may involve multiple attempts. In addition, ABGs can be complicated by radial artery aneurysms, radial nerve injury, and compromised blood supply in patients with peripheral vascular disease or inadequate ulnar circulation. A VBG is less painful, can be obtained at the time of IV placement, and is therefore less time consuming. But is it good enough to estimate acid/base status in these patients?

Brandenburg, et al. compared arterial and venous blood gas samples in DKA patients drawn at the same time prior to treatment and found a mean difference in pH between the arterial and venous samples of only 0.03, with a Pearson's correlation coefficient of 0.97 (Brandenburg, 1998). Gokel, et al. also demonstrated, in twenty-one DKA patients, a mean difference in arterial and venous pH of 0.05 ± 0.01 and a mean difference in arterial and venous HCO3- of 1.88 ± 0.4 (Gokel, 2000). A study of 195 patients in 2003 showed similar correlation between arterial and venous pH, with a correlation coefficient of r = 0.951 (Ma, 2003). Further studies comparing ABG and VBG results in pathologically diverse groups of patients, both in the ICU and the ED, have achieved similar results (Malatesha, 2007; Middleton, 2006).

Ma, et al. went further and asked physicians to make diagnosis, treatment and disposition decisions without seeing the ABG results first. They found that the results affected diagnosis in only 1% of patients, and treatment in only 3.5% of patients (Ma, 2003).

As a result, the Joint British Diabetes Societies 2011 Guidelines for the Management of DKA advise using a VBG not only in the initial assessment of acid/base status, but also to monitor the progress of treatment (Savage, 2011). In summary, it appears that in patients presenting in DKA, a VBG sample is an adequate substitute for an ABG in determining a patient's pH and HCO3- level, with only a minor degree of inaccuracy that is not clinically significant enough to alter treatment decisions.

Bottom Line: A VBG is adequate for the diagnosis and ongoing management of patients with DKA. ABGs offer no added benefit and are associated with increased pain and complications.

2. Do you use serum or urine ketones to guide your diagnosis and treatment of DKA?

Although the presence of ketones is part of the DKA definition, many clinicians make the diagnosis based on acidosis, decreased serum HCO3, and the presence of an anion gap alone. The presence of ketones, however, is superior to HCO3 in making the diagnosis (Sheikh-Ali, 2008). Serum or urine samples can be used to detect ketones, but urine testing is more rapid and thus more likely to be utilized. Unfortunately, urine testing may be misleading. In DKA, fatty acid breakdown results in the production of two major ketone bodies: acetoacetate and beta-hydroxybutyrate. Beta-hydroxybutyrate is the predominant ketone, but urinalysis detects only acetoacetate via the nitroprusside assay (Marliss, 1970). Thus, early in DKA, the urinalysis may be negative for ketones and falsely reassuring. This has prompted many clinicians to order serum ketone testing. Serum testing also offers a quantitative measure of ketones instead of the simple qualitative measure of a urine test (Foreback, 1997). However, serum beta-hydroxybutyrate testing is unavailable in many hospital systems and may not elucidate the entire clinical picture by itself (Fulop, 1999).

Additionally, as the patient is treated for DKA, beta-hydroxybutyrate is converted back to acetoacetate. Appropriate treatment may therefore cause a more strongly positive nitroprusside assay reaction for ketones, misleading the physician into thinking the patient is not improving or is worsening. Following serum ketones to assess for DKA improvement, however, has not been shown to be superior to clinical evaluation.

Where does this leave us? In patients presenting with clinical signs and symptoms of DKA, serum pH, HCO3, glucose, and anion gap should be assessed. A urinalysis should be checked for the presence of ketones; if positive, serum ketone testing in the emergency department is unnecessary. However, if urine ketones are not present and the diagnosis is unclear, the addition of serum ketones (specifically beta-hydroxybutyrate) seems reasonable. There is no evidence to suggest that following serum ketones during treatment is necessary.

Bottom Line: Patients with DKA may present with a weak or absent nitroprusside assay reaction on urinalysis for ketones as this test only checks for acetoacetate (the minor ketone body produced in DKA). Serum beta-hydroxybutyrate testing may be helpful in certain cases in making the diagnosis.

3. Do you use IV bicarbonate administration for the treatment of severe acidosis in DKA? If so, when?

The cornerstones of DKA treatment involve reversal of the effects of osmotic diuresis with fluids and electrolyte repletion as well as correcting the acidemia present in these patients. Treatment with sodium bicarbonate has frequently been recommended to assist in raising the pH to a “safer level.”

However, recent evidence shows that bicarbonate is not only ineffective in correcting acidemia but may also be detrimental. Morris, et al. studied twenty-one patients with severe DKA (pH 6.9-7.14) and found no significant difference in the decline in glucose concentrations, the decline in ketone levels, the rate of increase in pH, or the time to reach a serum glucose of 250 mg/dL or a pH of 7.3 between patients treated with and without bicarbonate (Morris, 1986). In 2013, a study of 86 patients with DKA confirmed these findings: patients who received bicarbonate had no significant difference in time to resolution of acidosis or time to hospital discharge (Duhon, 2013), though insulin and fluid requirements were higher in the bicarbonate group. A pediatric study of severe DKA (pH < 7.15) found that 39% of patients were successfully treated without bicarbonate, with a comparable number of complications (Green, 1998).

In addition to its apparent lack of efficacy, numerous studies have pointed to potential deleterious effects. Okuda, et al. showed in seven patients with DKA that those receiving bicarbonate as part of their treatment had a 6-hour delay in the improvement of ketosis compared to the control group (Okuda, 1995). Bicarbonate has also been found to worsen hypokalemia and can cause paradoxical intracellular and central nervous system acidosis (Viallon, 1999). Additionally, a bicarbonate infusion shifts the oxyhemoglobin dissociation curve, decreasing tissue oxygen uptake, and has been associated with (although not shown to cause) cerebral edema in pediatric patients.

In spite of the lack of evidence, the American Diabetes Association continues to recommend the use of bicarbonate in patients with a serum pH < 7.0 (Kitabchi, 2006). However, in the face of mounting evidence and a lack of support in the literature, this recommendation should be readdressed. A systematic review of 44 studies, including three randomized clinical trials in adults, found no clinical efficacy for the use of bicarbonate in DKA (Chua, 2011). Of note, none of the trials cited in the ADA recommendations or the systematic review included patients with an initial pH < 6.85, making it difficult for the clinician to know what to do in cases of such severe acidosis.

Bottom Line: There is no established role for administration of sodium bicarbonate to patients with DKA regardless of their pH. Sodium bicarbonate administration is associated with more complications including hypokalemia and cerebral edema.

4. When do you start an insulin infusion in patients with hypokalemia? Do you give a bolus followed by a drip?

Insulin administration is paramount to successful treatment of the DKA patient, as it halts the mobilization of free fatty acids and the production of ketoacids and glucose, thereby correcting the acidosis and ketosis. Prior to the isolation of insulin for medical use, the mortality of DKA approached 100%. DKA patients, however, often have profound potassium losses secondary to the osmotic diuresis that occurs with such a hyperglycemic state. As a result, about 5-10% of patients with DKA will present with hypokalemia (Aurora, 2012). In addition to its other functions, insulin drives potassium from the serum into the cells. Thus it is vital to know the serum potassium level prior to starting insulin therapy in order to avoid a lethal hypokalemia-induced dysrhythmia. An EKG can also assist in detecting signs of hypo- or hyperkalemia in these patients. The American Diabetes Association recommends beginning insulin therapy once the potassium level is repleted to > 3.3 mEq/L. Below a potassium level of 5.5 mEq/L, 20-30 mEq of KCl should be added to each liter of fluids to prevent hypokalemia from developing with insulin therapy (Kitabchi, 2006).
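
Expressed as simple decision logic, the potassium thresholds described above (hold insulin until K+ > 3.3 mEq/L; add 20-30 mEq KCl per liter of fluid when K+ < 5.5 mEq/L) might look like the sketch below. This is an illustration of the cited recommendation, not a treatment protocol.

```python
# Illustration of the potassium thresholds quoted above (Kitabchi, 2006).
# Not a treatment protocol; a sketch of the decision logic only.

def dka_potassium_plan(k_serum: float) -> str:
    """Return the potassium-related step before starting insulin in DKA."""
    if k_serum < 3.3:
        return "Hold insulin; replete potassium first"
    elif k_serum < 5.5:
        return "Start insulin; add 20-30 mEq KCl to each liter of IV fluid"
    else:
        return "Start insulin; no KCl in fluids, recheck potassium"

for k in (2.9, 4.2, 5.8):
    print(f"K+ {k} mEq/L -> {dka_potassium_plan(k)}")
```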

Traditional teaching in DKA treatment recommends an initial bolus of insulin followed by an infusion. The bolus was believed to rapidly activate insulin receptors and lead to resolution of hyperglycemia, ketosis, and acidosis. Recent literature, however, has shown that this initial bolus is likely unnecessary and may pose harm by creating a greater risk of hypoglycemic events. A randomized trial in 2008 demonstrated that giving patients a bolus of insulin followed by a drip (at 0.07 units/kg/hr) resulted in a brief period of supranormal insulin levels followed by a plateau at subnormal levels (Kitabchi, 2008). Providing an infusion at 0.14 units/kg/hr, however, resulted in a serum insulin plateau more consistent with normal physiology. Goyal, et al. divided 157 patients into two groups, treating half with an insulin bolus plus drip and half with a drip alone, and found no statistically significant differences in the rate of change of glucose (approximately 60 mg/dL/hr decrease in both groups), change in anion gap, or length of stay in the ED or the hospital (Goyal, 2007). Patients treated with a bolus plus infusion also had more side effects, including more episodes of hypoglycemia and higher potassium requirements (although these were trends seen in this small observational study, neither reached statistical significance).

Most current guidelines state that an initial insulin infusion rate of 0.1 units/kg/hr is acceptable. If the infusion does not cause the serum glucose to drop by 50-70 mg/dL in the first hour, the rate may be doubled until a steady decrease is achieved.
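
As a worked example of this dosing, the sketch below computes a weight-based starting rate of 0.1 units/kg/hr and flags an hourly glucose drop below the 50-70 mg/dL target at which the rate may be doubled. The patient weight and glucose values are hypothetical.

```python
# Worked example of the weight-based dosing and titration check described above.
# Rates and thresholds are the figures quoted in the text; patient values are made up.

def initial_infusion_rate(weight_kg: float, units_per_kg_per_hr: float = 0.1) -> float:
    """Starting insulin infusion rate in units/hr."""
    return weight_kg * units_per_kg_per_hr

def should_double_rate(glucose_drop_per_hr: float) -> bool:
    """True if the hourly glucose drop is below the 50 mg/dL/hr target."""
    return glucose_drop_per_hr < 50

weight = 80                          # kg, hypothetical patient
rate = initial_infusion_rate(weight) # 8.0 units/hr
drop = 480 - 450                     # glucose fell only 30 mg/dL in the first hour
print(f"Start at {rate:.1f} units/hr; double the rate: {should_double_rate(drop)}")
# -> Start at 8.0 units/hr; double the rate: True
```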

Bottom Line: Insulin should not be started in patients with DKA until the serum potassium level is confirmed to be > 3.5 mEq/L. The use of an insulin bolus prior to infusion has not been shown to improve any patient centered outcomes or surrogate markers and is associated with an increased rate of hypoglycemic episodes.


DKA, Questions

1. When you are suspicious for DKA do you get a VBG or an ABG? How good is a VBG for determining acid/base status?
2. Do you use serum or urine ketones to guide your diagnosis and treatment of DKA?
3. Do you use IV bicarbonate for the treatment of severe acidosis in DKA? If so, when?
4. When do you start an insulin infusion in patients with hypokalemia? Bolus or no bolus?



Eye Emergencies, “Answers”

1.) Do you prescribe ophthalmic topical anesthetics to patients with corneal abrasions who complain of severe pain?

Corneal abrasion is one of the most common acute eye complaints presenting to the ED, accounting for approximately 10% of eye-related ED visits (Verma, 2013). The cornea is highly innervated, and even small abrasions can cause significant pain. The use of topical ophthalmologic anesthetics was first documented in 1818 with erythroxylum coca (the plant from which cocaine is derived); these agents are quite effective at blocking nerve conduction in the superficial cornea and conjunctiva, thus eliminating the sensation of pain (Rosenwasser, 1989).

There are a number of proposed dangers in using topical anesthetics for corneal abrasions. These include inhibition of mitosis (and subsequently delayed healing) and decreased corneal sensation, with the fear that the abrasion will progress to an ulcer without the patient noticing. Additionally, these agents may be directly toxic to the corneal epithelium with prolonged use.

These theoretical dangers could potentially lead to keratitis, edema, erosion, and the formation of infiltrates and opacities. These concerns prompted early research into the effects of topical anesthetics on the cornea. The adage that topical anesthetics should not be prescribed to patients with corneal abrasions originated from animal studies and case reports dating back to the 1960s. Many of the animal studies were done on enucleated rat and rabbit eyes or animal cell preparations. This research may not, for obvious reasons, be applicable to living human subjects.

The earliest human studies date back to the 1960s and '70s, and are mostly small case reports of patients using topical anesthetics inappropriately. The first such study was a case report of five patients who used topical anesthetics chronically, resulting in keratitis (Epstein, 1968). All five patients used topical anesthetics for a prolonged period of time, too frequently, or without physician supervision or proper examination prior to application. In contrast to the inappropriate uses detailed in the case reports, topical anesthetics commonly used to facilitate slit lamp examinations include tetracaine 0.5% or proparacaine 0.5%. A theoretical prescription regimen would be a short course (2-3 days) of a dilute topical anesthetic used only a few times daily (every 4-6 hours as needed).

The next case report condemning the use of topical anesthetics was published two years later, and examined the outcomes of nine patients who misused topical anesthetics: either too frequently, for prolonged periods of time, or without appropriate physician supervision or examination (Willis, 1970). Of these nine patients, only one used the medication in a somewhat reasonable manner (a 46-year-old factory worker who used topical anesthetic every two hours for two days); however, it is unclear from the paper whether he received a proper slit lamp examination on initial evaluation or was given the drops empirically. When he saw an ophthalmologist two days later, he was diagnosed with anterior uveitis and epithelial erosion, which may have been present at the time of initial injury.

More recent case studies specifically address topical anesthetic abuse and its effects on the cornea (Erdem, 2013; Yeniad, 2010). Types of misuse seen in the literature include using higher concentrations of topical anesthetics, using with excessive frequency, or using for prolonged periods of time. To date, there are no studies that show adverse outcomes from short courses of dilute topical anesthetic with use limited to every 4-6 hours as needed.

There are studies demonstrating the safety of topical anesthetics from the ophthalmology literature. PRK (photorefractive keratectomy) is a type of laser vision correction surgery that involves ablation of a small amount of tissue from the corneal stroma, thus creating an epithelial deficit (similar to a corneal abrasion). In a two-part study, proparacaine was first administered to healthy volunteers in different concentrations to assess anesthetic efficacy (Shahinian, 1997). Dilute (0.05%) proparacaine was then given to healthy volunteers to determine the safety of excessive use. No corneal toxicity was observed. In the second part of the double-blinded study, 34 PRK patients were prospectively randomized into a treatment group (proparacaine 0.05% for one week as needed) or placebo group (artificial tears). Both groups also received oral opioids and topical NSAIDS. Patients in the treatment group reported significantly decreased pain scores, longer duration of pain relief, and decreased opioid use compared to the placebo group.

Another study in the PRK literature looked at post-operative patients given approximately ten drops of tetracaine 0.5% to use as needed (Brilakis, 2000). Patients were re-examined on post-op days 1 and 3. The study found that all of the eyes had healed within 72 hours and use of the tetracaine drops did not prolong time to re-epithelialization.

There are some studies in the emergency medicine literature which support the use of topical anesthetics. One such study was a prospective, randomized controlled trial that included adults with corneal injuries presenting to one of two tertiary emergency departments in Ontario (Ball, 2010). Participants were randomized to receive either proparacaine 0.05% or placebo drops and were followed up by an ophthalmologist on days 1, 3, and 5. All patients were also prescribed topical NSAIDS and oral acetaminophen with codeine, and were told to take the study drops 2-4 at a time as needed. Patients were prescribed 40 mL of drops. The study was small (only 15 patients in the proparacaine group and 18 patients in the placebo group), but showed significantly better pain reduction and decreased opioid use in the proparacaine group. There were no ocular complications or delay in healing in either group.

Another recently published 12-month prospective, double-blinded randomized trial assessed a convenience sample of 116 patients with uncomplicated corneal abrasions (Waldman, 2014). Study participants were randomized to receive either 1% tetracaine or saline every 30 minutes as needed for twenty-four hours. Results showed no complications attributed to topical anesthetics, and no statistically significant difference in corneal healing at 48 hours. To assess pain control, both a visual analogue scale and a patient-reported numeric rating scale for overall effectiveness were used. While no difference was seen between the two groups on the visual analogue scale, patients rated tetracaine as having a better overall effectiveness on the numerical rating scale. Although 48-hour follow-up was relatively low (64% in the saline group and 69% in the tetracaine group), the study found that topical tetracaine used for 24 hours was safe and that patients perceived a better overall effectiveness with tetracaine. Both blinding of treatment groups and the pain scores may have been compromised by the burning sensation that accompanies initial tetracaine application.

Bottom Line: Major EM textbooks still discourage prescribing topical anesthetics for corneal abrasions. In spite of this, there is mounting evidence in the EM literature that topical anesthetics are safe and effective for the treatment of pain in corneal abrasions. It may be reasonable to send selected, reliable patients home with a limited supply of topical anesthetic agents along with strict instructions for return to the ED and 48-hour follow-up with an ophthalmologist. Larger randomized, controlled, ED-based studies are needed before the safety of this practice can be fully elucidated, and thus, at this time, treatment with topical anesthetics cannot be unreservedly recommended.

2.) When do you schedule ophthalmology follow up for patients with corneal abrasions?

The cornea functions to protect the eye, filter UV light, and refract light to allow for image formation. To properly refract light the cornea must be completely transparent; it is therefore avascular and obtains its nutrients from the aqueous humor, tears, and ambient oxygen. While most corneal abrasions heal quickly and without consequence, there is potential for complications ranging from infection to ulceration to permanent vision loss, especially if the abrasion is not properly treated.

After a corneal abrasion is diagnosed via slit lamp exam, there are various options for further care. Despite numerous review articles offering various recommendations on the optimal follow up method, there is no evidence-based literature to guide this decision.

Several articles recommend 24-hour follow-up, but don’t specify with whom the patient should follow up. A guideline statement from Wilson, et al. recommends that most patients should be re-evaluated in 24 hours and, if the abrasion is not fully healed, additional follow-up is needed (Wilson, 2004). It furthermore states that close attention should be paid to contact lens wearers and immunocompromised patients, and that specific ophthalmology referral is recommended for patients with deep eye injuries, foreign bodies that cannot be removed, and suspected recurrent corneal erosions. Also, patients with persistent symptoms after 72 hours, worsening symptoms, or vision abnormalities should be referred to an ophthalmologist.

In contrast, Khan, et al. suggest that patients with corneal abrasions should be seen specifically by an ophthalmologist within 24-48 hours to assess for healing (Khan, 2013). The article goes on to state that most injuries heal quickly and without infection within 24 hours and that these patients will not need long-term follow-up, with the exception of contact lens users, who may need follow-up over the course of 3-5 days.

A review on EM Updates (Strayer, 2009) recommends immediate ophthalmology evaluation if the corneal abrasion is associated with penetrating injury or infiltrate, and 24-hour ophthalmology evaluation if the abrasion is “high risk,” such as those created by an artificial fingernail or organic matter (which are prone to fungal infections) or those in contact lens wearers (who are prone to bacterial infections, including pseudomonas). All others can be re-evaluated (not necessarily by an ophthalmologist) in 24 hours.

While most sources recommend at least one follow-up visit within 24-48 hours, some recent articles propose that “small” corneal abrasions (definitions of which range from less than 4mm to less than one fourth of the corneal surface area) which are uncomplicated (i.e., no organic material or contact lens use) in reliable patients with normal vision and resolving symptoms may not require follow-up (Wipperman, 2013).

Do practice patterns reflect these varying recommendations? A nationwide Canadian survey study concluded that 88% of ED physicians routinely arranged follow-up for their patients with corneal abrasions (Calder, 2004). Most often it was a return to the emergency department (69%), but 45% referred patients to ophthalmologists and 35% referred to the family physician.

Bottom line: Based on expert consensus, it is a reasonable and safe approach to have every patient re-evaluated in 24-48 hours. Those with “high risk” abrasions that you are worried about can be referred to ophthalmology for this follow-up, and others can most likely be re-evaluated by their primary care doctor or told to return to the ED in 24 hours for re-evaluation to ensure proper healing and the absence of infection.

3.) How soon after presentation do you have a patient with floaters see an ophthalmologist?

Floaters are defined as the perception of moving spots in the visual field of one eye. They are usually black or grey in color, and are caused by either light bending at the interface of fluid pockets in the vitreous jelly or opacities caused by cells within the vitreous. They are a very common condition, especially in patients over the age of fifty. In contrast, flashes (which often accompany floaters) can be described as brief repeated sensations of bright light, typically seen at the periphery of the visual field. Flashes are caused by vitreous traction on the retina. Both floaters and flashes are painless (Hollands, 2009; Margo, 2005). Most cases of floaters and flashes (especially when monocular) are of ocular etiology, the most common of which is posterior vitreous detachment (PVD). However, the differential diagnosis also includes retinal tear or detachment, posterior uveitis and other causes of vitreous inflammation, vitreous hemorrhage (which can result from diabetic retinopathy), macular degeneration, ocular lymphoma, intraocular foreign body, TIA, migraine aura, postural hypotension, and occipital lobe disorders. In contrast to ocular etiologies, extra-ocular causes of floaters and flashes are often bilateral and accompanied by other symptoms (Hollands, 2009).

Posterior vitreous detachment is the most common cause of floaters, and occurs in approximately two thirds of patients over age 65 (Margo, 2005). The posterior vitreous is composed mostly of water and collagen. As we age this structure shrinks in size, causing it to detach from the underlying retina. Although most people will develop PVD at some point in their lives, for the majority it will remain benign and without serious consequences. For others, it may progress to retinal tear, which often appears as a horseshoe-shaped hole in the retina. Tears allow fluid to enter the sub-retinal space, which then leads to retinal detachment. About 33% to 46% of untreated retinal tears will result in retinal detachment (Hollands, 2009). Retinal detachment causes ischemia and photoreceptor degeneration, which progresses to blindness. If retinal detachment is detected early and surgically corrected, vision loss can be prevented or vision even restored.

It is difficult to differentiate PVD from retinal tear or detachment based on history alone. Thus, patients who present with unilateral flashes and floaters require a complete eye exam, including visual acuity, pupillary light reflex, visual fields, slit lamp exam of the anterior and posterior segments, thorough inspection of the vitreous using slit lamp, and dilated fundoscopy. Indirect ophthalmoscopy and scleral depression are useful tools (Margo, 2005), but are not routinely performed by emergency physicians and thus, will not be discussed further. A monocular visual field deficit in the affected eye may represent an area of detached retina. A dilated ophthalmoscopic exam can detect a retinal tear (seen as a hole or defect which is often horseshoe shaped) or retinal detachment (which is seen as a billowing or wrinkled retina). Slit lamp exam may reveal vitreous pigment (“tobacco dust”) or hemorrhage, which is suggestive of retinal tear or detachment.

Often, fundoscopic examination is limited in patients with contraindications to mydriatics, significant periorbital soft tissue swelling, or inability to visualize the posterior segment of the eye due to hyphema, lens opacification, or vitreous hemorrhage (Teismann, 2009). In these cases, ocular ultrasound may be beneficial.   While the sensitivity and specificity of emergency physician performed ocular ultrasound to detect retinal detachment is beyond the scope of this topic, suffice it to say that ultrasound can be helpful to rule in (but not rule out) the diagnosis.

Since we cannot perform as detailed an exam as can be done in an ophthalmologist’s office, our role in the ED is to make the diagnosis of probable PVD and to identify patients who are at risk for progression to retinal tear and detachment. Determining this risk will help differentiate patients who require urgent ophthalmology referral from those who can follow up in a less urgent manner. With time, PVD becomes more stable, and patients with floaters and flashes that have remained unchanged for months to years present a reassuring scenario. In contrast, patients with new onset of floaters and flashes (days to weeks) are more concerning, since the acute phase of tractional forces on the retina makes it prone to developing tears.

In a 2009 meta-analysis, data from 17 different studies regarding patients with acute onset floaters and flashes of suspected ocular origin secondary to PVD demonstrated that 14% were found to have a retinal tear at initial presentation (Hollands, 2009). Besides acute onset of symptoms, other factors found to be predictive of retinal tears included subjective vision reduction and vitreous hemorrhage or pigment (“tobacco dust”) on slit lamp exam. In patients with subjective vision reduction, the prevalence of retinal tears increased from 14% to 45% (likelihood ratio (LR) of 5). The post-test probability of retinal tears in patients with acute onset floaters or flashes (with baseline prevalence of 14%) increased to 62% in patients with vitreous hemorrhage and 88% in patients with vitreous pigment on slit lamp exam. The study also concluded that patients initially diagnosed as having uncomplicated PVD have a 3.4% chance of developing a retinal tear within six weeks. The risk increases with new onset of at least 10 floaters (summary LR 8.1) or subjective vision reduction (summary LR 2.3).
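
For readers who want to check these figures, the conversion from pre-test to post-test probability uses the odds form of Bayes’ theorem; plugging in the paper’s numbers (14% baseline prevalence, LR of 5 for subjective vision reduction) reproduces the quoted 45%:

$$\text{pre-test odds} = \frac{0.14}{1-0.14} \approx 0.163, \qquad \text{post-test odds} = 0.163 \times 5 \approx 0.81, \qquad \text{post-test probability} = \frac{0.81}{1+0.81} \approx 45\%$$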

Schweitzer, et al. performed a prospective cohort study looking for predictive characteristics in patients with acute PVD. They found that vitreous or retinal hemorrhage and a large number and/or high frequency of floaters indicated a high risk for delayed retinal tears within 6 weeks (Schweitzer, 2011). The study was limited by small sample size (99 patients, only two of whom developed delayed retinal tears); however, the results make intuitive sense: the more severe the symptoms at onset, the more likely patients are to progress to retinal tears.

Another study reviewed the charts of 295 patients presenting to an eye clinic with complaints of flashes or floaters, and found that 64% had uncomplicated PVD, 10.5% had retinal tears, and 16.6% had retinal detachments (Dayan, 1996). Although the study did identify features that were predictive of retinal tears, including subjective vision reduction and acute onset of symptoms (less than six weeks), a proportion of patients with retinal tears were found to lack these historical factors. The authors recommend routine follow-up visits within six weeks for patients diagnosed with isolated PVD. It should be noted that the study patients presented to an eye specialty clinic and were evaluated initially by an ophthalmologist using tools that are unavailable in standard EDs (i.e., indirect ophthalmoscopy with scleral indentation). It could be argued that patients presenting to an ED should be referred for follow-up earlier than the six weeks recommended in this study.

A prospective study of 270 patients with symptomatic, isolated PVD found that 3.7% developed new retinal tears within six weeks. Multiple floaters, a curtain or cloud, retinal or vitreous hemorrhages, and an increase in the number of floaters after initial examination were all found to be predictive of new retinal tears (van Overdam, 2005). Like several others described above, this study identified certain features suggestive of retinal tear that would indicate more urgent ophthalmology evaluation, but did not offer specific recommendations regarding timing of follow-up.

Bottom line: Most cases of floaters or flashes are due to PVD. Although PVD often follows a benign course, a small but clinically significant percentage of patients will develop a retinal tear. Left untreated, the tear can lead to detachment and vision loss. A reasonable approach to managing the patient who presents with floaters or flashes would be as follows (Hollands, 2009):

1.)  Start with an exam to rule out obvious retinal tear or detachment on fundoscopy or ultrasound. Either diagnosis requires emergent ophthalmology consultation in the ED.

2.)  Patients with monocular visual field loss suggestive of acute retinal detachment (i.e. “curtain of darkness”) or high-risk features for retinal tear (such as subjective or objective vision reduction or vitreous pigment or hemorrhage on slit lamp exam) also require same day ophthalmology evaluation.

3.)  In the absence of obvious retinal tear or detachment or aforementioned high-risk features, patients with monocular floaters/flashes thought to be of ocular origin should receive urgent ophthalmology referral (within 1-2 weeks) if symptoms are of acute onset. These patients should be counseled regarding high-risk features, and informed that if any of these symptoms develop, they should return to the emergency department or see their ophthalmologist within 24 hours.

4.)  In patients with chronic floaters/flashes that have suddenly increased in number, the case should be discussed with an ophthalmologist to determine the urgency of follow-up.

5.)  Patients with chronic, stable PVD should be counseled regarding high-risk features that suggest more urgent ophthalmology evaluation.

4.) Do you use ultrasound to assess patients for increased intracranial pressure?

Patients who present to the ED with increased intracranial pressure can be quite challenging to evaluate, not only because of their often depressed mental statuses but also because facial trauma and/or patient discomfort may interfere with the ability to perform a fundoscopic exam to assess for papilledema. Ultrasound, which can be done quickly at the bedside in cases where fundoscopy is difficult or impossible, is a useful tool in such circumstances. Multiple studies suggest that emergency physician performed ocular ultrasound to measure optic nerve sheath diameter (ONSD) is fairly sensitive and specific for detecting increased intracranial pressure. One study found that ONSD > 5 mm detects ICP > 20 mm Hg with a sensitivity of 88% and specificity of 93% (Kimberley, 2008). This prospective, blinded observational study was performed using a convenience sample of patients in the emergency department and the neurological ICU who already had invasive intracranial pressure monitors as part of their care. All ONSD measurements were performed by emergency physicians who were blinded to the ICP monitor data. Another study found slightly improved results when a cutoff of 4.8 mm ONSD was used, which was 96% sensitive and 94% specific for ICP > 20 mm Hg (Rajajee, 2011). Like the previous study, the reference standard was ICP measured via invasive monitoring. A third study compared the ONSD of patients with intracranial hemorrhages requiring ICP monitors in an intensive care unit who were sedated and ventilated to the ONSD of ventilated, sedated control patients without intracranial pathology (Moretti, 2008). A threshold of 5.2 mm predicted ICP > 20 mm Hg with 94% sensitivity and 76% specificity.

A prospective study published in Annals of Emergency Medicine found that ONSD greater than 5mm was 100% sensitive and 63% specific for elevated intracranial pressure detected on CT. Furthermore, ONSD > 5mm was 84% sensitive and 73% specific for detection of any traumatic intracranial injury found by CT (Tayal, 2007). Another prospective blinded observational study of a single sonographer who performed 27 ocular ultrasounds in patients with ICP monitors found that ONSD of 5.2mm was 83% sensitive and 100% specific for ICP > 20 mm Hg (Frumin, 2011).

While several papers indicate that ocular ultrasound to measure ONSD does correlate with increased intracranial pressure, much of the literature is based on small observational studies. Large randomized controlled trials are lacking.

While most studies use ONSD as a surrogate for intracranial pressure, a blinded prospective observational study compared point-of-care, emergency physician-performed ultrasound measurement of optic disc height to both ophthalmology-performed dilated fundoscopic exam (primary outcome) and optical coherence tomography (secondary outcome). In contrast to optic nerve sheath diameter, optic disc height refers to the budding of the optic disc into the hypoechoic globe on ultrasound. Results of the study showed that a disc height greater than 0.6 mm predicted papilledema with a sensitivity of 82% and specificity of 76%. When the disc height threshold was increased to 1.0 mm, sensitivity decreased to 73% but specificity increased to 100% (Teismann, 2013).

Much of the evidence for sonographic ONSD measurement comes from head trauma literature. Can ocular ultrasound be used to evaluate non-traumatic etiologies of increased intracranial pressure? Unfortunately, large randomized controlled studies are lacking. A case report of a patient who presented to the emergency department with headache and photophobia who was ultimately diagnosed with pseudotumor cerebri found her ONSD to be 7mm (Stone, 2009). Another paper describes three patients with optic disc swelling due to idiopathic intracranial hypertension, secondary syphilis, and malignant hypertension in which ocular ultrasound revealed elevated optic disc height (Daulaire, 2012).

Two prospective studies evaluated patients presenting to the emergency department who were suspected of having elevated intracranial pressure for various non-traumatic reasons (CVA, SAH, tumor, meningitis, etc.). The first study assessed 26 patients who required CT in the emergency department due to concern for elevated ICP. Prior to CT, all patients received ocular ultrasound to measure ONSD. Using a cut-off of 5mm, ONSD was found to be 100% specific and 84% sensitive for increased ICP on CT. Furthermore, ONSD was 60% sensitive and 100% specific for any acute intracranial abnormality detected on CT (Major, 2011). The second study evaluated fifty patients deemed to be candidates for lumbar puncture due to concern for various diagnoses. Immediately prior to lumbar puncture, ONSD was measured using ultrasound. The mean ONSD for patients with ICP > 20 mm Hg (determined by opening pressure on LP) was 6.66 mm compared to 4.6 mm in patients with normal ICP. Using ROC curves, a cutoff of 5.5 mm predicted ICP > 20 mm Hg with 100% sensitivity and specificity (Amini, 2012).

The literature for ONSD in the evaluation of hydrocephalus is conflicting. In children with VP shunt malfunction, symptoms often overlap with other common childhood illnesses such as viral syndrome or viral gastroenteritis, making timely diagnosis difficult despite the obvious urgency of the situation. Furthermore, CT and MRI are insensitive for shunt malfunction, missing as many as one third of patients. One prospective observational study of pediatric emergency department patients presenting with possible VP shunt malfunction found no statistically significant difference between the ONSD measurements in patients with VP shunt malfunction compared to patients with functional VP shunts (Hall, 2013). Another study showed more promising results: pediatric patients with functioning VP shunts had a mean ONSD of 2.9 mm compared to 5.6 mm in patients with shunt malfunction (Newman, 2002).

Bottom line: Ocular ultrasound can be useful in detecting elevated ICP, especially in the setting of head trauma when fundoscopy is difficult or impossible. Larger studies are needed to confirm these findings.

 

 


Eye Emergencies, Questions

1. Do you prescribe ophthalmic topical anesthetics to patients with corneal abrasions who complain of severe pain?

2. When do you schedule ophthalmology follow up for patients with corneal abrasions?

3. How soon after presentation do you have a patient with floaters see an ophthalmologist?

 

4. Do you use ultrasound to assess patients for increased intracranial pressure?


Monoarticular Arthritis, “Answers”

1. When do you tap a painful swollen joint? When do you obtain imaging before arthrocentesis?

Acute monoarticular arthritis is an inflammatory process involving a single joint that develops over a period of less than 2 weeks. Possible etiologies include infection, crystal arthropathy, trauma, Lyme disease, and rheumatoid arthritis. The most feared cause is septic arthritis, as failure to diagnose it can lead to significant morbidity and mortality. It can result in permanent disability, with destruction of cartilage in a matter of days. Even treated infections have been associated with an in-hospital mortality rate of up to 15% (Carpenter, 2011). Therefore the main concern in the ED is to diagnose or rule out septic arthritis.

There are very few published practice guidelines, from any of the myriad specialties (rheumatology, orthopedics, EM, etc.) that encounter this complaint, that explicitly indicate when arthrocentesis should be performed.

EB Medicine’s Emergency Medicine Practice cites the following 4 indications:

  • Obtaining synovial fluid for lab analysis
  • Draining traumatic tense hemarthroses
  • Determining whether a laceration communicates with a joint
  • Injecting analgesics and/or anti-inflammatory medications

With regard to the first indication, the authors do not specify when synovial fluid analysis should be done, other than endorsing arthrocentesis whenever septic arthritis is in the differential diagnosis (Genes, 2012).

So when should we consider septic arthritis in the differential? Under what other circumstances should we send synovial fluid? In the setting of a painful swollen joint, the first step is to differentiate true articular inflammation from periarticular inflammation. The latter includes bursitis, tendonitis, and cellulitis. These are often associated with pain and swelling in a nonuniform distribution over the joint and limited active range of motion, whereas true arthritis is more associated with generalized pain and swelling, and limitations in both active and passive range of motion (Genes, 2012).

Once you have determined that there is a joint effusion, further physical exam and history may provide clues to the etiology. For example, dry scaly plaques on the skin would suggest psoriatic arthritis, while tophi would suggest gout. However, an important point here is that even if you have a high suspicion for one of these processes, it does not rule out concomitant septic arthritis. Patients with chronic joint disease are actually at increased risk of septic arthritis. Yu, et al. reported 30 cases of concomitant septic and gouty arthritis in their Taiwanese hospital over a 14 year period, underlining the importance of maintaining a high level of suspicion for septic arthritis (Yu, 2013).

So what historical aspects should prompt heightened suspicion? A Dutch prospective study of 5000 rheumatology patients found that the likelihood of septic arthritis increases with the following historical aspects (Kaandorp, 1997):

  • Skin infection + prosthesis (+LR 15)
  • Joint surgery within past 3 mo (+LR 6.9)
  • Rheumatoid arthritis (+LR 3.5)
  • Age >80 (+LR 3.5)
  • Hip/knee prosthesis (+LR 3.1)
  • Skin infection (+LR 2.8)
  • Diabetes (+LR 2.7)

Carpenter, et al. conducted a systematic review of 32 trials to determine whether any physical exam characteristics altered the post-test probability of septic arthritis. Exam findings had variable sensitivities across different studies (Carpenter, 2011):

  • Pain with motion (100%)
  • Limited motion (92%)
  • Tenderness (68-100%)
  • Effusion (92%)
  • Swelling (45-92%)
  • Warmth (18-92%)
  • Erythema (13-62%)

No studies have looked at clinical gestalt in predicting septic arthritis.

It is generally accepted that x-rays are of little value in the work-up of atraumatic, acute, monoarticular arthritis in the ED. Changes associated with septic arthritis are not seen early. Signs suggestive of other arthritides, such as osteophytes, joint space narrowing, bony erosions, or chondrocalcinosis, may be interesting findings but will not rule out septic arthritis or change management. The only utility of plain films is to provide a baseline for future imaging. CT and MRI are also not indicated, unless there is suspicion of osteomyelitis.

Bottom line: Tap the joint if there is an acute, unexplained, and atraumatic painful joint effusion. If the patient has a history of gout or RA but there is still suspicion for septic arthritis, tap the joint. Consider x-ray but recognize that it will not change your management.

2. In a patient with monoarticular arthritis, do you send any serum labs such as CBC, ESR, or CRP? How do they guide your management?

While it is common to send a serum CBC, ESR, and CRP when suspecting septic arthritis, these tests are not helpful in guiding management. The first issue is that they are very nonspecific. Unfortunately, their sensitivities are also unreliable.

Carpenter, et al. performed a systematic review which analyzed the sensitivities of CBC, ESR, and CRP. Five studies found sensitivities ranging from 42-90% for WBC >10,000. One study yielded a sensitivity of 75% for WBC >11,000. Two studies found sensitivities of 23% and 30% for WBC >14,000. Only two studies calculated likelihood ratios. Jeng, et al. reported a +LR of 1.4 and a –LR of 0.28 for WBC >10,000 (Jeng, 1997), and Li, et al. reported a +LR of 1.7 and a –LR of 0.84 for WBC >11,000 (Li, 2007).

Seven studies looked at various cutoff values for ESR and found sensitivities ranging from 18-95% that had no correlation with the different ESR values being investigated. The same was true of four studies looking at various cutoff values of CRP, with sensitivities ranging from 44-91% in a random fashion. As with WBC, none of the studies that calculated specificities and LRs showed any values that significantly changed the posttest probability of septic arthritis, with the exception of one study, which reported a +LR of 7 if ESR was >100 (Martinot, 2005).

In short, there is no cutoff value of WBC, ESR, or CRP at which the posttest probability of septic arthritis is significantly increased, nor any value below which septic arthritis can safely be ruled out.

Several less commonly sent serum labs have also been investigated. Soderquist, et al. looked at procalcitonin, TNF-α, IL-6, and IL-1β and found that all were quite specific but lacked sensitivity (Soderquist, 1998). Two additional studies also analyzed procalcitonin and concluded the same. Therefore, even if these tests yielded results in a timely fashion, none of them would be helpful when trying to rule out septic arthritis.

When there is suspicion of gout, serum uric acid is often sent, but again this test is not very sensitive as the value is frequently normal in acute gouty arthritis. Confirmation of Lyme requires IgM and IgG serology, which will not come back while the patient is still in the ED, but may be helpful later and therefore should be sent if suspicion is high (Genes, 2012).

Bottom line: No serum lab will change your management, nor will it rule in or out septic arthritis. Orthopedics and rheumatology will most likely want them regardless.

3. Which synovial fluid studies do you send in order to help make the diagnosis? Which of these rule out septic arthritis?

The gold standard for confirming a diagnosis of septic arthritis is a positive synovial fluid culture; however, it may take several days for cultures to grow, which makes them of little use in the emergent setting. Gram stains may result more quickly and offer the ability to tailor antibiotic treatment. Unfortunately, the yield of gram stains in septic arthritis is only 50-80%. The other synovial fluid labs that are typically sent are of varying utility (Genes, 2012).

Textbooks often cite ranges of synovial WBC values (sWBC) that are associated with normal joints, inflammatory processes, or septic arthritis. It may be more accurate to say that the likelihood of septic arthritis increases with the sWBC, and that for values >100,000 the likelihood is very high. Margaretten, et al. performed a systematic review which looked at 5 studies that each collected data for sWBC cutoffs of 25,000, 50,000, and 100,000. The averaged +LRs were 2.9, 7.7, and 28, respectively, suggesting a significant increase in post test probability for the higher two thresholds. Perhaps the most important point to make is that there is no value of sWBC at which one can safely rule out septic arthritis. Average sensitivities were 77%, 62%, and 29%, respectively, indicating that many patients with septic arthritis do not have exceedingly high sWBC values (Margaretten, 2007).
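
As a rough illustration of why only the higher thresholds meaningfully shift the diagnosis (the 20% pre-test probability below is hypothetical, chosen only to make the arithmetic concrete), applying the same odds form of Bayes’ theorem used earlier to pre-test odds of 0.25 gives:

$$0.25 \times 2.9 \approx 0.73 \Rightarrow 42\%, \qquad 0.25 \times 7.7 \approx 1.9 \Rightarrow 66\%, \qquad 0.25 \times 28 = 7 \Rightarrow 88\%$$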

Four of the above studies also analyzed synovial polymorphonuclear cell counts using the often cited >90% as the cutoff. +LRs ranged from 1.8-4.2, which are not significant values for diagnostic purposes.

It is important to be aware that patients with infected prosthetic joints often present with lower sWBC and sPMN counts. One study found that cutoffs of 1,700 for sWBC and 65% for sPMNs were both sensitive and specific in this population (Trampuz, 2004).

Glucose and protein in the synovial fluid do not alter the posttest probability of septic arthritis. Two studies investigating decreased glucose found sensitivities of 56-64% and specificities of 85%. Only one investigated increased protein and reported 50% sensitivity and 47% specificity (Schmerling, 1990; Soderquist, 1998).

One of these studies also investigated synovial LDH and found 100% sensitivity for LDH >250, suggesting that septic arthritis could be ruled out if LDH is <250; however, this is the only study of its kind and was associated with a large number of false positives (specificity 51%) (Schmerling, 1990).

Serum lactate has become one of the most important diagnostic studies sent in suspected sepsis. Likewise, synovial lactate has shown promising diagnostic accuracy in septic arthritis. A recent study showed a +LR approaching infinity for synovial lactate >10 (Lenski, 2014). Several other studies have yielded similar supporting evidence for synovial lactate in differentiating septic arthritis from other etiologies such as gout and rheumatoid arthritis. (Brook, 1978; Mossman, 1981; Riordan, 1982; Gobelet, 1984)

For the diagnosis of gout, the gold standard is the finding of negatively birefringent (MSU) crystals in the synovial fluid (and no organisms). Likewise, the gold standard for pseudogout is rhomboidal, positively birefringent (CPPD) crystals. No cases of either entity have been reported in the absence of the corresponding crystals. However, the finding of crystals does not necessarily explain an acute episode of joint pain, as they can also be found in the synovial fluid of asymptomatic patients (Pascual, 2011).

Bottom line: Synovial fluid culture is the gold standard for diagnosis of septic arthritis. sWBC may be helpful in that a very high value significantly raises the likelihood of septic arthritis (many say 50,000, but 100,000 is much more specific), but a low value does not rule it out. Synovial lactate is a promising test that may become more widely available in the future.

4. Do you inject the joint with any medication for symptomatic relief? If so, which medication?

Corticosteroids were the first substances to be injected intra-articularly (IA) for joint pain relief. First described by Hollander in the 1950s, IA steroids have been shown to decrease leukocyte secretion from the synovium as well as neutrophil migration into inflamed joints. This elevates the hyaluronic acid concentration in the joint and therefore the synovial fluid viscosity (Snibbe, 2005). Local injection has been shown to avoid many of the adverse effects of systemic steroids.

Furtado, et al. studied the use of IA steroids in rheumatoid arthritis and found that they gave better results than systemic steroids in terms of side effects, hospitalization, and patients’ subjective reporting of pain and overall disease (Furtado, 2005). Because steroids work on inflamed synovium, results are less favorable in joint pain caused by weight-bearing forces, as in osteoarthritis or sports-related injuries (Snibbe, 2005).

There are several options for steroids. In order of decreasing solubility, which corresponds to increasing duration of effect, they include dexamethasone, hydrocortisone, methylprednisolone, prednisolone, and triamcinolone (Lavelle, 2007).

There are no guidelines for the administration of IA steroids in terms of indication. There are, however, some contraindications. The most important is suspected infection of the joint space or overlying soft tissue. Others include joint prosthesis and bleeding diathesis. There are also concerns about steroid injections producing local adverse effects such as tendon and ligament rupture, soft tissue atrophy, and joint capsule calcification (Snibbe, 2005). As a result, many practitioners will limit the number of injections they give and will not give a repeat dose for at least 3 months (Lavelle, 2007). It is also worthwhile to know that injection of crystalline corticosteroid material can potentially interfere with synovial fluid crystal analysis (Parillo, 2010).

Local anesthetic injections are another option for pain relief. Most of the literature supporting their use comes from orthopedic studies looking at the post-operative period, in particular after arthroscopy. Bupivacaine is typically the drug of choice due to its long duration of action. A systematic review of double-blind, randomized, controlled trials comparing IA local anesthetics to placebo showed a statistically significant decrease in both pain scores and additional analgesic requirements (Lavelle, 2007).

The problem with local anesthetics is the potential for cartilage destruction. This has only been found in animals and has not been studied in humans, but concern is high enough that most practitioners limit the number and frequency of injections. There have not been any reports of an ED patient having adverse effects from a single IA dose of a local anesthetic (Genes, 2012), but a study on rats showed prolonged chondrotoxicity after a single IA dose of bupivacaine (Chu, 2010).

Intra-articular opioids (morphine, fentanyl, etc.) have been found to be effective in inflamed tissues, in which the perineurium is disrupted and the opioids have better access to nerve receptors (Lavelle, 2007). Stein, et al. showed that IA morphine actually acted on peripheral receptors rather than systemically by demonstrating that the pain reduction resulting from IA morphine was reversed by injection of IA naloxone (Stein, 1991). Since then the efficacy of IA morphine has been debated, with a myriad of studies investigating its analgesic effects in patients post-op from arthroscopy and ACL repair. One systematic review looked at 19 of these studies and determined that IA morphine had a “mild analgesic effect” (Gupta, 2001). Meanwhile, another study found very favorable results for IA morphine in patients with chronic knee pain from osteoarthritis, with an analgesic effect that lasted longer than a week (Likar, 1997).

Other substances that may be injected into painful joints are hyaluronic acid, ketorolac, and clonidine (Lavelle, 2007), but none has been sufficiently studied or is commonly used in the ED.

Bottom line: The best options for intra-articular injections for pain control are steroids, local anesthetics, or morphine. All have been subject to controversy surrounding their efficacies and adverse effects. Steroids should be withheld in suspected infection.
