1. When do you use tranexamic acid in trauma?
3. Which trauma patients do you give PCC to over FFP?
4. In blunt abdominal/flank trauma, do you send a urinalysis or simply look for gross hematuria?
1. When you are suspicious for DKA do you obtain a VBG or an ABG? How good is a VBG for determining acid/base status?
Diabetic ketoacidosis (DKA) is defined by five findings: acidosis (pH < 7.30), serum bicarbonate (HCO3) < 18 mEq/L, the presence of ketonuria or ketonemia, an anion gap > 10 mEq/L, and a plasma glucose concentration > 250 mg/dL. It is one of the most serious complications of diabetes seen in the emergency department, with a mortality rate in hospitalized DKA patients estimated at 2-10% (Lebovitz, 1995). Prompt recognition is therefore vital to improving outcomes, and emergency physicians have long relied on the combination of hyperglycemia and anion gap metabolic acidosis to point them in the correct diagnostic direction.
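The anion gap in this definition is the standard sodium minus (chloride plus bicarbonate) calculation; a minimal sketch with hypothetical lab values (the numbers below are illustrative, not from any cited study):

```python
def anion_gap(na, cl, hco3):
    """Serum anion gap in mEq/L: Na - (Cl + HCO3)."""
    return na - (cl + hco3)

# Hypothetical labs consistent with DKA: Na 134, Cl 98, HCO3 12
gap = anion_gap(134, 98, 12)  # 24 mEq/L, well above the 10 mEq/L cutoff
```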
In the assessment of the level of acidosis in a DKA patient, an arterial blood gas (ABG) has long been thought of as much more accurate than a venous blood gas (VBG) and thus necessary in evaluating a DKA patient’s pH and HCO3 level, two values often used to direct treatment decisions. An ABG is more painful and often time-consuming and labor-intensive, as it may involve multiple attempts. In addition, ABGs can be complicated by radial artery aneurysms, radial nerve injury, and compromised blood supply in patients with peripheral vascular disease or inadequate ulnar circulation. A VBG is less painful, can be obtained at the time of IV placement, and is therefore less time-consuming. But is it good enough to estimate acid/base status in these patients?
Brandenburg, et al. compared arterial and venous blood gas samples drawn simultaneously from DKA patients prior to treatment and found a mean difference in pH between the arterial and venous samples of only 0.03, with a Pearson’s correlation coefficient of 0.97 (Brandenburg, 1998). Gokel, et al. also demonstrated in twenty-one DKA patients a mean arteriovenous difference in pH of 0.05 ± 0.01 and in HCO3– of 1.88 ± 0.4 (Gokel, 2000). A study of 195 patients in 2003 showed similar correlation between arterial and venous pH, with a correlation coefficient r = 0.951 (Ma, 2003). Further studies comparing ABG and VBG results in pathologically diverse groups of patients in both the ICU and the ED have achieved similar results (Malatesha, 2007; Middleton, 2006).
Ma, et al. went further and asked physicians to make diagnosis, treatment and disposition decisions without seeing the ABG results first. They found that the results affected diagnosis in only 1% of patients, and treatment in only 3.5% of patients (Ma, 2003).
As a result, the Joint British Diabetes Society 2011 Guidelines for the Management of DKA advise using a VBG in not only the initial assessment of acid/base status, but also to help monitor the progress of treatment (Savage, 2011). In summary, it appears that in patients presenting in DKA, a VBG sample is an adequate substitute for an ABG in determining a patient’s pH and HCO3– level with only a minor degree of inaccuracy that is not clinically significant enough to alter treatment decisions.
Bottom Line: A VBG is adequate for the diagnosis and ongoing management of patients with DKA. ABGs offer no added benefit and are associated with increased pain and complications.
2. Do you use serum or urine ketones to guide your diagnosis and treatment of DKA?
Although the presence of ketones is part of the DKA definition, many clinicians make the diagnosis based on acidosis, decreased serum HCO3, and the presence of an anion gap alone. The presence of ketones, however, is superior to HCO3 in making the diagnosis (Sheikh-Ali, 2008). Serum or urine samples can be used to detect ketones, but urine testing is more rapid and thus more likely to be utilized. Unfortunately, urinalysis testing may be misleading. In DKA, fatty acid breakdown results in the production of two major ketone bodies: acetoacetate and beta-hydroxybutyrate. Beta-hydroxybutyrate is the predominant ketone, but urinalysis is only able to detect acetoacetate via the nitroprusside assay (Marliss, 1970). Thus, early in DKA, the urinalysis may be negative for ketones and falsely reassuring. This has prompted many clinicians to do serum ketone testing. Serum testing also offers a quantitative measure of ketones instead of the simple qualitative measure of a urine test (Foreback, 1997). However, serum beta-hydroxybutyrate testing is unavailable in many hospital systems and may not elucidate the entire clinical picture by itself (Fulop, 1999).
Additionally, as the patient is treated for DKA, beta-hydroxybutyrate is converted to acetoacetate. Appropriate treatment may therefore cause a stronger positive nitroprusside assay reaction for acetoacetate, misleading the physician into thinking the patient is not improving or is worsening. However, following serum ketones to assess for DKA improvement has not been shown to be superior to clinical evaluation.
Where does this leave us? In patients presenting with clinical signs and symptoms of DKA, serum pH, HCO3, glucose, and anion gap should be assessed. Urine should be checked for the presence of ketones; if positive, serum ketone testing in the emergency department is unnecessary. However, if urine ketones are not present and the diagnosis is unclear, the addition of serum ketones (specifically beta-hydroxybutyrate) seems reasonable. There is no evidence to suggest that following serum ketones during treatment is necessary.
Bottom Line: Patients with DKA may present with a weak or absent nitroprusside assay reaction on urinalysis for ketones as this test only checks for acetoacetate (the minor ketone body produced in DKA). Serum beta-hydroxybutyrate testing may be helpful in certain cases in making the diagnosis.
3. Do you use IV bicarbonate administration for the treatment of severe acidosis in DKA? If so, when?
The cornerstones of DKA treatment involve reversal of the effects of osmotic diuresis with fluids and electrolyte repletion as well as correcting the acidemia present in these patients. Treatment with sodium bicarbonate has frequently been recommended to assist in raising the pH to a “safer level.”
However, recent evidence shows that bicarbonate is not only ineffective in correcting acidemia but that it may be detrimental. Morris, et al. studied twenty-one patients with severe DKA (pH 6.9-7.14) and found no significant difference between patients treated with and without bicarbonate in the decrease in glucose concentrations, the decrease in ketone levels, the rate of increase in pH, or the time to reach a serum glucose of 250 mg/dL or a pH of 7.3 (Morris, 1986). In 2013, a study of 86 patients with DKA confirmed these findings. Patients who received bicarbonate had no significant difference in time to resolution of acidosis or time to hospital discharge (Duhon, 2013). However, the insulin and fluid requirements were higher in the bicarbonate group. A pediatric study of severe DKA patients (pH < 7.15) found that 39% of patients were successfully treated without bicarbonate, with a comparable number of complications (Green, 1998).
In addition to this apparent lack of efficacy, numerous studies have also pointed to bicarbonate's potential deleterious effects. Okuda, et al. showed in seven patients with DKA that those assigned to receive bicarbonate as part of their treatment had a 6-hour delay in the improvement of ketosis compared to the control group (Okuda, 1995). Bicarbonate has also been found to worsen hypokalemia and can cause paradoxical intracellular and central nervous system acidosis (Viallon, 1999). Additionally, a bicarbonate infusion shifts the oxyhemoglobin dissociation curve, decreasing tissue oxygen uptake, and has been associated with (although not shown to cause) cerebral edema in pediatric patients.
In spite of the lack of evidence, the American Diabetes Association continues to recommend the use of bicarbonate in patients with a serum pH < 7.0 (Kitabchi, 2006). However, in the face of mounting evidence and a lack of support in the literature, this recommendation should be readdressed. A systematic review of 44 studies, including three randomized clinical trials in adults, found no clinical efficacy for the use of bicarbonate in DKA (Chua, 2011). Of note, none of the trials cited in the ADA recommendations or the systematic review included patients with an initial pH < 6.85, making it difficult for the clinician to know what to do in cases of such severe acidosis.
Bottom Line: There is no established role for administration of sodium bicarbonate to patients with DKA regardless of their pH. Sodium bicarbonate administration is associated with more complications including hypokalemia and cerebral edema.
4. When do you start an insulin infusion in patients with hypokalemia? Do you give a bolus followed by a drip?
Insulin administration is paramount to the successful treatment of the DKA patient, as it reverses the mobilization of free fatty acids and the production of ketoacids and glucose, thereby treating the acidosis and ketosis present in these patients. Prior to the isolation of insulin for medical use, the mortality of DKA was 100%. DKA patients, however, often have profound potassium losses secondary to the osmotic diuresis that occurs in such a hyperglycemic state. As a result, about 5-10% of patients with DKA will present with hypokalemia (Aurora, 2012). In addition to its other functions, insulin drives potassium from the serum into the cells. Thus it is vital to know the serum potassium level prior to starting insulin therapy in order to avoid a lethal hypokalemia-induced dysrhythmia. An EKG can also assist in detecting any signs of hypo- or hyperkalemia that may be seen in these patients. The American Diabetes Association recommends beginning insulin therapy once the potassium level is repleted to > 3.3 mEq/L. Below a potassium level of 5.5 mEq/L, 20-30 mEq KCl should be added to each liter of fluids to prevent hypokalemia from occurring with insulin therapy (Kitabchi, 2006).
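The potassium thresholds above can be sketched as a simple decision rule. This is only an illustrative encoding of the cited ADA recommendation, not a validated protocol; the function name and return strings are mine:

```python
def insulin_and_potassium_plan(k_meq_l):
    """Illustrative potassium gating before starting insulin (per the
    ADA thresholds described above; not a clinical protocol)."""
    if k_meq_l <= 3.3:
        return "hold insulin; replete potassium first"
    elif k_meq_l < 5.5:
        return "start insulin; add 20-30 mEq KCl per liter of fluids"
    else:
        return "start insulin; no supplemental KCl, recheck potassium"
```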
Traditional teaching in DKA treatment recommends starting with a bolus of insulin followed by an infusion. The bolus was believed to rapidly activate the insulin receptors and lead to a resolution of hyperglycemia, ketosis, and acidosis. Recent literature, however, has shown that this initial bolus of insulin is likely unnecessary and may pose harm by creating a greater risk for hypoglycemic events. A randomized trial in 2008 demonstrated that giving patients a bolus of insulin followed by a drip (at 0.07 units/kg/hr) resulted in a brief period of supranormal insulin levels followed by a plateau at subnormal levels (Kitabchi, 2008). Providing an infusion at 0.14 units/kg/hr, however, resulted in a serum insulin plateau that was more consistent with normal physiology. Goyal, et al. divided 157 patients into two groups, treating half with an insulin bolus + drip and the other half with an insulin drip only, and found no statistically significant differences in the rate of change of glucose (both groups decreased by approximately 60 mg/dL/hr), change in anion gap, or length of stay in the ED or the hospital (Goyal, 2007). Patients treated with an insulin bolus + infusion also had more side effects, including more episodes of hypoglycemia and higher potassium requirements (although these were trends seen in this small observational study, neither reached statistical significance).
Most current guidelines state that an initial insulin infusion rate of 0.1 units/kg/hr is acceptable. If the infusion does not cause the serum glucose level to drop by 50-70 mg/dL in the first hour, the infusion rate may be doubled until a steady decrease is achieved.
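The weight-based dosing and titration above amount to simple arithmetic; a minimal sketch using a hypothetical 80 kg patient (the helper names and the 50 mg/dL trigger for doubling are illustrative, drawn from the range quoted above):

```python
def insulin_rate(weight_kg, units_per_kg_hr=0.1):
    """Weight-based insulin infusion rate in units/hr (0.1 units/kg/hr default)."""
    return weight_kg * units_per_kg_hr

def adjust_rate(current_rate, glucose_drop_mg_dl):
    """Double the infusion if glucose fell less than 50 mg/dL in the past hour."""
    return current_rate * 2 if glucose_drop_mg_dl < 50 else current_rate

rate = insulin_rate(80)       # 8.0 units/hr for a hypothetical 80 kg patient
rate = adjust_rate(rate, 30)  # only a 30 mg/dL drop -> rate doubles to 16.0
```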
Bottom Line: Insulin should not be started in patients with DKA until the serum potassium level is confirmed to be > 3.3 mEq/L. The use of an insulin bolus prior to infusion has not been shown to improve any patient-centered outcomes or surrogate markers and is associated with an increased rate of hypoglycemic episodes.
1.) Do you prescribe ophthalmic topical anesthetics to patients with corneal abrasions who complain of severe pain?
Corneal abrasion is one of the most common acute eye complaints that presents to the ED, accounting for approximately 10% of eye-related ED visits (Verma, 2013). The cornea is highly innervated, and even small abrasions can cause significant pain. The use of topical ophthalmologic anesthetics was first documented in 1818 with Erythroxylum coca (the plant from which cocaine is derived), and these agents are quite effective at blocking nerve conduction in the superficial cornea and conjunctiva, thus eliminating the sensation of pain (Rosenwasser, 1989).
There are a number of proposed dangers in using topical anesthetics for corneal abrasions. This includes inhibition of mitosis (and subsequent delayed healing) and decreased corneal sensation with the fear that the abrasion will progress to an ulcer without the patient noticing. Additionally, these agents may have direct toxicity to corneal epithelium with prolonged use.
These theoretical dangers could potentially lead to keratitis, edema, erosion, and the formation of infiltrates and opacities. These concerns prompted early research of the effects of topical anesthetics on the cornea. The adage that topical anesthetics should not be prescribed to patients with corneal abrasions originated from animal studies and case studies dating back to the 1960’s. Many of the animal studies were done on enucleated rat and rabbit eyes or animal cell preparations. This research may not, for obvious reasons, be applicable to living human subjects.
The earliest human studies date back to the 1960’s and 70’s, and are mostly small case reports of patients using topical anesthetics inappropriately. The first such study was a case report of five patients who used topical anesthetics chronically, resulting in keratitis (Epstein, 1968). All five patients used topical anesthetics for either a prolonged period of time, too frequently, or without physician supervision or proper examination prior to application. In contrast to the inappropriate uses detailed in the case reports, topical anesthetics commonly used to facilitate slit lamp examinations include tetracaine 0.5% or proparacaine 0.5%. A theoretical prescription regimen would be a short course (2-3 days) of a dilute topical anesthetic used only a few times daily (every 4-6 hours as needed).
The next case report condemning the use of topical anesthetics was published two years later and examined the outcomes of nine patients who misused topical anesthetics (Willis, 1970). As in the previous report, all nine used topical anesthetics inappropriately: either too frequently, for prolonged periods of time, or without appropriate physician supervision or examination. Only one patient used the medication in a somewhat reasonable manner (a 46-year-old factory worker who used topical anesthetic every two hours for two days); however, it is unclear from the paper whether he received a proper slit lamp examination on initial evaluation or was given the drops empirically. When he saw an ophthalmologist two days later, he was diagnosed with anterior uveitis and epithelial erosion, which may have been present at the time of initial injury.
More recent case studies specifically address topical anesthetic abuse and its effects on the cornea (Erdem, 2013; Yeniad, 2010). Types of misuse seen in the literature include using higher concentrations of topical anesthetics, using with excessive frequency, or using for prolonged periods of time. To date, there are no studies that show adverse outcomes from short courses of dilute topical anesthetic with use limited to every 4-6 hours as needed.
There are studies demonstrating the safety of topical anesthetics from the ophthalmology literature. PRK (photorefractive keratectomy) is a type of laser vision correction surgery that involves ablation of a small amount of tissue from the corneal stroma, thus creating an epithelial deficit (similar to a corneal abrasion). In a two-part study, proparacaine was first administered to healthy volunteers in different concentrations to assess anesthetic efficacy (Shahinian, 1997). Dilute (0.05%) proparacaine was then given to healthy volunteers to determine the safety of excessive use. No corneal toxicity was observed. In the second part of the double-blinded study, 34 PRK patients were prospectively randomized into a treatment group (proparacaine 0.05% for one week as needed) or placebo group (artificial tears). Both groups also received oral opioids and topical NSAIDS. Patients in the treatment group reported significantly decreased pain scores, longer duration of pain relief, and decreased opioid use compared to the placebo group.
Another study in the PRK literature looked at post-operative patients given approximately ten drops of tetracaine 0.5% to use as needed (Brilakis, 2000). Patients were re-examined on post-op days 1 and 3. The study found that all of the eyes had healed within 72 hours and use of the tetracaine drops did not prolong time to re-epithelialization.
There are some studies in the emergency medicine literature which support the use of topical anesthetics. One such study was a prospective, randomized controlled trial that included adults with corneal injuries presenting to one of two tertiary emergency departments in Ontario (Ball, 2010). Participants were randomized to receive either proparacaine 0.05% or placebo drops and were followed up by an ophthalmologist on days 1, 3, and 5. All patients were also prescribed topical NSAIDS and oral acetaminophen with codeine, and were told to take the study drops 2-4 at a time as needed. Patients were prescribed 40 mL of drops. The study was small (only 15 patients in the proparacaine group and 18 patients in the placebo group), but showed significantly better pain reduction and decreased opioid use in the proparacaine group. There were no ocular complications or delay in healing in either group.
Another recently published 12-month prospective, double-blinded randomized trial assessed a convenience sample of 116 patients with uncomplicated corneal abrasions (Waldman, 2014). Study participants were randomized to receive either 1% tetracaine or saline every 30 minutes as needed for twenty-four hours. Results showed no complications attributed to topical anesthetics, and no statistically significant difference in corneal healing at 48 hours. To assess pain control, both a visual analogue scale as well as a patient-reported numeric rating scale for overall effectiveness were used. While no difference was seen between the two groups on the visual analogue scale, patients rated tetracaine as having a better overall effectiveness on the numerical rating scale. Although 48-hour follow-up was relatively low (64% in the saline group and 69% in the tetracaine group), the study found that topical tetracaine used for 24 hours was safe and that patients perceived a better overall effectiveness with tetracaine. Both blinding of treatment groups and the pain scores may have been compromised here by the burning sensation that accompanies initial tetracaine application.
Bottom Line: Major EM textbooks still discourage prescribing topical anesthetics for corneal abrasions. In spite of this, there is mounting evidence in the EM literature that topical anesthetics are safe and effective for the treatment of pain in corneal abrasions. It may be reasonable to send selected, reliable patients home with a limited supply of topical anesthetic agents along with strict instructions for return to the ED and 48 hour follow up with an ophthalmologist. Larger randomized, controlled, ED-based studies are needed before the safety of this practice can be fully elucidated and thus, at this time treatment with topical anesthetics cannot be absolutely recommended.
2.) When do you schedule ophthalmology follow up for patients with corneal abrasions?
The cornea functions to protect the eye, filter UV light, and refract light to allow for image formation. To properly refract light the cornea must be completely transparent and thus it is avascular and obtains its nutrients from the aqueous humor, tears, and ambient oxygen. While most corneal abrasions heal quickly and without consequence despite the cornea being an avascular structure, there is potential for complications ranging from infection to ulceration to permanent vision loss, especially if the abrasion is not properly treated.
After a corneal abrasion is diagnosed via slit lamp exam, there are various options for further care. Despite numerous review articles offering various recommendations on the optimal follow up method, there is no evidence-based literature to guide this decision.
Several articles recommend 24-hour follow-up, but don't specify with whom the patient should follow up. A guideline statement from Wilson, et al. recommends that most patients should be re-evaluated in 24 hours and, if the abrasion is not fully healed, additional follow-up is needed (Wilson, 2004). It furthermore states that close attention should be paid to contact lens wearers and immunocompromised patients, and that specific ophthalmology referral is recommended for patients with deep eye injuries, foreign bodies unable to be removed, and suspected recurrent corneal erosions. Also, patients with persistent symptoms after 72 hours, worsening symptoms, or vision abnormalities should be referred to an ophthalmologist.
On the contrary, Khan, et al. suggests that patients with corneal abrasions should be seen specifically by an ophthalmologist within 24-48 hours to assess for healing (Khan, 2013). The article goes on to state that most injuries heal quickly and without infection within 24 hours and that these patients will not need long-term follow-up, with the exception of contact lens users, who may need follow-up over the course of 3-5 days.
A review on EM Updates (Strayer, 2009) recommends immediate ophthalmology evaluation if the corneal abrasion is associated with penetrating injury or infiltrate, and 24-hour ophthalmology evaluation if the abrasion is “high risk,” such as those created by an artificial fingernail or organic matter (which are prone to fungal infections) or those in contact lens wearers (who are prone to bacterial infections, including pseudomonas). All others can be re-evaluated (not necessarily by an ophthalmologist) in 24 hours.
While most sources recommend at least one follow-up visit within 24-48 hours, some recent articles propose that “small” corneal abrasions (definitions of which range from less than 4mm to less than one fourth of the corneal surface area) which are uncomplicated (i.e., no organic material or contact lens use) in reliable patients with normal vision and resolving symptoms may not require follow-up (Wipperman, 2013).
Do practice patterns reflect these varying recommendations? A nationwide Canadian survey concluded that 88% of ED physicians routinely arranged follow-up for their patients with corneal abrasions (Calder, 2004). Most often it was a return to the emergency department (69%), but 45% referred patients to ophthalmologists and 35% referred to the family physician.
Bottom line: Based on expert consensus, it is a reasonable and safe approach to have every patient re-evaluated in 24-48 hours. Those with “high risk” abrasions that you are worried about can be referred to ophthalmology for this follow-up, and others can most likely be re-evaluated by their primary care doctor or told to return to the ED in 24 hours for re-evaluation to ensure proper healing and the absence of infection.
3.) How soon after presentation do you have a patient with floaters see an ophthalmologist?
Floaters are defined as the perception of moving spots in the visual field of one eye. They are usually black or grey in color, and are caused by either light bending at the interface of fluid pockets in the vitreous jelly or opacities caused by cells within the vitreous. They are a very common condition, especially in patients over the age of fifty. In contrast, flashes (which often accompany floaters) can be described as brief repeated sensations of bright light, typically seen at the periphery of the visual field. Flashes are caused by vitreous traction on the retina. Both floaters and flashes are painless (Hollands, 2009; Margo, 2005). Most cases of floaters and flashes (especially when monocular) are of ocular etiology, the most common of which is posterior vitreous detachment (PVD). However, the differential diagnosis also includes retinal tear or detachment, posterior uveitis and other causes of vitreous inflammation, vitreous hemorrhage (which can result from diabetic retinopathy), macular degeneration, ocular lymphoma, intraocular foreign body, TIA, migraine aura, postural hypotension, and occipital lobe disorders. In contrast to ocular etiologies, extra-ocular causes of floaters and flashes are often bilateral and accompanied by other symptoms (Hollands, 2009).
Posterior vitreous detachment is the most common cause of floaters, and occurs in approximately two thirds of patients over age 65 (Margo, 2005). The posterior vitreous is composed mostly of water and collagen. As we age this structure shrinks in size, causing it to detach from the underlying retina. Although most people will develop PVD at some point in their lives, for the majority it will remain benign and without serious consequences. For others, it may progress to retinal tear, which often appears as a horseshoe shaped hole in the retina. Tears allow fluid to enter the sub-retinal space, which then leads to retinal detachment. About 33% to 46% of untreated retinal tears will result in retinal detachment (Hollands, 2009). Retinal detachment causes ischemia and photoreceptor degeneration, which progresses to blindness. If retinal detachment is detected early and surgically corrected, vision loss can be prevented or even restored.
It is difficult to differentiate PVD from retinal tear or detachment based on history alone. Thus, patients who present with unilateral flashes and floaters require a complete eye exam, including visual acuity, pupillary light reflex, visual fields, slit lamp exam of the anterior and posterior segments, thorough inspection of the vitreous using slit lamp, and dilated fundoscopy. Indirect ophthalmoscopy and scleral depression are useful tools (Margo, 2005), but are not routinely performed by emergency physicians and thus, will not be discussed further. A monocular visual field deficit in the affected eye may represent an area of detached retina. A dilated ophthalmoscopic exam can detect a retinal tear (seen as a hole or defect which is often horseshoe shaped) or retinal detachment (which is seen as a billowing or wrinkled retina). Slit lamp exam may reveal vitreous pigment (“tobacco dust”) or hemorrhage, which is suggestive of retinal tear or detachment.
Often, fundoscopic examination is limited in patients with contraindications to mydriatics, significant periorbital soft tissue swelling, or inability to visualize the posterior segment of the eye due to hyphema, lens opacification, or vitreous hemorrhage (Teismann, 2009). In these cases, ocular ultrasound may be beneficial. While the sensitivity and specificity of emergency physician performed ocular ultrasound to detect retinal detachment is beyond the scope of this topic, suffice it to say that ultrasound can be helpful to rule in (but not rule out) the diagnosis.
Since we cannot perform as detailed of an exam as can be done in an ophthalmologist’s office, our role in the ED is to make the diagnosis of probable PVD and to identify patients who are at risk for progression to retinal tear and detachment. Determining this risk will help differentiate patients who require urgent ophthalmology referral from those who can follow up in a less urgent manner. With time, PVD becomes more stable, and patients with floaters and flashes that have remained unchanged for months to years depict a reassuring scenario. In contrast, patients with new onset of floaters and flashes (days to weeks) are more concerning, since the acute phase of tractional forces on the retina makes it prone to developing tears.
In a 2009 meta-analysis, data from 17 different studies regarding patients with acute onset floaters and flashes of suspected ocular origin secondary to PVD demonstrated that 14% were found to have a retinal tear at initial presentation (Hollands, 2009). Besides acute onset of symptoms, other factors found to be predictive of retinal tears included subjective vision reduction and vitreous hemorrhage or pigment (“tobacco dust”) on slit lamp exam. In patients with subjective vision reduction, the prevalence of retinal tears increased from 14% to 45% (likelihood ratio (LR) of 5). The post-test probability of retinal tears in patients with acute onset floaters or flashes (with baseline prevalence of 14%) increased to 62% in patients with vitreous hemorrhage and 88% in patients with vitreous pigment on slit lamp exam. The study also concluded that patients initially diagnosed as having uncomplicated PVD have a 3.4% chance of developing a retinal tear within six weeks. The risk increases with new onset of at least 10 floaters (summary LR 8.1) or subjective vision reduction (summary LR 2.3).
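The pre-test to post-test probability arithmetic above follows the standard odds-based use of likelihood ratios; a minimal sketch (the helper function name is mine; the 14% baseline and LR of 5 come from Hollands, 2009):

```python
def post_test_probability(pre_test, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability
    via odds: post-odds = pre-odds * LR."""
    pre_odds = pre_test / (1 - pre_test)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# 14% baseline prevalence of retinal tear, LR of 5 for subjective
# vision reduction -> roughly 0.45, matching the 45% quoted above
p = post_test_probability(0.14, 5)
```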
Schweitzer, et al. performed a prospective cohort study looking for predictive characteristics in patients with acute PVD. They found that vitreous or retinal hemorrhage, as well as a large number and/or high frequency of floaters, indicated a high risk for delayed retinal tears within six weeks (Schweitzer, 2011). The study was limited by small sample size (99 patients, only two of whom developed delayed retinal tears); however, the results make intuitive sense: the more severe the symptoms at onset, the more likely patients are to progress to retinal tears.
Another study reviewed the charts of 295 patients presenting to an eye clinic with complaints of flashes or floaters and found that 64% had uncomplicated PVD, 10.5% had retinal tears, and 16.6% had retinal detachments (Dayan, 1996). Although the study did identify features that were predictive of retinal tears, including subjective vision reduction and acute onset of symptoms (less than six weeks), a proportion of patients with retinal tears were found to lack these historical factors. The authors recommend routine follow-up visits within six weeks for patients diagnosed with isolated PVD. It should be noted that the study patients presented to an eye specialty clinic and were evaluated initially by an ophthalmologist using tools that are unavailable in standard EDs (i.e., indirect ophthalmoscopy with scleral indentation). It could be argued that patients presenting to an ED should be referred for follow-up earlier than the six weeks recommended in this study.
A prospective study of 270 patients with symptomatic, isolated PVD found that 3.7% developed new retinal tears within six weeks. Multiple floaters, a curtain or cloud, retinal or vitreous hemorrhages, and an increase in the number of floaters after initial examination were all found to be predictive of new retinal tears (van Overdam, 2005). Like several others described above, this study identified certain features suggestive of retinal tear that would indicate more urgent ophthalmology evaluation, but did not offer specific recommendations regarding timing of follow-up.
Bottom line: Most cases of floaters or flashes are due to PVD. Although PVD often follows a benign course, a small but clinically significant percentage of patients will develop a retinal tear. Left untreated, the tear can lead to detachment and vision loss. A reasonable approach to managing the patient who presents with floaters or flashes would be as follows (Hollands, 2009):
1.) Start with an exam to rule out obvious retinal tear or detachment seen on fundoscopy or ultrasound. This diagnosis requires emergent ophthalmology consult in the ED.
2.) Patients with monocular visual field loss suggestive of acute retinal detachment (i.e. “curtain of darkness”) or high-risk features for retinal tear (such as subjective or objective vision reduction or vitreous pigment or hemorrhage on slit lamp exam) also require same day ophthalmology evaluation.
3.) In the absence of obvious retinal tear or detachment or aforementioned high-risk features, patients with monocular floaters/flashes thought to be of ocular origin should receive urgent ophthalmology referral (within 1-2 weeks) if symptoms are of acute onset. These patients should be counseled regarding high-risk features, and informed that if any of these symptoms develop, they should return to the emergency department or see their ophthalmologist within 24 hours.
4.) In patients with chronic floaters/flashes that have suddenly increased in number, the case should be discussed with an ophthalmologist to determine the urgency of follow-up.
5.) Patients with chronic, stable PVD should be counseled regarding high-risk features that suggest more urgent ophthalmology evaluation.
4.) Do you use ultrasound to assess patients for increased intracranial pressure?
Patients who present to the ED with increased intracranial pressure can be quite challenging to evaluate, not only because of their often depressed mental statuses but also because facial trauma and/or patient discomfort may interfere with the ability to perform a fundoscopic exam to assess for papilledema. Ultrasound, which can be done quickly at the bedside in cases where fundoscopy is difficult or impossible, is a useful tool in such circumstances. Multiple studies suggest that emergency physician performed ocular ultrasound to measure optic nerve sheath diameter (ONSD) is fairly sensitive and specific for detecting increased intracranial pressure. One study found that ONSD > 5mm detects ICP > 20 mm Hg with sensitivity of 88% and specificity of 93% (Kimberley, 2008). The prospective, blinded observational study was performed using a convenience sample of patients in the emergency department and the neurological ICU who already had invasive intracranial pressure monitors as part of their care. All ONSD measurements were performed by emergency physicians who were blinded to the ICP monitor data. Another study found slightly improved results when a cutoff of 4.8mm ONSD was used, which was 96% sensitive and 94% specific for ICP > 20 mm Hg (Rajajee, 2011). Like the previous study, the standard criterion was ICP measured via invasive monitoring. A third study compared the ONSD of patients with intracranial hemorrhages requiring ICP monitors in an intensive care unit who were sedated and ventilated to the ONSD of ventilated, sedated control patients without intracranial pathology (Moretti, 2008). A threshold of 5.2mm predicted ICP > 20 mm Hg with 94% sensitivity and 76% specificity.
A prospective study published in Annals of Emergency Medicine found that ONSD greater than 5mm was 100% sensitive and 63% specific for elevated intracranial pressure detected on CT. Furthermore, ONSD > 5mm was 84% sensitive and 73% specific for detection of any traumatic intracranial injury found by CT (Tayal, 2007). Another prospective blinded observational study of a single sonographer who performed 27 ocular ultrasounds in patients with ICP monitors found that ONSD of 5.2mm was 83% sensitive and 100% specific for ICP > 20 mm Hg (Frumin, 2011).
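The sensitivity/specificity pairs quoted in these ONSD studies can be converted into likelihood ratios, which make it easier to compare how strongly a given cutoff shifts the probability of elevated ICP. A minimal sketch using the Kimberley (2008) figures; the helper function is our own, for illustration only:

```python
def likelihood_ratios(sensitivity, specificity):
    """Convert a test's sensitivity/specificity into positive and negative likelihood ratios."""
    lr_positive = sensitivity / (1 - specificity)
    lr_negative = (1 - sensitivity) / specificity
    return lr_positive, lr_negative

# ONSD > 5 mm for detecting ICP > 20 mm Hg: 88% sensitive, 93% specific (Kimberley, 2008)
lr_pos, lr_neg = likelihood_ratios(0.88, 0.93)
print(f"+LR = {lr_pos:.1f}, -LR = {lr_neg:.2f}")
```

With these numbers, a dilated ONSD raises the odds of intracranial hypertension roughly twelvefold, while a normal ONSD cuts them to about an eighth of baseline, consistent with the test being fairly sensitive and specific rather than definitive.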
While several papers indicate that ocular ultrasound to measure ONSD does correlate with increased intracranial pressure, much of the literature is based on small observational studies. Large randomized controlled trials are lacking.
While most studies use ONSD as a surrogate for intracranial pressure, a blinded prospective observational study compared point-of-care emergency physician-performed ultrasound for optic disc height to both ophthalmology-performed dilated fundoscopic exam (primary outcome) and optical coherence tomography (secondary outcome). In contrast to optic nerve sheath diameter, optic disc height refers to the budding of the optic disc into the hypoechoic globe on ultrasound. Results of the study showed that a disc height greater than 0.6mm predicted papilledema with a sensitivity of 82% and specificity of 76%. If the disc height threshold is increased to 1.0 mm, sensitivity decreases to 73% but specificity rises to 100% (Teismann, 2013).
Much of the evidence for sonographic ONSD measurement comes from head trauma literature. Can ocular ultrasound be used to evaluate non-traumatic etiologies of increased intracranial pressure? Unfortunately, large randomized controlled studies are lacking. A case report of a patient who presented to the emergency department with headache and photophobia who was ultimately diagnosed with pseudotumor cerebri found her ONSD to be 7mm (Stone, 2009). Another paper describes three patients with optic disc swelling due to idiopathic intracranial hypertension, secondary syphilis, and malignant hypertension in which ocular ultrasound revealed elevated optic disc height (Daulaire, 2012).
Two prospective studies evaluated patients presenting to the emergency department who were suspected of having elevated intracranial pressure for various non-traumatic reasons (CVA, SAH, tumor, meningitis, etc.). The first study assessed 26 patients who required CT in the emergency department due to concern for elevated ICP. Prior to CT, all patients received ocular ultrasound to measure ONSD. Using a cut-off of 5mm, ONSD was found to be 100% specific and 84% sensitive for increased ICP on CT. Furthermore, ONSD was 60% sensitive and 100% specific for any acute intracranial abnormality detected on CT (Major, 2011). The second study evaluated fifty patients deemed to be candidates for lumbar puncture due to concern for various diagnoses. Immediately prior to lumbar puncture, ONSD was measured using ultrasound. The mean ONSD for patients with ICP > 20 mm Hg (determined by opening pressure on LP) was 6.66 mm compared to 4.6 mm in patients with normal ICP. Using ROC curves, a cutoff of 5.5 mm predicted ICP > 20 mm Hg with 100% sensitivity and specificity (Amini, 2012).
The literature for ONSD in the evaluation of hydrocephalus seems to be conflicted. In children with VP shunt malfunction, symptoms often overlap with other common childhood illnesses such as viral syndrome or viral gastroenteritis, making timely diagnosis difficult despite the obvious urgency of the situation. Furthermore, CT and MRI are insensitive for shunt malfunction, missing as many as one third of patients. One prospective observational study of pediatric emergency department patients presenting with possible VP shunt malfunction found no statistically significant difference between the ONSD measurements in patients with VP shunt malfunction compared to patients with functional VP shunts (Hall, 2013). Another study showed more promising results: pediatric patients with functioning VP shunts had a mean ONSD of 2.9 mm compared to 5.6 mm in patients with shunt malfunction (Newman, 2002).
Bottom line: Ocular ultrasound can be useful in detecting elevated ICP, especially in the setting of head trauma when fundoscopy is difficult or impossible. Larger studies are needed to confirm these findings.
1. Do you prescribe ophthalmic topical anesthetics to patients with corneal abrasions who complain of severe pain?
3. How soon after presentation do you have a patient with floaters see an ophthalmologist?
4. Do you use ultrasound to assess patients for increased intracranial pressure?
1. When do you tap a painful swollen joint? When do you obtain imaging before arthrocentesis?
Acute monoarticular arthritis is an inflammatory process involving a single joint that develops over a period of less than 2 weeks. The possible etiologies include infection, crystal-induced arthropathy, trauma, Lyme disease, and rheumatoid arthritis. The most feared cause is septic arthritis, as failure to diagnose it can lead to significant morbidity and mortality. It can result in permanent disability, with destruction of cartilage in a matter of days. Even treated infections have been associated with an in-hospital mortality rate of up to 15% (Carpenter, 2011). Therefore the main concern in the ED is to diagnose or rule out septic arthritis.
There are very few published practice guidelines that explicitly indicate when arthrocentesis should be performed from any of the myriad specialties (rheumatology, orthopedics, EM, etc.) that encounter this complaint.
EB Medicine’s Emergency Medicine Practice cites the following 4 indications:
In regards to the first indication, the authors do not specify when synovial fluid analysis should be done, other than endorsing arthrocentesis whenever septic arthritis is in the differential diagnosis (Genes, 2012).
So when should we consider septic arthritis in the differential? Under what other circumstances should we send synovial fluid? In the setting of a painful swollen joint, the first step is to differentiate true articular inflammation from periarticular inflammation. The latter includes bursitis, tendonitis, and cellulitis. These are often associated with pain and swelling in a nonuniform distribution over the joint and limited active range of motion, whereas true arthritis is more associated with generalized pain and swelling, and limitations in both active and passive range of motion (Genes, 2012).
Once you have determined that there is a joint effusion, further physical exam and history may provide clues to the etiology. For example, dry scaly plaques on the skin would suggest psoriatic arthritis, while tophi would suggest gout. However, an important point here is that even if you have a high suspicion for one of these processes, it does not rule out concomitant septic arthritis. Patients with chronic joint disease are actually at increased risk of septic arthritis. Yu, et al. reported 30 cases of concomitant septic and gouty arthritis in their Taiwanese hospital over a 14 year period, underlining the importance of maintaining a high level of suspicion for septic arthritis (Yu, 2013).
So what historical aspects should prompt heightened suspicion? A Dutch prospective study of 5000 rheumatology patients found that the likelihood of septic arthritis increases with the following historical aspects (Kaandorp, 1997):
Carpenter, et al. conducted a systematic review of 32 trials to determine if there were any physical exam characteristics that altered post test probability of septic arthritis. Exam findings had variable sensitivities across different studies (Carpenter, 2011):
No studies have looked at clinical gestalt in predicting septic arthritis.
It is generally accepted that x-rays are of little value in the work-up of atraumatic, acute, monoarticular arthritis in the ED. Changes associated with septic arthritis are not seen early. Signs suggestive of other arthritides, such as osteophytes, joint space narrowing, bony erosions, or chondrocalcinosis may be interesting findings but will not rule out septic arthritis or change management. The only utility of plain films is for baseline imaging for the future. CT and MRI are also not indicated, unless there is suspicion of osteomyelitis.
Bottom line: Tap the joint if there is an acute, unexplained, and atraumatic painful joint effusion. If the patient has a history of gout or RA but there is still suspicion for septic arthritis, tap the joint. Consider x-ray but recognize that it will not change your management.
2. In a patient with monoarticular arthritis, do you send any serum labs such as CBC, ESR, or CRP? How do they guide your management?
While it is common to send a serum CBC, ESR, and CRP when suspecting septic arthritis, these tests are not helpful in guiding management. The first issue is that they are very nonspecific. Unfortunately, their sensitivities are also unreliable.
Carpenter, et al. performed a systematic review which analyzed the sensitivities of CBC, ESR, and CRP. Five studies found sensitivities ranging from 42-90% for WBC > 10,000. One study yielded a sensitivity of 75% for WBC > 11,000. Two studies found sensitivities of 23% and 30% for WBC > 14,000. Only two studies calculated likelihood ratios. Jeng, et al. reported a +LR of 1.4 and a –LR of 0.28 for WBC > 10,000 (Jeng, 1997), and Li, et al. reported a +LR of 1.7 and a –LR of 0.84 for WBC > 11,000 (Li, 2007).
Seven studies looked at various cutoff values for ESR and found sensitivities ranging from 18-95% that had no correlation with the different ESR values being investigated. The same was true of four studies looking at various cutoff values of CRP, with sensitivities ranging from 44-91% in a random fashion. As with WBC, none of the studies that calculated specificities and LRs showed any values that significantly changed the posttest probability of septic arthritis, with the exception of one study, which reported a +LR of 7 if ESR was >100 (Martinot, 2005).
In short, there is no cutoff value of WBC, ESR, or CRP at which the posttest probability of septic arthritis is significantly increased, nor any value below which septic arthritis can safely be ruled out.
Several less commonly sent serum labs have also been investigated. Soderquist, et al. looked at procalcitonin, TNF-a, IL-6, and IL-1β and found that all were quite specific but lacked sensitivity (Soderquist, 1998). Two additional studies also analyzed procalcitonin and concluded the same. Therefore, even if these tests yielded results in a timely fashion, none of them would be helpful to send when trying to rule out septic arthritis.
When there is suspicion of gout, serum uric acid is often sent, but again this test is not very sensitive as the value is frequently normal in acute gouty arthritis. Confirmation of Lyme disease requires IgM and IgG serology, which will not come back while the patient is still in the ED, but may be helpful later and therefore should be sent if suspicion is high (Genes, 2012).
Bottom line: No serum lab will change your management, nor will it rule in or out septic arthritis. Orthopedics and rheumatology will most likely want them regardless.
3. Which synovial fluid studies do you send in order to help make the diagnosis? Which of these rule out septic arthritis?
The gold standard for confirming a diagnosis of septic arthritis is a positive synovial fluid culture; however, it may take several days for cultures to grow, which makes them of little use in the emergent setting. Gram stains may result more quickly and offer the ability to tailor antibiotic treatment. Unfortunately, the yield of gram stains in septic arthritis is only 50-80%. The other synovial fluid labs that are typically sent are of varying utility (Genes, 2012).
Textbooks often cite ranges of synovial WBC values (sWBC) that are associated with normal joints, inflammatory processes, or septic arthritis. It may be more accurate to say that the likelihood of septic arthritis increases with the sWBC, and that for values >100,000 the likelihood is very high. Margaretten, et al. performed a systematic review which looked at 5 studies that each collected data for sWBC cutoffs of 25,000, 50,000, and 100,000. The averaged +LRs were 2.9, 7.7, and 28, respectively, suggesting a significant increase in posttest probability for the higher two thresholds. Perhaps the most important point to make is that there is no value of sWBC at which one can safely rule out septic arthritis. Average sensitivities were 77%, 62%, and 29%, respectively, indicating that many patients with septic arthritis do not have exceedingly high sWBC values (Margaretten, 2007).
Four of the above studies also analyzed synovial polymorphonuclear cell counts using the often cited >90% as the cutoff. +LRs ranged from 1.8-4.2, which are not significant values for diagnostic purposes.
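The jump in +LR across the three sWBC cutoffs is easier to appreciate as post-test probabilities. Below is a sketch applying the odds form of Bayes' rule to the averaged +LRs from Margaretten (2007); the 10% pretest probability is purely illustrative:

```python
def posttest_probability(pretest_probability, likelihood_ratio):
    """Update a pretest probability with a likelihood ratio (odds form of Bayes' rule)."""
    pretest_odds = pretest_probability / (1 - pretest_probability)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Averaged +LRs for sWBC cutoffs of 25,000 / 50,000 / 100,000 (Margaretten, 2007)
for cutoff, lr in [(25_000, 2.9), (50_000, 7.7), (100_000, 28)]:
    p = posttest_probability(0.10, lr)  # 10% pretest probability assumed for illustration
    print(f"sWBC > {cutoff:,}: post-test probability ~ {p:.0%}")
```

Starting from a 10% pretest probability, the three cutoffs land at roughly 24%, 46%, and 76%, which is why only the higher thresholds meaningfully raise suspicion, and why a low sWBC, given its poor sensitivity, cannot rule septic arthritis out.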
It is important to be aware that patients with prosthetic septic joints often present with lower sWBC and sPMN counts. One study found that cutoff values of 1700 and 65%, respectively, were both sensitive and specific in this population (Trampuz, 2004).
Glucose and protein in the synovial fluid do not alter the posttest probability of septic arthritis. Two studies that investigated decreased glucose found sensitivities of 56%-64%, and specificities of 85%. Only one investigated increased protein and reported 50% sensitivity and 47% specificity (Schmerling, 1990; Soderquist, 1998).
One of these studies also investigated synovial LDH and found 100% sensitivity for LDH >250, suggesting that septic arthritis could be ruled out if LDH is <250; however, this is the only study of its kind and was associated with a large number of false positives (specificity 51%) (Schmerling, 1990).
Serum lactate has become one of the most important diagnostic studies sent in suspected sepsis. Likewise, synovial lactate has shown promising diagnostic accuracy in septic arthritis. A recent study showed a +LR approaching infinity for synovial lactate > 10 (Lenski, 2014). Several other studies have yielded similar supporting evidence for synovial lactate in differentiating septic arthritis from other etiologies such as gout and rheumatoid arthritis (Brook, 1978; Mossman, 1981; Riordan, 1982; Gobelet, 1984).
For the diagnosis of gout, the gold standard is the finding of negatively birefringent (MSU) crystals in the synovial fluid (and no organisms). Likewise, the gold standard for pseudogout is rhomboidal, positively birefringent (CPPD) crystals. No cases of either entity have been reported in the absence of the corresponding crystals. However, the finding of crystals does not necessarily explain an acute episode of joint pain, as they can also be found in the synovial fluid of asymptomatic patients (Pascual, 2011).
Bottom line: Synovial fluid culture is the gold standard for diagnosis of septic arthritis. sWBC may be helpful in that a very high value significantly raises the likelihood of septic arthritis (many say 50,000 but 100,000 is much more specific), but a low value does not rule it out. Synovial lactate is a promising test that may be more widely available in the future.
4. Do you inject the joint with any medication for symptomatic relief? If so, which medication?
Corticosteroids were the first substances to be injected intra-articularly (IA) for joint pain relief. First described by Hollander in the 1950s, IA steroids have been shown to decrease leukocyte secretion from the synovium as well as neutrophil migration into inflamed joints. This elevates the hyaluronic acid concentration in the joint and therefore the synovial fluid viscosity (Snibbe, 2005). Local injection has been shown to avoid many of the adverse effects of systemic steroids.
Furtado, et al., studied the use of IA steroids in rheumatoid arthritis and found that they gave better results than systemic steroids in terms of side effects, hospitalization, and patients’ subjective reporting of pain and overall disease (Furtado, 2005). Because steroids work on inflamed synovium, results are not as favorable in joint pain caused by weight-bearing forces as in osteoarthritis or sports-related injuries (Snibbe, 2005).
There are several options for steroids. In order of decreasing solubility, which corresponds to increasing duration of effect, they include dexamethasone, hydrocortisone, methylprednisolone, prednisolone, and triamcinolone (Lavelle, 2007).
There are no guidelines for the administration of IA steroids in terms of indication. There are, however, some contraindications. The most important is suspected infection of the joint space or overlying soft tissue. Others include joint prosthesis and bleeding diathesis. There are also concerns about steroid injections producing local adverse effects such as tendon and ligament rupture, soft tissue atrophy, and joint capsule calcification (Snibbe, 2005). As a result, many practitioners will limit the number of injections they give and will not give a repeat dose for at least 3 months (Lavelle, 2007). It is also worthwhile to know that injection of crystalline corticosteroid material can potentially interfere with synovial fluid crystal analysis (Parillo, 2010).
Local anesthetic injections are another option for pain relief. Most of the literature supporting their use comes from orthopedic studies looking at the post-operative period, in particular after arthroscopy. Bupivacaine is typically the drug of choice due to its long duration of action. A systematic review of double-blind, randomized, controlled trials comparing IA local anesthetics to placebo showed a statistically significant decrease in both pain scores and additional analgesic requirements (Lavelle, 2007).
The problem with local anesthetics is the potential for cartilage destruction. This has only been found in animals and has not been studied in humans, but concern is high enough that most practitioners limit the number and frequency of injections. There have not been any reports of an ED patient having adverse effects from a single IA dose of a local anesthetic (Genes, 2012), but a study on rats showed prolonged chondrotoxicity after a single IA dose of bupivacaine (Chu, 2010).
Intra-articular opioids (morphine, fentanyl, etc.) have been found to be effective in inflamed tissues, in which the perineurium is disrupted and the opioids have better access to nerve receptors (Lavelle, 2007). Stein, et al., showed that IA morphine actually acted on peripheral receptors rather than systemically by demonstrating that the pain reduction resulting from IA morphine was reversed by injection of IA naloxone (Stein, 1991). Since then the efficacy of IA morphine has been debated, with a myriad of studies investigating its analgesic effects in patients post-op from arthroscopy and ACL repair. One systematic review looked at 19 of these studies and determined that IA morphine had a “mild analgesic effect” (Gupta, 2001). Meanwhile another study found very favorable results for IA morphine in patients with chronic knee pain from osteoarthritis. They actually found that the analgesic effect lasted longer than a week (Likar, 1997).
Other substances that may be injected into painful joints are hyaluronic acid, ketorolac, and clonidine (Lavelle, 2007), but none has been sufficiently studied or is commonly used in the ED.
Bottom line: The best options for intra-articular injections for pain control are steroids, local anesthetics, or morphine. All have been subject to controversy surrounding their efficacies and adverse effects. Steroids should be withheld in suspected infection.
1. When do you tap a painful, swollen joint? When do you obtain imaging before arthrocentesis?
3. Which synovial fluid studies do you send in order to help make the diagnosis? Which of these rule out septic arthritis?
4. Do you inject the joint with any medication for symptomatic relief? If so, which medication?
1. When do you get abdominal plain films before CT in suspected SBO?
2. How do plain films guide your management in patients with suspected intraperitoneal free air?
With advances in radiologic technology and the increased availability of CT, ultrasound, and MRI, the contemporary use of plain abdominal radiographs (AXR) in the evaluation of acute abdominal pain is poorly defined (Hampson, 2010). A broad spectrum of indications is listed by the American College of Radiology. However, even for these, the accuracy of AXR is notoriously low, and it is rarely ever the ideal first-line imaging study. A prospective study of patients with non-traumatic abdominal pain presenting to the emergency department estimated the overall sensitivity, specificity, and accuracy of AXR series for all pathology to be 30%, 87.8%, and 56%, respectively (MacKersie, 2005). A retrospective review by Kellow, et al. showed 72% of “normal” AXRs and 78% of “nonspecific” AXRs were actually found to have pathology on follow-up imaging (Kellow, 2008).
Several recent studies have looked at the utility of obtaining AXR, including appropriate uses, diagnostic significance, and whether this imaging modality affects management. One study estimated that only 3% of AXRs obtained in 861 patients significantly impacted management (Kellow, 2008).
Two diagnoses for which abdominal radiography is still commonly used are small bowel obstruction (SBO) and pneumoperitoneum.
1. Small bowel obstruction
Despite being one of the few abdominal pathologies with distinct plain film abnormalities, findings of obstruction on AXR are difficult to interpret. Markus, et al. found that inter-observer agreement between radiologists, based on kappa values, for the diagnosis of SBO was only “fair to good.” Additionally, this study showed that agreement was only “poor to fair” for determining large bowel obstruction, and the location or completeness of SBO (Markus, 1989).
Several studies estimate AXR sensitivities between 45-90% and specificity of approximately 50% in diagnosing SBO. CT, in comparison, has a reported sensitivity of 93% and specificity of 100% (Suri, 1999; Frager, 1994). In patients with suspected SBO, Maglinte reports that AXR yields accurate diagnoses in 50-60%, indifferent or nonspecific findings in 20-30%, and misleading reads in 10-20%. Identification of partial SBO lowers the sensitivity to 30% for AXR (Frager, 1994), whereas it is around 60% for CT (Maglinte, 1996).
Despite this data, bowel obstruction remains one of the most common indications for AXR ordered in the ED evaluation of abdominal pain. It is important to understand if, when, and how these images should affect patient management (Kellow, 2008).
A large study found that the addition of AXR to clinical assessment in ED evaluation of abdominal pain significantly increased the sensitivity of clinical diagnosis from 57% to 74%; the positive predictive value, however, was not significantly changed. Moreover, the addition of radiographs in suspected obstruction did not significantly change ED physicians’ initial diagnosis or confidence in their diagnosis (Van Randen, 2011).
In addition to poor diagnostic accuracy, AXR (unlike CT) lacks the ability to distinguish partial from complete obstruction, determine a transition point, or identify the cause, information vital to clinical management and surgical planning. Thus, despite increasing diagnostic sensitivity, AXR is not likely sufficient to preclude further imaging. In a retrospective study, the majority (53%) of patients with dilated loops of bowel on AXR deemed “significant” proceeded to CT scan (Jackson, 2001). Of these 47 patients, 9 had CTs without evidence of obstruction, contributing to the authors’ conclusion that the yield of initial AXR in SBO was low (Jackson, 2001). In another review, only 5% of AXRs performed to evaluate obstruction confirmed the diagnosis and were managed without obtaining further imaging (Kellow, 2008). Again, the majority of all abnormal AXRs underwent subsequent CT. In one study of patients with suspected acute SBO, CT corrected erroneous diagnoses and management in 21% of cases (Taourel, 1995).
MacKersie found nonenhanced CT scan to be more sensitive and specific compared with a three-view AXR series. In equipped facilities, the time to obtain a noncontrast CT should be comparable to a three-view AXR, suggesting that if time is critical, the test of choice is the one with diagnostic superiority (MacKersie, 2005). Given that the majority of suspected obstructions are followed by CT regardless of AXR outcome, radiation is rarely spared. Instead, the patient ends up with a greater exposure than if the more definitive test had been used initially.
The reasoning behind initial AXR for suspected SBO likely falls into one of two clinical scenarios. First is the situation of low clinical suspicion for obstruction, in which AXR acts to confirm or support a negative diagnosis. The second is one of high clinical suspicion for bowel obstruction, in which AXR is obtained in hopes of expediting disposition (surgical consult, operative intervention, etc.) while saving the patient time and radiation.
In both scenarios AXR is not ideal. In the first, AXR may provide a false sense of security, as its sensitivity is too low to comfortably rule out obstruction, especially early or partial. In the second scenario, AXR may support clinical suspicion, but rarely provides enough evidence to dictate management, and may actually delay treatment. Our surgical colleagues typically still want a CT even with a markedly positive AXR.
Bottom Line: AXR has a limited role in bowel obstruction. It may be useful in patients with recent surgery or known bowel adhesions, who are likely to be taken to the OR for obstruction with an already known etiology, or who are too unstable to go to CT (Jackson, 2001).
Intraabdominal free air as seen on XR has been used to dictate surgical intervention for decades. Several studies have looked at the accuracy of AXR in determination of free intraperitoneal air secondary to perforated viscus, with varied results. Sensitivities ranging from 15-83% have been reported (Gans, 2012). Among common causes of this variability are the adequacy of films, the amount of air present, and the use of proper positioning techniques.
Miller and Nelson’s 1971 paper demonstrated the importance of patient positioning and compared different radiographic views. The study aimed to find the best technique to detect extraluminal air by injecting subjects with small volumes of air intraperitoneally at McBurney’s point, followed by radiographic evaluation. They found the highest sensitivity with the following sequence: first, 10-20 minutes of left lateral decubitus positioning, followed by AXR; second, careful placement into an upright position for 10 minutes, then CXR (AP or PA) and upright AXR; third, recumbency and AXR in supine position (Miller, 1971). This technique theoretically supports the movement of air to below the right hemidiaphragm, avoiding the superimposed gastric bubble on the left. Using this technique, it was reported “possible to consistently demonstrate as little as 1cc [of air] under the right hemidiaphragm” (Markus, 1989).
For a patient with peritonitis, transport and positioning are difficult. Utilizing the imaging views with the best diagnostic yield is crucial. One study of free air determination in various AXR views reported the accuracy of left lateral decubitus, upright, and supine views to be 96%, 60%, and 56%, respectively (Roh, 1983). Despite the lower accuracy of supine films, they are often the easiest to obtain in an unstable patient. Detection of pneumoperitoneum on supine films requires the presence of significantly more extraluminal air than other views; the most frequent findings include Rigler’s Sign (gas on both sides of the bowel wall) and linear or triangular right upper quadrant gas (Levine, 1991).
Upright CXR has been repeatedly demonstrated to be superior in free air detection to upright AXR (Flak, 1993; Miller, 1971). Sensitivity of 85% has been reported (Gans, 2012). Although AP or PA CXR is commonly utilized, upright lateral CXR may have better sensitivity, as noted in a small retrospective review (Markowitz, 1986). Field, et al. questioned the utility of the erect AXR, claiming it added nothing to the upright CXR and supine AXR. CXR has the additional benefit of identifying diagnostically significant chest pathology (Field, 1985).
CT has revolutionized evaluation of the acute abdomen. CT has several advantages over plain film, including better sensitivity and higher accuracy. In a study of trauma patients status post introduction of intraperitoneal air by diagnostic peritoneal lavage, upright CXR was only 38% sensitive, missing all patients with minimal air and most with moderate free air. CT within 24 hours of DPL was 100% sensitive for free air (Stapakis, 1992). CT has the ability to identify contained perforation and to localize the site of perforation in a majority of cases, guiding management and surgical intervention (Mindelzun, 1997).
A recent study looking at the value of plain radiographs in abdominal pain found that, of those with confirmed perforated viscus, the sensitivity of initial AXR was only 15% (Van Randen, 2011). Of thirteen perforations, four were contained and were not visible on AXR. The addition of AXR to clinical assessment did not significantly increase the sensitivity or positive predictive value, nor did it significantly change the suspected diagnosis.
Bottom Line: While AXR performs better in detecting free air than it does for detecting other pathologies, its diagnostic use is technique-dependent and is insufficient to rule out perforated viscus in patients with a moderate to high clinical suspicion. For patients too unstable to be taken to CT, upright CXR should be the test of choice for emergent determination of free intraperitoneal air. Supine or left lateral decubitus AXR may be of limited benefit.
3. Who do you CT scan in the work up of pancreatitis?
In acute pancreatitis (AP) the diagnosis of disease, identification of a treatable cause, and determination of disease severity are important parts of evaluation. CT scanning can theoretically aid in all of these.
AP is most commonly diagnosed by the presence of at least two of the following three criteria: characteristic abdominal pain (constant upper abdominal pain with radiation to the back), elevated amylase/lipase levels (> 3 times the upper limit of normal), and consistent findings on imaging (Tenner, 2013). When history and labs clearly indicate AP, CT is unlikely to add important information. However, abdominal pain and symptoms of AP may be atypical. Amylase and lipase have limited sensitivity and specificity for AP: both may be elevated in other causes of abdominal pain such as appendicitis, cholecystitis, and bowel ischemia. Contrast-enhanced CT has been shown to have greater than 90% sensitivity and specificity for diagnosis of AP (Balthazar, 2002). Additionally, it provides the advantage of simultaneously ruling out other causes of abdominal pain.
Identifying the cause of pancreatitis may be crucial in guiding management. Gallstones are the leading cause of pancreatitis, and abdominal biliary ultrasound is recommended for all patients with undifferentiated AP to evaluate for gallstones (Tenner, 2013). However, ultrasound is limited in its evaluation of distal stones. Contrast CT can visualize evidence of obstruction such as biliary dilatation; however, it is only moderately sensitive for detecting gallstones and biliary stones (Anderson, 2006; Anderson, 2008). While contrast CT and MRI are comparable studies for use in early assessment of AP, MR adds sensitivity in detecting choledocholithiasis and pancreatic duct disruption (Macari, 2010). MRCP, endoscopic ultrasound, or ERCP should be considered when biliary obstruction is strongly suspected.
Because mortality increases significantly based on severity, early prediction of severe disease is important for proper management and disposition, but may be difficult on initial presentation to the ED. Severity scoring systems such as Ranson’s criteria are generally less accurate within the first 48 hours of disease, and their predictive value has been repeatedly questioned. Even APACHE II is only 75% sensitive on presentation (Osvaldt, 2001). AP severity is now separated into three categories after the 2012 revision of the Atlanta classifications (Banks, 2012). Mild AP is the absence of organ failure or local complications, and has expected improvement within 48 hours. Moderately severe AP includes local complications and/or <48 hours of organ failure. Severe AP is defined only by persistent organ failure >48 hours.
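The three revised Atlanta categories reduce to a simple decision rule. As a minimal sketch (function and parameter names are ours, not from the guideline):

```python
def atlanta_severity(organ_failure_hours, local_complications):
    """Classify acute pancreatitis per the 2012 revised Atlanta summary above.

    organ_failure_hours: duration of organ failure in hours (0 if none).
    local_complications: True if peripancreatic fluid collection,
        necrosis, pseudocyst, etc. are present.
    """
    if organ_failure_hours > 48:
        return "severe"             # persistent organ failure alone defines severe AP
    if local_complications or organ_failure_hours > 0:
        return "moderately severe"  # local complications and/or transient (<48 h) organ failure
    return "mild"                   # no organ failure, no local complications

print(atlanta_severity(0, False))   # -> mild
print(atlanta_severity(24, False))  # -> moderately severe
print(atlanta_severity(72, True))   # -> severe
```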
Two phases of disease are recognized as peaks of mortality: early (<1 week from symptom onset) and late (>1 week). The early phase is characterized by systemic inflammatory response syndrome (SIRS). Morbidity and mortality reflect the presence of end-organ failure (defined as SBP < 90 mmHg, creatinine > 2 mg/dL, PaO2 < 60 mmHg, or GI bleeding > 500 cc/24 hr). In this phase, management is based on disease presentation and not imaging, as findings on contrast CT often underestimate disease severity and rarely prompt urgent intervention. The later phase connotes development of local complications (including peripancreatic fluid collections, necrosis, and pseudocysts) and/or persistent SIRS. Infected pancreatic necrosis is associated with significant morbidity and may, like other local complications, require intervention. Contrast CT evaluation is an effective technique to detect and characterize these complications. After four days from symptom onset and with proper technique, contrast CT is reported to identify necrosis with 87% accuracy, 50-100% sensitivity (depending on whether the patient has minor or extended areas of necrosis, respectively), and specificity nearing 100% (Balthazar, 2002).
Most cases of AP are mild, and will clinically improve with supportive care by 48 hours. For more severe cases, local complications, including necrosis, typically will not be present on initial presentation and likely not clinically important in the first week of symptoms. Given this, recent guidelines published in the American Journal of Gastroenterology regarding management of AP state that contrast CT is not recommended as part of the routine evaluation of AP. These guidelines recommend contrast CT or MRI be limited to those in whom diagnosis is unclear on initial presentation, those who fail to improve after 2-3 days, or exhibit acute decline, in order to evaluate for local complications (Tenner, 2013).
Bottom Line: Contrast CT on initial presentation of acute pancreatitis does not routinely contribute to management and should be reserved for those who have not improved after 2-3 days or have decompensated.
Bonus: Do you use dextrose-containing IV fluids in the resuscitation of kids with vomiting?
Gastroenteritis in children is a frequent cause of ED visits. Dehydration and carbohydrate depletion from vomiting, diarrhea and poor oral intake leads to decreased tissue perfusion, anaerobic metabolism, and glucagon release. Glucagon, in turn, promotes breakdown of glycogen with resultant ketogenesis. Ketones contribute to metabolic acidosis, which has been associated with oral intolerance and shown to be predictive of hospital admission (Friedman, 2005; Gorelick, 2013). Ketones themselves are postulated to be associated with persistent nausea, vomiting, and anorexia (Wheless, 2001). The idea of using dextrose-containing fluids in these dehydrated, ketotic patients makes physiologic sense. By giving carbohydrate, insulin production is stimulated, glucagon suppressed, lipolysis stops, ketosis resolves, and oral intake improves. Several studies have looked at the effects of dextrose containing solutions versus fluids without dextrose in relatively small sample populations. Two small studies showed, as one might expect, increased blood glucose in the treatment arm, but no significant clinical benefits (Juca, 2005). Rahman, et al., in a randomized control trial of 67 children, showed no adverse outcomes with dextrose-containing solution, noting comparable urine output, suggesting osmotic diuresis did not occur in the treatment arm (Rahman, 1988).
Levy and Bachur performed a non-blinded retrospective case control study in which they found a significant inverse association between the amount of IV dextrose received on initial visit for acute gastroenteritis with dehydration and return visits with admission (Levy, 2007).
Levy, et al., in a double-blind randomized controlled trial, looked at the effects of an initial bolus of normal saline (NS) versus D5NS in children with dehydration from gastroenteritis (Levy, 2013). A significantly greater decrease in serum ketones was seen in the treatment group at 1 and 2 hours. A trend towards a lower admission rate was seen (9% absolute risk reduction in the treatment arm); however, no statistically significant decrease in hospitalization was found. Of the discharged patients reached by phone for follow up, a trend towards more unscheduled medical care was seen in the normal saline control group. This trend was exaggerated when analyzing only the discharged patients who had had an acidosis. Further research to increase the power of this study may help better determine if these trends are meaningful and should influence general practice.
Bottom Line: Despite the logic behind the addition of dextrose to IVF in gastroenteritis-associated dehydration with ketosis, no compelling evidence exists yet to show its association with an improvement in clinically significant outcomes.
Our second GI topic for April:
1. When do you get abdominal plain films before CT in suspected small bowel obstruction?
3. When do you order a CT scan in the work up of pancreatitis?
Bonus: Do you use dextrose-containing IV fluids in the resuscitation of kids with vomiting?
1. Do you use a PPI infusion in patients with undifferentiated UGIB?
Acute upper gastrointestinal hemorrhage (UGIB) is a potentially life threatening condition caused by a number of etiologies. It is defined as any lesion causing bleeding proximal to the ligament of Treitz. This includes Mallory-Weiss tears, Boerhaave’s syndrome, esophageal varices and arteriovenous malformations. The most common cause of UGIB, however, is peptic ulcer disease (Lau, 2007). Proton pump inhibitors (PPIs) were first investigated for use in patients with peptic ulcer disease (PUD). PPIs act by decreasing the production and secretion of gastric acid. They irreversibly block the hydrogen/potassium ATPase in gastric parietal cells. Gastric acid has been shown in in vitro experiments to impair clot formation, promote fibrinolysis and impair platelet aggregation (Chaimoff, 1978; Green, 1978). Thus in theory, inhibition of gastric acid would allow the pH to rise, promoting clot stability and decreasing the likelihood of rebleeding. However, this benefit is only theoretical. The goal of raising the gastric pH above 6 has not been shown to be a reliable proxy for treatment efficacy (Gralnek, 2008).
Intravenous PPIs have become standard care post-endoscopy and post-operatively to prevent rebleeding in patients with PUD (Gralnek, 2008). Recent meta-analyses show that PPIs decrease the rate of rebleeding and surgical intervention in patients with PUD after endoscopic intervention. A pooled analysis of 16 randomized controlled trials found that a bolus dose of PPI followed by an infusion is more effective than bolus dosing alone for reducing the rebleed rate and the need for surgery, leading to the current recommendation (Morgan, 2002). However, this study clearly states: “Intravenous proton pump inhibitors appear to be useful in the prevention of rebleeding in patients with acute peptic ulcer bleeding that has been successfully treated with endoscopic hemostasis.”
A meta-analysis of 24 randomized trials (4373 patients) from the Cochrane group reached similar conclusions (Leontiadis, 2004). In patients with PUD, PPI treatment reduced rebleeding (NNT = 15), surgical intervention (NNT = 32) and repeat endoscopy (NNT = 10). However, they found no change in mortality (OR = 1.01). Overall, outcomes were modest. PPIs prevented rebleeds in 6.6% of patients, surgical interventions in 3.2% of patients and repeat endoscopy in 10% of patients. Interestingly, the Cochrane group did separate analyses of Western and Asian populations. They found that trials conducted in Asia demonstrated benefits to PPI infusions in peptic ulcer disease in terms of mortality (NNT = 34), rebleeding (NNT = 6) and surgical intervention (NNT = 23). Conversely, Western patients showed a suggestion of increased harm in PPI groups, although this was not statistically significant.
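The NNT figures quoted here are simply the reciprocal of the absolute risk reduction; a quick sketch of the arithmetic (function name is ours, ARRs from the Cochrane analysis above; small discrepancies versus the published NNTs reflect rounding in the source data):

```python
def nnt(absolute_risk_reduction):
    """Number needed to treat: the reciprocal of the absolute risk reduction."""
    return 1.0 / absolute_risk_reduction

# Absolute risk reductions quoted from the Cochrane analysis:
print(round(nnt(0.066), 1))  # rebleeding prevented in 6.6% of patients -> ~15
print(round(nnt(0.032), 1))  # surgery avoided in 3.2% -> ~31 (quoted NNT 32)
print(round(nnt(0.10), 1))   # repeat endoscopy avoided in 10% -> 10.0
```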
So, the current literature suggests PPI infusions in patients with known PUD offer only marginal benefits overall and possible harm in certain populations. What about in undifferentiated UGIB? Are PPIs beneficial in undifferentiated UGIB? Are they beneficial when given prior to endoscopy?
Fortunately, a Cochrane Review published in 2010 helps to address these questions (Sreedharan, 2010). This group found six randomized controlled trials (RCTs) relevant to the question. Of these, four compared PPI to either placebo (Daneshmend, 1992; Hawkey, 2001; Lau, 2007) or no treatment (Naumovski, 2005) and the other two studies compared PPI to H2 blockers. The studies comparing PPI to placebo (n = 1983) are the most relevant to our question.
The Cochrane review found no difference in mortality comparing PPI to placebo (OR = 1.19 (0.75-1.68)), no difference in rebleeding within 30 days (OR = 0.87 (95% CI 0.66-1.16)) or surgery within 30 days (OR 0.90 (0.64-1.27)). One of the limitations of this review was that PPI treatment was not the same in all studies. Only the Lau study compared a PPI bolus and infusion with placebo. In this trial, the authors reported that fewer patients in the PPI arm required intervention during endoscopy. However, there were no differences in any patient oriented outcome (death, rebleeding, or surgery within 30 days).
–PPI treatment in undifferentiated UGIB does not appear to decrease any clinically important effects including rebleeding, need for surgery, or death.
–PPI treatment prior to endoscopy for undifferentiated UGIB decreases the number of patients who require an endoscopic therapy during endoscopy.
2. Do you use octreotide in patients with bleeding varices?
There is little in the world of Emergency Medicine that gets a clinician’s pulse racing as much as the massive upper GI bleeder who has a history of esophageal or gastric varices. These patients have a high morbidity and mortality rate even with aggressive ED management, transfusion, intensive care, and gastroenterology consultation. Six-week mortality is estimated between 11-20% (Dell’Era, 2008). Patients often are aware that they have varices, which aids in guiding treatment. However, in patients with no prior diagnosis or those too sick to communicate, establishing the presence of varices may be more difficult.
Octreotide is a somatostatin analog that acts by decreasing blood flow into the portal circulation, thus decreasing portal pressure, particularly postprandial flow (Abraldes, 2002). It is widely used for the treatment of variceal bleeding. In clinical trials, octreotide has only been noted to be beneficial when paired with endoscopic therapy (Dell’Era, 2008). A meta-analysis in 2001 demonstrated an improved efficacy of endoscopic therapy in terms of early rebleeding when octreotide was given concomitantly (Corley, 2001). The study found an NNT of 6 when compared to placebo for rebleeding and transfusion requirement. The reduction in transfusion was modest (about 0.7 units of PRBCs). Additionally, no study found any reduction in mortality or overall rebleeding (only benefit in early rebleeding) (Corley, 2001; Dell’Era, 2008; Longacre, 2006). A more recent randomized, placebo controlled study found that sclerotherapy plus octreotide was equal to sclerotherapy plus placebo in terms of 7-day mortality, rebleeding, transfusion requirements, and ICU stay (Morales, 2007).
In 2008, the Cochrane group performed a systematic review of all randomized trials looking at octreotide in the treatment of varices (Gøtzsche, 2008). The group found 21 randomized trials of octreotide versus placebo (or no treatment). They concluded that the use of octreotide did not reduce mortality. It did reduce the amount transfused by about ½ a unit of blood in the studies with a low risk of bias (1.5 units in those with high risk of bias) but this result was not thought to be clinically significant by a number of commentators. They found no difference in rebleed rates in the low-risk of bias studies (but a substantial reduction in the high risk of bias studies). The Cochrane group did find a lower rate of failed initial hemostasis in the octreotide treatment group.
Overall, it seems that the incremental benefit of octreotide in addition to endoscopic therapy in the treatment of variceal bleeding is only seen in surrogate, non-patient-oriented outcomes. No study, systematic review, or meta-analysis has shown a benefit in mortality. Additionally, this applies to patients in whom varices are known to be the cause of the UGIB. There is no data on the use of octreotide in undifferentiated UGIB, and no pathophysiologic basis for its use in these patients.
–Octreotide in combination with endoscopic therapy in patients with bleeding varices has not been shown to reduce rebleed rates or mortality, though it may lower the rate of failed initial hemostasis.
–Octreotide did show a modest reduction in blood transfusions required in this clinical scenario.
3. In which patients with UGIB do you place a nasogastric tube (NGT) and for what purpose (diagnostic vs. therapeutic)?
Historically, nasogastric intubation (NGI) in patients with suspected upper gastrointestinal bleed (UGIB) has served multiple roles: therapeutic, diagnostic, and prognostic. However, its utility has been controversial for years. Recognizing the severe discomfort of this basic procedure and its rare but serious complications, scrutiny of the extant data is warranted to decide what, if any, benefit NGI provides.
In a patient with GI bleeding, distinction between upper and lower source (i.e., bleeding proximal or distal to the ligament of Treitz) is essential for determining advanced management. Gross evaluation of nasogastric aspirate (NGA) is not an uncommon diagnostic test done in the ED to assess for the presence of a proximal bleed. Hematemesis is virtually always the result of briskly bleeding lesions proximal to the pylorus (Peura, 1997), making NGA obsolete in the diagnostic evaluation of patients with hematemesis. In patients with melena or hematochezia without hematemesis, the source is not as straightforward and NGA has been recommended for diagnosis. It stands to reason, then, that studies to determine the diagnostic utility of NGA should focus on patients without hematemesis. In one such study, the yield of positive (i.e., bloody) NGA was significantly lower than in prior studies that included subjects with and without hematemesis (Witting, 2004). This likely reflects that UGIB without hematemesis often results from either a slower bleed or one distal to the pylorus, such as a duodenal lesion. Both of these scenarios are much less likely to yield a positive result with NGI. A technically adequate NGI should include duodenal aspirate, as evidenced by the presence of bile, but often does not. Although a positive NGA was determined to strongly predict an UGI lesion on endoscopy, with a likelihood ratio (LR) of +11 and a positive predictive value (PPV) of 92%, a negative NGA showed minimal utility with a LR of 0.6 and NPV of 64% (Witting, 2004).
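To see why those likelihood ratios matter clinically, recall that an LR converts pre-test odds to post-test odds. A short illustration (the 50% pre-test probability is ours, chosen only for demonstration; LRs from Witting, 2004):

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability via an LR."""
    pre_odds = pretest_p / (1 - pretest_p)       # probability -> odds
    post_odds = pre_odds * likelihood_ratio      # Bayes via likelihood ratio
    return post_odds / (1 + post_odds)           # odds -> probability

# With a 50% pre-test probability of an upper GI source:
print(round(posttest_probability(0.5, 11), 3))   # bloody NGA (LR+ 11) -> 0.917
print(round(posttest_probability(0.5, 0.6), 3))  # negative NGA (LR- 0.6) -> 0.375
```

The asymmetry is the point: a frankly bloody aspirate pushes the probability above 90%, while a negative aspirate barely moves it, which is why a negative NGA cannot rule out an upper source.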
Other diagnostic models have been shown to have utility without incorporation of NGI. Three strong independent risk factors (age < 50, black stool, and BUN/creatinine ratio greater than or equal to 30) were identified to predict an UGIB in patients without hematemesis and compared favorably to NGA. The absence of all three risk factors corresponds to a 5% risk of UGIB. Of those with two or more risk factors, 93% had an UGIB (Witting, 2006).
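The rule amounts to counting factors; a minimal sketch (function and variable names are ours, thresholds from Witting, 2006):

```python
def ugib_risk_factors(age, black_stool, bun, creatinine):
    """Count Witting's three independent risk factors for an upper GI source
    in GI bleeding without hematemesis (Witting, 2006)."""
    factors = 0
    if age < 50:
        factors += 1
    if black_stool:
        factors += 1
    if bun / creatinine >= 30:
        factors += 1
    return factors

# 0 factors corresponded to ~5% risk of UGIB; >=2 factors to 93% in the study.
print(ugib_risk_factors(age=45, black_stool=True, bun=60, creatinine=1.5))  # -> 3
```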
Historically, NGI has also contributed to assessing prognosis, helping to risk stratify patients, guide type and timing of intervention, as well as influence disposition. The prognostic value of NGI has been explored in several studies. A positive NGA has been associated with higher mortality rates (Leung, 2004) and worse outcomes (Stollman, 1997). Coffee-ground or bloody NGA has been shown to strongly predict the presence of high-risk endoscopic lesions (HRL), defined as an active bleed or visible vessel (Perng, 1994). Bloody NGA demonstrated an increased association with HRL when compared with clear or bilious NGA and with coffee-ground NGA, having an odds ratio (OR) of 4.8 and 2.8, respectively (Aljebreen, 2004). For a test to be clinically useful, however, it must be sensitive enough to rule out the dangerous condition. If “positive NGA” refers only to gross blood, the PPV and NPV of HRL are 45% and 78%, respectively; if defining “positive NGA” to include all aspirate except clear or bilious, the PPV and NPV are 32% and 85%, respectively (Aljebreen, 2004). Therefore, even using the most inclusive definition of a positive NGA, this technique misses a significant portion of patients with high risk lesions. If NGA is grossly positive, then such a lesion is likely. However, based on this data, a negative result has a poor negative likelihood ratio, and lacks the sensitivity to rule out a HRL.
The ability to identify patients with HRL, lesions likely to re-bleed, and those with higher mortality is important. These correlations, however, do not intrinsically translate to useful information in terms of acute management decisions. A more clinically useful endpoint would differentiate those who would benefit from urgent endoscopy from those who may safely wait. Noninvasive prognostic scales, such as the Glasgow-Blatchford and pre-endoscopic Rockall scores, have been developed and validated to accurately predict patients who require intervention and those who can safely wait (Barkun, 2010). The Glasgow-Blatchford score has been determined to have sensitivity approaching 100%, allowing the provider to rule out significant bleeding (Chen, 2007; Srygley, 2012).
No one test or scoring system has yet been devised to stratify those who require emergent versus urgent endoscopy. Likewise, the benefit of rapid endoscopy (<24 hours) is not clear (Targownik, 2007; Spiegel, 2009). A prospective study showed a mortality benefit from endoscopy less than thirteen hours after presentation in high-risk patients (based on the Glasgow-Blatchford score) (Chin, 2011). Although a frankly positive NGA highly predicts a HRL, the clinical picture of the patient is more likely to determine the rapidity with which endoscopy should be performed.
In terms of therapeutic use, the importance of adequate gastric lavage was highlighted in a retrospective study revealing increased morbidity and mortality associated with inability to clear fundal blood during endoscopy (Stollman, 1997). These findings suggest that inadequate pre-endoscopy gastric lavage may be to blame for poor visualization and thus worsen outcomes; however, no data exist directly examining the impact of pre-endoscopy gastric lavage on the “fundal pool.” Multiple adjunctive measures have been shown to be effective in clearance of blood, such as endoscopic lavage or use of promotility agents such as metoclopramide or erythromycin (Leung, 2004). International consensus guidelines recommend the use of promotility agents only in the select group of patients in whom a significant amount of blood is anticipated (Barkun, 2010).
Looking at overall impact of NGI on patients, Huang, et al. observed the outcomes of clinically matched patients with UGIB, comparing those who had NGI and those who did not. Although patients with NGI obtained endoscopy more quickly on average than those without NGI, no significant difference in length of hospital stay or 30 day mortality was found (Huang, 2011).
The diagnostic utility of NGI is limited. It adds no useful data in patients with hematemesis and is not sensitive in patients without. The ability of NGI to predict patients with high-risk lesions is good but, again, lacks the sensitivity to rule them out, proving inferior to some noninvasive prediction scores. NGA alone is not adequate to risk-stratify patients and change the urgency of obtaining endoscopy. The therapeutic value of NGI for pre-endoscopic gastric lavage is unclear and reasonable alternatives exist which spare patient discomfort. NGI has not been shown to improve patient outcomes.
–A negative NGI cannot rule out UGIB or the presence of high risk lesions
–The color of NGA is inadequate data on which to base management decisions
–NGI does not clearly improve endoscopic results or patient outcomes
4. What is the utility of fecal occult blood test for patients in whom we suspect UGIB?
Fecal occult blood test (FOBT) is intended as a screening tool for lower GI malignancy but is commonly used in the Emergency Department as part of a work-up for suspected UGIB. Virtually all literature surrounding the test relates to its use in the outpatient setting. Without dedicated evidence to support the use of this test in the ED, we should be knowledgeable about the test itself, and thoughtful regarding when to use it and how to interpret it.
Different types of FOBT are available, and knowing the exact test used is important for proper interpretation. Non-guaiac-based FOBTs have increased specificity for LGIB, as they use immunochemical assays to detect human hemoglobin rather than heme. They are not useful in detection of UGIB, as most hemoglobin is digested in the small intestine and not present in rectal stool (Allison, 2007). Guaiac-based tests (gFOBT) detect heme by using it as a catalyst in the oxidizing reaction of the guaiac-impregnated card, producing an immediate blue color where heme is present. The result can be altered by a variety of substances that participate in this reaction. Heme-containing red meat or peroxidase-containing foods (turnips, radishes) can produce a false positive. Vitamin C may produce a false negative due to its antioxidant effect. The sensitivity varies depending on the specific test used, and increases with the amount of blood present. The Hemoccult II (a guaiac-based test) requires 10 mL of fecal blood loss per day (10 mg blood/gram of stool) for 50% sensitivity, but may be positive with <1 mg/g (Stroehlein, 1976). In contrast, melena production requires 50 mL of gastric blood, based on a study by Schiff, et al. in 1942. Despite the apparent simplicity of the gFOBT, it is user-dependent. In a survey of 173 medical providers, 12% did not accurately interpret results (Selinger, 2003).
To determine whether FOBT enhances our management of a patient with suspected UGIB, certain questions should be answered. Do the results of FOBT have diagnostic value in these patients? Is FOBT sensitive enough that a negative test can rule out UGIB? Does the false positive rate lead to significant undue intervention such that the risks of the test outweigh the benefits?
Stool color is among the best clinical predictors of UGIB. According to a literature review, if a patient reports melena, the likelihood of an UGIB is increased more than five-fold; if found on exam, UGIB is 25 times more likely to exist (Srygley, 2012). Black stools were shown to be 80% sensitive and 84% specific for an UGIB (Witting, 2006). Conversely, blood clots present in stool make UGIB 20 times less likely (Srygley, 2012). Given these opposing correlations with UGIB based on stool color, and since both black and red stool would be theoretically guaiac-positive, the role of gFOBT in either black or red stool would only be to distinguish blood clot from red food particles. The significance of guaiac-positive brown stool in a patient with a history concerning for UGIB, however, is not evident. Similarly enigmatic is the significance of guaiac-negative stool.
One to two liters of ingested blood may cause melena for up to five days, starting approximately four to 20 hours after ingestion (Wilson, 1990). It can be inferred that guaiac-negative stool may occur in active UGIB if the blood-containing stool has not had sufficient time to reach the rectum, or if the bleeding has been intermittent and the sample obtained represents a non-bleeding interval. Although it is difficult to imagine a significant bleed persisting for more than 24 hours without producing a positive result, a negative test cannot exclude the possibility.
The false positive rate of gFOBT in predicting acute UGIB is not known. A review article looked at the utility of endoscopy to detect upper GI lesions in non-emergency patients with positive screening FOBT and a negative colonoscopy. Of patients with guaiac-positive screen, 37-53% have negative colonoscopies. Of these patients, the literature review showed endoscopy to be positive for UGI cancer in <1%, positive for nonmalignant sources of bleeding 11-21%, and incidental, likely unrelated, findings in 10-36%. The review did not extrapolate data to differentiate results of patients with anemia or other significant symptomatology, which may have been interesting for purposes of this discussion. Although this data does not apply to our patient demographic, it does give some insight into the low specificity and low PPV (for this population) of guaiac positive, non-melenic stool in determining endoscopic pathology (Allard, 2010).
As with any diagnostic test, in order for it to have utility, it must have a reasonable sensitivity and specificity to avoid both missed diagnosis and excessive overtreatment. While the sensitivity of gFOBT to detect blood is high, its ability to detect UGIB is unknown. Therefore, it cannot be used to rule out UGIB if a high clinical suspicion exists. The extremely low specificity poses the dilemma of what to do with guaiac-positive brown stool. Without a sufficient amount of blood to produce melenic stool, can a positive guaiac test be discounted as clinically insignificant in the ED or does it commit the provider to pursuing medical and endoscopic management? Undoubtedly, the results of gFOBT alone should not dictate care, but the question remains as to whether or not occult blood testing should be obtained at all in ED evaluation of UGIB. Unfortunately, using this test in a setting for which it was neither intended nor researched limits our ability to interpret its results, imparting the risks associated with misinterpretation.
–The role of gFOBT in evaluating acute UGIB has not been sufficiently studied
–Stool color (black or red) has more diagnostic value than gFOBT results
–Positive gFOBT does not rule in UGIB and carries the potential risk of unnecessary treatment or procedures
–Negative gFOBT does not rule out UGIB and risks a false sense of security and under-treatment of true disease