Cocaine Chest Pain, “Answers”

1. What is the relationship between cocaine and acute coronary syndrome/MI?

Cocaine is the second most commonly abused drug in the United States and accounts for more drug-related Emergency Department visits than any other drug (Finkel, 2011). Chest pain is the most common presenting complaint, representing 40% of cocaine-related visits (Brody, 1990). In fact, up to 25% of all patients presenting to urban EDs with non-traumatic chest pain may have used cocaine (Hollander, 1995).

When clinicians evaluate cocaine chest pain (CCP), vasospasm is usually at the forefront of their minds. Cocaine, however, has multiple hemodynamic and hematologic effects that increase the risk of myocardial ischemia, both acutely after each use and chronically over time.

Acutely, cocaine increases plasma levels of dopamine and norepinephrine through central adrenergic stimulation and inhibition of reuptake at the synapse. The resulting sympathetic outflow manifests as tachycardia, hypertension, and increased myocardial oxygen demand. This occurs in the setting of cocaine-induced coronary vasoconstriction and acute thrombosis, both of which decrease myocardial oxygen delivery. Interestingly, this vasoconstriction appears to be more severe in areas of underlying atherosclerosis and in patients with concomitant tobacco use, but it also occurs in otherwise normal coronary arteries (Finkel, 2011; Hollander, 2006). Cocaine also induces a hypercoagulable state, with studies demonstrating acute thrombosis after cocaine administration in animal and cadaver models (Dressler, 1990). This occurs via multiple mechanisms, including platelet activation and elevation of procoagulant factors such as fibrinogen, without a compensatory increase in fibrinolytic factors (Siegel, 2002).

Chronically, cocaine has been associated with accelerated atherosclerosis. Autopsy studies of young cocaine users and animal models have supported this theory (Satran, 2005; Dressler, 1990). While less common in cocaine users with MI than in non-cocaine users with MI, atherosclerosis is more common in people who use cocaine compared to controls (Weber, 2002). Recent data, however, have led some to question this presumption. In 2010, Chang, et al., used coronary computed tomography angiography (CTA) to evaluate coronary artery disease (CAD) in CCP patients presenting to the ED and found that these patients did not have an increased incidence of atherosclerosis. Only low risk patients (normal ECG, negative troponin) were included in this study, possibly explaining the discrepancy (Chang, 2011). Chronic cocaine use has also been associated with left ventricular hypertrophy and coronary artery aneurysms, with one study finding aneurysms in 30% of cocaine users undergoing angiography (Satran, 2005).

Cocaine use, with its resultant increase in myocardial oxygen demand, multi-factorial decrease in myocardial oxygen delivery, and chronic deleterious changes, intuitively leads to an increased risk of MI. Several studies have looked at cocaine-associated MI. Overall, the incidence is low, ranging from 0.7% to 6% (Finkel, 2011). The COCHPA (Cocaine Associated Chest Pain) study, a multi-center prospective trial of patients presenting with chest pain in the setting of cocaine use, found the incidence of MI to be 6% (Hollander, 1994). Compared to the studies with a lower incidence of MI, COCHPA used more rigorous inclusion criteria and has been validated by other studies, and is therefore more likely to represent the true incidence of cocaine-associated MI.

With regard to the temporal association between cocaine use and MI, the period of increased risk is unclear. A review of the literature found onset of chest pain to range from one minute to four days after cocaine use (Hollander, 1992). This delayed presentation is inconsistent with cocaine’s half-life of thirty to ninety minutes. However, active metabolites of cocaine are detectable for over forty-eight hours after use, and delayed MIs have been attributed to coronary vasoconstriction from these active metabolites (Finkel, 2011). Additionally, cocaine withdrawal has been linked to myocardial ischemia several weeks after use, highlighting a second time window of risk (Hollander, 2006). Despite this extended and variable period of heightened risk, the greatest risk of MI is thought to be within the first few hours after cocaine use. In the first hour, cocaine users are estimated to have a twenty-four-fold increased risk of MI, with rapid decline thereafter (Mittleman, 1999).

2. How do you risk stratify a patient who presents with cocaine chest pain? If low risk, how does the disposition differ from that of standard chest pain patients?

Identification of high-risk patients with CCP parallels that of non-cocaine chest pain. An appropriate clinical scenario accompanied by characteristic ECG changes consistent with a STEMI warrants immediate percutaneous coronary intervention regardless of concomitant cocaine use. The evaluation of non-STEMI CCP, however, is more challenging, as the ECG and serum cardiac markers are often distorted, limiting the physician’s ability to assess for ischemia.

ECG interpretation in cocaine users is difficult, as these patients often have abnormal ECGs in the absence of ischemia and, conversely, can have non-diagnostic ECGs in the setting of true ischemia. Cocaine use promotes left ventricular hypertrophy, and early repolarization is common in the population most frequently presenting with CCP: young males. Both factors hinder interpretation of the ECG, making the diagnosis of MI more complicated. Overall, the sensitivity of the ECG in the setting of CCP is only 36% (Hollander, 1994). With regard to cardiac biomarkers, false elevations of CK and CK-MB occur secondary to skeletal muscle injury and rhabdomyolysis. Trends should be followed if using these markers; rising levels are more indicative of MI. Troponin should be used preferentially, as it is more sensitive, particularly in CCP (Hollander, 1998).
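
To make that 36% sensitivity concrete, a back-of-the-envelope calculation helps. The sketch below (Python) applies the 6% MI incidence from COCHPA and the 36% ECG sensitivity quoted in this post; the cohort size is an arbitrary assumption for illustration.

```python
# Back-of-the-envelope arithmetic, using figures quoted in this post:
# 6% MI incidence among CCP patients (COCHPA) and 36% ECG sensitivity.
# The cohort size is an arbitrary assumption for illustration.

ccp_patients = 1000                    # hypothetical CCP presentations
mi_incidence = 0.06                    # Hollander, 1994 (COCHPA)
ecg_sensitivity = 0.36                 # Hollander, 1994

true_mis = ccp_patients * mi_incidence
detected = true_mis * ecg_sensitivity
missed = true_mis - detected

print(f"Of {true_mis:.0f} MIs per {ccp_patients} CCP patients, "
      f"the ECG flags ~{detected:.0f} and misses ~{missed:.0f}.")
# -> Of 60 MIs per 1000 CCP patients, the ECG flags ~22 and misses ~38.
```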

The more difficult question, however, is how to manage low risk patients with CCP. While many institutions have moved towards two-troponin rule-outs, CTA, or observation units for low risk standard chest pain, a more conservative approach has endured for CCP given the limitations above. This approach comes at a cost: admissions for CCP average three days and cost an estimated $83 million annually (Weber, 2003).

Several recent studies have tried to address the disposition of low risk patients with CCP. In 2010, Chang investigated ED CTA for patients with low risk CCP to evaluate for CAD, a practice used at that institution for the evaluation of standard chest pain. In this study, CTA did not show an increased incidence of CAD in patients with CCP (Chang, 2011). In standard low risk patients, a negative CTA allows the patient to be discharged with close follow up. Concerns have been raised, however, regarding the use of CTA to manage patients with CCP. First, CTA cannot evaluate for vasospasm, one of the major concerns in CCP. Furthermore, CTA often requires beta blockade for adequate assessment, which is contraindicated in CCP. Finally, patients with CCP often present to the ED multiple times and may be subjected to a substantial radiation burden if CTA is used (Livshitz, 2011).

Another attractive alternative to admission is the use of observation units. A study of 302 patients looked at a nine to twelve hour observation period for low risk CCP patients and found it to be extremely safe. At thirty days, there were no ventricular dysrhythmias and no deaths. Four patients had recurrent non-fatal MIs, all of whom had continued to use cocaine. In this study, just over half of the patients underwent stress testing; four tests were positive, only two of which revealed multi-vessel disease (Weber, 2003). These findings are consistent with previous studies showing patients with low risk CCP to be at very small risk for delayed complications, most of which occur within the first twelve hours (Hollander, 1994).
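
For readers who want to quantify just how “extremely safe” that cohort was, here is a minimal sketch (Python) that computes the 30-day event rates and 95% confidence intervals from the raw counts reported above; the choice of the Wilson score interval is our own, not the study’s analysis.

```python
from math import sqrt

def wilson_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin) / denom, (center + margin) / denom

# Weber, 2003: 302 low risk CCP patients observed 9-12 hours;
# 4 non-fatal MIs and 0 deaths at 30 days.
lo, hi = wilson_ci(4, 302)
print(f"30-day MI rate: 4/302 = {4/302:.1%} (95% CI {lo:.1%}-{hi:.1%})")
# -> 30-day MI rate: 4/302 = 1.3% (95% CI 0.5%-3.4%)

_, hi0 = wilson_ci(0, 302)
print(f"30-day mortality: 0/302 (95% CI upper bound ~{hi0:.1%})")
# -> upper bound ~1.3%; the classic "rule of three" (3/n) gives ~1.0%
```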

3. What is your medical management of cocaine-associated chest pain?

Medical management of patients with CCP, including those with unstable angina and MI, is similar to that of the standard ACS equivalent, with a few notable exceptions. Aspirin has a clear mortality benefit in typical ACS and, given the prothrombotic effects of cocaine, likely has benefit specific to cocaine-related ischemia as well. It has not been studied in CCP specifically, as it would be unethical to withhold. Aspirin should always be given if not otherwise contraindicated. Nitroglycerin, which reduces infarct size in standard MI, has been shown to relieve chest pain associated with cocaine use and to reverse vasospasm.

The inclusion of benzodiazepines in the AHA treatment algorithm for CCP distinguishes it from treatment of typical ACS.  Patients with cocaine intoxication and chest pain often present extremely agitated and anxious.  Animal studies suggest cocaine’s CNS effects are directly related to its cardiovascular manifestations. Treatment of the CNS effects with benzodiazepines improves the latter, reducing heart rate, blood pressure and mortality. Outcomes may actually be worse when these CNS effects are not addressed, despite treatment of peripheral vasoconstriction with nitroglycerin (McCord, 2008; Guinn, 1980).

Recent clinical studies have compared nitroglycerin to benzodiazepines, with conflicting results. Baumann found both nitroglycerin and diazepam to relieve chest pain and to decrease cardiac output and index; there was no significant difference between the two, however, and no additional benefit with combined treatment (Baumann, 2000). Honderick compared lorazepam plus nitroglycerin to nitroglycerin alone and found greater reduction in chest pain with the combination (Honderick, 2003). Of note, very few patients in either study had ACS, limiting the applicability of the results.

Calcium channel blockers are also considered appropriate in CCP, but the data are limited and mixed. Some cardiac catheterization studies show reversal of vasospasm, but outcomes have been worse in large scale studies of ACS, raising questions about any morbidity or mortality benefit in CCP. As in typical ACS, calcium channel blockers should be avoided if there is evidence of heart failure. Calcium channel blockers are currently recommended as second line agents in CCP, after nitroglycerin and benzodiazepines (Finkel, 2011; McCord, 2008).

Phentolamine, an alpha-1 blocker, has theoretical benefit for alleviating adrenergic alpha-1 mediated vasoconstriction in CCP. Reduction in chest pain has been reported in case reports, and reversal of vasospasm has been documented only in cath lab studies (McCord, 2008). Phentolamine is not considered standard of care, as more evidence is needed.

4. Do you ever use beta-blockers in patients with cocaine chest pain?  What about labetalol?

Conventional teaching forbids the use of beta blockers in the management of cocaine toxicity, and with good reason: beta blockade permits unopposed alpha adrenergic stimulation, causing coronary vasoconstriction and hypertension. This has been demonstrated in animal models and illustrated in two placebo-controlled patient trials. In the first, by Lange, thirty patients undergoing elective catheterization were randomized to receive intranasal saline or cocaine. These groups were then further randomized to receive intracoronary propranolol or saline. In the cocaine group, Lange found a 19% increase in coronary vascular resistance and a significant decrease in coronary sinus blood flow after propranolol administration. Furthermore, five patients showed at least a 10% constriction in a single coronary artery segment (Lange, 1990). The second randomized controlled clinical trial of beta blockers and cocaine examined labetalol; unfortunately, in this small study, labetalol administration failed to reverse cocaine induced vasoconstriction (Boehrer, 1993).

Periodically, the controversy is reinvigorated by a new study. Most recently, in 2008, a retrospective cohort study claimed to find a decreased incidence of MI with beta blocker administration in patients admitted to telemetry or the ICU with a positive urine toxicology screen for cocaine (Dattilo, 2008). Critics pointed out that fewer than half of the patients in this study had chest pain, and that it was not broadly applicable, as it included patients admitted for any reason who also happened to have recently used cocaine. Additionally, the mortality benefit of beta blockade appears to derive, in large part, from continued use after hospital discharge, something unwise to advocate in patients who will most likely continue using cocaine (Hoffman, 2008).

Despite the above data, beta blockers represent a cornerstone of treatment in traditional MI, with a proven morbidity and mortality benefit. Given this, efforts to find a safe alternative and to optimize care of cocaine-associated MI have led to investigation of the combined alpha and beta blocker labetalol. The results, however, are far from promising. This is not surprising given that labetalol is primarily a beta adrenergic antagonist with very little alpha activity. While some animal studies have demonstrated an absence of coronary vasoconstriction and improved hemodynamics, others have shown an increase in mortality when cocaine exposed animals are treated with labetalol (Smith, 1991). More recently, Hoskins compared labetalol to diltiazem in a non-randomized study of ninety patients with CCP and found improvements in biomarkers and hemodynamic profiles in both groups, with no adverse events (Hoskins, 2010).

The limited data on CCP and beta blockade suggest that the combination is harmful. The risk-benefit ratio should be considered carefully when deciding to use beta blockers in CCP, remembering that the risk of MI in these patients is low and outcomes are typically good.


Cocaine Chest Pain, Questions

(1) What is the relationship between cocaine and acute coronary syndrome/myocardial infarction?

(2) How do you risk stratify a patient who presents with cocaine chest pain? If low risk, how does your disposition differ from that of your standard chest pain patients?

(3) What is your medical management of cocaine-associated chest pain?

(4) Do you ever use beta-blockers in patients with cocaine chest pain?  What about labetalol?

Cocaine Chest Pain Questions Poster


Ovarian Torsion, “Answers”

1. What signs or symptoms make the diagnosis of ovarian torsion more likely?  What factors in the patient’s history make you more suspicious for torsion? How useful is the bi-manual exam?

Ovarian torsion is defined as a partial or complete twisting of the ovary around its vascular pedicle and ligamentous supports. When the twisting also involves the fallopian tube, it is more completely termed adnexal torsion. The fifth most common gynecologic emergency, adnexal torsion has a reported prevalence of 3% (Cicchiello, 2011).

Classically, a woman of reproductive age presents to the emergency department complaining of sudden, sharp, localizable lower quadrant pain, with tenderness to palpation and a palpable mass on pelvic exam. How often, then, do patients present classically? One fifteen-year retrospective review found that 90% of patients presented with lower quadrant pain, and 59% reported a sudden onset of pain. Almost half reported a previous history of similar pain. Nausea and vomiting were reported in 70% of cases. Overall, however, this study, and others like it, found the history of present illness to be rather insensitive, with lower abdominal pain being the only reliable symptom. The physical exam proved even less reliable. Only 3% of patients with surgically proven torsion were found to have peritoneal signs, with almost 30% having no tenderness to palpation and even fewer having a palpable mass on pelvic exam. Fevers were exceedingly rare in these reviews of proven torsion (Houry, 2001; Mashiach, 2011).

Others have looked specifically at the pelvic exam to determine its reliability and found it rather unhelpful, particularly the adnexal exam. One study of emergency department physicians found an inter-examiner reliability of only 23% for the detection of pelvic masses, and only 32% for the presence of adnexal tenderness (Close, 2001). Gynecologists do not appear to fare much better in finding adnexal masses and other abnormalities on bi-manual exam. One study demonstrated a sensitivity of only 21-36% for detecting pelvic masses by gynecology attendings and residents performing pelvic exams under anesthesia (Padilla, 2000). Regardless of expertise, the pelvic exam is unlikely to be particularly useful in diagnosing, or ruling out, torsion.

Thankfully for the diagnostician, there are several patient risk factors that make torsion more likely. Although torsion can occur in any age group, it is much more common in women of reproductive age. Ovarian masses are highly associated with torsion and have been reported in up to 73% of cases. Masses greater than 4-5 cm pose a significant risk by causing the ovary to swing and twist on its vascular pedicle. Much larger masses (greater than 10 cm) are thought to be less likely to cause torsion, secondary to fixation to adjacent structures. The masses associated with ovarian torsion are almost exclusively benign, with dermoid cysts being the most common. Pregnancy also poses a significant risk for torsion, with the greatest risk occurring in the first trimester and immediately postpartum; approximately 20-25% of all cases of torsion occur in pregnancy. Finally, ovarian hyperstimulation syndrome after fertility treatment, which causes multiple enlarged follicles, also increases the risk of torsion (Cicchiello, 2011; Chang, 2008; Vandermeer, 2009).

2. Time is ovary?  If a patient presents to the ED 24 hours after the onset of pain, is the ovary salvageable?

“Time is tissue” is a mantra that has been applied to various ischemic processes, including adnexal torsion.  However, there does not seem to be a direct relationship between symptomatic time and ischemic tissue damage in torsion.  The explanation for this discrepancy lies in the pathophysiology of torsion and in ovarian anatomy.  The twisting of the ovary or fallopian tube around the vascular pedicle leads first to lymphatic and venous obstruction followed by obstruction of arterial flow.  Additionally, ovaries have a dual blood supply, the uterine and ovarian arteries, protecting against complete ischemia.  As a result, patients can experience significant pain from torsion while still maintaining adequate arterial blood flow to the ovary (Cicchiello, 2011).

Multiple studies have tried to determine the critical ischemic time for torsion, beyond which preservation of the ovary is less likely. One study in a rat model showed preserved ovarian histologic structure at 24 hours, with irreversible ovarian damage occurring only after 36 hours of ischemia (Taskin, 1998). Clinical studies in humans have also looked at ovarian viability in cases of surgically proven torsion. In the largest study, comprising 102 patients with a mean symptomatic time of 16 hours, most of the patients with surgically proven torsion had preservation of ovarian function after detorsion. This was demonstrated via multiple modalities. Ninety-one percent had ultrasounds showing normal ovary size and follicular development after surgery, and 92% of those who subsequently required laparotomy/laparoscopy showed grossly normal appearing ovaries. Furthermore, although the sample size was small, 100% of patients who later underwent in-vitro fertilization using oocytes from affected ovaries were able to conceive (Oelsner, 2003). Additionally, one small clinical study showed preservation of ovarian histological structure with a duration of symptoms less than 48 hours (Chen, 2001).

In summary, based on the available literature, it appears difficult to predict ovarian viability from the duration of symptoms in a patient with adnexal torsion. Animal studies have shown ovarian viability to extend beyond 24, and perhaps up to 36, hours. Human studies also suggest extended ovarian viability, possibly beyond 24 and up to 48 hours after the onset of symptoms. It should be noted, however, that these studies are small and the disease process is variable.

3. What imaging modalities do you use to diagnose ovarian torsion? How good are these modalities?

Ultrasonography (US) is the primary imaging modality for evaluating ovarian torsion.  It is readily available, non-invasive, cost effective, accurate and does not expose the patient to ionizing radiation.  There are two primary modes of ultrasound used: gray-scale and Doppler.

Gray-scale mode ultrasound is used to visualize static ovarian anatomy but can be very useful in diagnosing torsion, with specificities ranging from 93-100%. In torsion, the ovary typically appears hypoechoic secondary to edema from obstructed lymphatic or venous flow. Frequently this congestion pushes the follicles to the periphery of the ovary; although not specific for torsion, this finding is reported in up to 74% of cases. The most common finding is unilateral ovarian enlargement, usually greater than 4 centimeters. Additionally, an ovarian mass, or free fluid around the ovary or in the pouch of Douglas, can be appreciated if present (Mashiach, 2011; Chang, 2008; Vandermeer, 2009; Graif, 1988; Nizar, 2009).

Doppler mode ultrasound findings are more variable and depend on the degree of vascular compromise. Absence of venous and arterial flow is the most specific finding. Lack of arterial flow has a positive predictive value of 94%, but it is a very late finding of what may be a non-viable ovary. Contrary to popular belief, it is not necessary for the diagnosis. Normal arterial and venous Doppler scans have been documented in multiple torsion studies, with normal Doppler scans reported in 13% and 33% of surgically proven torsion cases in two studies. This discrepancy between ultrasound findings and the true disease process can be explained by early or intermittent torsion, a variable degree of twisting, operator skill, and the dual arterial blood supply. Doppler mode ultrasound is therefore of limited diagnostic utility. If gray-scale mode ultrasound suggests torsion, the diagnosis should never be excluded based on a normal Doppler study (Mashiach, 2011; Houry, 2001; Vandermeer, 2009).
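
Those false-negative rates translate into sensitivities of roughly 87% and 67% for an abnormal Doppler study. The sketch below (Python) runs those numbers through Bayes’ theorem to show why a normal Doppler cannot exclude torsion; the pretest probability and the Doppler specificity are assumptions chosen for illustration, not values from the studies cited.

```python
# The pretest probability and the Doppler specificity below are assumed
# values for illustration only; the sensitivities are implied by the 13%
# and 33% normal-Doppler rates reported in surgically proven torsion.

def prob_after_negative(pretest: float, sens: float, spec: float) -> float:
    """Post-test probability of disease after a negative test (Bayes)."""
    neg_lr = (1 - sens) / spec
    post_odds = pretest / (1 - pretest) * neg_lr
    return post_odds / (1 + post_odds)

pretest = 0.30  # assumed: a concerning history and exam
spec = 0.90     # assumed specificity of Doppler for torsion

for fn_rate in (0.13, 0.33):
    sens = 1 - fn_rate
    p = prob_after_negative(pretest, sens, spec)
    print(f"Doppler sensitivity {sens:.0%}: "
          f"~{p:.0%} chance of torsion despite a normal Doppler")
# -> sensitivity 87%: ~6%; sensitivity 67%: ~14%
```

Even under generous assumptions, a normal Doppler leaves a residual risk of torsion far too high to ignore in a surgical disease.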

Combining gray-scale and Doppler modes provides an additional tool for the evaluation of torsion: the “whirlpool” sign. Coiled or circular vessels on Doppler, within a tubular or beaked mass on gray-scale, constitute the “whirlpool” sign. This may be a positive prognostic indicator of a viable ovary, as studies have reported intra-operative identification of necrotic ovaries in the absence of this sign (Chang, 2008).

Ovarian torsion may also be diagnosed on CT or MRI, although ultrasound should be the primary imaging modality if torsion is the leading diagnosis.  CT is less specific for torsion and carries the additional burden of radiation exposure, while MRI is not often readily available in the emergency department setting.  These modalities are more appropriate when investigating alternative diagnoses such as chronic torsion or pelvic masses.  Common findings on CT and MRI in acute ovarian torsion include the presence of an adnexal mass, a large displaced ovary, deviation of the uterus towards the affected side, obliteration of the fat planes, thickening of the fallopian tube and finally, ascites.  T1-weighted fat suppressed MRI images demonstrate a bright ovary in the setting of vascular congestion or hemorrhage, which is highly suggestive of torsion if a thickened fallopian tube is also present (Chang, 2008; Vandermeer, 2009).

4. Do you consider torsion if imaging reveals a normal ovary?  In which patients do you worry about torsion of an otherwise normal ovary?  When do you insist on this diagnosis with your Ob-Gyn colleagues?

Torsion of an otherwise normal ovary is rare. As discussed above, ovarian cysts are found in most cases of torsion. Other causes of torsion include an exceedingly mobile fallopian tube or mesosalpinx, elongated pelvic ligaments, fallopian tube spasm, or abrupt changes in intra-abdominal pressure. Although ovarian masses remain the primary risk factor for torsion, children and adolescents are more prone to torsion in the absence of a mass; torsion in these cases is thought to result from hypermobile adnexa (Cicchiello, 2011; Chang, 2008; Mordehai, 1991; Davis, 1990).

Ultimately, torsion is unlikely if ultrasound reveals a completely normal ovary without a mass, but as detailed above, ultrasound-negative torsion does occur, and there are no good data on the sensitivity of ultrasound for torsion. As with any disease that carries significant morbidity, a compelling history and concerning physical exam should not be dismissed on the basis of negative diagnostic testing. Further investigation of torsion, including urgent Gynecology consultation, advanced imaging, repeat ultrasound, and in some cases laparoscopy, should be insisted upon in patients at increased risk of torsion or atypical torsion, such as pregnant or post-partum patients, pediatric patients, or patients in whom an alternative diagnosis is not found.


Ovarian Torsion, Questions

1. What signs or symptoms make the diagnosis of ovarian torsion more likely?  What factors in the patient’s history should make you more suspicious for torsion? How useful is the bi-manual exam?

2. Time is ovary?  If a patient presents to the ED 24 hours after the onset of pain, is the ovary salvageable?

3. What imaging modalities do you use to diagnose ovarian torsion? How good are these studies?

4. Do you consider torsion if imaging reveals a normal ovary?  In which patients do you worry about torsion of an otherwise normal ovary?  When do you insist on this diagnosis with your OB-Gyn colleagues?

Ovarian Torsion Questions Poster


Nephrolithiasis, “Answers”

1. What is your preferred pain regimen in acute renal colic?  What do you like to give for home pain control?

Many consider NSAIDs to be first-line for renal colic pain, as they act directly on the ureter by inhibiting prostaglandin synthesis. A prospective, double-blinded, placebo-controlled RCT from 2006, however, found that morphine plus ketorolac provided superior pain relief compared to morphine alone, and decreased the incidence of vomiting (Safdar, 2006). A 2005 Cochrane review showed that both NSAIDs and opiates reduce pain in acute renal colic, and that NSAIDs have a more favorable side effect profile (Holdgate, 2005).

Most emergency physicians seem to prefer a combination approach of ketorolac (Toradol) (or another NSAID if the patient can tolerate PO) with an opiate (typically morphine or hydromorphone). These drugs are often paired with an anti-emetic, as renal colic can cause significant nausea. One common regimen for an adult is ketorolac 30 mg IV (or 60 mg IM) + morphine 0.1 mg/kg IV + metoclopramide 10 mg IV. Interestingly, metoclopramide is one of the few anti-emetics that has been studied in renal colic; some small series suggest it also aids in pain relief on its own, and it is less sedating than others in its class (Muller, 1990).
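
As a toy illustration of the weight-based arithmetic in that regimen, here is a minimal sketch (Python). It simply encodes the doses quoted above; the function name and the rounding step are our own assumptions, and this is not a prescribing reference.

```python
# A toy encoding of the regimen quoted above. Illustration only --
# not a prescribing reference; function name and rounding are assumptions.

def renal_colic_regimen(weight_kg: float) -> dict[str, str]:
    """Return the example adult regimen with weight-based morphine."""
    morphine_mg = round(0.1 * weight_kg, 1)  # morphine 0.1 mg/kg IV
    return {
        "ketorolac": "30 mg IV (or 60 mg IM)",
        "morphine": f"{morphine_mg} mg IV",
        "metoclopramide": "10 mg IV",
    }

print(renal_colic_regimen(80))
# -> {'ketorolac': '30 mg IV (or 60 mg IM)', 'morphine': '8.0 mg IV',
#     'metoclopramide': '10 mg IV'}
```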

It is important to note that parenteral ketorolac (whether IV or IM) has never convincingly been shown to be superior to oral ibuprofen in terms of pain relief or time to onset of pain relief. If the patient is tolerating PO, ibuprofen is an appropriate substitute.

Contrary to prior dogma, forced IV hydration does not improve pain or increase the rate of stone passage, and may in fact worsen pain in cases of obstruction (Springhart, 2006). A few studies have tried intranasal desmopressin, for its antidiuretic properties, and have found it to decrease pain in acute renal colic (Roshani, 2010), although it is not widely employed in practice.

For home pain control, a combination approach is common as well. Most patients are discharged on a short course of NSAIDs (ibuprofen 400-600 mg PO q8h) plus an opiate/acetaminophen combination drug (oxycodone/APAP or hydrocodone/APAP) for breakthrough pain.

2. When do you use ED ultrasound?  If it shows hydronephrosis, how does this affect your management?

The issue of when and how to employ ultrasound in a patient with presumed (or known) renal colic requires restatement of the goals of ED management.  With the growing literature demonstrating the risks of ionizing radiation, CT scanning should be avoided where possible.  An ultrasound coupled with a good history and physical examination along with a urinalysis (looking for infection) may obviate the need for a CT scan to attain diagnostic certainty in the right patient population, namely the young and otherwise healthy in whom your suspicion of renal colic is very high (especially those with a history of same). For more on the question of whether or not to employ CT, and the necessity of definitive diagnosis, please also see question four.

Ultrasound in renal colic can involve attempting to visualize the stone and/or evaluating for unilateral hydronephrosis. With regard to the former, ultrasound has only modest sensitivity, 60-80%, depending on operator and patient characteristics, and does poorly with small stones (<5 mm), obese patients, and mid-ureteral stones. This sensitivity, compared to the 97-99% sensitivity of CT in detecting stones, makes ultrasound questionable as a diagnostic modality for visualizing renal stones. For hydronephrosis, however, ultrasound has a sensitivity around 92% (Sheafor, 2000), a respectable level seen in numerous series on the topic. It is worth mentioning that some radiologists and ultrasonographers believe the false negative rate for hydronephrosis on renal ultrasound to be as high as 22% (Koelliker, 1997) due to anatomic variants, a full bladder, etc. It is in answering the question of whether or not hydronephrosis is present that most emergency physicians employ a bedside renal ultrasound (Noble, 2004).
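
Those sensitivities explain why a negative ultrasound is far less reassuring than a negative CT. The sketch below (Python) makes the comparison explicit; a 70% mid-range sensitivity is assumed for ultrasound, and the specificities and pretest probability are assumptions for illustration (the CT figures echo Sheafor, 2000).

```python
# Why a negative ultrasound is less reassuring than a negative CT.
# Ultrasound specificity and the pretest probability are assumptions.

def prob_after_negative(pretest: float, sens: float, spec: float) -> float:
    """Post-test probability of disease after a negative test (Bayes)."""
    post_odds = pretest / (1 - pretest) * (1 - sens) / spec
    return post_odds / (1 + post_odds)

pretest = 0.60  # assumed: a classic renal colic story

for name, sens, spec in [
    ("ultrasound (stone)", 0.70, 0.90),   # spec assumed
    ("CT (stone)",         0.98, 0.96),   # Sheafor, 2000
]:
    p = prob_after_negative(pretest, sens, spec)
    print(f"{name}: ~{p:.0%} residual probability after a negative study")
# -> ultrasound ~33%, CT ~3%
```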

A young (most experts arbitrarily say <50), otherwise healthy patient (no underlying renal disease, normal renal function) whose history, physical, and urinalysis are consistent with nephrolithiasis, and who has no other complications (infection, acute kidney injury, etc.), is still often sent home with Urology follow up in less than one week, even with an ultrasound showing hydronephrosis. Which is to say that even patients with a complete obstruction do not necessarily require emergent decompression. Some emergency physicians use the presence or absence of hydronephrosis on a bedside ultrasound to risk stratify the time to follow-up and the need to discuss the case with a urologist prior to discharge. There is also precedent for combining a bedside ultrasound showing hydronephrosis with clinical gestalt to enhance the predicted likelihood of a diagnosis of nephrolithiasis by emergency physicians (Rosen, 1998).

3. Do you give alpha-blockers to aid stone expulsion (tamsulosin or terazosin)?

In 2007, the best studies on this treatment modality were collected and published in a systematic review (Singh, 2007).  The conclusion of this systematic review was that the use of alpha-blockers increased the rate of passage of moderately-sized, distal ureteral stones.  However, the sixteen studies reviewed were not high quality (none were randomized, none were double-blinded) and the authors stated that further research should be done to confirm their conclusions.

Since then, two studies have found no benefit to tamsulosin for the treatment of renal colic. Ferre, et al., 2009, published a randomized, controlled trial of 80 subjects which did not show a difference in spontaneous stone passage at fourteen days, time to passage of stone (average stone size 3.6 mm), pain, return ED visits, or adverse outcomes. This was the first published randomized trial, and the first published trial in ED patients.

In December 2010, a multicenter, placebo-controlled, randomized, double-blind study comparing tamsulosin to placebo was published in Archives of Internal Medicine (Vincendeau, 2010). This study was also performed on ED patients. The trial concluded that tamsulosin did not decrease the time to stone passage (primary endpoint), the use of pain medications, or the rate of surgical procedures (secondary endpoints). One caveat is that the vast majority of stones were < 3 mm, and some experts contend that tamsulosin may have its greatest benefit in stones > 5 mm. The controversy continues in the Urology literature, with some randomized trials among clinic patients showing a benefit (Abdel-Meguid, 2010; Al-Ansari, 2010) and others showing none (Hermanns, 2009; Agrawal, 2009). Many, if not most, of our urology colleagues continue to use alpha-blockers for nephrolithiasis, although this practice does not appear to be well-supported by the recent literature, with the bulk of studies showing no effect, especially in ED patients.

4. Which patients do you CT?  Which patients do you not CT?

Helical CT has become the diagnostic modality of choice in urolithiasis in the last ten years because of its high sensitivity (97%) and specificity (96%) (Sheafor, 2000).  In addition to diagnosis, CT provides a great deal of additional information about kidney stones including size, location, presence or absence of hydronephrosis, density of the stone (Hounsfield units) to help determine best treatment options, other complicating issues associated with nephrolithiasis, and other diagnoses if nephrolithiasis is not present.
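
Expressed as likelihood ratios, those test characteristics explain CT’s pre-eminence. A minimal arithmetic sketch (Python), using the sensitivity and specificity quoted above:

```python
# Likelihood ratios implied by the 97% sensitivity / 96% specificity
# quoted above (Sheafor, 2000). Straightforward arithmetic.

sens, spec = 0.97, 0.96
lr_pos = sens / (1 - spec)   # how much a positive CT raises the odds
lr_neg = (1 - sens) / spec   # how much a negative CT lowers the odds
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.3f}")
# -> LR+ = 24.2, LR- = 0.031: a positive scan is strongly confirmatory
#    and a negative scan strongly reassuring.
```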

Many emergency physicians scan all adult patients on their first presentation of unilateral flank pain presumed to be renal colic. This practice is potentially supported by a small Canadian series of 132 patients that examined the effects of CT on diagnosis and disposition, grouped by pre-test likelihood. In 40 of the cases (33%), CT revealed alternate pathology, including 19 patients whose physicians had assigned a very high pre-test likelihood of nephrolithiasis but in whom significant other pathology was found (lymphoma, AAA, metastases, undiagnosed malignancies, etc.) (Ha, 2004). This study, and others like it, are a sobering reminder of the significant diagnostic uncertainty in the patient with a first episode of unilateral flank pain.

Some commentators, however, feel it is reasonable in a young person with a classic presentation to get a renal ultrasound (and possibly an ultrasound of the abdominal aorta while in the neighborhood, and/or a KUB, although its sensitivity is quite low, around 50-60% even for radiopaque stones) and make a diagnosis based on the clinical picture and ultrasound findings, deferring definitive imaging to the outpatient setting, or to a return ED visit if the clinical course requires one.


Nephrolithiasis, Questions

1. What is your preferred pain regimen in acute renal colic?  What do you like to give for home pain control?

2. When do you use ED ultrasound?  If it shows hydronephrosis, how does this affect your management?

3. Do you give alpha-blockers to aid stone expulsion (tamsulosin or terazosin)?

4.  Which patients do you CT?  Which patients do you not CT?

Nephrolithiasis Questions Poster


Hyperkalemia, “Answers”

1)    What are the EKG manifestations associated with hyperkalemia?  Do these changes occur in a predictable order?

Hyperkalemia is one of the most lethal and treatable metabolic disturbances faced by emergency physicians. Therefore, its rapid recognition and treatment are paramount to the survival of the critically ill patient. Oftentimes, the EKG is utilized to assist in its early identification at the bedside, before lab results return.

Classically, it has been taught that there is a step-wise progression of EKG changes as the serum potassium level rises. Initially, the changes begin with peaked T-waves in mild hyperkalemia (<6.5 mEq/L). As the serum potassium concentration increases, conduction from one cardiac myocyte to the next is impaired, resulting in prolongation of the PR interval, flattening/disappearance of the p-waves, and then widening of the QRS in moderate hyperkalemia (6.5-8.0 mEq/L). Finally, as serum potassium continues to rise, the QRS widens further until it merges with the T-wave, producing the classic “sine-wave” pattern on EKG; eventually ventricular fibrillation or asystole ensues in severe hyperkalemia (>8.0 mEq/L).

Although there is evidence in the medical literature that this pattern does exist, some studies have shown that when EKG changes occur, they may not progress in the expected step-wise fashion. Patients may progress from normal sinus rhythm to ventricular fibrillation as their first EKG manifestation of hyperkalemia (Dodge, 1953).  In addition to the classic EKG findings of hyperkalemia, elevated potassium levels may also manifest as sinus bradycardia, right or left bundle branch blocks, and 2nd and 3rd degree AV blocks.

Not only is the progression of EKG changes unpredictable in hyperkalemia, the EKG is also not a very sensitive indicator of hyperkalemia. In a retrospective review of 90 patients with mild-moderate hyperkalemia (80% with serum potassium levels < 7.2 mEq/L), only 18% met strict criteria, and only 52% met any criteria, for EKG abnormalities associated with hyperkalemia (Montague, 2008). Although studies demonstrate that the probability of EKG changes increases as the serum potassium level rises, there are case reports of patients with extremely severe hyperkalemia (>9.0 mEq/L) who still did not demonstrate any of the predicted EKG manifestations (Szerlip, 1986).

Some evidence suggests that the likelihood of EKG changes may depend less on the absolute serum potassium concentration than on rapid rises in the serum potassium concentration (Fisch, 1973; Surawicz, 1967).

Therefore, the EKG is not a reliable marker of how sick a patient may be with regard to their potassium level. Have a low threshold to treat a patient with hyperkalemia, whether or not their EKG shows classic manifestations. Do not feel a false sense of security in the absence of EKG findings in these patients, and understand that the patient’s EKG may change rapidly into a lethal arrhythmia.

2)    What is the role of Kayexalate in the treatment of hyperkalemia?

Kayexalate (sodium polystyrene sulfonate) is a cation-exchange resin approved in 1958 for the treatment of hyperkalemia; it exchanges sodium for potassium in the colon, allowing potassium to be excreted from the body. Although this drug has been used for a number of years as an adjunct to more acute treatments, there are two potential problems with its use.

First, there is little to no evidence that Kayexalate effectively reduces serum potassium levels. The two original studies promoting its use, often cited in the literature, were published in the New England Journal of Medicine in 1961. These two trials lacked controls and rigorous statistical analysis, included multiple confounding variables, and demonstrated minimal if any effect of Kayexalate on serum potassium levels (Scherr, 1961; Flinn, 1961). Furthermore, a 1998 study also failed to demonstrate a statistically significant difference in serum potassium levels at 4, 8, and 12 hours after administration of 30 g Kayexalate with sorbitol compared to placebo controls (Gruy-Kapral, 1998).

In addition to the lack of evidence of efficacy, there have been multiple case reports of intestinal necrosis, GI bleeding, and intestinal perforation secondary to Kayexalate (Rogers, 2001; Rashid, 1997). In 2009 the FDA responded to these case reports by placing a warning on Kayexalate for these effects. The warning stated that the complications were primarily seen in patients who received Kayexalate along with sorbitol, and deemed sorbitol the primary culprit. Most Kayexalate preparations carried by hospitals, however, come pre-mixed with sorbitol, as the powdered version of Kayexalate alone is not easily available.

Due to the lack of literature showing any benefit of Kayexalate in decreasing serum potassium levels, as well as reports of serious side effects associated with its usage, including intestinal necrosis, Kayexalate should play little, if any, role in the treatment of hyperkalemia in the emergency department and upon admission to other hospital services.

3)    Is there a threshold serum potassium level or particular EKG finding that triggers you to administer calcium?  How do you give calcium when you use it?

Generally, calcium is administered to hyperkalemic patients to stabilize the cardiac myocytes by restoring their normal resting membrane potential (Fisch, 1973). It is generally reserved for moderately to severely hyperkalemic patients with cardiac instability. Although there are no clear guidelines or evidence identifying the exact point at which to administer calcium, many clinicians will administer it if: (1) the EKG shows evidence of cardiac destabilization, such as a widening QRS or loss of p-waves (N.B.: as discussed in question 1, EKG findings in hyperkalemia vary from patient to patient, and patients with severely elevated serum potassium levels may not manifest concomitant EKG findings); (2) the serum potassium level is above 6.5-7 mEq/L, regardless of the presence of EKG changes; or (3) the serum potassium level is rising rapidly. As a general rule, however, have a low threshold to administer calcium.

There are two options when administering calcium: calcium gluconate and calcium chloride. Both work relatively quickly in restabilizing the cardiac myocyte membrane, within 3-5 minutes. Calcium chloride contains three times the concentration of elemental calcium compared to calcium gluconate; therefore, 1 g of calcium chloride is approximately equivalent to 3 g of calcium gluconate. Historically, calcium chloride was recommended over calcium gluconate because it was believed that calcium gluconate required first pass metabolism in the liver to become bioavailable to cardiac myocytes. However, Martin, et al. showed that in patients undergoing liver transplant (i.e., no liver to perform first pass metabolism), serum calcium concentrations rose equally in patients given calcium chloride or calcium gluconate (Martin, 1990). A small study of pediatric burn patients found similar results (Cote, 1987). Although this evidence is not directly applicable to hyperkalemic patients in the ED, it runs counter to the previously held beliefs concerning first pass metabolism of calcium gluconate. Calcium chloride is still recommended in the crashing patient because it delivers more calcium per volume administered. Calcium chloride, however, poses a very serious risk of tissue necrosis if it extravasates into the surrounding tissue (Semple, 1996). Therefore, it must be administered via a central venous line or a large bore, well-placed peripheral line. Calcium gluconate, by contrast, can be administered through a small peripheral IV if needed, as its risk of tissue necrosis is much lower. Patients should be on a cardiac monitor when receiving calcium infusions.
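
The arithmetic behind the 3:1 equivalence is easy to check. A minimal sketch (Python); the elemental calcium content per gram of each salt is taken from standard pharmacology references, not from the citations above.

```python
# Checking the "1 g calcium chloride ~ 3 g calcium gluconate" rule of
# thumb. Elemental calcium content per gram of each salt is from
# standard pharmacology references, not from the citations above.

elemental_ca_meq_per_g = {
    "calcium chloride": 13.6,    # ~273 mg elemental calcium per 1 g
    "calcium gluconate": 4.65,   # ~93 mg elemental calcium per 1 g
}

ratio = (elemental_ca_meq_per_g["calcium chloride"]
         / elemental_ca_meq_per_g["calcium gluconate"])
print(f"1 g CaCl2 delivers ~{ratio:.1f}x the elemental calcium "
      f"of 1 g calcium gluconate")
# -> ~2.9x, hence the ~3:1 equivalence quoted above
```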

4)    When do you re-dose patients?

When treating patients with hyperkalemia, we often forget to re-dose patients after their initial treatment is given.  In terms of calcium, a second dose can be administered after about 5 minutes if EKG changes persist or worsen.  Although its effect is quite rapid (within 3-5 minutes), it only stabilizes the cardiac myocytes for approximately 30-60 minutes (Weisberg, 2008).  After this time, a repeat EKG may be necessary, along with a repeat serum potassium level, and an additional dose of calcium.

In terms of albuterol, if the appropriate dose of 20mg (nebulized) is given, its effect usually lasts for approximately 2 hours after which time it may require re-dosing.  Note that the dose to treat hyperkalemia is approximately four times the amount normally given to an asthmatic or patient with emphysema for respiratory complaints.

Insulin is normally administered as a 10 Unit IV bolus along with 1-2 amps of D50.  When given as an IV bolus, the intracellular effects of insulin will last for approximately 4-6 hours after which it may need to be re-dosed.
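
Pulling the re-dosing intervals from the three paragraphs above into one place, here is a short summary sketch (Python); it simply restates the text and is not a dosing reference.

```python
# The re-dosing intervals from the three paragraphs above, gathered in
# one place. A summary of the text only -- not a dosing reference.

redosing = {
    "calcium":   ("1 g CaCl2 or ~3 g gluconate IV", "30-60 min"),
    "albuterol": ("20 mg nebulized",                "~2 hours"),
    "insulin":   ("10 units IV + 1-2 amps D50",     "4-6 hours"),
}

for agent, (dose, duration) in redosing.items():
    print(f"{agent}: {dose}; effect lasts {duration}, then consider re-dosing")
```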


Hyperkalemia, Questions

1.    What are the EKG manifestations associated with hyperkalemia?  Do these changes occur in a particular order?

2.    What is the role of Kayexalate in the treatment of hyperkalemia?

3.    Is there a threshold serum potassium level or particular EKG finding that triggers you to administer calcium?  How do you give calcium when you use it?

4.    When do you re-dose patients you’ve treated for hyperkalemia?

Hyperkalemia Questions Poster


Viral Meningitis, “Answers”

1.  Are there any elements on history & physical that make you suspect viral meningitis in adult patients? Do you LP all patients you suspect have viral meningitis?

Clinicians have long tried to identify elements of a patient’s history or physical examination that might rule out a diagnosis of meningitis–viral or otherwise–so as to spare patients an unnecessary lumbar puncture. This has not proven to be an easy task, and unfortunately seems to be an unrealistic goal.

In one JAMA analysis of multiple studies, the complaints of headache and nausea/vomiting were found to have pooled sensitivities/specificities of only 50%/50% and 38%/60%, respectively. The same review found that the physical exam findings were slightly more helpful: fever had the highest sensitivity (85%), while neck stiffness had the next highest (70%).  Perhaps the most helpful finding was that 95% of patients who had meningitis had at least two of the classic findings of fever, neck stiffness, and altered mental status/headache, and that 99-100% had at least one such finding.  The absence of all of those complaints may effectively rule out meningitis (Attia, 1999).

The classically described Kernig and Brudzinski signs are not at all sensitive for the diagnosis of meningitis (~5-30%) but have relatively high specificities (70-100%) (Waghdhare, 2010; Uchihara, 1991; Thomas, 2002). Thus the presence of these signs should substantially increase a clinician’s suspicion for meningitis.

Lastly, the jolt accentuation test has had mixed findings. One prospective study found it to be 87% sensitive and 60% specific (Uchihara, 1991), but a more recent study found almost the reverse, with a very low sensitivity (6%) and a high specificity (98%) (Waghdhare, 2010).
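
Converting these figures to likelihood ratios makes the pattern easier to see. The sketch below (Python) uses the study values quoted above; where a range was reported (Kernig/Brudzinski), single representative values are assumed for illustration.

```python
# Likelihood ratios implied by the figures quoted above. The
# Kernig/Brudzinski values are representative assumptions drawn from
# the reported ranges, not single study results.

def lr_pos(sens: float, spec: float) -> float:
    return sens / (1 - spec)

def lr_neg(sens: float, spec: float) -> float:
    return (1 - sens) / spec

tests = {
    "Kernig/Brudzinski (assumed sens 0.30, spec 0.98)": (0.30, 0.98),
    "Jolt accentuation (Uchihara, 1991)":               (0.87, 0.60),
    "Jolt accentuation (Waghdhare, 2010)":              (0.06, 0.98),
}

for name, (sens, spec) in tests.items():
    print(f"{name}: LR+ ~{lr_pos(sens, spec):.1f}, "
          f"LR- ~{lr_neg(sens, spec):.2f}")
# Kernig/Brudzinski: LR+ ~15, LR- ~0.71 -- presence raises suspicion
# substantially, but absence excludes almost nothing.
```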

Duration of symptoms may be tempting to use as a means of ruling out bacterial meningitis, given that this disease is often rapidly fatal; a complaint of severe headache for over a week might therefore seem to exclude it. However, studies have not validated duration of symptoms as a means of differentiating between bacterial and viral meningitis, and perhaps more importantly, some aseptic meningitides that require immediate treatment, such as cryptococcal meningitis or tuberculous meningitis, are typically subacute in onset.

In summary, clinical signs and symptoms have very low yield in ruling out a diagnosis of meningitis, let alone differentiating between viral and bacterial meningitis. Therefore, in the absence of any contraindication, an LP should always be performed in patients in whom meningitis is suspected on clinical grounds (Viallon, 2011; Tunkel, 2004).

2.  Does a “normal” CSF reassure you that the patient does not have bacterial meningitis?

There are actually two important questions to ask here: 1) Does an entirely normal CSF, with negative gram stain, rule out bacterial meningitis (BM)? and 2) How useful are standard CSF parameters (cell count with differential, glucose, protein, and gram stain) in differentiating between bacterial and aseptic meningitis?

Regarding the first question: it is very rare, but not unheard of, to have completely normal CSF results in the setting of acute bacterial meningitis. There are several case reports, and a few studies and reviews, that describe such instances (Ray, B, 2009; Coll, 1994; Polk, 1987; Onorato, 1980). Most of them involve infants and young children, not adults. The bacteria most commonly isolated were N. meningitidis, H. influenzae, and S. pneumoniae. In two pediatric studies, the incidence of BM with a negative initial CSF was found to be 2.7% (Polk, 1987) and 10% (Coll, 1994). The risk of a false-negative CSF is increased if the lumbar puncture is performed within 24 hours of symptom onset; if meningitis is clinically suspected, a repeat LP should be performed within 24-48 hours (Ray, B, 2009). Most reassuring is that almost all of the patients described in the various case reports and studies were either neonates, had a concerning rash, and/or were delirious or otherwise acutely ill, i.e., patients who would have been (and were) admitted and empirically treated irrespective of CSF findings. In sum, in a non-toxic appearing, immunocompetent adult, a completely normal CSF (including normal opening pressures) is highly reassuring in ruling out bacterial meningitis, especially if the symptoms began more than 24 hours earlier.

But what if your patient’s CSF is not entirely normal, but instead seems to suggest a viral meningitis? That is, it has a mild lymphocyte-predominant pleocytosis, a normal glucose and protein, and a negative gram stain. Can you reassure your patient that he does not have bacterial meningitis? Unfortunately not. While a high WBC count (>1500/mm^3), a very low glucose (<35 mg/dL), a markedly elevated protein (>2.2 g/L or 220 mg/dL), and/or a positive gram stain are highly suggestive of bacterial meningitis, more moderate values do not rule it out.

Several studies have shown that BM may present with CSF lymphocytosis in 15-30% of cases, especially when the WBC concentration is less than 1000/mm^3 (Lindquist, 1988; Spanos, 1989). Conversely, early viral meningitis may have PMNs predominate up to 40-50% of the time (Spanos, 1989; Archimbaud, 2009).  Glucose, protein and CSF/blood glucose ratio have also been evaluated; while some studies have found some predictive value in the glucose ratio and in CSF protein, none of these parameters has been shown to allow definite differentiation between bacterial and viral meningitis (Lindquist, 1988; Spanos, 1989).   In fact, in the original studies on CSF glucose, it was found that CSF glucose was decreased in roughly half the patients with BM.

Gram stains, while providing a definitive diagnosis of BM when positive, have also been shown to be negative in 20-40% of BM cases (Ray, P, 2007; Spanos, 1989; Viallon, 2011). In a prospective trial of adult ED patients with acute meningitis but negative gram stains, the most common bacteria identified were S. pneumoniae, L. monocytogenes, and N. meningitidis. Listeria species, in particular, have been found to be more likely to have negative gram stains (Hussein, 2000; Elmore, 1996). Other pathogens that typically have a concentration below the diagnostic sensitivity of standard microbiologic stains include M. tuberculosis and Cryptococcus neoformans (Elmore, 1996). Therefore, in patients at risk for these pathogens (whether due to age, an immunocompromised state, travel, or other exposure history), one should consider these entities in the setting of a negative gram stain.

Bottom line: While CSF findings can be used to rule in bacterial meningitis in adult patients, they cannot be reliably used to differentiate between bacterial and viral meningitis.  Interestingly, several clinical decision rules that incorporate CSF findings have been established to help with this differentiation in the pediatric population, and one in particular (Bacterial Meningitis Score) has been retrospectively validated in several studies (Nigrovic, 2007; Dubos, 2006).  Unfortunately, no similar rule has been established and validated in the adult setting.

3.     Do you use CSF lactate or other cytochemical markers to differentiate between aseptic and bacterial meningitis?  Do you send anything beyond a standard meningitis panel for immunocompetent patients? When and what?

Given the lackluster performance of standard CSF parameters in differentiating viral and bacterial meningitis, many have sought other blood and CSF parameters to test.

Two of the most commonly studied parameters are serum procalcitonin (PCT) and CSF lactate. While procalcitonin has been studied amply in the pediatric population, where it has been found to help discriminate between bacterial and non-bacterial meningitis, it has not been as thoroughly investigated in adults. One recent prospective study in adults by Viallon, et al., did show it to be highly discriminative between bacterial and viral meningitis, finding that at a cut-off of 0.28 ng/mL, PCT was 97% sensitive and 100% specific for bacterial meningitis. While other studies have also found PCT to be helpful, they used different cut-off levels and had less reassuring sensitivities and specificities (Schwarz, 2000; Jereb, 2001).

CSF lactate has been more extensively investigated in the adult population. Viallon, et al.’s study found it to have a sensitivity of 94% and specificity of 97% at a cut-off of 3.8 mmol/L (34 mg/dL). Two recent meta-analyses, of 25 and 33 studies respectively, have also supported the usefulness of CSF lactate in this context. Sakushima, et al. (2011) found lactate to have a pooled sensitivity and specificity of 93% and 96%, with 3.9 mmol/L (35 mg/dL) the optimal cut-off. It appeared useful in ruling out BM and in distinguishing between bacterial and viral meningitis when used in combination with other CSF characteristics, although pretreatment with antibiotics reduced its accuracy. Huy, et al. (2010) found CSF lactate to have an excellent level of overall accuracy in differentiating bacterial and non-bacterial meningitis, with an area under the curve of 0.9840. They considered it a good single indicator, and a better marker than the conventional markers discussed above, especially when the assay was positive (above the defined cut-off).
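
To see what those pooled numbers mean at the bedside, the sketch below (Python) converts them to post-test probabilities via Bayes’ theorem. The pretest probability is an assumption chosen for illustration, and, as noted above, antibiotic pretreatment degrades these numbers.

```python
# Post-test probabilities implied by the pooled CSF lactate figures
# above (Sakushima, 2011: 93% sensitivity, 96% specificity at
# ~3.9 mmol/L). The pretest probability is an assumption.

def post_test(pretest: float, lr: float) -> float:
    """Convert pretest probability + likelihood ratio to post-test probability."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

sens, spec = 0.93, 0.96
pretest = 0.20  # assumed: undifferentiated meningitis, negative gram stain

p_high = post_test(pretest, sens / (1 - spec))  # lactate above cut-off
p_low  = post_test(pretest, (1 - sens) / spec)  # lactate below cut-off
print(f"Probability of BM: ~{p_high:.0%} if lactate is high, "
      f"~{p_low:.0%} if low")
# -> ~85% if high, ~2% if low
```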

Outside of the standard CSF panel, lactate, and PCT, there are other studies one might consider sending to help identify the pathogen. As non-polio enteroviruses are the most common cause of acute viral meningitis (VM), sending an enterovirus (EV) PCR can be the quickest way to rule in a viral etiology, thereby ruling out BM. Several studies have shown that when quick turnaround times for EV PCR are available, hospital length of stay and duration of antibiotics are minimized (Archimbaud, 2009; Tattevin, 2002). HSV and viral cultures can also be sent, but they will not help with early differentiation, as they are less reliable than PCR and take up to ten days to be finalized. Latex agglutination tests can also help rule out bacterial causes of meningitis such as N. meningitidis and H. influenzae. Whether to send other studies depends on the suspicion of specific etiologies based on clinical findings and history of possible exposures, recent/current rashes, travel history, sexual history, etc. If the patient is at all altered and encephalitis is suspected, a broader encephalitis panel (including HSV, VZV, West Nile, and others) may be sent for PCR analysis. Other studies can include acid-fast stain and tuberculosis PCR/culture, VDRL, and antigen testing for Cryptococcus. HIV and Lyme disease testing may also be appropriate in patients with suspected viral meningitis.

In summary, CSF lactate may prove helpful in distinguishing bacterial from viral meningitis, especially when considered together with other CSF findings. Serum procalcitonin, while possibly helpful, is less well studied in adults.  EV PCR, where available, should be sent in patients suspected of viral meningitis. Other studies will largely depend on risk factors and presentation as assessed by the clinician.

4.  When do you admit a patient, post-LP, who appears to have viral meningitis? What anti-microbial agents do you administer these patients, if any?

There are no studies analyzing the ideal disposition of patients with suspected viral meningitis.  Traditional teaching states that patients at the extremes of age or with more severe disease, immunocompromise, suspicion of HSV or VZV meningitis, or potential nonviral causes should be hospitalized (Rosen’s, 6th ed.).  Some clinicians handle patients with classic presentations of viral meningitis as outpatients with follow-up within 24 hours, while others admit them until cultures are finalized as negative. Given the demonstrated difficulty in ruling out bacterial meningitis, this approach is understandable.

One study evaluated the management of 168 patients who presented to the ED and were diagnosed with acute meningitis with a negative gram stain. It found that 70% were admitted, 49% had cranial imaging (73% with normal findings), and 52% were treated with empiric antibiotics. Ultimately, only 17% had established infectious causative agents that would have benefited from antibiotic treatment (Elmore, 1996). There were no deaths within one month of presentation. The authors conclude that better tests/clinical decision rules are needed to avoid unnecessary hospitalizations and the associated costs.

Anti-microbial use varies similarly: some practitioners cover patients with both antibiotic and antiviral agents until either bacterial cultures are negative or a viral PCR returns positive. Others with a low suspicion for bacterial meningitis may not use any antimicrobial agents, or may choose acyclovir alone. There are no studies to direct the empiric treatment of these patients. The decision often comes down to the clinician’s assessment of the risk of either BM or HSV/VZV meningitis, taking into consideration risk factors similar to those listed above for hospitalization. In a well-appearing, immunocompetent, young- to middle-aged adult with suspected viral meningitis, the decision of whether to admit the patient and whether to treat with empiric antibiotics and/or anti-viral agents is largely up to the clinician. Admission is perfectly reasonable. If the decision is made to send a patient home after a candid discussion of the potential risks, it is imperative that he/she be re-evaluated by a clinician within 24 hours. Close follow-up is critical.


Viral Meningitis, Questions

1.)  Are there any elements on H&P that make you suspect viral meningitis in adult patients? Do you LP all patients you suspect have viral meningitis?

2.)  Does a “normal” CSF reassure you that the patient does not have bacterial meningitis?

3.)  Do you use CSF lactate or other cytochemical markers to differentiate between viral and bacterial meningitis?  Do you send anything beyond a standard meningitis panel for immunocompetent patients? When and what?

4.)  When do you admit a patient, post-LP, who appears to have viral meningitis? What anti-microbial agents do you administer these patients, if any?

Viral Meningitis Questions Poster
