Clinical Chemistry: Principles, Techniques, and Correlations

With clear explanations that balance analytic principles, techniques, and the correlation of results with disease states, the book demonstrates not only the how of clinical testing but also the what, why, and when, helping students develop the knowledge and the interpretive and analytic skills they will need in their future careers. Comprehensive and easy to understand, the 8th Edition features an entirely new chapter, new and updated learning aids, and an unparalleled suite of teaching and learning resources. New Case Studies, which include scenarios, lab results, and questions, stimulate critical thinking and encourage students to apply content to clinical practice. Difficult concepts have been rewritten and condensed into easier-to-understand presentations. Enhanced photos and illustrations clarify key concepts. Coverage of the latest equipment and technologies used in today's modern lab prepares students for real-world practice. The basic principles of analytic procedures discussed reflect the most recent or most commonly performed techniques in the clinical chemistry laboratory, while material on non-essential topics such as phlebotomy and specimen collection has been moved online to thePoint. Insightful coverage of the impact of problem solving, quality assurance, and cost-effectiveness on the laboratory professional prepares students for clinical practice. Updated in-text learning aids include chapter outlines and objectives, tables that condense and augment theory coverage, and end-of-chapter questions that give students an opportunity to assess their level of mastery.



EIGHTH edition

Clinical Chemistry: Principles, Techniques, and Correlations

Michael L. Bishop, MS, MLS(ASCP)CM
Campus Department Chair
Medical Laboratory Science
Keiser University
Orlando, Florida

Edward P. Fody, MD
Clinical Professor
Department of Pathology, Microbiology and Immunology
Vanderbilt University School of Medicine
Nashville, Tennessee
Medical Director
Department of Pathology
Holland Hospital
Holland, Michigan

Larry E. Schoeff, MS, MT(ASCP)
Professor (Retired), Medical Laboratory Science Program
Department of Pathology
University of Utah School of Medicine
Salt Lake City, Utah

Acquisitions Editor: Jonathan Joyce
Product Development Editor: John Larkin
Marketing Manager: Leah Thomson
Production Project Manager: Kim Cox
Design Coordinator: Joan Wendt
Manufacturing Coordinator: Margie Orzech
Prepress Vendor: SPi Global

Eighth edition

Copyright © 2018 Wolters Kluwer
Copyright © 2013, 2010, 2005, 2000, 1996, 1992, 1985 Wolters Kluwer Health/Lippincott Williams & Wilkins.

All rights reserved. This book is protected by copyright. No part of this book may be reproduced or transmitted in any form or by any means, including as photocopies or scanned-in or other electronic copies, or utilized by any information storage and retrieval system without written permission from the copyright owner, except for brief quotations embodied in critical articles and reviews. Materials appearing in this book prepared by individuals as part of their official duties as U.S. government employees are not covered by the above-mentioned copyright. To request permission, please contact Wolters Kluwer at Two Commerce Square, 2001 Market Street, Philadelphia, PA 19103, via email at [email protected], or via our website at lww.com (products and services).

9 8 7 6 5 4 3 2 1
Printed in China

Library of Congress Cataloging-in-Publication Data
Names: Bishop, Michael L., editor. | Fody, Edward P., editor. | Schoeff, Larry E., editor.
Title: Clinical chemistry : principles, techniques, and correlations / [edited by] Michael L. Bishop, Edward P. Fody, Larry E. Schoeff.
Other titles: Clinical chemistry (Bishop)
Description: Eighth edition. | Philadelphia : Wolters Kluwer, [2018] | Includes bibliographical references and index.
Identifiers: LCCN 2016031591 | ISBN 9781496335586
Subjects: | MESH: Clinical Chemistry Tests | Chemistry, Clinical—methods
Classification: LCC RB40 | NLM QY 90 | DDC 616.07/56—dc23
LC record available at https://lccn.loc.gov/2016031591

This work is provided “as is,” and the publisher disclaims any and all warranties, express or implied, including any warranties as to accuracy, comprehensiveness, or currency of the content of this work. This work is no substitute for individual patient assessment based upon healthcare professionals' examination of each patient and consideration of, among other things, age, weight, gender, current or prior medical conditions, medication history, laboratory data and other factors unique to the patient. The publisher does not provide medical advice or guidance and this work is merely a reference tool. Healthcare professionals, and not the publisher, are solely responsible for the use of this work including all medical judgments and for any resulting diagnosis and treatments.

Given continuous, rapid advances in medical science and health information, independent professional verification of medical diagnoses, indications, appropriate pharmaceutical selections and dosages, and treatment options should be made and healthcare professionals should consult a variety of sources. When prescribing medication, healthcare professionals are advised to consult the product information sheet (the manufacturer's package insert) accompanying each drug to verify, among other things, conditions of use, warnings and side effects and identify any changes in dosage schedule or contraindications, particularly if the medication to be administered is new, infrequently used or has a narrow therapeutic range. To the maximum extent permitted under applicable law, no responsibility is assumed by the publisher for any injury and/or damage to persons or property, as a matter of products liability, negligence law or otherwise, or from any reference to or use by any person of this work. LWW.com


In memory of my mother, Betty Beck Bishop, for her constant support, guidance, and encouragement.
MLB

To Nancy, my wife, for continuing support and dedication.
EPF

To my wife, Anita, for continuing support.
LES

Contributors

Dev Abraham, MD
Professor of Medicine
Division of Endocrinology
University of Utah
Salt Lake City, Utah

Josephine Abraham, MD, MPH
Professor of Medicine
Division of Nephrology
University of Utah
Salt Lake City, Utah

Michael J. Bennett, PhD, FRCPath, FACB, DABCC
Professor of Pathology and Laboratory Medicine
University of Pennsylvania
Director, Metabolic Disease Laboratory
Children's Hospital of Philadelphia
Abramson Pediatric Research Center
Philadelphia, Pennsylvania

Takara L. Blamires, MS, MLS(ASCP)CM
Medical Laboratory Science Program
Department of Pathology
University of Utah School of Medicine
Salt Lake City, Utah

Maria G. Boosalis, PhD, MPH, RD, LD
Professor, College of Health and Wellness
Northwestern Health Sciences University
Bloomington, Minnesota

Raffick A. R. Bowen, PhD, MHA, MT(CSMLS), DClChem, FCACB, DABCC, FACB
Clinical Associate Professor of Pathology
Associate Director of Clinical Chemistry and Immunology Laboratory
Department of Pathology
Stanford Health Care
Stanford, California

Janelle M. Chiasera, PhD
Department Chair, Clinical and Diagnostic Sciences
University of Alabama – Birmingham
Birmingham, Alabama

Heather Corn, MD
Internal Medicine – Clinical Instructor
Endocrinology and Diabetes Center
University of Utah
Salt Lake City, Utah

Heather Crookston, PhD
Point of Care Coordinator
Nemours Children's Hospital
Orlando, Florida

Julia C. Drees, PhD, DABCC
Clinical Research and Development Scientist
Kaiser Permanente Regional Laboratory
Berkeley, California

Kathryn Dugan, MEd, MT(ASCP)
Instructor
Medical and Clinical Laboratory Sciences
Auburn University at Montgomery
Montgomery, Alabama

Michael Durando, MD, PhD
Research Track Resident, Internal Medicine and Hematology/Oncology
Emory University
Atlanta, Georgia

Edward P. Fody, MD
Clinical Professor
Department of Pathology, Microbiology and Immunology
Vanderbilt University School of Medicine
Nashville, Tennessee
Medical Director
Department of Pathology
Holland Hospital
Holland, Michigan

Elizabeth L. Frank, PhD
Associate Professor, Department of Pathology
University of Utah School of Medicine
Medical Director, Analytic Biochemistry and Calculi
ARUP Laboratories, Inc.
Salt Lake City, Utah

Vicki S. Freeman, PhD, MLS(ASCP)CM SC, FACB
Department Chair and Associate Professor, Clinical Laboratory Sciences
University of Texas Medical Branch
Galveston, Texas

Linda S. Gorman, PhD
Retired Associate Professor
Medical Laboratory Science
University of Kentucky
Lexington, Kentucky

Ryan W. Greer, MS, I&C(ASCP)
Assistant Vice President, Group Manager
Chemistry Group III, Technical Operations
ARUP Laboratories, Inc.
Salt Lake City, Utah

Marissa Grotzke, MD
Assistant Professor
Endocrinology and Metabolism
Internal Medicine
University of Utah
Salt Lake City, Utah

Mahima Gulati, MD
Endocrinologist
Middlesex Hospital
Danbury, Connecticut

Carrie J. Haglock-Adler, MSFS, C(ASCP)
Research Scientist
ARUP Institute for Clinical and Experimental Pathology
Salt Lake City, Utah

Matthew P. A. Henderson, PhD, FCACB
Clinical Biochemist and Laboratory Director for the Children's Hospital of Eastern Ontario
Ottawa Hospital
Assistant Professor
Department of Pathology and Laboratory Medicine
University of Ottawa
Ottawa, Ontario, Canada

Ronald R. Henriquez, PhD, NRCC
Fellow, Clinical Chemistry
University of North Carolina at Chapel Hill, Pathology and Laboratory Medicine
Chapel Hill, North Carolina
Clinical Chemist, Department of Pathology
Walter Reed National Military Medical Center, United States Army
Bethesda, Maryland

Laura M. Hickes, PhD
Chemistry and Applications Support Manager
Roche Diagnostics
Greensboro, North Carolina

Brian C. Jensen, MD
Assistant Professor of Medicine and Pharmacology
UNC Division of Cardiology
UNC McAllister Heart Institute
University of North Carolina
Chapel Hill, North Carolina

Kamisha L. Johnson-Davis, PhD, DABCC (CC, TC), FACB
Assistant Professor (Clinical)
Department of Pathology
University of Utah
Medical Director, Clinical Toxicology
ARUP Laboratories
Salt Lake City, Utah

Robert E. Jones, MD
Professor of Medicine
Endocrinology and Diabetes Center
Division of Endocrinology
University of Utah
Salt Lake City, Utah

Yachana Kataria, PhD
Clinical Chemistry Fellow
Department of Laboratory Medicine
Boston Children's Hospital
Research Fellow
Harvard Medical School
Boston, Massachusetts

Mark D. Kellogg, PhD, MT(ASCP), DABCC, FACB
Director of Quality Programs
Associate Director of Chemistry
Department of Laboratory Medicine
Boston Children's Hospital
Assistant Professor of Pathology
Harvard Medical School
Boston, Massachusetts

Cindi Bullock Letsos, MT(ASCP)
Lean Six Sigma Black Belt Consultant
Retired from University of North Carolina Health Care
Chapel Hill, North Carolina

Kara L. Lynch, PhD
Associate Division Chief, Chemistry and Toxicology Laboratory
San Francisco General Hospital
San Francisco, California

J. Marvin McBride, MD, MBA
Assistant Clinical Professor
Division of Geriatric Medicine
UNC School of Medicine
Chapel Hill, North Carolina

Christopher R. McCudden, PhD
Clinical Biochemist, Pathology and Laboratory Medicine
The Ottawa Hospital
Assistant Professor
Department of Pathology and Laboratory Medicine
University of Ottawa
Ottawa, Ontario, Canada

Shashi Mehta, PhD
Associate Professor
Department of Clinical Laboratory Sciences
School of Health Related Professions
University of Medicine and Dentistry of New Jersey
Newark, New Jersey

James March Mistler, MS, MLS
Lecturer
Department of Medical Laboratory Science
University of Massachusetts Dartmouth
North Dartmouth, Massachusetts

Matthew S. Petrie, PhD
Clinical Chemistry Fellow, Department of Laboratory Medicine
University of California San Francisco
San Francisco, California

Tracey G. Polsky, MD, PhD
Assistant Professor of Clinical Pathology and Laboratory Medicine
University of Pennsylvania Perelman School of Medicine
Assistant Director of the Clinical Chemistry Laboratory
Children's Hospital of Philadelphia
Philadelphia, Pennsylvania

Deepika S. Reddy, MD
Assistant Professor (Clinical)
Internal Medicine
Endocrinology and Diabetes Center
University of Utah
Salt Lake City, Utah

Alan T. Remaley, MD, PhD
Senior Staff, Department of Laboratory Medicine
National Institutes of Health
Bethesda, Maryland

Kyle B. Riding, PhD, MLS(ASCP)
Instructor, Medical Laboratory Science
Teaching and Learning Center Coordinator
Keiser University – Orlando Campus
Orlando, Florida

Michael W. Rogers, MT(ASCP), MBA
Clinical Laboratory Quality Management Consultant
Retired from University of North Carolina Health Care
Chapel Hill, North Carolina

Amar A. Sethi, PhD
Chief Scientific Officer, Research and Development
Pacific Biomarkers
Seattle, Washington

Joely A. Straseski, PhD
Assistant Professor, Pathology
University of Utah
Salt Lake City, Utah

Frederick G. Strathmann, PhD, DABCC (CC, TC)
Assistant Professor
Department of Pathology
University of Utah
Medical Director of Toxicology
ARUP Laboratories
Salt Lake City, Utah

Vishnu Sundaresh, MD, CCD
Assistant Professor (Clinical)
Internal Medicine
Endocrinology and Diabetes Center
University of Utah
Salt Lake City, Utah

Sara A. Taylor, PhD, MLS(ASCP), MB
Associate Professor and Graduate Advisor
Department of Medical Laboratory Science
Tarleton State University
Fort Worth, Texas

Tolmie E. Wachter, MBA/HCM, SLS(ASCP)
Assistant Vice President
Director of Corporate Safety/RSO
ARUP Laboratories
Salt Lake City, Utah

G. Russell Warnick, MS, MBA
Chief Scientific Officer
Health Diagnostic Laboratory
Richmond, Virginia

Elizabeth Warning, MS, MLS(ASCP)CM
Adjunct Faculty, MLS Program
University of Cincinnati
Cincinnati, Ohio

Monte S. Willis, MD, PhD, FCAP, FASCP, FAHA
Associate Professor, Vice Chair of Academic Affairs
Department of Pathology & Laboratory Medicine
Director, Campus Health Services Laboratory
University of North Carolina at Chapel Hill, Pathology & Laboratory Medicine
Director, Sweat Chloride Testing
Assistant Director, Clinical Core (Chemistry) Laboratory Services
University of North Carolina Healthcare
Chapel Hill, North Carolina

Alan H. B. Wu, PhD, DABCC
Director, Clinical Chemistry Laboratory, San Francisco General Hospital
Professor, Laboratory Medicine, University of California, San Francisco
San Francisco, California

Xin Xu, MD, PhD, MLS(ASCP)
Division of Pulmonary, Allergy, and Critical Care Medicine
Department of Medicine
University of Alabama at Birmingham
Birmingham, Alabama

foreword to the eighth edition

For many years, the health care and medical laboratory communities have been preparing for an impending workforce shortage that could threaten to compromise patient care and safety. It is vital that the medical laboratory community continue to educate and prepare credentialed professionals who can work efficiently, have essential analytical and critical thinking skills, and can communicate test results and test needs to health care providers.

While the shortage of qualified laboratory practitioners has been in the forefront of our collective thoughts, a more insidious shortage has also arisen among the ranks of faculty within medical laboratory education programs. As a profession, we have long been blessed with dedicated faculty who strive to impart their knowledge and experience to countless students. We know, however, that many of these dedicated faculty members have stepped aside, and will continue to do so, to pursue other passions through retirement. As new dedicated faculty step up to take over, we must support these new educators in their roles as program directors and content specialists. Furthermore, we must provide these educators with tools covering the techniques and theories that are appropriate as they continue to develop their curricula.

One potential tool to assist educators is the American Society for Clinical Laboratory Science (ASCLS) Entry Level Curriculum. At the 2016 ASCLS Annual Meeting, the House of Delegates adopted a newly formatted version of the entry level curriculum for MLT and MLS programs. This document had not been updated since 2002, and a subcommittee of the ASCLS Education Scientific Assembly was charged with editing the document to better represent the field's expectations of new graduates. Subcommittee members solicited feedback from educators and professionals in all subdisciplines and reviewed the content for currency. New material was added to reflect techniques and theories that have emerged since the last edition, while material that was no longer deemed relevant was removed. After this extensive process, the final document reflects what the industry demands of a new professional.

Similarly, the material presented throughout Clinical Chemistry: Principles, Techniques, and Correlations has always kept current with changes in the laboratory industry. This exceptional ability of authors and editors to keep pace with the needs of an ever-changing profession has not diminished through what are now eight editions. The content inherent to the discipline of clinical chemistry is

foundational to all other areas of laboratory medicine. The eighth edition of this textbook is ideal for students learning the principles of clinical chemistry while helping them build connections to other areas of the laboratory. The chapters have a perfect blend of basic theory and practical information that allows the student to comprehend each area of clinical chemistry. The text is well organized to help MLT and MLS educators distinguish what each unique student population needs to be successful in the marketplace. The online materials, PowerPoint presentations, and exam questions for educators are an invaluable resource for those creating a new course or revising a current one.

As we face this transition of laboratory practitioners who perform testing and faculty who train and educate our students, products that stay current with the times and help facilitate better understanding of the unique levels of practice within our field are the most essential element of success. The eighth edition of Clinical Chemistry: Principles, Techniques, and Correlations accomplishes this and serves as an invaluable tool for any new educator looking for guidance or any seasoned educator looking to refresh their teaching.

As educators, we are thrilled that students continue to find the field of medical laboratory science an avenue to build a professional career. We wish all students and educators who use this book the best as they carry on a tradition of excellence!

Joan Polancic, MSEd, MLS(ASCP)CM
Director, School of Medical Laboratory Science
Denver Health Medical Center
Denver, Colorado

Kyle B. Riding, PhD, MLS(ASCP)
Instructor, Medical Laboratory Science
Teaching and Learning Center Coordinator
Keiser University – Orlando Campus
Orlando, Florida

foreword to the seventh edition

You should not be surprised to learn that the delivery of health care has been undergoing major transformation for several decades. The clinical laboratory has been transformed in innumerable ways as well. At one time, the laboratory student's greatest asset was motor ability. That is not the case any longer. Now the need is for a laboratory professional who is well educated, an analytical thinker and problem solver, and one who can add value to the information generated in the laboratory regarding a specific patient. This change impacts the laboratory professional in a very positive manner. Today, the student's greatest asset is their mental skill and their ability to acquire and apply knowledge. The laboratory professional is now considered a knowledge worker, and a student's ability to successfully become this knowledge worker depends on their instruction and exposure to quality education.

Herein lies the need for the seventh edition of Clinical Chemistry: Principles, Techniques, and Correlations. It contributes to the indispensable, solid science foundation in the medical laboratory sciences, and to the application of its principles in improving patient outcomes, that the laboratory professional of today needs. This edition provides not only a comprehensive understanding of clinical chemistry but also the foundation upon which all the other major laboratory science disciplines can be further understood and integrated. It does so by providing a strong discussion of organ function and a solid emphasis on pathophysiology, clinical correlations, and differential diagnosis. This information offers a springboard to better understand the many concepts related to the effectiveness of a particular test for a particular patient.

Reduction of health care costs, while ensuring quality patient care, remains the goal of health care reform efforts. Laboratory information is a critical element of such care. It is estimated that $65 billion is spent each year to perform more than 4.3 billion laboratory tests. This impressive figure has also focused a bright light on laboratory medicine, and appropriate laboratory test utilization is now under major scrutiny. The main emphasis is on reducing costly overutilization and unnecessary diagnostic testing; however, the issue of under- and misutilization of laboratory tests must be a cause for concern as well. The role of laboratorians in providing guidance to clinicians regarding appropriate test utilization is becoming not only accepted but also welcomed as clinicians try to maneuver their way through an increasingly complex and expensive test menu. These new roles lie in the pre- and postanalytic functions of laboratorians.

The authors of this text have successfully described the importance of these phases as well as the more traditional analytic phase. It does not matter how precise or accurate a test is during the analytic phase if the sample has been compromised or if an inappropriate test has been ordered on the patient. In addition, the validation of results with respect to a patient's condition is an important step in the postanalytic phase. Participation with other health care providers in the proper interpretation of test results and appropriate follow-up will be important abilities of future graduates as the profession moves into providing greater consultative services for a patient-centered medical delivery system. Understanding these principles is a necessary requirement of

the knowledge worker in the clinical laboratory. This significant professional role provides effective laboratory services that will improve medical decision making and thus patient safety while reducing medical errors. This edition of Clinical Chemistry: Principles, Techniques, and Correlations is a crucial element in graduating such professionals.

Diana Mass, MA, MT(ASCP)
Clinical Professor and Director (Retired)
Clinical Laboratory Sciences Program
Arizona State University
Tempe, Arizona
President
Associated Laboratory Consultants
Valley Center, California

Make no mistake: There are few specialties in medicine that have a wider impact on today's health care than laboratory medicine. For example, in the emergency room, a troponin result can not only tell an ER physician whether a patient with chest pain has had a heart attack but also assess the likelihood of that patient suffering an acute myocardial infarction within 30 days. In the operating room during a parathyroidectomy, a parathyroid hormone assay can tell a surgeon whether it is appropriate to close the procedure because all of the affected glands have been successfully removed or whether to go back and look for more glands to excise. In labor and delivery, testing for pulmonary surfactants from amniotic fluid can tell an obstetrician if a child can be safely delivered or if the infant is likely to develop life-threatening respiratory distress syndrome. In the neonatal intensive care unit, measurement of bilirubin in a premature infant is used to determine when the ultraviolet lights can be turned off. These are just a handful of the thousands of medical decisions that are made each day based on results from clinical laboratory testing.

Despite our current success, there is still much more to learn and do. For example, there are no good laboratory tests for the diagnosis of stroke or traumatic brain injury. The work on Alzheimer's and Parkinson's disease prediction and treatment is in the early stages. And when it comes to cancer, while our laboratory tests are good for monitoring therapy, they fall short in the detection of early cancer, which is essential for improving treatment and prolonging survival. Finally, personalized medicine, including pharmacogenomics, will play an increasingly important role in the future. Pharmacogenomic testing will be used to select the right drug at the best dose for a particular patient in order to maximize efficacy and minimize side effects.

If you are reading this book, you are probably studying to be a part of the field. As a clinical chemist for over 30 years, I welcome you to our profession.

Alan H. B. Wu, PhD, DABCC
Director, Clinical Chemistry Laboratory, San Francisco General Hospital
Professor, Laboratory Medicine, University of California, San Francisco
San Francisco, California

Preface

Clinical chemistry continues to be one of the most rapidly advancing areas of laboratory medicine. Since the initial idea for this textbook was discussed in a meeting of the Biochemistry/Urinalysis section of ASMT (now ASCLS) in the late 1970s, the only constant has been change. New technologies and analytical techniques have been introduced, with a dramatic impact on the practice of clinical chemistry and laboratory medicine. In addition, the health care system is rapidly changing. There is ever-increasing emphasis on improving the quality of patient care, individual patient outcomes, financial responsibility, and total quality management. Now, more than ever, clinical laboratorians need to be concerned with disease correlations, interpretations, problem solving, quality assurance, and cost-effectiveness; they need to know not only the how of tests but, more importantly, the what, why, and when.

The editors of Clinical Chemistry: Principles, Techniques, and Correlations have designed the eighth edition to be an even more valuable resource to both students and practitioners. Now, almost 40 years since the initiation of this effort, the editors have had the privilege of completing the eighth edition with another diverse team of dedicated clinical laboratory professionals. In this era of focusing on metrics, the editors would like to share the following information: the 330 contributors to the 8 editions represent 70 clinical laboratory science programs, 83 clinical laboratories, 13 medical device companies, 4 government agencies, and 3 professional societies. One hundred and thirty contributors were clinical laboratory scientists with advanced degrees. With today's global focus, the previous editions of the text have been translated into at least six languages.

By definition, a profession is a calling requiring specialized knowledge and intensive academic preparation to define its scope of work and produce its own literature. The profession of Clinical Laboratory Science has evolved significantly over the past four decades. The eighth edition of Clinical Chemistry: Principles, Techniques, and Correlations is comprehensive, up-to-date, and easy to understand for students at all levels. It is also intended to be a practically organized resource for both instructors and practitioners. The editors have tried to maintain the book's readability and further improve its content. Because clinical laboratorians use their interpretative and analytic skills in the daily practice of clinical chemistry, an effort has been made to maintain an appropriate balance between analytic principles, techniques, and the correlation of results with disease states.

In this edition, the editors have maintained features in response to requests from our readers, students, instructors, and practitioners. Ancillary materials have been updated and expanded. Chapters now include current, more frequently encountered case studies and practice questions or exercises. To provide a thorough, up-to-date study of clinical chemistry, all chapters have been updated and reviewed by professionals who practice clinical chemistry and laboratory medicine on a daily basis. The basic principles of the analytic procedures discussed in the chapters reflect the most recent or commonly performed techniques in the clinical chemistry laboratory. Detailed procedures have been omitted because of the variety of equipment and commercial kits used in today's clinical laboratories. Instrument manuals and kit package inserts are the most reliable reference for detailed instructions on current analytic procedures. All chapter material has been updated, improved, and rearranged for better continuity and readability.

thePoint, a Web site with additional case studies, review questions, teaching resources, teaching tips, additional references, and teaching aids for instructors and students, is available from the publisher to assist in the use of this textbook.

Michael L. Bishop
Edward P. Fody
Larry E. Schoeff

Acknowledgments A project as large as this requires the assistance and support of many clinical laboratorians. The editors wish to express their appreciation to the contributors of all the editions of Clinical Chemistry: Principles, Techniques, and Correlations—the dedicated laboratory professionals and educators whom the editors have had the privilege of knowing and exchanging ideas with over the years. These individuals were selected because of their expertise in particular areas and their commitment to the education of clinical laboratorians. Many have spent their professional careers in the clinical laboratory, at the bench, teaching students, or consulting with clinicians. In these frontline positions, they have developed a perspective of what is important for the next generation of clinical laboratorians. We extend appreciation to our students, colleagues, teachers, and mentors in the profession who have helped shape our ideas about clinical chemistry practice and education. Also, we want to thank the many companies and professional organizations that provided product information and photographs or granted permission to reproduce diagrams and tables from their publications. Many Clinical and Laboratory Standards Institute (CLSI) documents have also been important sources of information. These documents are directly referenced in the appropriate chapters. The editors would like to acknowledge the contribution and effort of all individuals to previous editions. Their efforts provided the framework for many of the current chapters. Finally, we gratefully acknowledge the cooperation and assistance of the staff at Wolters Kluwer for their advice and support. The editors are continually striving to improve future editions of this book. We again request and welcome our readers' comments, criticisms, and ideas for improvement.

Contents Contributors Foreword to the Eighth Edition Foreword to the Seventh Edition Preface Acknowledgments

PART one Basic Principles and Practice of Clinical Chemistry 1 Basic Principles and Practices Kathryn Dugan and Elizabeth Warning UNITS OF MEASURE REAGENTS Chemicals Reference Materials Water Specifications Solution Properties Concentration Colligative Properties Redox Potential Conductivity pH and Buffers CLINICAL LABORATORY SUPPLIES Thermometers/Temperature Glassware and Plasticware

Desiccators and Desiccants Balances CENTRIFUGATION LABORATORY MATHEMATICS AND CALCULATIONS Significant Figures Logarithms Concentration Dilutions Water of Hydration Graphing and Beer's Law SPECIMEN CONSIDERATIONS Types of Samples Sample Processing Sample Variables Chain of Custody Electronic and Paper Reporting of Results QUESTIONS REFERENCES 2 Laboratory Safety and Regulations Tolmie E. Wachter LABORATORY SAFETY AND REGULATIONS Occupational Safety and Health Act Other Regulations and Guidelines SAFETY AWARENESS FOR CLINICAL LABORATORY PERSONNEL Safety Responsibility Signage and Labeling

SAFETY EQUIPMENT Chemical Fume Hoods and Biosafety Cabinets Chemical Storage Equipment PPE and Hygiene BIOLOGIC SAFETY General Considerations Spills Bloodborne Pathogens Airborne Pathogens Shipping CHEMICAL SAFETY Hazard Communication Safety Data Sheet OSHA Laboratory Standard Toxic Effects from Hazardous Substances Storage and Handling of Chemicals RADIATION SAFETY Environmental Protection Personal Protection Nonionizing Radiation FIRE SAFETY The Chemistry of Fire Classification of Fires Types and Applications of Fire Extinguishers CONTROL OF OTHER HAZARDS Electrical Hazards Compressed Gas Hazards

Cryogenic Materials Hazards Mechanical Hazards Ergonomic Hazards DISPOSAL OF HAZARDOUS MATERIALS Chemical Waste Radioactive Waste Biohazardous Waste ACCIDENT DOCUMENTATION AND INVESTIGATION QUESTIONS BIBLIOGRAPHY AND SUGGESTED READING 3 Method Evaluation and Quality Control Michael W. Rogers, Cindi Bullock Letsos, Matthew P. A. Henderson, Monte S. Willis, and Christopher R. McCudden BASIC CONCEPTS Descriptive Statistics: Measures of Center, Spread, and Shape Descriptive Statistics of Groups of Paired Observations Inferential Statistics METHOD EVALUATION Regulatory Aspects of Method Evaluation (Alphabet Soup) Method Selection Method Evaluation First Things First: Determine Imprecision and Inaccuracy Measurement of Imprecision Interference Studies COM Studies Allowable Analytical Error Method Evaluation Acceptance Criteria

QUALITY CONTROL QC Charts Operation of a QC System Multirules RULE! Proficiency Testing REFERENCE INTERVAL STUDIES Establishing Reference Intervals Selection of Reference Interval Study Individuals Preanalytic and Analytic Considerations Determining Whether to Establish or Transfer and Verify Reference Intervals Analysis of Reference Values Data Analysis to Establish a Reference Interval Data Analysis to Transfer and Verify a Reference Interval DIAGNOSTIC EFFICIENCY Measures of Diagnostic Efficiency SUMMARY PRACTICE PROBLEMS Problem 3-1. Calculation of Sensitivity and Specificity Problem 3-2. A Quality Control Decision Problem 3-3. Precision (Replication) Problem 3-4. Recovery Problem 3-5. Interference Problem 3-6. Sample Labeling Problem 3-7. QC Program for POCT Testing Problem 3-8. QC Rule Interpretation Problem 3-9. Reference Interval Study Design

QUESTIONS ONLINE RESOURCES REFERENCES 4 Lean Six Sigma Methodology Basics and Quality Improvement in the Clinical Chemistry Laboratory Cindi Bullock Letsos, Michael W. Rogers, Christopher R. McCudden, and Monte S. Willis LEAN SIX SIGMA METHODOLOGY ADOPTION AND IMPLEMENTATION OF LEAN SIX SIGMA PROCESS IMPROVEMENT MEASUREMENTS OF SUCCESS USING LEAN AND SIX SIGMA LEAN SIX SIGMA APPLICATIONS IN THE LABORATORY AND THE GREATER HEALTH CARE SYSTEM PRACTICAL APPLICATION OF SIX SIGMA METRICS Detecting Laboratory Errors Defining the Sigma Performance of an Assay Choosing the Appropriate Westgard Rules PRESENT AND EVOLVING APPROACHES TO QUALITY IMPROVEMENT AND QUALITY ASSURANCE IN THE CLINICAL LABORATORY: FROM QCP TO IQCP AND ISO 15189 Today's Standard: Quality Control Plan QUALITY CONTROL PLAN BASED ON RISK MANAGEMENT: CLSI EP23-A DOCUMENT QUALITY ASSESSMENT INDIVIDUAL QUALITY CONTROL PROGRAM: AN OPTION TO STREAMLINE QCP ISO 15189—QUALITY MANAGEMENT IN MEDICAL

LABORATORIES: ADDING RISK ASSESSMENT TO THE FORMULA INTERNATIONALLY (GLOBAL INITIATIVE) CONCLUSIONS ACKNOWLEDGMENTS QUESTIONS REFERENCES RESOURCES FOR FURTHER INFORMATION ON 5 Analytic Techniques Julia C. Drees, Matthew S. Petrie, and Alan H. B. Wu SPECTROPHOTOMETRY Beer Law Spectrophotometric Instruments Components of a Spectrophotometer Spectrophotometer Quality Assurance Atomic Absorption Spectrophotometer Flame Photometry Fluorometry Basic Instrumentation Chemiluminescence Turbidity and Nephelometry Laser Applications ELECTROCHEMISTRY Galvanic and Electrolytic Cells Half-Cells Ion-Selective Electrodes pH Electrodes Gas-Sensing Electrodes

Enzyme Electrodes Coulometric Chloridometers and Anodic Stripping Voltammetry ELECTROPHORESIS Procedure Support Materials Treatment and Application of Sample Detection and Quantitation Electroendosmosis Isoelectric Focusing Capillary Electrophoresis Two-Dimensional Electrophoresis OSMOMETRY Freezing Point Osmometer SURFACE PLASMON RESONANCE QUESTIONS REFERENCES 6 Chromatography and Mass Spectrometry Julia C. Drees, Matthew S. Petrie, and Alan H. B. Wu CHROMATOGRAPHY Modes of Separation Chromatographic Procedures High-Performance Liquid Chromatography Gas Chromatography MASS SPECTROMETRY Sample Introduction and Ionization Mass Analyzer Detector

APPLICATIONS OF MS IN THE CLINICAL LABORATORY Small Molecule Analysis Mass Spectrometry in Proteomics and Pathogen Identification Mass Spectrometry at the Point of Care QUESTIONS REFERENCES 7 Principles of Clinical Chemistry Automation Ryan W. Greer and Joely A. Straseski HISTORY OF AUTOMATED ANALYZERS DRIVING FORCES TOWARD MORE AUTOMATION BASIC APPROACHES TO AUTOMATION STEPS IN AUTOMATED ANALYSIS Specimen Preparation and Identification Specimen Measurement and Delivery Reagent Systems and Delivery Chemical Reaction Phase Measurement Phase Signal Processing and Data Handling SELECTION OF AUTOMATED ANALYZERS TOTAL LABORATORY AUTOMATION Preanalytic Phase (Sample Processing) Analytic Phase (Chemical Analyses) Postanalytic Phase (Data Management) FUTURE TRENDS IN AUTOMATION QUESTIONS REFERENCES 8 Immunochemical Techniques

Alan H. B. Wu IMMUNOASSAYS General Considerations Unlabeled Immunoassays Labeled Immunoassays Future Directions for Immunoassays QUESTIONS REFERENCES 9 Molecular Theory and Techniques Shashi Mehta NUCLEIC ACID–BASED TECHNIQUES Nucleic Acid Chemistry Nucleic Acid Extraction Hybridization Techniques DNA Sequencing DNA Chip Technology Target Amplification Probe Amplification Signal Amplification Nucleic Acid Probe Applications QUESTIONS REFERENCES 10 Point-of-Care Testing Heather Crookston LABORATORY REGULATIONS Accreditation

POCT Complexity IMPLEMENTATION Establishing Need POCT Implementation Protocol Personnel Requirements QUALITY MANAGEMENT Accuracy Requirements QC and Proficiency Testing POC APPLICATIONS INFORMATICS AND POCT QUESTIONS REFERENCES

PART two Clinical Correlations and Analytic Procedures 11 Amino Acids and Proteins Takara L. Blamires AMINO ACIDS Overview Basic Structure Metabolism Essential Amino Acids Nonessential Amino Acids Recently Identified Amino Acids Aminoacidopathies Methods of Analysis

PROTEINS Overview Basic Structure General Chemical Properties Synthesis Catabolism and Nitrogen Balance Classification PLASMA PROTEINS Prealbumin Albumin Globulins OTHER PROTEINS OF CLINICAL SIGNIFICANCE Myoglobin Cardiac Troponin Brain Natriuretic Peptide and N-Terminal–Brain Natriuretic Peptide Fibronectin Adiponectin β-Trace Protein Cross-Linked C-Telopeptides Cystatin C Amyloid TOTAL PROTEIN ABNORMALITIES Hypoproteinemia Hyperproteinemia METHODS OF ANALYSIS Total Nitrogen

Total Protein Fractionation, Identification, and Quantitation of Specific Proteins Serum Protein Electrophoresis High-Resolution Protein Electrophoresis Capillary Electrophoresis Isoelectric Focusing Immunochemical Methods PROTEINS IN OTHER BODY FLUIDS Urinary Protein CSF Protein QUESTIONS REFERENCES 12 Nonprotein Nitrogen Compounds Elizabeth L. Frank UREA Biochemistry Clinical Application Analytical Methods Pathophysiology URIC ACID Biochemistry Clinical Application Analytical Methods Pathophysiology CREATININE/CREATINE Biochemistry

Clinical Application Analytical Methods Pathophysiology AMMONIA Biochemistry Clinical Application Analytical Methods Pathophysiology QUESTIONS REFERENCES 13 Enzymes Kamisha L. Johnson-Davis GENERAL PROPERTIES AND DEFINITIONS ENZYME CLASSIFICATION AND NOMENCLATURE ENZYME KINETICS Catalytic Mechanism of Enzymes Factors That Influence Enzymatic Reactions Measurement of Enzyme Activity Calculation of Enzyme Activity Measurement of Enzyme Mass Enzymes as Reagents ENZYMES OF CLINICAL SIGNIFICANCE Creatine Kinase Lactate Dehydrogenase Aspartate Aminotransferase Alanine Aminotransferase Alkaline Phosphatase

Acid Phosphatase γ-Glutamyltransferase Amylase Lipase Glucose-6-Phosphate Dehydrogenase Drug-Metabolizing Enzymes QUESTIONS REFERENCES 14 Carbohydrates Vicki S. Freeman GENERAL DESCRIPTION OF CARBOHYDRATES Classification of Carbohydrates Stereoisomers Monosaccharides, Disaccharides, and Polysaccharides Chemical Properties of Carbohydrates Glucose Metabolism Fate of Glucose Regulation of Carbohydrate Metabolism HYPERGLYCEMIA Diabetes Mellitus Pathophysiology of Diabetes Mellitus Criteria for Testing for Prediabetes and Diabetes Criteria for the Diagnosis of Diabetes Mellitus Criteria for the Testing and Diagnosis of GDM HYPOGLYCEMIA Genetic Defects in Carbohydrate Metabolism ROLE OF LABORATORY IN DIFFERENTIAL DIAGNOSIS

AND MANAGEMENT OF PATIENTS WITH GLUCOSE METABOLIC ALTERATIONS Methods of Glucose Measurement Self-Monitoring of Blood Glucose Glucose Tolerance and 2-Hour Postprandial Tests Glycosylated Hemoglobin/HbA1c Ketones Albuminuria Islet Autoantibody and Insulin Testing QUESTIONS REFERENCES 15 Lipids and Lipoproteins Raffick A. R. Bowen, Amar A. Sethi, G. Russell Warnick, and Alan T. Remaley LIPID CHEMISTRY Fatty Acids Triglycerides Phospholipids Cholesterol GENERAL LIPOPROTEIN STRUCTURE Chylomicrons Very-Low-Density Lipoproteins Intermediate-Density Lipoproteins Low-Density Lipoproteins Lipoprotein (a) High-Density Lipoproteins Lipoprotein X

LIPOPROTEIN PHYSIOLOGY AND METABOLISM Lipid Absorption Exogenous Pathway Endogenous Pathway Reverse Cholesterol Transport Pathway LIPID AND LIPOPROTEIN POPULATION DISTRIBUTIONS Dyslipidemia and Children National Cholesterol Education Program National Heart, Lung, and Blood Institute DIAGNOSIS AND TREATMENT OF LIPID DISORDERS Arteriosclerosis Hyperlipoproteinemia Hypercholesterolemia PCSK9 Hypertriglyceridemia Combined Hyperlipidemia Lp(a) Elevation Non–HDL Cholesterol Hypobetalipoproteinemia Hypoalphalipoproteinemia LIPID AND LIPOPROTEIN ANALYSES Lipid Measurement Cholesterol Measurement Triglyceride Measurement Lipoprotein Methods HDL Methods LDL Methods

Compact Analyzers Apolipoprotein Methods Phospholipid Measurement Fatty Acid Measurement STANDARDIZATION OF LIPID AND LIPOPROTEIN ASSAYS Precision Accuracy Matrix Interactions CDC Cholesterol Reference Method Laboratory Network Analytic Performance Goals Quality Control Specimen Collection QUESTIONS REFERENCES 16 Electrolytes James March Mistler WATER Osmolality THE ELECTROLYTES Sodium Potassium Chloride Bicarbonate Magnesium Calcium Phosphate Lactate

ANION GAP ELECTROLYTES AND RENAL FUNCTION QUESTIONS REFERENCES 17 Blood Gases, pH, and Buffer Systems Yachana Kataria and Mark D. Kellogg ACID–BASE BALANCE Maintenance of H+ Buffer Systems: Regulation of H+ and the Henderson-Hasselbalch Equation Regulation of Acid–Base Balance: Lungs and Kidneys (Transport of Carbon Dioxide) ASSESSMENT OF ACID–BASE HOMEOSTASIS The Bicarbonate Buffering System Acid–Base Disorders: Acidosis and Alkalosis OXYGEN AND GAS EXCHANGE Oxygen and Carbon Dioxide Oxygen Transport Assessment of a Patient's Oxygen Status Hemoglobin–Oxygen Dissociation MEASUREMENT Spectrophotometric Determination of Oxygen Saturation (CO-Oximetry) Blood Gas Analyzers: pH, pCO2, and pO2 Measurement of pO2 Measurement of pH and pCO2 Types of Electrochemical Sensors

Optical Sensors Calibration Correction for Temperature Calculated Parameters QUALITY ASSURANCE Preanalytic Considerations Analytic Assessments: Quality Control and Proficiency Testing QUESTIONS REFERENCES 18 Trace and Toxic Elements Frederick G. Strathmann and Carrie J. Haglock-Adler OVERVIEW AND OBJECTIVES INSTRUMENTATION AND METHODS Sample Collection and Processing Atomic Emission Spectroscopy Atomic Absorption Spectroscopy Inductively Coupled Plasma Mass Spectrometry Interferences Elemental Speciation Alternative Analytical Techniques ALUMINUM Introduction Absorption, Transport, and Excretion Health Effects and Toxicity Laboratory Evaluation of Aluminum Status ARSENIC Introduction

Health Effects and Toxicity Absorption, Transport, and Excretion Laboratory Evaluation of Arsenic Status CADMIUM Introduction Absorption, Transport, and Excretion Health Effects and Toxicity Laboratory Evaluation of Cadmium Status CHROMIUM Introduction Absorption, Transport, and Excretion Health Effects, Deficiency, and Toxicity Laboratory Evaluation of Chromium Status COPPER Introduction Absorption, Transport, and Excretion Health Effects, Deficiency, and Toxicity Laboratory Evaluation of Copper Status IRON Introduction Absorption, Transport, and Excretion Health Effects, Deficiency, and Toxicity Laboratory Evaluation of Iron Status LEAD Introduction Absorption, Transport, and Excretion Health Effects and Toxicity

Laboratory Evaluation of Lead Status MERCURY Introduction Absorption, Transport, and Excretion Health Effects and Toxicity Laboratory Evaluation of Mercury Status MANGANESE Introduction Absorption, Transport, and Excretion Health Effects, Deficiency, and Toxicity Laboratory Evaluation of Manganese Status MOLYBDENUM Introduction Absorption, Transport, and Excretion Health Effects, Deficiency, and Toxicity Laboratory Evaluation of Molybdenum Status SELENIUM Introduction Absorption, Transport, and Excretion Health Effects, Deficiency, and Toxicity Laboratory Evaluation of Selenium Status ZINC Introduction Absorption, Transport, and Excretion Health Effects, Deficiency, and Toxicity Laboratory Evaluation of Zinc Status QUESTIONS

BIBLIOGRAPHY REFERENCES 19 Porphyrins and Hemoglobin Elizabeth L. Frank and Sara A. Taylor PORPHYRINS Porphyrin Properties Biochemistry: Synthesis of Heme Pathophysiology: Disorders of Heme Biosynthesis Clinical Application Analytical Methods HEMOGLOBIN Role in the Body Structure of Hemoglobin Synthesis and Degradation of Hemoglobin Clinical Significance and Disease Correlation Analytical Methods DNA Technology MYOGLOBIN Structure and Role in the Body Clinical Significance Analytical Methods QUESTIONS REFERENCES

PART three Assessment of Organ System Functions

20 Hypothalamic and Pituitary Function Robert E. Jones and Heather Corn EMBRYOLOGY AND ANATOMY FUNCTIONAL ASPECTS OF THE HYPOTHALAMIC–HYPOPHYSEAL UNIT HYPOPHYSIOTROPIC OR HYPOTHALAMIC HORMONES ANTERIOR PITUITARY HORMONES PITUITARY TUMORS GROWTH HORMONE Actions of GH Testing Acromegaly GH Deficiency PROLACTIN Prolactinoma Other Causes of Hyperprolactinemia Clinical Evaluation of Hyperprolactinemia Management of Prolactinoma Idiopathic Galactorrhea HYPOPITUITARISM Etiology of Hypopituitarism Treatment of Panhypopituitarism POSTERIOR PITUITARY HORMONES Oxytocin Vasopressin QUESTIONS REFERENCES

21 Adrenal Function Vishnu Sundaresh and Deepika S. Reddy THE ADRENAL GLAND: AN OVERVIEW EMBRYOLOGY AND ANATOMY THE ADRENAL CORTEX BY ZONE Cortex Steroidogenesis Congenital Adrenal Hyperplasia PRIMARY ALDOSTERONISM Overview Etiology Diagnosis Treatment Isolated Hypoaldosteronism ADRENAL CORTICAL PHYSIOLOGY ADRENAL INSUFFICIENCY Overview Symptoms Diagnosis Treatment HYPERCORTISOLISM (CUSHING'S SYNDROME) Overview Etiology Diagnosis Treatment ADRENAL ANDROGENS Androgen Excess Diagnosis

Treatment THE ADRENAL MEDULLA Embryology Biosynthesis, Storage, and Secretion of Catecholamines Metabolism and Excretion of Catecholamines PHEOCHROMOCYTOMA AND PARAGANGLIOMA Overview Epidemiology Clinical Presentation Diagnosis Interfering medications Biochemical Testing Plasma-Free Metanephrines 24-Hour Urine Fractionated Metanephrines and Catecholamines Normal Results Case Detection 24-Hour Urine Fractionated Metanephrines and Catecholamines Plasma-Fractionated Metanephrines Radiographic Localization Treatment Outcome, Prognosis, and Follow-up Genetic Testing ADRENAL INCIDENTALOMA CASE STUDIES QUESTIONS REFERENCES 22 Gonadal Function

Mahima Gulati THE TESTES Functional Anatomy of the Male Reproductive Tract Physiology of the Testicles Disorders of Sexual Development and Testicular Hypofunction Diagnosis of Hypogonadism Testosterone Replacement Therapy Monitoring Testosterone Replacement Therapy THE OVARIES Early Ovarian Development Functional Anatomy of the Ovaries Hormonal Production by the Ovaries The Menstrual Cycle Hormonal Control of Ovulation Pubertal Development in the Female Precocious Sexual Development Menstrual Cycle Abnormalities Hirsutism Estrogen Replacement Therapy QUESTIONS REFERENCES 23 The Thyroid Gland Marissa Grotzke THE THYROID Thyroid Anatomy and Development Thyroid Hormone Synthesis

Protein Binding of Thyroid Hormone Control of Thyroid Function Actions of Thyroid Hormone TESTS FOR THYROID FUNCTION Blood Tests OTHER TOOLS FOR THYROID EVALUATION Nuclear Medicine Evaluation Thyroid Ultrasound Fine-Needle Aspiration DISORDERS OF THE THYROID Hypothyroidism Thyrotoxicosis Graves' Disease Toxic Adenoma and Multinodular Goiter DRUG-INDUCED THYROID DYSFUNCTION Amiodarone-Induced Thyroid Disease Subacute Thyroiditis NONTHYROIDAL ILLNESS THYROID NODULES QUESTIONS REFERENCES 24 Calcium Homeostasis and Hormonal Regulation Josephine Abraham and Dev Abraham CALCIUM HOMEOSTASIS HORMONAL REGULATION OF CALCIUM METABOLISM Vitamin D Parathyroid Hormone

ORGAN SYSTEM REGULATION OF CALCIUM METABOLISM GI Regulation Role of Kidneys Bone Physiology HYPERCALCEMIA Causes of Hypercalcemia Primary Hyperparathyroidism Familial Hypocalciuric Hypercalcemia Hyperthyroidism Addison's Disease Milk Alkali Syndrome Medications That Cause Hypercalcemia HYPOCALCEMIA Causes of Hypocalcemia METABOLIC BONE DISEASES Rickets and Osteomalacia Osteoporosis SECONDARY HYPERPARATHYROIDISM IN RENAL FAILURE QUESTIONS REFERENCES 25 Liver Function Janelle M. Chiasera and Xin Xu ANATOMY Gross Anatomy Microscopic Anatomy

BIOCHEMICAL FUNCTIONS Excretory and Secretory Metabolism Detoxification and Drug Metabolism LIVER FUNCTION ALTERATIONS DURING DISEASE Jaundice Cirrhosis Tumors Reye's Syndrome Drug- and Alcohol-Related Disorders ASSESSMENT OF LIVER FUNCTION/LIVER FUNCTION TESTS Bilirubin METHODS Urobilinogen in Urine and Feces Serum Bile Acids Enzymes Tests Measuring Hepatic Synthetic Ability Tests Measuring Nitrogen Metabolism Hepatitis QUESTIONS REFERENCES 26 Laboratory Markers of Cardiac Damage and Function Ronald R. Henriquez, Michael Durando, Brian C. Jensen, Christopher R. McCudden, and Monte S. Willis CARDIAC ISCHEMIA, ANGINA, AND HEART ATTACKS THE PATHOPHYSIOLOGY OF ATHEROSCLEROSIS, THE

DISEASE PROCESS UNDERLYING MI MARKERS OF CARDIAC DAMAGE Initial Markers of Cardiac Damage Cardiac Troponins CK-MB and Troponin I/Troponin T Considerations in Kidney Disease Patients Other Markers of Cardiac Damage CARDIAC INJURY OCCURS IN MANY DISEASE PROCESSES, BEYOND MI THE LABORATORY WORKUP OF PATIENTS SUSPECTED OF HEART FAILURE AND THE USE OF CARDIAC BIOMARKERS IN HEART FAILURE THE USE OF NATRIURETIC PEPTIDES AND TROPONINS IN THE DIAGNOSIS AND RISK STRATIFICATION OF HEART FAILURE Cardiac Troponins MARKERS OF CHD RISK C-Reactive Protein Homocysteine MARKERS OF PULMONARY EMBOLISM Use of D-Dimer Detection in PE Value of Assaying Troponin and BNP in Acute PE SUMMARY QUESTIONS REFERENCES 27 Renal Function Kara L. Lynch and Alan H. B. Wu RENAL ANATOMY

RENAL PHYSIOLOGY Glomerular Filtration Tubular Function Elimination of Nonprotein Nitrogen Compounds Water, Electrolyte, and Acid–Base Homeostasis Endocrine Function ANALYTIC PROCEDURES Creatinine Clearance Estimated GFR Cystatin C β2-Microglobulin Myoglobin Albuminuria Neutrophil Gelatinase–Associated Lipocalin NephroCheck Urinalysis PATHOPHYSIOLOGY Glomerular Diseases Tubular Diseases Urinary Tract Infection/Obstruction Renal Calculi Renal Failure QUESTIONS REFERENCES 28 Pancreatic Function and Gastrointestinal Function Edward P. Fody PHYSIOLOGY OF PANCREATIC FUNCTION

DISEASES OF THE PANCREAS TESTS OF PANCREATIC FUNCTION Secretin/CCK Test Fecal Fat Analysis Sweat Electrolyte Determinations Serum Enzymes FECAL ELASTASE PHYSIOLOGY AND BIOCHEMISTRY OF GASTRIC SECRETION CLINICAL ASPECTS OF GASTRIC ANALYSIS TESTS OF GASTRIC FUNCTION Measuring Gastric Acid in Basal and Maximal Secretory Tests Measuring Gastric Acid Plasma Gastrin INTESTINAL PHYSIOLOGY CLINICOPATHOLOGIC ASPECTS OF INTESTINAL FUNCTION TESTS OF INTESTINAL FUNCTION Lactose Tolerance Test D-Xylose Absorption Test D-Xylose Test Serum Carotenoids Other Tests of Intestinal Malabsorption QUESTIONS SUGGESTED READING REFERENCES 29 Body Fluid Analysis Kyle B. Riding

CEREBROSPINAL FLUID SEROUS FLUIDS Pleural Fluid Pericardial Fluid Peritoneal Fluid AMNIOTIC FLUID Hemolytic Disease of the Newborn Neural Tube Defects Fetal Lung Maturity Phosphatidylglycerol Lamellar Body Counts SWEAT SYNOVIAL FLUID QUESTIONS REFERENCES

PART four Specialty Areas of Clinical Chemistry 30 Therapeutic Drug Monitoring Takara L. Blamires OVERVIEW ROUTES OF ADMINISTRATION DRUG ABSORPTION DRUG DISTRIBUTION FREE VERSUS BOUND DRUGS DRUG METABOLISM

DRUG ELIMINATION PHARMACOKINETICS SPECIMEN COLLECTION PHARMACOGENOMICS CARDIOACTIVE DRUGS Digoxin Quinidine Procainamide Disopyramide ANTIBIOTICS Aminoglycosides Teicoplanin Vancomycin ANTIEPILEPTIC DRUGS Phenobarbital and Primidone Phenytoin and Fosphenytoin Valproic Acid Carbamazepine Ethosuximide Felbamate Gabapentin Lamotrigine Levetiracetam Oxcarbazepine Tiagabine Topiramate Zonisamide

PSYCHOACTIVE DRUGS Lithium Tricyclic Antidepressants Clozapine Olanzapine IMMUNOSUPPRESSIVE DRUGS Cyclosporine Tacrolimus Sirolimus Mycophenolic Acid ANTINEOPLASTICS Methotrexate BRONCHODILATORS Theophylline QUESTIONS SUGGESTED READINGS REFERENCES 31 Toxicology Takara L. Blamires XENOBIOTICS, POISONS, AND TOXINS ROUTES OF EXPOSURE DOSE–RESPONSE RELATIONSHIP Acute and Chronic Toxicity ANALYSIS OF TOXIC AGENTS TOXICOLOGY OF SPECIFIC AGENTS Alcohols Carbon Monoxide

Caustic Agents Cyanide Metals and Metalloids Pesticides TOXICOLOGY OF THERAPEUTIC DRUGS Salicylates Acetaminophen TOXICOLOGY OF DRUGS OF ABUSE Amphetamines Anabolic Steroids Cannabinoids Cocaine Opiates Phencyclidine Sedatives–Hypnotics QUESTIONS REFERENCES 32 Circulating Tumor Markers: Basic Concepts and Clinical Applications Christopher R. McCudden and Monte S. Willis TYPES OF TUMOR MARKERS APPLICATIONS OF TUMOR MARKER DETECTION Screening and Susceptibility Testing Prognosis Monitoring Effectiveness of Therapy and Disease Recurrence LABORATORY CONSIDERATIONS FOR TUMOR MARKER MEASUREMENT

Immunoassays High-Performance Liquid Chromatography Immunohistochemistry and Immunofluorescence Enzyme Assays FREQUENTLY ORDERED TUMOR MARKERS α-Fetoprotein METHODOLOGY Cancer Antigen 125 Carcinoembryonic Antigen Human Chorionic Gonadotropin Prostate-Specific Antigen FUTURE DIRECTIONS QUESTIONS SUGGESTED READING REFERENCES 33 Nutrition Assessment Linda S. Gorman and Maria G. Boosalis NUTRITION CARE PROCESS: OVERVIEW NUTRITION ASSESSMENT BIOCHEMICAL MARKERS: MACRONUTRIENTS Protein Fat Carbohydrate BIOCHEMICAL MARKERS: MISCELLANEOUS Parenteral Nutrition Electrolytes Urine Testing

Organ Function BIOCHEMICAL MARKERS: MICRONUTRIENTS Vitamins Conditionally Essential Nutrients Minerals Trace Elements QUESTIONS REFERENCES 34 Clinical Chemistry and the Geriatric Patient Laura M. Hickes and J. Marvin McBride THE AGING OF AMERICA AGING AND MEDICAL SCIENCE GENERAL PHYSIOLOGIC CHANGES WITH AGING Muscle Bone Gastrointestinal System Kidney/Urinary System Immune System Endocrine System Sex Hormones Glucose Metabolism EFFECTS OF AGE ON LABORATORY TESTING Muscle Bone Gastrointestinal System Urinary System Immune System

Endocrine System Sex Hormones Glucose Metabolism ESTABLISHING REFERENCE INTERVALS FOR THE ELDERLY PREANALYTICAL VARIABLES UNIQUE TO GERIATRIC PATIENTS DISEASES PREVALENT IN THE ELDERLY AGE-ASSOCIATED CHANGES IN DRUG METABOLISM Absorption Distribution Metabolism Elimination ATYPICAL PRESENTATIONS OF COMMON DISEASES Geriatric Syndromes THE IMPACT OF EXERCISE AND NUTRITION ON CHEMISTRY RESULTS IN THE ELDERLY QUESTIONS REFERENCES 35 Clinical Chemistry and the Pediatric Patient Tracey G. Polsky and Michael J. Bennett DEVELOPMENTAL CHANGES FROM NEONATE TO ADULT Respiration and Circulation Growth Organ Development Problems of Prematurity and Immaturity PHLEBOTOMY AND CHOICE OF INSTRUMENTATION FOR

PEDIATRIC SAMPLES Phlebotomy Preanalytic Concerns Choice of Analyzer POINT-OF-CARE ANALYSIS IN PEDIATRICS REGULATION OF BLOOD GASES AND PH IN NEONATES AND INFANTS Blood Gas and Acid–Base Measurement REGULATION OF ELECTROLYTES AND WATER: RENAL FUNCTION Disorders Affecting Electrolytes and Water Balance DEVELOPMENT OF LIVER FUNCTION Physiologic Jaundice Energy Metabolism Diabetes Nitrogen Metabolism Nitrogenous End Products as Markers of Renal Function Liver Function Tests CALCIUM AND BONE METABOLISM IN PEDIATRICS Hypocalcemia and Hypercalcemia ENDOCRINE FUNCTION IN PEDIATRICS Hormone Secretion Hypothalamic–Pituitary–Thyroid System Hypothalamic–Pituitary–Adrenal Cortex System Growth Factors Endocrine Control of Sexual Maturation DEVELOPMENT OF THE IMMUNE SYSTEM Basic Concepts of Immunity

Components of the Immune System Neonatal and Infant Antibody Production Immunity Disorders GENETIC DISEASES Cystic Fibrosis Newborn Screening for Whole Populations Diagnosis of Metabolic Disease in the Clinical Setting DRUG METABOLISM AND PHARMACOKINETICS Therapeutic Drug Monitoring Toxicologic Issues in Pediatric Clinical Chemistry QUESTIONS REFERENCES Index

PART one Basic Principles and Practice of Clinical Chemistry

1 Basic Principles and Practices KATHRYN DUGAN and ELIZABETH WARNING

Chapter Outline Units of Measure Reagents Chemicals Reference Materials Water Specifications Solution Properties Concentration Colligative Properties Redox Potential Conductivity pH and Buffers

Clinical Laboratory Supplies Thermometers/Temperature Glassware and Plasticware Desiccators and Desiccants Balances Centrifugation

Laboratory Mathematics and Calculations Significant Figures Logarithms Concentration Dilutions Water of Hydration Graphing and Beer's Law

Specimen Considerations Types of Samples Sample Processing Sample Variables Chain of Custody Electronic and Paper Reporting of Results

Questions References Chapter Objectives Upon completion of this chapter, the clinical laboratorian should be able to do the following: Convert results from one unit format to another using the SI and traditional systems. Describe the classifications used for reagent grade water. Identify the varying chemical grades used in reagent preparation and indicate their correct use. Define primary standard and standard reference materials. Describe the following terms that are associated with solutions and, when appropriate, provide the respective units: percent, molarity, normality, molality, saturation, colligative properties, redox potential, and conductivity. Define a buffer and give the formula for pH and pK calculations. Use the Henderson-Hasselbalch equation to determine the missing variable when given either the pK and pH or the pK and concentration of the weak acid and its conjugate base. List and describe the types of thermometers used in the clinical laboratory. Classify the type of pipette when given an actual pipette or its description. Demonstrate the proper use of a measuring and volumetric pipette. Describe two ways to calibrate a pipetting device. Define a desiccant and discuss how it is used in the clinical laboratory. Describe how to properly care for and balance a centrifuge. Correctly perform the laboratory mathematical calculations provided in this chapter. Identify and describe the types of samples used in clinical chemistry. Outline the general steps for processing blood samples. Apply Beer's law to determine the concentration of a sample when the absorbance or change in absorbance is provided. Identify the preanalytic variables that can adversely affect laboratory results as presented in this chapter.

Key Terms Analyte Anhydrous Arterial blood Beer's law Buffer Centrifugation Cerebrospinal fluid (CSF) Colligative property Conductivity Deionized water Deliquescent substance Delta absorbance Density Desiccant

Desiccator Dilution Dilution factor Distilled water Equivalent weight Erlenmeyer flasks Filtration Graduated cylinder Griffin Beaker Hemolysis Henderson-Hasselbalch equation Hydrate Hygroscopic Icterus International unit Ionic strength Lipemia Molality Molarity Normality One-point calibration Osmotic pressure Oxidized Oxidizing agent Percent solution pH Pipette Primary standard Ratio Reagent grade water Redox potential Reduced Reducing agent Reverse osmosis Serial dilution Serum Significant figures Solute Solution Solvent Specific gravity Standard Standard reference materials (SRMs) Système International d'Unités (SI) Thermistor

Ultrafiltration Valence Whole blood

The primary purpose of a clinical chemistry laboratory is to perform analytic procedures that yield accurate and precise information, aiding in patient diagnosis and treatment. The achievement of reliable results requires that the clinical laboratory scientist be able to correctly use basic supplies and equipment and possess an understanding of fundamental concepts critical to any analytic procedure. The topics in this chapter include units of measure, basic laboratory supplies, and introductory laboratory mathematics, plus a brief discussion of specimen collection, processing, and reporting.

UNITS OF MEASURE Any meaningful quantitative laboratory result consists of two components: the first component represents the number related to the actual test value and the second is a label identifying the units. The unit defines the physical quantity or dimension, such as mass, length, time, or volume.1 Not all laboratory tests have well-defined units, but whenever possible, the units used should be reported. Although several systems of units have traditionally been utilized by various scientific divisions, the Système International d'Unités (SI), adopted internationally in 1960, is preferred in scientific literature and clinical laboratories and is the only system employed in many countries. This system was devised to provide the global scientific community with a uniform method of describing physical quantities. The SI system units (referred to as SI units) are based on the metric system. Several subclassifications exist within the SI system, one of which is the basic unit. There are seven basic units (Table 1.1), with length (meter), mass (kilogram), and quantity of a substance (mole) being the units most frequently encountered. Another set of SI-recognized units is termed derived units. A derived unit, as the name implies, is a derivative or a mathematical function describing one of the basic units. An example of an SI-derived unit is meters per second (m/s), used to express velocity. Some non-SI units are so widely used that they have become acceptable for use within the SI system (Table 1.1). These include long-standing units such as hour, minute, day, gram, liter, and plane angles expressed as degrees. The SI uses standard prefixes that, when added to a given basic unit, can indicate decimal fractions or multiples of that unit (Table 1.2). For example, 0.001 liter can be expressed using the prefix milli, or 10−3, and since it requires moving the decimal point

three places to the right, it can then be written as 1 milliliter, or abbreviated as 1 mL. It may also be written in scientific notation as 1 × 10−3 L. Likewise, 1,000 liters would use the prefix of kilo (103) and could be written as 1 kiloliter or expressed in scientific notation as 1 × 103 L. TABLE 1.1 SI Units

TABLE 1.2 Prefixes Used with SI Units

Prefixes are used to indicate a subunit or multiple of a basic SI unit. It is important to understand the relationship these prefixes have to the basic unit. The highlighted upper portion of Table 1.2 indicates prefixes that are smaller than the basic unit and are frequently used in clinical laboratories. When converting between prefixes, simply note the relationship between the two prefixes based on whether you are changing to a smaller or larger prefix. For example, if converting from one liter (1.0 × 100 or 1.0) to milliliters (1.0 × 10−3 or 0.001), the starting unit (L) is larger than the desired unit by a factor of 1,000 or 103. This means that the decimal place would be moved to the right three places, so 1.0 liter (L) equals 1,000 milliliters (mL). When changing 1,000

milliliters (mL) to 1.0 liter (L), the process is reversed and the decimal point would be moved three places to the left to become 1.0 L. Note that the SI term for mass is kilogram; it is the only basic unit that contains a prefix as part of its name. Generally, the standard prefixes for mass use the term gram rather than kilogram.

Example 1: Convert 1.0 L to μL 1.0 L (1 × 100) = ? μL (micro = 10−6); move the decimal place six places to the right and it becomes 1,000,000 μL; reverse the process to determine the expression in L (move the decimal six places to the left of 1,000,000 μL to get 1.0 L).

Example 2: Convert 5 mL to μL 5 mL (milli = 10−3, larger) = ? μL (micro = 10−6, smaller); move the decimal by three places to the right and it becomes 5,000 μL.

Example 3: Convert 5.3 mL to dL 5.3 mL (milli = 10−3, smaller) = ? dL (deci = 10−1, larger); move the

decimal place by two places to the left and it becomes 0.053 dL. Reporting of laboratory results is often expressed in terms of substance concentration (e.g., moles) or the mass of a substance (e.g., mg/dL, g/dL, g/L, mmol/L, and IU) rather than in SI units. These familiar and traditional units can cause confusion during interpretation. Appendix D (on thePoint), Conversion of Traditional Units to SI Units for Common Clinical Chemistry Analytes, lists both reference and SI units together with the conversion factor from traditional to SI units for common analytes. As with other areas of industry, the laboratory and the rest of medicine are moving toward adopting universal standards promoted by the International Organization for Standardization, often referred to as ISO. This group develops standards of practice, definitions, and guidelines that can be adopted by everyone in a given field, providing for more uniform terminology and less confusion. Many national initiatives have recommended common units for laboratory test results, but none have been widely adopted.2 As with any transition, clinical laboratory scientists should be familiar with all the terms currently used in their field.
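The decimal-point shifts in the examples above, as well as conversions between traditional and SI units, amount to multiplying by a power of ten or by a molar conversion factor. The short Python sketch below is illustrative only and is not part of the text; the prefix table is a small subset of Table 1.2, and the glucose factor (dividing mg/dL by 18.02 to obtain mmol/L) is a commonly published conversion used here purely as an assumed example.

```python
# Powers of ten for a few common SI prefixes (subset of Table 1.2).
PREFIXES = {"k": 3, "": 0, "d": -1, "c": -2, "m": -3, "u": -6, "n": -9}

def convert(value, from_prefix, to_prefix):
    """Shift the decimal point by the difference between the two prefix exponents."""
    shift = PREFIXES[from_prefix] - PREFIXES[to_prefix]
    return value * 10 ** shift

print(convert(1.0, "", "u"))   # 1.0 L  -> 1,000,000 uL (Example 1)
print(convert(5.0, "m", "u"))  # 5 mL   -> 5,000 uL     (Example 2)
print(convert(5.3, "m", "d"))  # 5.3 mL -> 0.053 dL     (Example 3)

# Traditional-to-SI example (assumed values): glucose reported in mg/dL
# converted to mmol/L by dividing by 18.02 (gram molecular weight of glucose
# per 10 dL/L), a widely published conversion factor.
glucose_mg_dl = 90
print(glucose_mg_dl / 18.02)   # ~5.0 mmol/L
```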

REAGENTS In today's highly automated laboratory, there seems to be little need for reagent preparation by the clinical laboratory scientist. Most instrument manufacturers make the reagents in a ready-to-use form or “kit” where all necessary reagents and respective storage containers are prepackaged as a unit requiring only the addition of water or buffer to the prepackaged components for reconstitution. A heightened awareness of the hazards of certain chemicals and the numerous regulatory agency requirements has caused clinical chemistry laboratories to readily eliminate massive stocks of chemicals and opt instead for the ease of using prepared reagents. Periodically, especially in hospital laboratories involved in research and development, biotechnology applications, specialized analyses, or method validation, the laboratorian may still face preparing various reagents or solutions.

Chemicals Analytic chemicals exist in varying grades of purity: analytic reagent (AR); ultrapure; chemically pure (CP); United States Pharmacopeia (USP); National Formulary (NF); and technical or commercial grade.3 A committee of the American Chemical Society (ACS) established specifications for AR grade

chemicals, and chemical manufacturers will either meet or exceed these requirements. Labels on reagents state the actual impurities for each chemical lot or list the maximum allowable impurities. The labels should be clearly printed with the percentage of impurities present and either the initials AR or ACS or the term For laboratory use or ACS Standard-Grade Reference Materials. Chemicals of this category are suitable for use in most analytic laboratory procedures. Ultrapure chemicals have been put through additional purification steps for use in specific procedures such as chromatography, atomic absorption, immunoassays, molecular diagnostics, standardization, or other techniques that require extremely pure chemicals. These reagents may carry designations of HPLC (high-performance liquid chromatography) or chromatographic on their labels. Because USP and NF grade chemicals are used to manufacture drugs, the limitations established for this group of chemicals are based only on the criterion of not being injurious to individuals. Chemicals in this group may be pure enough for use in most chemical procedures; however, it should be recognized that the purity standards are not based on the needs of the laboratory and, therefore, may or may not meet all assay requirements. Reagent designations of CP or pure grade indicate that the impurity limitations are not stated and that preparation of these chemicals is not uniform. It is not recommended that clinical laboratories use these chemicals for reagent preparation unless further purification or a reagent blank is included. Technical or commercial grade reagents are used primarily in manufacturing and should never be used in the clinical laboratory. Organic reagents also have varying grades of purity that differ from those used to classify inorganic reagents. These grades include a practical grade with some impurities; CP, which approaches the purity level of reagent grade chemicals; spectroscopic (spectrally pure) and chromatographic grade organic reagents, with purity levels attained by their respective procedures; and reagent grade (ACS), which is certified to contain impurities below certain levels established by the ACS. As in any analytic method, the desired organic reagent purity is dictated by the particular application. Other than the purity aspects of the chemicals, laws related to the Occupational Safety and Health Administration (OSHA)4 require manufacturers to indicate any physical or biologic health hazards and precautions needed for the safe use, storage, and disposal of any chemical. A manufacturer is required to provide technical data sheets for each chemical manufactured on a document

called a Safety Data Sheet (SDS).

Reference Materials Unlike other areas of chemistry, clinical chemistry is involved in the analysis of biochemical by-products found in biological fluids, such as serum, plasma, or urine, making purification and a known exact composition of the material almost impossible. For this reason, traditionally defined standards used in analytical chemistry do not readily apply in clinical chemistry. A primary standard is a highly purified chemical that can be measured directly to produce a substance of exact known concentration and purity. The ACS has purity tolerances for primary standards; because most biologic constituents are unavailable within these tolerance limitations, National Institute of Standards and Technology (NIST)-certified standard reference materials (SRMs) are used instead of ACS primary standard materials.5,6,7 The NIST developed certified reference materials/SRMs for use in clinical chemistry laboratories. They are assigned a value after careful analysis, using state-of-the-art methods and equipment. The chemical composition of these substances is then certified; however, they may not possess the purity equivalent of a primary standard. Because each substance has been characterized for certain chemical or physical properties, it can be used in place of an ACS primary standard in clinical work and is often used to verify calibration or accuracy/bias assessments. Many manufacturers use an NIST SRM when producing calibrator and standard materials, and in this way, these materials are considered “traceable to NIST” and may meet certain accreditation requirements. There are SRMs for a number of routine analytes, hormones, drugs, and blood gases, with others being added.5

Water Specifications8 Water is the most frequently used reagent in the laboratory. Because tap water is unsuitable for laboratory applications, most procedures, including reagent and standard preparation, use water that has been substantially purified. There are various methods for water purification including distillation, ion exchange, reverse osmosis, ultrafiltration, ultraviolet light, sterilization, and ozone treatment. Laboratory requirements generally call for reagent grade water that, according to the Clinical and Laboratory Standards Institute (CLSI), is classified into one of six categories based on the specifications needed for its use rather

than the method of purification or preparation.9,10 These categories include clinical laboratory reagent water (CLRW), special reagent water (SRW), instrument feed water, water supplied by method manufacturer, autoclave and wash water, and commercially bottled purified water. Laboratories need to assess whether the water meets the specifications needed for its application. Most water-monitoring parameters include at least microbiological count, pH, resistivity (measure of resistance in ohms and influenced by the number of ions present), silicate, particulate matter, and organics. Each category has a specific acceptable limit. A long-held convention for categorizing water purity was based on three types, I through III, with type I water having the most stringent requirements and generally suitable for routine laboratory use. Prefiltration can remove particulate matter from municipal water supplies before any additional treatments. Filtration cartridges are composed of glass; cotton; activated charcoal, which removes organic materials and chlorine; and submicron filters (≤0.2 μm), which remove any substances larger than the filter's pores, including bacteria. The use of these filters depends on the quality of the municipal water and the other purification methods used. For example, hard water (containing calcium, iron, and other dissolved elements) may require prefiltration with a glass or cotton filter rather than activated charcoal or submicron filters, which quickly become clogged and are expensive to use. The submicron filter may be better suited after distillation, deionization, or reverse osmosis treatment. Distilled water has been purified to remove almost all organic materials, using a technique of distillation much like that found in organic chemistry laboratory distillation experiments in which water is boiled and vaporized. Many impurities do not rise in the water vapor and will remain in the boiling apparatus so that the water collected after condensation has less contamination. Water may be distilled more than once, with each distillation cycle removing additional impurities. Ultrafiltration and nanofiltration, like distillation, are excellent in removing particulate matter, microorganisms, and any pyrogens or endotoxins. Deionized water has some or all ions removed, although organic material may still be present, so it is neither pure nor sterile. Generally, deionized water is purified from previously treated water, such as prefiltered or distilled water. Deionized water is produced using either an anion or a cation exchange resin, followed by replacement of the removed ions with hydroxyl or hydrogen ions. The ions that are anticipated to be removed from the water will dictate the type of ion exchange resin to be used. One column cannot service all ions present in water. A combination of several resins will produce different grades of deionized

water. A two-bed system uses an anion resin followed by a cation resin. The different resins may be in separate columns or in the same column. This process is excellent in removing dissolved ionized solids and dissolved gases. Reverse osmosis is a process that uses pressure to force water through a semipermeable membrane, producing water that reflects a filtered product of the original water. It does not remove dissolved gases. Reverse osmosis may be used for the pretreatment of water. Ultraviolet oxidation (which removes some trace organic material) or sterilization processes at specific wavelengths, when used in combination with ozone treatment, can destroy bacteria but may leave behind residual products. These techniques are often used after other purification processes have been completed. Production of reagent grade water largely depends on the condition of the feed water. Generally, reagent grade water can be obtained by initially filtering it to remove particulate matter, followed by reverse osmosis, deionization, and a 0.2-μm filter or more restrictive filtration process. Type III/autoclave wash water is acceptable for glassware washing but not for analysis or reagent preparation. Traditionally, type II water was acceptable for most analytic requirements, including reagent, quality control, and standard preparation, while type I water was used for test methods requiring minimum interference, such as trace metal, iron, and enzyme analyses. Use with HPLC may require a final filtration step of less than 0.2 μm and falls into the SRW category. Some molecular diagnostic or mass spectrometric techniques may require special reagent grade water; some reagent grade water should be used immediately, so storage is discouraged because the resistivity changes. Depending on the application, CLRW should be stored in a manner that reduces any chemical or bacterial contamination and for short periods. Testing procedures to determine the quality of reagent grade water include measurements of resistance, pH, colony counts on selective and nonselective media for the detection of bacterial contamination, chlorine, ammonia, nitrate or nitrite, iron, hardness, phosphate, sodium, silica, carbon dioxide, chemical oxygen demand, and metal detection. Some accreditation agencies11 recommend that laboratories document culture growth, pH, and specific resistance on water used in reagent preparation. Resistance is measured because pure water, devoid of ions, is a poor conductor of electricity and has increased resistance. The relationship of water purity to resistance is linear. Generally, as purity increases, so does resistance. This one measurement does not suffice for determination of

true water purity because a nonionic contaminant may be present that has little effect on resistance. Note that reagent water meeting specifications from other organizations, such as the ASTM, may not be equivalent to those established by the CLSI, and care should be taken to meet the assay's procedural requirements for water type.

Solution Properties In clinical chemistry, substances found in biologic fluids including serum, plasma, urine, and spinal fluid are quantified. A substance that is dissolved in a liquid is called a solute; in laboratory science, these biologic solutes are also known as analytes. The liquid in which the solute is dissolved—in this instance, a biologic fluid—is the solvent. Together they represent a solution. Any chemical or biologic solution is described by its basic properties, including concentration, saturation, colligative properties, redox potential, conductivity, density, pH, and ionic strength.

Concentration Analyte concentration in solution can be expressed in many ways. Routinely, concentration is expressed as percent solution, molarity, molality, or normality. Note that these are non-SI units, and the SI expression for the amount of a substance is the mole. Percent solution is expressed as the amount of solute per 100 total units of solution. Three expressions of percent solutions are weight per weight (w/w), volume per volume (v/v), and weight per volume (w/v). Weight per weight (% w/w) refers to the number of grams of solute per 100 g of solution. Volume per volume (% v/v) is used for liquid solutes and gives the milliliters of solute in 100 mL of solution. For v/v solutions, it is recommended that milliliters per deciliter (mL/dL) be used instead of % v/v. Weight per volume (% w/v) is the most commonly used percent solution in the clinical laboratory and is defined as the number of grams of solute in 100 mL of solution. This is not the same as molarity and care must be taken to not confuse the two. Molarity (M) is expressed as the number of moles per 1 L of solution. One mole of a substance equals its gram molecular weight (gmw), so the customary units of molarity (M) are moles/liter. The SI representation for the traditional molar concentration is moles of solute per volume of solution, with the volume of the solution given in liters. The SI expression for concentration should be represented as moles per liter (mol/L), millimoles per liter (mmol/L),

micromoles per liter (μmol/L), and nanomoles per liter (nmol/L). The familiar concentration term molarity has not been adopted by the SI as an expression of concentration. It should also be noted that molarity depends on volume, and any significant physical changes that influence volume, such as changes in temperature and pressure, will also influence molarity. Molality (m) represents the amount of solute per 1 kg of solvent. Molality is sometimes confused with molarity; however, it can be easily distinguished from molarity because molality is always expressed in terms of moles per kilogram (weight per weight) and describes moles per 1,000 g (1 kg) of solvent. Note that the common abbreviation (m) for molality is a lowercase “m,” while the uppercase (M) refers to molarity. The preferred expression for molality is moles per kilogram (mol/kg) to avoid any confusion. Unlike molarity, molality is not influenced by temperature or pressure because it is based on mass rather than volume. Normality is the least likely of the four concentration expressions to be encountered in clinical laboratories, but it is often used in chemical titrations and chemical reagent classification. It is defined as the number of gram equivalent weights per 1 L of solution. An equivalent weight is equal to the gmw of a substance divided by its valence. The valence is the number of units that can combine with or replace 1 mole of hydrogen ions for acids and hydroxyl ions for bases and the number of electrons exchanged in oxidation–reduction reactions. Normality is always equal to or greater than the molarity of the compound. Normality was previously used for reporting electrolyte values, such as sodium [Na+], potassium [K+], and chloride [Cl−], expressed as milliequivalents per liter (mEq/L); however, this convention has been replaced with the more familiar units of millimoles per liter (mmol/L). Solution saturation gives little specific information about the concentration of solutes in a solution. A solution is considered saturated when no more solute can be dissolved in the solution. Temperature, as well as the presence of other ions, can influence the solubility constant for a solute in a given solution and thus affect the saturation. Routine terms in the clinical laboratory that describe the extent of saturation are dilute, concentrated, saturated, and supersaturated. A dilute solution is one in which there is relatively little solute or one that has a lower solute concentration per volume of solvent than the original, such as when making a dilution. In contrast, a concentrated solution has a large quantity of solute in solution. A solution in which there is an excess of undissolved solute particles can be referred to as a saturated solution. As the name implies, a supersaturated solution has an even greater concentration of dissolved solute

particles than a saturated solution of the same substance. Because of the greater concentration of solute particles, a supersaturated solution is thermodynamically unstable. The addition of a crystal of solute or mechanical agitation disturbs the supersaturated solution, resulting in crystallization of any excess material out of solution. An example is seen when measuring serum osmolality by freezing point depression.
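A brief worked calculation can tie the percent, molarity, and normality expressions together. The following Python sketch is a hypothetical illustration only; the 8.5 g of sodium chloride in 1.0 L of solution and its gram molecular weight (58.44) are assumed example values, not data from the text.

```python
# Illustrative concentration calculations for an assumed example: 8.5 g NaCl
# dissolved in enough water to make 1.0 L of solution.
gmw_nacl = 58.44      # gram molecular weight, g/mol (assumed example compound)
valence = 1           # NaCl provides one replaceable unit per formula weight
mass_g = 8.5
volume_l = 1.0

percent_wv = mass_g / (volume_l * 10)                # grams per 100 mL -> % w/v
molarity = (mass_g / gmw_nacl) / volume_l            # mol/L
equivalent_weight = gmw_nacl / valence               # gmw divided by valence
normality = (mass_g / equivalent_weight) / volume_l  # Eq/L; equals molarity when valence = 1

print(percent_wv)   # 0.85 % w/v
print(molarity)     # ~0.145 mol/L
print(normality)    # ~0.145 Eq/L
```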

Colligative Properties Colligative properties are those properties that depend on the number of solute particles per solvent molecule, not on the type of particles present. The behavior of particles or solutes in solution demonstrates four repeatable properties: osmotic pressure, vapor pressure, freezing point, and boiling point; these are called colligative properties. Vapor pressure is the pressure exerted by the vapor when the liquid solvent is in equilibrium with the vapor. Freezing point is the temperature at which the first crystal (solid) of solvent forms in equilibrium with the solution. Boiling point is the temperature at which the vapor pressure of the solvent reaches atmospheric pressure (usually one atmosphere). Osmotic pressure is the pressure that opposes osmosis when a solvent flows through a semipermeable membrane to establish equilibrium between compartments of differing concentration. The osmotic pressure of a dilute solution is directly proportional to the concentration of the molecules in solution. The expression for concentration is the osmole. One osmole of a substance equals the molarity or molality multiplied by the number of particles, not the kind of particle, at dissociation. If molarity is used, the resulting expression would be termed osmolarity; if molality is used, the expression changes to osmolality. Osmolality is preferred since it depends on the weight rather than volume and is not readily influenced by temperature and pressure changes. When a solute is dissolved in a solvent, the colligative properties change in a predictable manner for each osmole of substance present. In the clinical setting, freezing point depression and vapor pressure depression can be measured as a function of osmolality. Freezing point is preferred since vapor pressure measurements can give inaccurate readings when some substances, such as alcohols, are present in the samples.
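Because colligative properties depend only on the number of particles, the osmole calculation described above is a simple multiplication. A minimal sketch, using assumed example concentrations and the usual dissociation behavior of glucose (one particle) and NaCl (two particles):

```python
# Osmolality estimate: molality multiplied by the number of particles at dissociation.
def osmolality(molality_mol_per_kg, particles_per_formula_unit):
    return molality_mol_per_kg * particles_per_formula_unit

# Assumed illustrative values, not reference concentrations.
print(osmolality(0.005, 1))   # glucose, 0.005 mol/kg -> 0.005 osmol/kg
print(osmolality(0.140, 2))   # NaCl,    0.140 mol/kg -> 0.280 osmol/kg
```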

Redox Potential Redox potential, or oxidation–reduction potential, is a measure of the ability of a solution to accept or donate electrons. Substances that donate electrons are

called reducing agents; those that accept electrons are considered oxidizing agents. The mnemonic—LEO (lose electrons oxidized) the lion says GER (gain electrons reduced)—may prove useful when trying to recall the relationship between reducing/oxidizing agents and redox potential.

Conductivity Conductivity is a measure of how well electricity passes through a solution. A solution's conductivity depends principally on the number and respective charges of the ions present. Resistivity, the reciprocal of conductivity, is a measure of a substance's resistance to the passage of electrical current. The primary application of resistivity in the clinical laboratory is for assessing the purity of water. Resistivity or resistance is expressed as ohms, and conductivity is expressed as ohms−1.
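The reciprocal relationship between conductivity and resistivity can be stated in a single line of code. The sketch below uses an assumed conductivity reading purely for illustration; it is not a specification from the text.

```python
# Resistivity is the reciprocal of conductivity (and vice versa).
def resistivity_from_conductivity(conductivity_per_ohm):
    return 1.0 / conductivity_per_ohm

conductivity = 5.5e-8   # assumed reading in ohms^-1 (length convention omitted)
print(resistivity_from_conductivity(conductivity))  # ~1.8e7 ohms: low conductivity, high resistivity
```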

pH and Buffers Buffers are weak acids or bases and their related salts that, as a result of their dissociation characteristics, minimize changes in the hydrogen ion concentration. Hydrogen ion concentration is often expressed as pH. A lowercase p in front of certain letters or abbreviations operationally means the “negative logarithm of” or “inverse log of” that substance. In keeping with this convention, the term pH represents the negative or inverse log of the hydrogen ion concentration. Mathematically, pH is expressed as

pH = −log [H+] (Eq. 1-1)

where [H+] equals the concentration of hydrogen ions in moles per liter (M). The pH scale ranges from 0 to 14 and is a convenient way to express hydrogen ion concentration. Unlike a strong acid or base, which dissociates almost completely, the dissociation constant for a weak acid or base solution (like a buffer) tends to be very small, meaning little dissociation occurs. The dissociation of acetic acid (CH3COOH), a weak acid, can be illustrated as follows:

HA ⇌ H+ + A− (Eq. 1-2)

HA = weak acid, A− = conjugate base, H+ = hydrogen ions, [] = concentration of anything in the bracket. Sometimes, the conjugate base (A−) will be referred to as a “salt” since, physiologically, it will be associated with some type of cation such as sodium (Na+). Note that the dissociation constant, Ka, for a weak acid may be calculated using the following equation:

Ka = [H+][A−]/[HA] (Eq. 1-3)

Rearrangement of this equation reveals

[H+] = Ka × [HA]/[A−] (Eq. 1-4)

Taking the log of each quantity and then multiplying by minus 1 (−1), the equation can be rewritten as

−log [H+] = −log Ka − log ([HA]/[A−]) (Eq. 1-5)

By convention, lowercase p means “negative log of”; therefore, −log [H+] may be written as pH, and −log Ka may be written as pKa. The equation now becomes

pH = pKa − log ([HA]/[A−]) (Eq. 1-6)

Eliminating the minus sign in front of the log of the quantity results in an equation known as the Henderson-Hasselbalch equation, which mathematically describes the dissociation characteristics of weak acids (pKa) and

bases (pKb) and the effect on pH:

pH = pKa + log ([A−]/[HA]) (Eq. 1-7)

When the ratio of [A−] to [HA] is 1, the pH equals the pK and the buffer has its greatest buffering capacity. The dissociation constant Ka, and therefore the pKa, remains the same for a given substance. Any changes in pH are solely due to the ratio of conjugate base [A−] concentration to weak acid [HA] concentration. Ionic strength is another important aspect of buffers, particularly in separation techniques. Ionic strength is the concentration or activity of ions in a solution or buffer. Increasing ionic strength increases the ionic cloud surrounding a compound and decreases the rate of particle migration. It can also promote compound dissociation into ions, effectively increasing the solubility of some salts, along with changes in current, which can also affect electrophoretic separation.
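Equation 1-7 is straightforward to evaluate numerically. In the sketch below, the pKa of 4.76 for acetic acid is a commonly cited value used only as an example, and the concentrations are arbitrary illustrative figures.

```python
import math

def henderson_hasselbalch(pKa, conjugate_base, weak_acid):
    """pH = pKa + log10([A-]/[HA])  (Eq. 1-7)."""
    return pKa + math.log10(conjugate_base / weak_acid)

pKa_acetic = 4.76   # commonly cited value for acetic acid, used as an assumed example
print(henderson_hasselbalch(pKa_acetic, 0.1, 0.1))   # ratio 1 -> pH equals pKa (4.76)
print(henderson_hasselbalch(pKa_acetic, 0.2, 0.1))   # more conjugate base -> pH rises (~5.06)
```

Note that when the conjugate base and weak acid concentrations are equal, the logarithmic term is zero and the computed pH equals the pKa, matching the statement above about maximal buffering capacity.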

CLINICAL LABORATORY SUPPLIES In today's clinical chemistry laboratory, many different types of equipment are in use. Most of the manual techniques have been replaced by automation, but it is still necessary for the laboratory scientist to be knowledgeable in the operation and use of certain equipment. The following is a brief discussion of the composition and general use of common equipment found in a clinical chemistry laboratory, including thermometers, pipettes, flasks, beakers, and desiccators.

Thermometers/Temperature The predominant practice for temperature measurement uses the Celsius (°C) scale; however, Fahrenheit (°F) and Kelvin (°K) scales are also used.12 The SI designation for temperature is the Kelvin scale. Table 1.3 gives the conversion formulas between Fahrenheit and Celsius scales and Appendix C (thePoint) lists the various conversion formulas. TABLE 1.3 Common Temperature Conversions
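The scale conversions summarized in Table 1.3 follow the standard formulas relating Fahrenheit, Celsius, and Kelvin. A brief sketch of those standard formulas, with values chosen only for illustration:

```python
def f_to_c(deg_f):
    """Fahrenheit to Celsius."""
    return (deg_f - 32) * 5 / 9

def c_to_f(deg_c):
    """Celsius to Fahrenheit."""
    return deg_c * 9 / 5 + 32

def c_to_k(deg_c):
    """Celsius to Kelvin."""
    return deg_c + 273.15

print(f_to_c(98.6))   # 37.0 (normal body temperature in Celsius)
print(c_to_f(37.0))   # 98.6
print(c_to_k(25.0))   # 298.15
```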

All analytic reactions occur at an optimal temperature. Some laboratory procedures, such as enzyme determinations, require precise temperature control, whereas others work well over a wide range of temperatures. Reactions that are temperature dependent use some type of heating/cooling cell, heating/cooling block, or water/ice bath to provide the correct temperature environment. Laboratory refrigerator temperatures are often critical and need periodic verification. Thermometers either are an integral part of an instrument or need to be placed in the device for temperature maintenance. The two types of thermometers discussed here are the liquid-in-glass thermometer and the electronic thermometer, or thermistor probe; however, several other types of temperature-indicating devices are in use. Regardless of which is being used, all temperature-reading devices must be calibrated to ascertain accuracy. Liquid-in-glass thermometers use a colored liquid (red or other colored material) encased in plastic or glass material. They usually measure temperatures between 20°C and 400°C. Visual inspection of the liquid-in-glass thermometer should reveal a continuous line of liquid, free from separation or bubbles. The accuracy range for a thermometer used in clinical laboratories is determined by the specific application. Liquid-in-glass thermometers should be calibrated against an NIST-certified or NIST-traceable thermometer for critical laboratory applications.13 NIST has an SRM thermometer with various calibration points (0°C, 25°C, 30°C, and 37°C) for use with liquid-in-glass thermometers. Gallium, another SRM, has a known melting point and can also be used for thermometer verification. As automation advances and miniaturizes, the need for an accurate, fast-reading electronic thermometer (thermistor) has increased and is now routinely incorporated in many devices. The advantages of a thermistor over the more traditional liquid-in-glass thermometers are size and millisecond response time. Similar to the liquid-in-glass thermometers, the thermistor can be calibrated

against an SRM thermometer.

Glassware and Plasticware Until recently, laboratory supplies (e.g., pipettes, flasks, beakers, and burettes) consisted of some type of glass and could be correctly termed glassware. As plastic material was refined and made available to manufacturers, plastic has been increasingly used to make laboratory equipment. Before discussing general laboratory supplies, a brief summary of the types and uses of glass and plastic commonly seen today in laboratories is given (see Appendices G, H, and I on thePoint). Regardless of design, most laboratory supplies must satisfy certain tolerances of accuracy and fall into two classes of precision tolerance, either Class A or Class B as given by the American Society for Testing and Materials (ASTM).14,15 Those that satisfy Class A ASTM precision criteria are stamped with the letter “A” on the glassware and are preferred for laboratory applications. Class B glassware generally have twice the tolerance limits of Class A, even if they appear identical, and are often found in student laboratories where durability is needed. Vessels holding or transferring liquid are designed either to contain (TC) or to deliver (TD) a specified volume. As the names imply, the major difference is that TC devices do not deliver that same volume when the liquid is transferred into a container, whereas the TD designation means that the labware will deliver that amount. Glassware used in the clinical laboratory usually fall into one of the following categories: Kimax/Pyrex (borosilicate), Corex (aluminosilicate), high silica, Vycor (acid and alkali resistant), low actinic (amber colored), or flint (soda lime) glass used for disposable material.16 Whenever possible, routinely used clinical chemistry glassware should consist of high thermal borosilicate or aluminosilicate glass and meet the Class A tolerances recommended by the NIST/ASTM/ISO 9000. The manufacturer is the best source of information about specific uses, limitations, and accuracy specifications for glassware. Plasticware is beginning to replace glassware in the laboratory setting. The unique high resistance to corrosion and breakage, as well as varying flexibility, has made plasticware appealing. Relatively inexpensive, it allows most items to be completely disposable after each use. The major types of resins frequently used in the clinical chemistry laboratory are polystyrene, polyethylene, polypropylene, Tygon, Teflon, polycarbonate, and polyvinyl chloride. Again, the individual manufacturer is the best source of information concerning the proper use and limitations of any plastic material.

In most laboratories, glass or plastic that is in direct contact with biohazardous material is usually disposable. If not, it must be decontaminated according to appropriate protocols. Should the need arise, however, cleaning of glass or plastic may require special techniques. Immediately rinsing glass or plastic supplies after use, followed by washing with a powder or liquid detergent designed for cleaning laboratory supplies and several distilled water rinses, may be sufficient. Presoaking glassware in soapy water is highly recommended whenever immediate cleaning is impractical. Many laboratories use automatic dishwashers and dryers for cleaning. Detergents and temperature levels should be compatible with the material and the manufacturer's recommendations. To ensure that all detergent has been removed from the labware, multiple rinses with appropriate grade water are recommended. Check the pH of the final rinse water and compare it with the initial pH of the prerinse water. Detergent-contaminated water will have a more alkaline pH as compared with the pH of the appropriate grade water. Visual inspection should reveal spotless vessel walls. Any biologically contaminated labware should be disposed of according to the precautions followed by that laboratory. Some determinations, such as those used in assessing heavy metals or assays associated with molecular testing, require scrupulously clean or disposable glassware. Some applications may require plastic rather than glass because glass can absorb metal ions. Successful cleaning solutions are acid dichromate and nitric acid. It is suggested that disposable glass and plastic be used whenever possible. Dirty reusable pipettes should be placed immediately in a container of soapy water with the pipette tips up. The container should be long enough to allow the pipette tips to be covered with solution. A specially designed pipette-soaking jar and washing/drying apparatus are recommended. For each final water rinse, fresh reagent grade water should be provided. If possible, designate a pipette container for final rinses only. Cleaning brushes are available to fit almost any size glassware and are recommended for any articles that are washed routinely. Although plastic material is often easier to clean because of its nonwettable surface, it may not be appropriate for some applications involving organic solvents or autoclaving. Brushes or harsh abrasive cleaners should not be used on plasticware. Acid rinses or washes are not required. The initial cleaning procedure described in Appendix J (thePoint) can be adapted for plasticware as well. Ultrasonic cleaners can help remove debris coating the surfaces of glass or plasticware. Properly cleaned laboratory ware should be completely dried before using.

Laboratory Vessels Flasks, beakers, and graduated cylinders are used to hold solutions. Volumetric and Erlenmeyer flasks are two types of containers in general use in the clinical laboratory. A volumetric flask is calibrated to hold one exact volume of liquid (TC). The flask has a round, lower portion with a flat bottom and a long, thin neck with an etched calibration line. Volumetric flasks are used to bring a given reagent to its final volume with the prescribed diluent. When bringing the bottom of the meniscus to the calibration mark, a pipette should be used when adding the final drops of diluent to ensure maximum control is maintained and the calibration line is not missed. Erlenmeyer flasks and Griffin beakers are designed to hold different volumes rather than one exact amount. Because Erlenmeyer flasks and Griffin beakers are often used in reagent preparation, flask size, chemical inertness, and thermal stability should be considered. The Erlenmeyer flask has a wide bottom that gradually evolves into a smaller, short neck. The Griffin beaker has a flat bottom, straight sides, and an opening as wide as the flat base, with a small spout in the lip. Graduated cylinders are long, cylindrical tubes usually held upright by an octagonal or circular base. The cylinder has calibration marks along its length and is used to measure volumes of liquids. Graduated cylinders do not have the accuracy of volumetric labware. The sizes routinely used are 10, 25, 50, 100, 500, 1,000, and 2,000 mL. All laboratory utensils used in critical measurement should be Class A whenever possible to maximize accuracy and precision and thus decrease calibration time (Fig. 1.1 illustrates representative laboratory glassware).

FIGURE 1.1 Laboratory glassware.

Pipettes Pipettes are glass or plastic equipment used to transfer liquids; they may be reusable or disposable. Although pipettes may transfer any volume, they are usually used for volumes of 20 mL or less; larger volumes are usually transferred or dispensed using automated pipetting devices. Table 1.4 outlines the classification applied here. TABLE 1.4 Pipette Classification

Similar to other laboratory equipment, pipettes are designed to contain (TC) or to deliver (TD) a particular volume of liquid. The major difference is the amount of liquid needed to wet the interior surface of the pipette and the amount of any residual liquid left in the pipette tip. Most manufacturers stamp TC or TD near the top of the pipette to alert the user as to the type of pipette. Like other TC-designated labware, a TC pipette holds or contains a particular volume but does not dispense that exact volume, whereas a TD pipette will dispense the volume indicated. When using either pipette, the tip must be immersed in the intended transfer liquid to a level that will allow the tip to remain in solution after the volume of liquid has entered the pipette—without touching the vessel

walls. The pipette is held upright, not at an angle (Fig. 1.2). Using a pipette bulb or similar device, a slight suction is applied to the opposite end until the liquid enters the pipette and the meniscus is brought above the desired graduation line (Fig. 1.3A), and suction is then stopped. While the meniscus level is held in place, the pipette tip is raised slightly out of the solution and wiped with a laboratory tissue of any adhering liquid. The liquid is allowed to drain until the bottom of the meniscus touches the desired calibration mark (Fig. 1.3B). With the pipette held in a vertical position and the tip against the side of the receiving vessel, the pipette contents are allowed to drain into the vessel (e.g., test tube, cuvette, or flask). A blowout pipette has a continuous etched ring or two small, close, continuous rings located near the top of the pipette. This means that the last drop of liquid should be expelled into the receiving vessel. Without these markings, a pipette is self-draining, and the user allows the contents of the pipette to drain by gravity. The tip of the pipette should not be in contact with the accumulating fluid in the receiving vessel during drainage. With the exception of the Mohr pipette, the tip should remain in contact with the side of the vessel for several seconds after the liquid has drained. The pipette is then removed (Fig. 1.2).

FIGURE 1.2 Correct and incorrect pipette positions.

FIGURE 1.3 Pipetting technique. (A) Meniscus is brought above the desired graduation line. (B) Liquid is allowed to drain until the bottom of the meniscus touches the desired calibration mark.

FIGURE 1.4 Disposable transfer pipettes. Measuring or graduated pipettes are capable of dispensing several different volumes. Measuring pipettes are used to transfer reagents and to make dilutions and can be used to repeatedly transfer a particular solution. Because the graduation lines located on the pipette may vary, they should be indicated on the top of each pipette. For example, a 5-mL pipette can be used to measure 5, 4, 3, 2, or 1 mL of liquid, with further graduations between each milliliter. The pipette is designated as 5 in 1/10 increments (Fig. 1.5) and could deliver any volume in tenths of a milliliter, up to 5 mL. Another pipette, such as a 1-mL pipette, may be designed to dispense 1 mL and have subdivisions of hundredths of a milliliter. The markings at the top of a measuring or graduated pipette indicate the volume(s) it is designed to dispense. The subgroups of measuring or graduated pipettes are Mohr, serologic, and micropipettes. A Mohr pipette does not have graduations to the tip. It is a self-draining pipette, but the tip should not be allowed to touch the vessel while the pipette is draining. A serologic pipette has graduation marks to the tip and is generally a blowout pipette. A micropipette is a pipette with a total holding volume of less than 1 mL; it may be designed as either a Mohr or a serologic pipette.

FIGURE 1.5 Volume indication of a pipette.

The next major category is the transfer pipettes. These pipettes are designed to dispense one volume without further subdivisions. Ostwald-Folin pipettes are used with biologic fluids having a viscosity greater than that of water. They are blowout pipettes, indicated by two etched continuous rings at the top. The volumetric pipette is designed to dispense or transfer aqueous solutions and is always self-draining. The bulb-like enlargement in the pipette stem easily identifies the volumetric pipette. This type of pipette usually has the greatest degree of accuracy and precision and should be used when diluting standards, calibrators, or quality control material. They should only be used once prior to cleaning. Disposable transfer pipettes may or may not have calibration marks and are used to transfer solutions or biologic fluids without consideration of a specific volume. These pipettes should not be used in any quantitative analytic techniques (Fig. 1.4). The automatic pipette is the most routinely used pipette in today's clinical chemistry laboratory. Automatic pipettes come in a variety of types including fixed volume, variable volume, and multichannel. The term automatic, as used here, implies that the mechanism that draws up and dispenses the liquid is an integral part of the pipette. It may be a fully automated/self-operating, semiautomatic, or completely manually operated device. Automatic and semiautomatic pipettes have many advantages, including safety, stability, ease of use, increased precision, the ability to save time, and less cleaning required as a result of the contaminated portions of the pipette (e.g., the tips) often being disposable. Figure 1.6 illustrates many common automatic pipettes. A pipette associated with only one volume is termed a fixed volume, and models able to select different volumes are termed variable; however, only one volume may be used at a time. The available range of volumes is 1 μL to 5,000 mL. The widest volume range usually seen in a single pipette is 0.5 μL to 1,000 μL. A pipette with a pipetting capability of less than 1 mL is considered a micropipette, and a pipette that dispenses greater than 1 mL is called an automatic macropipette. Multichannel pipettes are able to attach multiple pipette tips to a single handle and can then be used to dispense a fixed volume of fluid to multiple wells, such as in delivery to a multiwell microtiter plate. In addition to classification by volume delivery amounts, automatic pipettes can also be categorized according to their mechanism: air-displacement, positive-displacement, and dispenser pipettes. An air-displacement pipette relies on a piston for creating suction to draw the sample into a disposable tip that must be changed after each use. The piston does not come in contact with the liquid. A positive-displacement pipette operates by moving the piston in the pipette tip or barrel, much like a

hypodermic syringe. It does not require a different tip for each use. Because of carryover concerns, rinsing and blotting between samples may be required. Dispensers and dilutor/dispensers are automatic pipettes that obtain the liquid from a common reservoir and dispense it repeatedly. The dispensing pipettes may be bottle-top, motorized, handheld, or attached to a dilutor. The dilutor often combines sampling and dispensing functions. Many automated pipettes use a wash between samples to eliminate carryover problems. However, to minimize carryover contamination with manual or semiautomatic pipettes, careful wiping of the tip may remove any liquid that adhered to the outside of the tip before dispensing any liquid. Care should be taken to ensure that the orifice of the pipette tip is not blotted, drawing sample from the tip. Another precaution in using manually operated semiautomatic pipettes is to move the plunger in a continuous and steady manner. These pipettes should be used according to the individual manufacturer's directions.

FIGURE 1.6 (A) Adjustable volume pipette. (B) Fixed volume pipette with disposable tips. (C) Multichannel pipette. (D) Multichannel pipette in use. Disposable one-use pipette tips are designed for use with air-displacement pipettes. The laboratory scientist should ensure that the pipette tip is seated snugly onto the end of the pipette and free from any deformity. Plastic tips used on air-displacement pipettes can vary. Different brands can be used for one particular pipette but they do not necessarily perform in an identical manner. Tips for positive-displacement pipettes are made of straight columns of glass or plastic. These tips must fit snugly to avoid carryover and can be used repeatedly without being changed after each use. As previously mentioned, these devices may need to be rinsed and dried between samples to minimize carryover. Class A pipettes, like all other Class A labware, do not need to be recalibrated by the laboratory. Automatic pipetting devices, as well as non–Class A materials, do need recalibration.17,18 Calibration of pipettes is done to verify

accuracy and precision of the device and may be required by the laboratory's accrediting agency. A gravimetric method (see Box 1-1) can accomplish this task by delivering and weighing a solution of known specific gravity, such as water. A currently calibrated analytic balance and at least Class 2 weights should be used. A pipette should be used only if it is within ±1.0% of the expected value following calibration.

BOX 1-1 Gravimetric Pipette Calibration
Materials
Pipette
10 to 20 pipette tips, if needed
Balance capable of accuracy and resolution to ±0.1% of dispensed volumetric weight
Weighing vessel large enough to hold volume of liquid
Type I/CLRW
Thermometer and barometer

Procedure
1. Record the weight of the vessel. Record the temperature of the water. It is recommended that all materials be at room temperature. Obtain the barometric pressure.
2. Place a small volume (0.5 mL) of the water into the container. To prevent effects from evaporation, it is desirable to loosely cover each container with a substance such as Parafilm. Avoid handling of the containers.
3. Weigh each container plus water to the nearest 0.1 mg or set the balance to zero.
4. Using the pipette to be tested, draw up the specified amount. Carefully wipe the outside of the tip. Care should be taken not to touch the end of the tip; this will cause liquid to be wicked out of the tip, introducing an inaccuracy as a result of technique.
5. Dispense the water into the weighed vessel. Touch the tip to the side.
6. Record the weight of the vessel.
7. Subtract the weight obtained in step 3 from that obtained in step 6. Record the result.
8. If plastic tips are used, change the tip between each dispensing. Repeat steps 1 to 6 for a minimum of nine additional times.
9. Obtain the average or mean of the weight of the water. Multiply the mean weight by the corresponding density of water at the given temperature and pressure. At 20°C, the density of water is 0.9982.
10. Determine the accuracy or the ability of the pipette to dispense the expected (selected or stated) volume according to the following formula:

(Eq. 1-8)

The manufacturer usually gives acceptable limitations for a particular pipette, but they should not be used if the value differs by more than 1.0% from the expected value. Precision can be indicated as the percent coefficient of variation (%CV) or standard deviation (SD) for a series of repetitive pipetting steps. A discussion of %CV and SD can be found in Chapter 3. The equations to calculate the SD and %CV are as follows:

(Eq. 1-9) Required imprecision is usually ±1 SD. The %CV will vary with the expected volume of the pipette, but the smaller the %CV value, the greater the precision. When n is large, the data are more statistically

valid.19,23
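For reference, a hedged sketch of these calculations in LaTeX. The accuracy expression below is one common way of comparing the mean delivered volume with the expected volume (the exact form of Equation 1-8 may differ), and the SD and %CV expressions are the standard sample statistics referenced above.

\[
\text{Accuracy (\%)} = \frac{\bar{V}_{\text{delivered}}}{V_{\text{expected}}} \times 100
\]
\[
SD = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}} \qquad \%CV = \frac{SD}{\bar{x}} \times 100
\]

where \(x_i\) represents each individual delivered volume (weight converted to volume) and \(\bar{x}\) is their mean.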

Although gravimetric validation is the most desirable method, pipette calibration may also be accomplished by using photometric methods, particularly for automatic pipetting devices. When a spectrophotometer is used, the molar absorptivity of a compound, such as potassium dichromate, is obtained. After an aliquot of diluent is pipetted, the change in concentration will reflect the volume of the pipette. Another photometric technique used to assess pipette accuracy compares the absorbances of dilutions of potassium dichromate, or another colored liquid with appropriate absorbance spectra, using Class A volumetric labware versus equivalent dilutions made with the pipetting device. These calibration techniques are time consuming and, therefore, impractical for use in daily checks. It is recommended that pipettes be checked initially and subsequently three or four times per year, or as dictated by the laboratory's accrediting agency. Many companies offer calibration services; the one chosen should also satisfy any accreditation requirements. A quick, daily check for many larger volume automatic pipetting devices involves the use of volumetric flasks. For example, a bottle-top dispenser that routinely delivers 2.5 mL of reagent may be checked by dispensing four aliquots of the reagent into a 10-mL Class A volumetric flask. The bottom of the meniscus should meet with the calibration line on the volumetric flask.

Syringes Syringes are sometimes used for transfer of small volumes (< 500 μL) in blood gas analysis or in separation techniques such as chromatography or electrophoresis (Fig. 1.7). The syringes are glass and have fine barrels. The plunger is often made of a fine piece of wire. Tips are not used when syringes are used for injection of sample into a gas chromatographic or high-pressure liquid chromatographic system. In electrophoresis work, however, disposable Teflon tips may be used.

FIGURE 1.7 Microliter glass syringe.

Desiccators and Desiccants Many compounds combine with water molecules to form loose chemical crystals. The compound and the associated water are called a hydrate. When the water of crystallization is removed from the compound, it is said to be anhydrous. Substances that take up water on exposure to atmospheric conditions are called hygroscopic. Materials that are very hygroscopic can remove moisture from the air as well as from other materials. These materials make excellent drying substances and are sometimes used as desiccants (drying agents) to keep other chemicals from becoming hydrated. If these compounds absorb enough water from the atmosphere to cause dissolution, they are known as deliquescent substances. Closed and sealed containers that contain desiccant material are referred to as desiccators and may be used to store more hygroscopic substances. Many sealed packets or shipping containers, often those that require refrigeration, include some type of small packet of desiccant material to prolong storage.

Balances A properly operating balance is essential in producing high-quality reagents and standards. However, because many laboratories discontinued in-house reagent preparation, balances may no longer be as widely used. Balances are classified according to their design, number of pans (single or double), and whether they are mechanical or electronic or classified by operating ranges. Analytic and electronic balances are currently the most popular in the

clinical laboratory. Analytic balances (Fig. 1.8) are required for the preparation of any primary standard. An analytic balance has a single pan enclosed by sliding transparent doors, which minimize environmental influences on pan movement; the substance to be weighed is placed in a tared weighing vessel on the sample pan. An optical scale allows the operator to visualize the mass of the substance. The weight range for certain analytic balances is from 0.01 mg to 160 g.

FIGURE 1.8 Analytic balance. Electronic balances (Fig. 1.9) are single-pan balances that use an electromagnetic force to counterbalance the weighed sample's mass. Their measurements equal the accuracy and precision of any available mechanical balance, with the advantage of a fast response time (< 10 seconds).

FIGURE 1.9 Electronic top-loading balance.

Test weights used for calibrating balances should be selected from the appropriate ANSI/ASTM Classes 1 through 4.20 The frequency of calibration is dictated by the accreditation/licensing guidelines for a specific laboratory. Balances should be kept scrupulously clean and be located in an area away from heavy traffic, large pieces of electrical equipment, and open windows. The balance should always be checked for level and adjusted, if necessary, before weighing occurs.

CENTRIFUGATION

Centrifugation is a process in which centrifugal force is used to separate solid matter from a liquid suspension. It is used in clinical chemistry to prepare samples (blood and body fluids) for analysis and also to concentrate urine sediment in urinalysis for microscopic viewing. When samples are not properly centrifuged, small fibrin clots and cells can cause erroneous results during analysis. The centrifuge separates the mixture based on the mass and density of the component parts. It consists of a head or rotor, carriers, or shields that are attached to the vertical shaft of a motor or air compressor and enclosed in a metal covering. The centrifuge always has a lid and an on/off switch; newer models have a locking lid for safety. Many models also include a brake or a built-in tachometer, which indicates speed, and some centrifuges are refrigerated. Centrifugal force depends on three variables: mass, speed, and radius. The speed is expressed in revolutions per minute (rpm), and the centrifugal force generated is expressed in

terms of relative centrifugal force (RCF) or gravities (g). The speed of the centrifuge is related to the RCF by the following equation:

RCF = 1.118 × 10⁻⁵ × r × (rpm)² (Eq. 1-10)

where 1.118 × 10⁻⁵ is a constant, determined from the angular velocity, and r is the radius in centimeters, measured from the center of the centrifuge axis to the bottom of the test tube shield or bucket. The RCF value may also be obtained from a nomogram similar to that found in Appendix F on thePoint. Centrifuge classification is based on several criteria, including benchtop (Fig. 1.10A) or floor model; refrigeration; rotor head (e.g., fixed, hematocrit, cytocentrifuge, swinging bucket [Fig. 1.10B], or angled); or maximum speed attainable (i.e., ultracentrifuge). Centrifuges are generally used to separate serum or plasma from the blood cells as the blood samples are being processed; to separate a supernatant from a precipitate during an analytic reaction; to separate two immiscible liquids, such as a lipid-laden sample; or to expel air.
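As an illustration of Equation 1-10 with hypothetical numbers (a rotor radius of 10 cm spun at 3,000 rpm; neither value comes from the text):

\[
RCF = 1.118 \times 10^{-5} \times 10\ \text{cm} \times (3{,}000\ \text{rpm})^2 \approx 1{,}006 \times g
\]

so a tube at that radius experiences roughly 1,000 times the force of gravity.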

FIGURE 1.10 (A) Benchtop centrifuge. (B) Swinging-bucket rotor.

Centrifuge care includes daily cleaning of any spills or debris, such as blood or glass, and ensuring that the centrifuge is properly balanced and free from any excessive vibrations. Balancing the centrifuge load is critical (Fig. 1.11). Many newer centrifuges will automatically decrease their speed if the load is not evenly distributed, but more often, the centrifuge will shake and vibrate or make more noise than expected. A centrifuge needs to be balanced by equalizing both the volume and weight distribution across the centrifuge head. Many laboratories make up "balance" tubes that approximate routinely used volumes and tube sizes (including the stopper on phlebotomy tubes) so that they can be matched against the patient sample tubes being centrifuged. A good rule of thumb is to load tubes evenly, with each tube counterbalanced by one of equal weight placed directly opposite it (Fig. 1.12). Exact positioning of tubes depends on the design of the centrifuge holders.

FIGURE 1.11 Properly balanced centrifuge. Colored circles represent counterbalanced positions for sample tubes.

FIGURE 1.12 Properly loaded centrifuge. The centrifuge cover should remain closed until the centrifuge has come to a complete stop to avoid any aerosol contamination. It is recommended that the timer, brushes (if present), and speed be periodically checked. The brushes, which are graphite bars attached to a retainer spring, create an electrical contact

in the motor. The specific manufacturer's service manual should be consulted for details on how to change brushes and on lubrication requirements. The speed of a centrifuge is easily checked using a tachometer or strobe light. The hole located in the lid of many centrifuges is designed for speed verification using these devices but may also represent an aerosol biohazard if the hole is uncovered. Accreditation agencies require periodic verification of centrifuge speeds.

LABORATORY MATHEMATICS AND CALCULATIONS

Significant Figures

Significant figures are the minimum number of digits needed to express a particular value in scientific notation without loss of accuracy. There are several rules in regard to identifying significant figures:
1. All nonzero numbers are significant (1, 2, 3, 4, 5, 6, 7, 8, 9).
2. All zeros between nonzero numbers are significant.
3. All zeros to the right of the decimal are not significant when followed by a nonzero number.
4. All zeros to the left of the decimal are not significant.
The number 814.2 has four significant figures, because in scientific notation, it is written as 8.142 × 10². The number 0.000641 has three significant figures, because the scientific notation expression for this value is 6.41 × 10⁻⁴. The zeros to the right of the decimal preceding the nonzero digits are merely holding decimal places and are not needed to properly express the number in scientific notation. However, by convention, trailing zeros following a decimal point are considered significant. For example, 10.00 has four significant figures.

Logarithms Logarithms are the inverse of exponential functions and can be related as such:

If X equals A raised to the power B, then B is the logarithm of X to the base A. This is read as "B is the log, base A, of X," where X must be a positive number, A is a positive number, and A cannot be equal to 1. Calculators with a

log function do not require conversion to scientific notation. To determine the original number from a log value, the process is done in reverse. This process is termed the antilogarithm. Most calculators require that you enter this value, use an inverse or secondary/shift function, and enter log. If given a log of 3.1525, the resulting value should be 1.424 × 103. Consult the specific manufacturer's directions of the calculator to become acquainted with the proper use of these functions.

pH (Negative Logarithms) In certain circumstances, the laboratory scientist must deal with negative logs. Such is the case with pH or pKa. As previously stated, the pH of a solution is defined as the negative log of the hydrogen ion concentration. The following is a convenient formula to determine the negative logarithm when working with pH or pKa:

pH = x − log N (Eq. 1-11)

where x is the negative exponent of 10 in the scientific notation expression and N is the decimal portion of that expression. For example, if the hydrogen ion concentration of a solution is 5.4 × 10⁻⁶, then x = 6 and N = 5.4. Substitute this information into Equation 1-11, and it becomes (Eq. 1-12) The logarithm of N (5.4) is equal to 0.7324, or 0.73. The pH becomes (Eq. 1-13) The same formula can be applied to obtain the hydrogen ion concentration of a solution when only the pH is given. Using a pH of 5.27, the equation becomes (Eq. 1-14) In this instance, the x term is always the next largest whole number. For this example, the next largest whole number is 6. Substituting for x, the equation becomes

(Eq. 1-15) A shortcut is to simply subtract the pH from x (6 − 5.27 = 0.73) and take the antilog of that answer, which is 5.37, or 5.4. The final answer is 5.4 × 10⁻⁶. Note that rounding, while allowed, can alter the answer. A more algebraically correct approach follows in Equations 1-16 through 1-18. Multiply all the variables by −1:

(Eq. 1-16) Solve the equation for the unknown quantity by adding a positive 6 to both sides of the equal sign, and the equation becomes (Eq. 1-17) The result is that log N equals 0.73; taking the antilogarithm gives N, which is 5.37, or 5.4: (Eq. 1-18) The hydrogen ion concentration for a solution with a pH of 5.27 is 5.4 × 10⁻⁶. Many scientific calculators have an inverse function that allows for more direct calculation of negative logarithms.
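A worked LaTeX sketch of the forward and reverse calculations described above, using the values from the example and the relationship pH = x − log N:

\[
[\mathrm{H^+}] = 5.4 \times 10^{-6}: \quad pH = 6 - \log 5.4 = 6 - 0.73 = 5.27
\]
\[
pH = 5.27: \quad 5.27 = 6 - \log N \;\Rightarrow\; \log N = 0.73 \;\Rightarrow\; N = 10^{0.73} \approx 5.4, \ \text{so}\ [\mathrm{H^+}] \approx 5.4 \times 10^{-6}
\]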

Concentration A detailed description of each concentration term (e.g., molarity and normality) may be found at the beginning of this chapter. The following discussion focuses on the basic mathematical expressions needed to prepare reagents of a stated concentration.

Percent Solution A percent solution is determined in the same manner regardless of whether weight/weight, volume/volume, or weight/volume units are used. Percent implies “parts per 100,” which is represented as percent (%) and is independent of the molecular weight of a substance.

Example 1.1 Weight/Weight (w/w)

To make up 250 g of a 5% aqueous solution of hydrochloric acid (using 12 M HCl), multiply the total amount by the percent expressed as a decimal. The 5% aqueous solution can be expressed as (Eq. 1-19) Therefore, the calculation becomes (Eq. 1-20) Another way of arriving at the answer is to set up a ratio so that Desired solution concentration = Final product of 12 M HCl

(Eq. 1-21)
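A sketch of the arithmetic for this example, using its stated values (5% of a 250-g total), in both the decimal and the ratio forms described above:

\[
\frac{5\ \text{g}}{100\ \text{g}} = 0.05, \qquad 0.05 \times 250\ \text{g} = 12.5\ \text{g of concentrated (12 M) HCl}
\]
\[
\frac{5\ \text{g}}{100\ \text{g}} = \frac{x}{250\ \text{g}} \;\Rightarrow\; x = 12.5\ \text{g}
\]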

Example 1.2 Weight/Volume (w/v) The most frequently used term for a percent solution is weight per volume, which is often expressed as grams per 100 mL of the diluent. To make up 1,000 mL of a 10% (w/v) solution of NaOH, use the preceding approach. Restate the w/v as a fraction:

Then, the calculation becomes 0.10 × 1,000 mL = 100 g or setting up a ratio so that

(Eq. 1-22) Therefore, add 100 g of NaOH to a 1,000-mL Class A volumetric flask and dilute to the calibration mark with reagent grade water.

Example 1.3 Volume/Volume (v/v)

Make up 50 mL of a 2% (v/v) concentrated hydrochloric acid solution.

Then, the calculation becomes

or using a ratio

(Eq. 1-23) Therefore, add 40 mL of reagent grade water to a 50-mL Class A volumetric flask, add 1 mL of concentrated HCl, mix, and dilute up to the calibration mark with reagent grade water. Remember, always add acid to water!
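A sketch of the calculation for this example, using its stated values (2% of a 50-mL total):

\[
0.02 \times 50\ \text{mL} = 1\ \text{mL of concentrated HCl} \qquad \text{or} \qquad \frac{2\ \text{mL}}{100\ \text{mL}} = \frac{x}{50\ \text{mL}} \;\Rightarrow\; x = 1\ \text{mL}
\]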

Molarity

Molarity (M) is routinely expressed in units of moles per liter (mol/L) or sometimes millimoles per milliliter (mmol/mL). Remember that 1 mol of a substance is equal to the gmw (gram molecular weight) of that substance. When trying to determine the amount of substance needed to yield a particular concentration, initially decide what final concentration units are needed. For molarity, the final units will be moles per liter (mol/L) or millimoles per milliliter (mmol/mL). The second step is to consider the existing units and the relationship they have to the final desired units. Essentially, try to put as many units as possible into like terms and arrange so that the same units cancel each other out, leaving only those needed in the final answer. To accomplish this, it is important to remember what units are used to define each concentration term. It is key to understand the relationship between molarity (moles/liter), moles, and gmw. While molarity is given in these examples, the approach for molality is the same except that one molal is expressed as one mole of solute per kilogram of solvent. Because one kilogram of water occupies approximately one liter, molarity and molality are essentially equivalent for dilute aqueous solutions.

Example 1.4

How many grams are needed to make 1 L of a 2 M solution of HCl? Step 1: Which units are needed in the final answer? Answer: Grams per liter (g/L). Step 2: Assess other mass/volume terms used in the problem. In this case, moles are also needed for the calculation: How many grams are equal to 1 mole? The gmw of HCl, which can be determined from the periodic table, will be equal to 1 mole. For HCl, the gmw is 36.5, so the equation may be written as

(Eq. 1-24) Cancel out like units, and the final units should be grams per liter. In this example, 73 g HCl per liter is needed to make up a 2 M solution of HCl.
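A sketch of the unit cancellation for this example, using its stated values:

\[
\frac{2\ \text{mol}}{\text{L}} \times \frac{36.5\ \text{g}}{\text{mol}} = 73\ \text{g/L}
\]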

Example 1.5 A solution of NaOH is contained within a Class A 1-L volumetric flask filled to the calibration mark. The content label reads 24 g of NaOH. Determine the molarity. Step 1: What units are ultimately needed? Answer: Moles per liter (mol/L). Step 2: The units that exist are grams and L. NaOH may be expressed as moles and grams. The gmw of NaOH is calculated to equal 40 g/mol. Rearrange the equation so that grams can be canceled and the remaining units reflect those needed in the answer, which are mole/L. Step 3: The equation becomes

(Eq. 1-25) By canceling out like units and performing the appropriate calculations, the final answer of 0.6 M or 0.6 mol/L is derived.
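A sketch of the calculation for this example, using its stated values:

\[
\frac{24\ \text{g}}{1\ \text{L}} \times \frac{1\ \text{mol}}{40\ \text{g}} = 0.6\ \text{mol/L} = 0.6\ \text{M}
\]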

Example 1.6 Make up 250 mL of a 4.8 M solution of HCl.

Step 1: Units needed? Answer: Grams (g). Step 2: Determine the gmw of HCl (36.5 g), which is needed to calculate the molarity. Step 3: Set up the equation, cancel out like units, and perform the appropriate calculations:

(Eq. 1-26) In a 250-mL Class A volumetric flask, add 200 mL of reagent grade water. Add 43.8 g of HCl and mix. Dilute up to the calibration mark with reagent grade water. Although there are various methods to calculate laboratory mathematical problems, this technique of canceling like units can be used in most clinical chemistry situations, regardless of whether the problem requests molarity and normality or exchanging one concentration term for another. However, it is necessary to recall the interrelationship between all the units in the expression.
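As a sketch of the unit-cancellation approach just described, applied to Example 1.6 with its stated values:

\[
\frac{4.8\ \text{mol}}{\text{L}} \times \frac{36.5\ \text{g}}{\text{mol}} \times 0.250\ \text{L} = 43.8\ \text{g of HCl}
\]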

Normality

Normality (N) is expressed as the number of equivalent weights per liter (Eq/L) or milliequivalents per milliliter (mEq/mL). Equivalent weight is equal to gmw divided by the valence (V). Normality has often been used in acid–base calculations because an equivalent weight of a substance is also equal to its combining weight. Another advantage in using equivalent weight is that one equivalent weight of a substance reacts with one equivalent weight of any other chemical.

Example 1.7 Give the equivalent weight, in grams, for each substance listed below. 1. NaCl (gmw = 58 g/mol, valence = 1) (Eq. 1-27)

2. H2SO4 (gmw = 98 g/mol, valence = 2) (Eq. 1-28)
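A sketch of both equivalent weight calculations, using the values stated in the example (gmw divided by valence):

\[
\text{NaCl: } \frac{58\ \text{g/mol}}{1} = 58\ \text{g/Eq} \qquad \mathrm{H_2SO_4}\text{: } \frac{98\ \text{g/mol}}{2} = 49\ \text{g/Eq}
\]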

Example 1.8 What is the normality of a 500 mL solution that contains 7 g of H2SO4? The approach used to calculate molarity could be used to solve this problem as well. Step 1: Units needed? Answer: Normality expressed as equivalents per liter (Eq/L). Step 2: Units you have? Answer: Milliliters and grams. Now, determine how they are related to equivalents per liter. (There are 49 g per equivalent —see Equation 1-28 above.) Step 3: Rearrange the equation so that like terms cancel out, leaving Eq/L. This equation is

(Eq. 1-29) Because 500 mL is equal to 0.5 L, the final equation could be written by substituting 0.5 L for 500 mL, eliminating the need to include the 1,000 mL/L conversion factor in the equation.
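A sketch of the calculation for Example 1.8, using its stated values and the equivalent weight of 49 g from Equation 1-28:

\[
\frac{7\ \text{g}}{0.5\ \text{L}} \times \frac{1\ \text{Eq}}{49\ \text{g}} \approx 0.29\ \text{Eq/L} = 0.29\ \text{N}
\]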

Example 1.9 What is the normality of a 0.5 M solution of H2SO4? Continuing with the previous approach, the final equation is

(Eq. 1-30) When changing molarity into normality or vice versa, the following conversion formula may be applied:

(Eq. 1-31) where V is the valence of the compound. Using this formula, Example 1.9 becomes (Eq. 1-32)
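A sketch of the conversion formula and its application to Example 1.9 (normality equals molarity multiplied by valence):

\[
N = M \times V, \qquad 0.5\ \text{M}\ \mathrm{H_2SO_4} \times 2 = 1.0\ \text{N}
\]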

Example 1.10 What is the molarity of a 2.5 N solution of HCl? This problem may be solved in several ways. One way is to use the stepwise approach in which existing units are exchanged for units needed. The equation is

(Eq. 1-33) The second approach is to use the normality-to-molarity conversion formula. The equation now becomes

(Eq. 1-34) When the valence of a substance is 1, the molarity will equal the normality. As previously mentioned, normality either equals or is greater than the molarity.

Specific Gravity Density is expressed as mass per unit volume of a substance. The specific gravity is the ratio of the density of a material when compared with the density of pure water at a given temperature and allows the laboratory scientist a means of expressing density in terms of volume. The units for density are grams per milliliter. Specific gravity is often used with very concentrated materials, such as commercial acids (e.g., sulfuric and hydrochloric acids). The density of a concentrated acid can also be expressed in terms of an assay or percent purity. The actual concentration is equal to the specific gravity

multiplied by the assay or percent purity value (expressed as a decimal) stated on the label of the container.

Example 1.11 What is the actual weight of a supply of concentrated HCl whose label reads specific gravity 1.19 with an assay value of 37%? (Eq. 1-35)

Example 1.12 What is the molarity of this stock solution? The final units desired are moles per liter (mol/L). The molarity of the solution is

(Eq. 1-36)
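A sketch of Examples 1.11 and 1.12 using their stated values (specific gravity 1.19 g/mL, 37% assay, and a gmw for HCl of 36.5 g/mol):

\[
1.19\ \text{g/mL} \times 0.37 = 0.44\ \text{g of HCl per mL}
\]
\[
\frac{0.44\ \text{g}}{\text{mL}} \times \frac{1{,}000\ \text{mL}}{\text{L}} \times \frac{1\ \text{mol}}{36.5\ \text{g}} \approx 12\ \text{mol/L}
\]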

Conversions To convert one unit into another, the same approach of canceling out like units can be applied. In some instances, a chemistry laboratory may report a given analyte using two different concentration units—for example, calcium. The recommended SI unit for calcium is millimoles per liter. The better known and more traditional units are milligrams per deciliter (mg/dL). Again, it is important to understand the relationship between the units given and those needed in the final answer.

Example 1.13 Convert 8.2 mg/dL calcium to millimoles per liter (mmol/L). The gmw of calcium is 40 g. So, if there are 40 g per mol, then it follows that there are 40 mg per mmol. The final units wanted are mmol/L. The equation becomes
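A sketch of the unit cancellation for this conversion, using the values given in the example:

\[
\frac{8.2\ \text{mg}}{\text{dL}} \times \frac{10\ \text{dL}}{\text{L}} \times \frac{1\ \text{mmol}}{40\ \text{mg}} = 2.05\ \text{mmol/L}
\]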

(Eq. 1-37) Once again, the systematic stepwise approach of deleting similar units can be used for this conversion problem. A frequently encountered conversion problem or, more precisely, a dilution problem occurs when a weaker concentration or different volume is needed than the stock substance available, but the concentration terms are the same. The following formula is used, where V1 is the volume of the first substance, C1 is the concentration of the first substance, V2 is the volume of the second substance, and C2 is the concentration of the second substance:

(V1)(C1) = (V2)(C2) (Eq. 1-38)

This formula is useful only if the concentration and volume units between the substances are the same and if three of four variables are known.

Example 1.14 What volume is needed to make 500 mL of a 0.1 M solution of Tris buffer from a solution of 2 M Tris buffer? Identify the known values: Concentration of initial substance (C1) = 2 M Volume of the product (V2) = 500 mL Concentration of the product (C2) = 0.1 M And the equation becomes:

(Eq. 1-39)

It requires 25 mL of the 2 M solution to make up 500 mL of a 0.1 M solution. This problem differs from the other conversions in that it is actually a dilution of a stock solution. While this approach will provide how much stock is needed when making the solution, the laboratory scientist must subtract that volume from the final volume to determine the amount of diluent needed, in this case 475 mL. A more involved discussion of dilution problems follows.
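A sketch of the rearranged formula applied to Example 1.14, using its stated values:

\[
V_1 = \frac{C_2 \times V_2}{C_1} = \frac{0.1\ \text{M} \times 500\ \text{mL}}{2\ \text{M}} = 25\ \text{mL}
\]

The diluent volume is then 500 mL − 25 mL = 475 mL, as noted above.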

Dilutions A dilution represents the ratio of concentrated or stock material to the total final volume of a solution and consists of the volume or weight of the concentrate plus the volume of the diluent, with the concentration units remaining the same. This ratio of concentrated or stock solution to the total solution volume equals the dilution factor. Because a dilution is made by adding a more concentrated substance to a diluent, the dilution is always less concentrated than the original substance. The relationship of the dilution factor to concentration is an inverse one; thus, the dilution factor increases as the concentration decreases. To determine the dilution factor, simply take the concentration needed and divide by the stock concentration, leaving it in a reduced fraction form.

Example 1.15 What is the dilution factor needed to make a 100 mmol/L sodium solution from a 3,000 mmol/L stock solution? The dilution factor becomes

(Eq. 1-40) The dilution factor indicates that the ratio of stock material is 1 part stock made to a total volume of 30 mL. To actually make this dilution, 1 mL of stock is added to 29 mL of diluent to achieve a total final volume of 30 mL. Note that the dilution factor indicates the parts per total amount; however, in making the dilution, the sum of the amount of the stock material plus the amount of the diluent must equal the total volume or dilution fraction denominator. The dilution factor may be correctly written as either a fraction or a ratio.

Confusion arises when distinction is not made between a ratio and a dilution, which by its very nature is a ratio of stock to total volume. A ratio is always expressed using a colon; a dilution can be expressed as either a fraction or a ratio.21 Many directions in the laboratory are given orally. For example, making a "1-in-4" dilution means adding one part stock to a total of four parts. That is, one part of stock would be added to three parts of diluent. The dilution factor would be 1/4. Analyses performed on the diluted material would need to be multiplied by 4 (the reciprocal of the dilution factor) to recover the original, undiluted concentration. Stating that the dilution factor is 1/4 is very different from saying make a "1-to-4" dilution! In that instance, one part of stock is added to four parts of diluent, and the dilution factor would be 1/5! It is important during procedures that you fully understand the meaning of these expressions. Patient sample or stock dilutions should be made using reagent grade water, saline, or a method-specific diluent in Class A glassware. The sample and diluent should be thoroughly mixed before use. It is not recommended that sample dilutions be made in smaller volume sample cups or holders. Any total volume can be used as long as the fraction reduces to give the dilution factor.

Example 1.16 If in the preceding example 150 mL of the 100 mmol/L sodium solution was required, the dilution ratio of stock to total volume must be maintained. Set up a ratio between the desired total volume and the dilution factor to determine the amount of stock needed. The equation becomes

(Eq. 1-41) Note that 5/150 reduces to the dilution factor of 1/30. To make up this solution, 5 mL of stock is added to 145 mL of the appropriate diluent, making the ratio of stock volume to diluent volume equal to 5/145. Recall that the dilution factor includes the total volume of both stock plus diluent in the denominator and differs from the amount of diluent to stock that is needed.

Example 1.17

Many laboratory scientists like using (V1)(C1) = (V2)(C2) for simple dilution calculations. This is acceptable, as long as you recall that you will need to subtract the stock volume from the total final volume for the correct diluent volume.

(Eq. 1-42)
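A sketch of this approach, assuming Example 1.17 refers to the same sodium dilution as Example 1.16 (a 3,000 mmol/L stock used to prepare 150 mL of a 100 mmol/L solution; that assumption is not stated in the text):

\[
V_1 \times 3{,}000\ \text{mmol/L} = 150\ \text{mL} \times 100\ \text{mmol/L} \;\Rightarrow\; V_1 = 5\ \text{mL}
\]

Subtracting the stock volume from the total gives 150 mL − 5 mL = 145 mL of diluent.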

Simple Dilutions When making a simple dilution, the laboratory scientist must decide on the total volume desired and the amount of stock to be used.

Example 1.18

A 1:10 (1/10) dilution of serum can be achieved by using any of the following approaches, each a ratio of 1:9—one part serum and nine parts diluent (saline):
100 μL of serum added to 900 μL of saline
20 μL of serum added to 180 μL of saline
1 mL of serum added to 9 mL of saline
2 mL of serum added to 18 mL of saline
Note that the sum of the parts in the serum-to-diluent ratio (1:9) used to make up each dilution satisfies the dilution factor (1:10 or 1/10) of stock material to total volume. When thinking about the stock to diluent volume, subtract the parts of stock needed from the total volume or parts to get the number of diluent "parts" needed. Once the volume of each part, usually stock, is known, multiply that volume by the number of diluent parts to obtain the correct diluent volume.

Example 1.19 You have a 10 g/dL stock of protein standard. You need a 2 g/dL standard. You only have 0.200 mL of 10 g/dL stock to use. The procedure requires

0.100 mL. Solution:

(Eq. 1-43) You will need 1 part or volume of stock out of a total of 5 parts or volumes. Subtracting 1 from 5 shows that 4 parts or volumes of diluent are needed (Fig. 1.13). In this instance, you need at least 0.100 mL for the procedure. You have 0.200 mL of stock. You can make the dilution in various ways, as seen in Example 1.20.

FIGURE 1.13 Simple dilution. Consider this diagram depicting a substance having a 1/5 dilution factor. The dilution factor represents that 1 part of stock is needed from a total of 5 parts. To make this dilution, you would determine the volume of 1 “part,” usually the stock or patient sample. The remainder of the “parts” or total would constitute the amount of diluent needed or four times the volume used for the stock.

Example 1.20

There are several ways to make a 1/5 dilution having only 0.200 mL of stock and needing a total minimum volume of 0.100 mL.
Add 0.050 mL of stock (1 part) to 0.200 mL of diluent (4 parts × 0.050 mL).
Add 0.100 mL of stock (1 part) to 0.400 mL of diluent (4 parts × 0.100 mL).
Add 0.200 mL of stock (1 part) to 0.800 mL of diluent (4 parts × 0.200 mL).

The dilution factor is also used to determine the final concentration of a dilution by multiplying the original concentration by the dilution factor or, equivalently, dividing by the dilution factor's denominator when it is expressed as a fraction.

Example 1.21 Determine the concentration of a 200 μmol/mL human chorionic gonadotropin (hCG) standard that was diluted 1/50. This value is obtained by multiplying the original concentration, 200 μmol/mL hCG, by the dilution factor, 1/50. The result is 4 μmol/mL hCG. Quite often, the concentration of the original material is needed.

Example 1.22 A 1:2 dilution of serum with saline had a creatinine result of 8.6 mg/dL. Calculate the actual serum creatinine concentration. Dilution factor: 1/2 Dilution result = 8.6 mg/dL Because this result represents 1/2 of the concentration, the inverse of the dilution is used, and the actual serum creatinine value is (Eq. 1-44)
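A sketch of the calculation for Example 1.22, multiplying the diluted result by the inverse of the dilution factor:

\[
8.6\ \text{mg/dL} \times 2 = 17.2\ \text{mg/dL}
\]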

Serial Dilutions A serial dilution may be defined as multiple progressive dilutions ranging from more concentrated solutions to less concentrated solutions. Serial dilutions are extremely useful when the volume of concentrate or diluent is in short supply and needs to be minimized or a number of dilutions are required, such as in determining a titer. The volume of patient sample available to the laboratory may be small (e.g., pediatric samples), and a serial dilution may be needed to ensure that sufficient sample is available. The serial dilution is initially made in the same manner as a simple dilution. Subsequent dilutions will then be made from each preceding dilution. When a serial dilution is made, certain criteria may need to be satisfied. The criteria vary with each situation but usually include such considerations as the total volume desired, the amount of diluent or concentrate

available, the dilution factor, the final concentration needed, and the support materials required.

Example 1.23 A three tube, twofold serial dilution is to be made on a sample. To start, the tubes must be labeled. It is arbitrarily decided that the total volume for each dilution is to be 1 mL. Into the first tube, 0.5 mL of diluent is added and then 0.5 mL of patient sample. This satisfies the “twofold” or 1:2 dilution for tube 1. In the next tube, 0.5 mL of diluent is again added, along with 0.5 mL of well-mixed liquid from tube 1. This satisfies the 1:2 dilution in tube 2, bringing the total tube dilution to 1:4. For the third tube, 0.5 mL of diluent is added, along with 0.5 mL of well-mixed liquid from tube 2. This satisfies the 1:2 dilution within the tube but brings the total tube dilution to 1:8. The calculation for these values is

(Eq. 1-45) Making a 1:2 dilution of the 1:4 dilution will result in the next dilution (1:8) in Tube 3. To establish the dilution factor needed for subsequent dilutions, it is helpful to solve the following equation for (x):

(Eq. 1-46) Refer to Figure 1.14 for an illustration of this serial dilution.
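A sketch of the tube-to-tube arithmetic for Example 1.23 and of the approach for finding the per-tube factor x described above:

\[
\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}, \qquad \frac{1}{4} \times \frac{1}{2} = \frac{1}{8}
\]
\[
\text{previous dilution} \times x = \text{next dilution}, \quad \text{e.g., } \frac{1}{4} \times x = \frac{1}{8} \;\Rightarrow\; x = \frac{1}{2}
\]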

FIGURE 1.14 Serial dilution.

Example 1.24 Another type of dilution combines several dilution factors that are not multiples of one another. In our previous example, 1:2, 1:4, and 1:8 dilutions are all related to one another by a factor of 2. Consider the situation when 1:10, 1:20, 1:100, and 1:200 dilution factors are required. There are several approaches to solving this type of dilution problem. One method is to treat the 1:10 and 1:20 dilutions as one serial dilution problem, the 1:20 and 1:100 dilutions as a second serial dilution, and the 1:100 and 1:200 dilutions as the last serial dilution. Another approach is to consider what dilution factor of the concentrate is needed to yield the final dilution. In this example, the initial dilution is 1:10, with subsequent dilutions of 1:20, 1:100, and 1:200. The first dilution may be accomplished by adding 1 mL of stock to 9 mL of diluent. The total volume of solution is 10 mL. Our initial dilution factor has been satisfied. In making the remaining dilutions, 2 mL of diluent is added to each test tube.

Solve for (x). Using the dilution factors listed above and solving for (x), the equations become

(Eq. 1-47) In practice, the 1:10 dilution must be diluted by a factor of 2 to obtain a subsequent 1:20 dilution. Because the second tube already contains 2 mL of diluent, 2 mL of the 1:10 dilution should be added (1 part stock to 1 part diluent). In preparing the 1:100 dilution from this, a 1:5 dilution factor of the 1:20 mixture is required (1 part stock to 4 parts diluent). Because this tube already contains 2 mL, the volume of diluent in the tube is divided by its parts, which is 4; thus, 500 μL, or 0.500 mL, of stock should be added. The 1:200 dilution is prepared in the same manner using a 1:2 dilution factor (1 part stock to 1 part diluent) and adding 2 mL of the 1:100 to the 2

mL of diluent already in the tube.
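A sketch of the per-step factors for Example 1.24, solving for x between each pair of required dilutions (consistent with the 1:2, 1:5, and 1:2 steps described above):

\[
\frac{1}{10} \times x = \frac{1}{20} \Rightarrow x = \frac{1}{2}; \qquad \frac{1}{20} \times x = \frac{1}{100} \Rightarrow x = \frac{1}{5}; \qquad \frac{1}{100} \times x = \frac{1}{200} \Rightarrow x = \frac{1}{2}
\]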

Water of Hydration Some compounds are available in a hydrated form. To obtain a correct gmw for these chemicals, the attached water molecule(s) must be included.

Example 1.25 How much CuSO4·5H2O must be weighed to prepare 1 L of 0.5 M CuSO4? When calculating the gmw of this substance, the water weight must be considered so that the gmw is 250 g rather than gmw of CuSO4 alone (160 g). Therefore,

(Eq. 1-48) Cancel out like terms to obtain the result of 125 g/L. A reagent protocol often designates the use of an anhydrous form of a chemical; frequently, however, all that is available is a hydrated form.

Example 1.26 A procedure requires 0.9 g of CuSO4. All that is available is CuSO4·5H2O. What weight of CuSO4·5H2O is needed? Calculate the percentage of CuSO4 present in CuSO4·5H2O. The percentage is (Eq. 1-49) Therefore, 1 g of CuSO4·5H2O contains 0.64 g of CuSO4, so the equation becomes

(Eq. 1-50)
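A sketch of both hydration calculations, using the values stated in Examples 1.25 and 1.26:

\[
\frac{0.5\ \text{mol}}{\text{L}} \times \frac{250\ \text{g}}{\text{mol}} = 125\ \text{g/L} \quad \text{(Example 1.25)}
\]
\[
\frac{160}{250} = 0.64 = 64\%, \qquad \frac{0.9\ \text{g}}{0.64} \approx 1.41\ \text{g of } \mathrm{CuSO_4 \cdot 5H_2O} \quad \text{(Example 1.26)}
\]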

Graphing and Beer's Law

The Beer-Lambert law (Beer's law) mathematically establishes the relationship between concentration and absorbance in many photometric determinations. Beer's law is expressed as

A = abc (Eq. 1-51)

where A is absorbance; a is the absorptivity constant for a particular compound at a given wavelength under specified conditions of temperature, pH, and so on; b is the length of the light path; and c is the concentration. If a method follows Beer's law, then absorbance is proportional to concentration as long as the length of the light path and the absorptivity of the absorbing species remain unaltered during the analysis. In practice, however, there are limits to the predictability of a linear response. In automated systems, adherence to Beer's law is often determined by checking the linearity of the test method over a wide concentration range. The limits of linearity often represent the reportable range of an assay. This term should not be confused with the reference ranges associated with clinical significance of a test. Assays measuring absorbance generally obtain the concentration results by using a graph of Beer's law, known as a standard graph or curve. This graph is made by plotting absorbance versus the concentration of known standards (Fig. 1.15). Because most photometric assays set the initial absorbance to zero (0) using a reagent blank, the initial data points are 0,0. Graphs should be labeled properly and the concentration units must be given. The horizontal axis is referred to as the x-axis, whereas the vertical axis is the y-axis. By convention in the clinical laboratory, concentration is usually plotted on the x-axis. On a standard graph, only the standards and their associated absorbances are plotted.

FIGURE 1.15 Standard curve. Once a standard graph has been established, it is permissible to run just one standard, or calibrator, as long as the system remains the same. One-point calculation or calibration refers to the calculation of the comparison of the known standard/calibrator concentration and its corresponding absorbance to the absorbance of the unknown value according to the following ratio:

(Eq. 1-52) Solving for the concentration of the unknown, the equation becomes

(Eq. 1-53)

Example 1.27 The biuret protein assay is very stable and follows Beer's law. Rather than make up a completely new standard graph, one standard (6 g/dL) was assayed. The absorbance of the standard was 0.400, and the absorbance of the unknown was 0.350. Determine the value of the unknown in g/dL.

(Eq. 1-54) This method of calculation is acceptable as long as everything in the system, including the instrument and lot of reagents, remains the same. If anything in the system changes, a new standard graph should be done. Verification of linearity and/or calibration is required whenever a system changes or becomes unstable. Regulatory agencies often prescribe the condition of verification as well as how often the linearity needs to be checked.
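A sketch of the one-point calculation for Example 1.27, using the ratio described above with the example's values:

\[
C_u = \frac{A_u}{A_s} \times C_s = \frac{0.350}{0.400} \times 6\ \text{g/dL} = 5.25\ \text{g/dL}
\]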

Enzyme Calculations

Another application of Beer's law is the calculation of enzyme assay results. When calculating enzyme results, the rate of change in absorbance is often monitored continuously during the reaction to give the difference in absorbance, known as the delta absorbance, or ∆A. Instead of using a standard graph or a one-point calculation, the molar absorptivity of the product is used. If the absorptivity constant and absorbance, in this case ∆A, are given, Beer's law can be used to calculate the enzyme concentration directly without initially needing a standard graph, as follows:

(Eq. 1-55) When the absorptivity constant (a) is expressed for a concentration in moles per liter measured through a 1-centimeter (cm) light path, the term molar absorptivity (ε) is used. Substitution of ε for a and ∆A for A produces the following Beer's law formula: (Eq. 1-56) For reporting enzyme activity, the IU, or international unit, is defined as the amount of enzyme that will catalyze 1 μmol of substrate per minute. These units were often expressed as units per liter (U/L). The designations IU, U, and IU/L were adopted by many clinical laboratories to represent the IU. Although the reporting unit is the same, unless the analysis conditions are identical, use of the IU does not standardize the actual enzyme activity, and therefore, results from different methods for the same enzyme are not necessarily equivalent. For example, an alkaline phosphatase performed at 37°C will catalyze more substrate than if it is run at a lower temperature, such as 25°C, even though the unit of expression, U/L, will be the same. The SI recommended unit is the katal (mol/s), with catalytic activity concentration expressed as moles per liter per second (katals per liter). Whichever unit is used, calculation of the activity using Beer's law requires inclusion of the dilution and, depending on the reporting unit, possible conversion to the appropriate term (e.g., μmol to mol, mL to L, minute to second, and temperature factors). Beer's law for the IU now becomes

(Eq. 1-57)

where TV is the total volume of sample plus reagents in mL and SV is the sample volume used in mL. The 10−6 converts moles to μmol for the IU. If another unit of activity is used, such as the katal, conversion into liters and seconds would be needed, but the conversions to and from micromoles are excluded.

Example 1.28 The ∆A per minute for an enzyme reaction is 0.250. The product measured has a molar absorptivity of 12.2 × 103 at 425 nm at 30°C. The incubation and reaction temperature are also kept at 30°C. The assay calls for 1 mL of reagent and 0.050 mL of sample. Give the enzyme activity results in international units. Applying Beer's law and the necessary conversion information, the equation becomes

(Eq. 1-58) Note: b is usually given as 1 cm; because it is a constant, it may not be considered in the calculation.
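A sketch of Example 1.28 using the IU relationship described in the text, assuming TV = 1.050 mL (1 mL of reagent plus 0.050 mL of sample), SV = 0.050 mL, and b = 1 cm:

\[
\frac{\Delta A/\text{min} \times TV}{\varepsilon \times b \times SV \times 10^{-6}} = \frac{0.250 \times 1.050}{12.2 \times 10^{3} \times 1 \times 0.050 \times 10^{-6}} \approx 430\ \text{U/L}
\]

Dividing by 10⁻⁶ is equivalent to multiplying by 10⁶, the conversion from moles to micromoles noted above.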

SPECIMEN CONSIDERATIONS The process of specimen collection, handling, and processing remains one of the primary areas of preanalytic error. Careful attention to each phase is necessary to ensure proper subsequent testing and reporting of meaningful results. All accreditation agencies require laboratories to clearly define and delineate the procedures used for proper collection, transport, and processing of patient samples and the steps used to minimize and detect any errors, along with the documentation of the resolution of any errors. The Clinical Laboratory Improvement Amendments Act of 1988 (CLIA 88)22 specifies that procedures for specimen submission and proper handling, including the disposition of any specimen that does not meet the laboratories' criteria of acceptability, be documented.

Types of Samples

Phlebotomy, or venipuncture, is the act of obtaining a blood sample from a vein using a needle attached to a collection device or a stoppered evacuated tube. These tubes come in different volume sizes: from pediatric sizes (≈150 μL) to larger 5 mL tubes. The most frequent site for venipuncture is the medial antecubital vein of the arm. A tourniquet made of a pliable nonlatex rubber flat band or tubing is wrapped around the arm, restricting venous blood flow and causing dilation of the veins, which makes them easier to detect. The gauge of a needle is inversely related to the size of its bore; the larger the gauge number, the smaller the needle bore. Routine venipuncture uses a 23- or 21-gauge needle. An intravenous (IV) infusion set, sometimes referred to as a butterfly because of the appearance of the setup, may be used whenever the veins are fragile, small, or hard to reach or find. The butterfly is attached to a piece of tubing, which is then attached to a hub or barrel. Because of potential needlesticks and the cost of the product, this practice may be discouraged. Sites adjacent to IV therapy should be avoided; however, if both arms are involved in IV therapy and the IV cannot be discontinued for a short time, a site below the IV site should be sought. The initial sample drawn (5 mL) should be discarded because it is most likely contaminated with IV fluid, and only subsequent sample tubes should be used for analytic purposes.

In addition to venipuncture, blood samples can be collected using a skin puncture technique that customarily involves the outer area of the bottom of the foot (a heel stick) for infants or the fleshy part of the middle of the last phalanx of the third or fourth (ring) finger (finger stick). A sharp lancet device is used to pierce the skin, and an appropriate capillary or microcollection tube is used for sample collection.23

Analytic testing of blood involves the use of whole blood, serum, or plasma. Whole blood, as the name implies, uses both the liquid portion of the blood called plasma and the cellular components (red blood cells, white blood cells, and platelets). This requires blood collection into a vacuum tube containing an anticoagulant. Complete mixing of the blood immediately following venipuncture is necessary to ensure the anticoagulant can adequately inhibit the blood's clotting factors. As whole blood sits or is centrifuged, the cells fall toward the bottom, leaving a clear yellow supernate on top called plasma. If a tube does not contain an anticoagulant, the blood's clotting factors are active in forming a clot incorporating the cells. The clot is encapsulated by the large protein fibrinogen. The remaining liquid is called serum rather than plasma (Fig. 1.16). Most testing in the clinical chemistry laboratory is performed on either plasma or serum. The major difference between plasma and serum is that serum

does not contain fibrinogen (i.e., there is less protein in serum than in plasma) and some potassium is released from platelets during clotting (potassium is slightly higher in serum than in plasma). It is important that serum samples be allowed to clot completely (≈20 minutes) before being centrifuged. Plasma samples also require centrifugation but do not need to be held for clotting, and their use can decrease turnaround time for reporting results.

FIGURE 1.16 Blood sample. (A) Whole blood. (B) Whole blood after separation.

Centrifugation of the sample accelerates the process of separating the plasma and cells. Specimens should be centrifuged for approximately 10 minutes at an RCF of 1,000 to 2,000 g; excessive force should be avoided because mechanical destruction of red cells releases hemoglobin, a process called hemolysis. Arterial blood samples are used to measure blood gases (partial pressures of oxygen and carbon dioxide) and pH. Syringes containing heparin anticoagulant are used instead of evacuated tubes because of the pressure in an arterial blood vessel. The radial, brachial, and femoral arteries are the primary arterial sites. Arterial punctures are more difficult to perform because of inherent arterial pressure, difficulty in stopping bleeding afterward, and the undesirable development of a hematoma, which cuts off the blood supply to the surrounding tissue.24 Continued metabolism may occur if the serum or plasma remains in contact with the cells for any period. Evacuated tubes may incorporate plastic, gel-like material that serves as a barrier between the cells and the plasma or serum and seals these compartments from one another during centrifugation. Some gels can

interfere with certain analytes, notably trace metals and drugs such as the tricyclic antidepressants.

Proper patient identification is the first step in sample collection. The importance of using the proper collection tube, avoiding prolonged tourniquet application, drawing tubes in the proper order, and properly labeling tubes cannot be stressed strongly enough. Prolonged tourniquet application causes a stasis of blood flow and an increase in hemoconcentration and in any analyte bound to proteins or cells. Having patients open and close their fist during phlebotomy is of no value and may cause an increase in potassium and, therefore, should be avoided. IV contamination should be considered if a large increase occurs in the substances being infused, such as glucose, potassium, sodium, and chloride, with a decrease of other analytes such as urea and creatinine. In addition, the proper antiseptic must be used. Isopropyl alcohol wipes, for example, are used for cleaning and disinfecting the collection site; however, this is not the proper antiseptic when drawing blood alcohol levels (in that case, only soap and water should be used to cleanse the site).

Blood is not the only sample analyzed in the clinical chemistry laboratory. Urine is the next most common fluid for determination. Most quantitative analyses of urine require a timed sample (usually 24 hours); a complete sample (all urine must be collected in the specified time) can be difficult to obtain because many timed samples are collected by the patient in an outpatient situation. Creatinine analysis is often used to assess the completeness of a 24-hour urine sample because creatinine output is relatively free from interference and is stable, with little change in output within individuals. The average adult excretes 1 to 2 g of creatinine per 24 hours. Urine volume differs widely among individuals; however, a 4-L container is adequate (average output is ≈2 L). It should be noted that this analysis differs from the creatinine clearance test used to assess glomerular filtration rate, which compares urine creatinine output with that in the serum or plasma in a specified time interval and urine volume (often corrected for body surface area).

CSF is an ultrafiltrate of the plasma and will, ordinarily, reflect the values seen in the plasma. For glucose and protein analysis (total and specific proteins), it is recommended that a blood sample be analyzed concurrently with the analysis of those analytes in the CSF. This will assist in determining the clinical utility of the values obtained on the CSF sample. This is also true for lactate dehydrogenase and protein assays requested on paracentesis fluids. All fluid samples should be handled immediately without delay between sample procurement, transport, and analysis. Amniotic fluid is used to assess fetal lung maturity, congenital diseases, hemolytic diseases, genetic defects, and gestational age. The laboratory scientist should verify the specific handling of this fluid with the manufacturer of the testing procedure(s).

Sample Processing

When samples arrive in the laboratory, they are first processed. In the clinical chemistry laboratory, this means correctly matching the blood collection tube(s) with the appropriate test requisition and patient identification labels. This is a particularly sensitive area of preanalytic error. Bar code labels on primary sample tubes are vital for detecting and minimizing clerical errors at this point of processing. The laboratory scientist must also ascertain if the sample is acceptable for further processing. The criteria used depend on the test involved but usually include volume considerations (i.e., is there sufficient volume for testing needs?), use of proper anticoagulants or preservatives (i.e., was it collected in the correct evacuated tube?), whether timing is clearly indicated and appropriate for timed testing, and whether the specimen is intact and has been properly transported (e.g., cooled or on ice, within a reasonable period, protected from light). Unless a whole blood analysis is being performed, the sample is then centrifuged as previously described, and the serum or plasma should be separated from the cells if not analyzed immediately. Once processed, the laboratory scientist should note the presence of any serum or plasma characteristics such as hemolysis and icterus (increased bilirubin pigment) or the presence of turbidity often associated with lipemia (increased lipids). Samples should be analyzed within 4 hours; to minimize the effects of evaporation, samples should be properly capped and kept away from areas of rapid airflow, light, and heat. If testing is to occur after that time, samples should be appropriately stored. For most, this means refrigeration at 4°C for 8 hours. Many analytes are stable at this temperature, with the exception of

alkaline phosphatase (increases) and lactate dehydrogenase (decreases as a result of temperature labile fractions). Samples may be frozen at −20°C and stored for longer periods without deleterious effects on the results. Repeated cycles of freezing and thawing, like those that occur in so-called frost-free freezers, should be avoided.

Sample Variables

Sample variables include physiologic considerations, proper patient preparation, and problems in collection, transportation, processing, and storage. Although laboratorians must include mechanisms to minimize the effect of these variables on testing and must document each preanalytic incident, it is often frustrating to try to control the variables that largely depend on individuals outside of the laboratory. The best course of action is to critically assess or predict the weak areas, identify potential problems, and put an action plan in place that contains policies, procedures, or checkpoints throughout the sample's journey to the laboratory scientist who is actually performing the test. Good communication with all personnel involved helps ensure that whatever plans are in place meet the needs of the laboratory and, ultimately, the patient and physician. Most accreditation agencies require that laboratories consider all aspects of preanalytic variation as part of their quality assurance plans, including effective problem solving and documentation.

Physiologic variation refers to changes that occur within the body, such as cyclic changes (diurnal or circadian variation) or those resulting from exercise, diet, stress, gender, age, underlying medical conditions (e.g., fever, asthma, and obesity), drugs, or posture (Table 1.5). Samples may be drawn on patients who are fasting (usually overnight for at least 8 hours). When fasting, many patients may avoid drinking water and they may become dehydrated, which can be reflected in higher than expected results. Patient preparation for timed samples or those requiring specific diets or other instructions must be well written and verbally explained to patients. Elderly patients often misunderstand or are overwhelmed by the directions given to them by physician office personnel.

Drugs can affect various analytes.25 It is important to ascertain what, if any, medications the patient is taking that may interfere with the test. Unfortunately, many laboratorians do not have access to this information, and interest in this type of interference often arises only when the physician questions a result. Some frequently encountered influences are smoking, which causes an increase in glucose (as a result of the action of nicotine) as well as increases in growth hormone, cortisol, cholesterol, triglycerides, and urea. High amounts or chronic consumption of

alcohol causes hypoglycemia, increased triglycerides, and an increase in the enzyme gamma-glutamyltransferase and other liver function tests. Intramuscular injections increase the enzyme creatine kinase and the skeletal muscle fraction of lactate dehydrogenase. Opiates, such as morphine or meperidine, cause increases in liver and pancreatic enzymes, and oral contraceptives may affect many analytic results. Many drugs affect liver function tests. Diuretics can cause decreased potassium and hyponatremia. Thiazide-type medications can cause hyperglycemia and prerenal azotemia secondary to the decrease in blood volume. Postcollection variations are related to those factors discussed under specimen processing. Clerical errors are the most frequently encountered, followed by inadequate separation of cells from serum, improper storage, and collection. TABLE 1.5 Variables Affecting Select Chemistry Analytes

ALP, alkaline phosphatase; CK, creatine kinase; AST, aspartate aminotransferase; ACTH, adrenocorticotropic hormone; ACP, acid phosphatase; PTH, parathyroid hormone; TSH, thyroid-stimulating hormone; ALT, alanine aminotransferase; LD, lactate dehydrogenase; TP, total protein; K+, potassium.

Chain of Custody

When laboratory tests are likely to be linked to a crime or accident, they become forensic in nature. In these cases, documented specimen identification is required at each phase of the process. Each facility has its own forms and protocols; however, the patient, and usually a witness, must identify the sample. It should be collected and then sealed with a tamper-proof seal. Any individual in contact with the sample must document receipt of the sample, the condition of the sample at the time of receipt, and the date and time it was received. In some instances, one witness verifies the entire process and cosigns as the sample moves along. Any analytic test could be used as part of legal testimony; therefore, the laboratory scientist should give each sample, even without such documentation, the same attention given to a forensic sample.

Electronic and Paper Reporting of Results

Electronic transmission of laboratory data and the more routine use of electronic medical record, coding, billing, and other data management systems have caused much debate regarding the standards needed, both for reporting guidelines and for safeguards to ensure privacy of the data and records. Complicating matters is the number of different data management systems used by health care agencies that all rely on laboratory information. For example, the Logical Observation Identifiers Names and Codes (LOINC) system, International Federation of Clinical Chemistry/International Union of Pure and Applied Chemistry (IFCC/IUPAC), ASTM, Health Level 7 (HL7), and Systematized Nomenclature of Medicine Reference Terminology (SNOMED RT) are databases that use their own coding systems for laboratory observations. There are also additional proprietary systems in use, adding to the confusion.

In an attempt to standardize these processes and to protect the confidentiality of patient information as required by the Health Insurance Portability and Accountability Act (HIPAA), the Healthcare Common Procedure Coding System (HCPCS) test and services coding system was developed to be recognized by all insurers for reimbursement purposes. The International Classification of Diseases (ICD), developed by the World Health Organization (WHO), uses codes identifying patient diseases and conditions. In the United States, ICD-10 is currently in place; the clinical modifications are maintained by the National Center for Health Statistics. Incorporated into the HCPCS system are the Current Procedural Terminology (CPT) codes, developed by the American Medical Association, which identify almost all laboratory tests and procedures. The CPT codes are divided into different subcategories, with tests or services assigned five-digit numbers followed by the name of the test or service. Together, these

standard coding systems support the exchange of patient data and the tracking of disease transmission among all stakeholders, such as physicians, patients, epidemiologists, and insurers. Clinical laboratory procedures are found in CPT Category I, with coding numbers falling between 80000 and 89000. There can be several codes for a given test based on the reason for and type of testing, and there are codes for common profiles or arrays of tests that encompass each test's separate codes. For example, blood glucose testing includes the codes 82947 (quantitative, except for strip reading), 82948 (strip reading), and 82962 (self-monitoring by FDA-cleared device), and the comprehensive metabolic panel (80053) includes albumin, alkaline phosphatase, total bilirubin, blood urea nitrogen, total calcium, carbon dioxide, chloride, creatinine, glucose, potassium, total protein, sodium, and alanine and aspartate transaminases and their associated codes. At a minimum, any system must include a unique patient identifier, test name, and code that relates back to the HCPCS and ICD databases. For reporting purposes, whether paper or electronic, the report should include the unique patient identifier and test name (including any appropriate abbreviations), the test value with the unit of measure, date and time of collection, sample information, reference ranges, plus any other pertinent information for proper test interpretation. Results that are subject to autoverification should be indicated in the report. Table 1.6 lists the information that is often required by accreditation agencies.26

TABLE 1.6 Minimum Elements of Paper or Electronic Patient Reports
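As an illustration of how the CPT codes described above might be represented in a laboratory information system, the following minimal Python sketch maps the glucose and comprehensive metabolic panel codes cited in the text to their descriptions. The structure, field names, and any use beyond this illustration are assumptions; a real billing interface carries many more elements (modifiers, ICD linkage, units) defined by the HCPCS specification, not shown here.

```python
# Minimal sketch of a CPT lookup, limited to the example codes cited in the text.
# Structure and field names are illustrative assumptions, not part of HCPCS.

CPT_CODES = {
    "82947": "Glucose; quantitative, except reagent strip",
    "82948": "Glucose; reagent strip",
    "82962": "Glucose; self-monitoring by FDA-cleared device",
    "80053": "Comprehensive metabolic panel",
}

# Hypothetical panel definition: the CMP components listed in the text, each of
# which also has its own individual CPT code (individual codes not shown here).
PANEL_COMPONENTS = {
    "80053": [
        "albumin", "alkaline phosphatase", "total bilirubin", "blood urea nitrogen",
        "total calcium", "carbon dioxide", "chloride", "creatinine", "glucose",
        "potassium", "total protein", "sodium",
        "alanine aminotransferase", "aspartate aminotransferase",
    ],
}

def describe(code: str) -> str:
    """Return a human-readable description for a CPT code, if it is known."""
    return CPT_CODES.get(code, "unknown code")

if __name__ == "__main__":
    print("82962 ->", describe("82962"))
    print("80053 components:", ", ".join(PANEL_COMPONENTS["80053"]))
```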

For additional student resources, please visit thePoint at http://thepoint.lww.com

QUESTIONS

1. What is the molarity for a solution containing 100 g of NaCl made up to 500 mL with distilled water? Assume a gram molecular weight (from the periodic table) of approximately 58 g.
a. 3.45 M
b. 1.72 M
c. 290 M
d. 5.27 M

2. What is the normality for a solution containing 100 g of NaCl made up to 500 mL with distilled water? Assume a gram molecular weight (from the periodic table) of approximately 58 g.
a. 3.45
b. 0.86
c. 1.72
d. 6.9

3. What is the percent (w/v) for a solution containing 100 g of NaCl made up to 500 mL with distilled water?
a. 20%
b. 5%
c. 29%
d. 58%

4. What is the dilution factor for a solution containing 100 g of NaCl made up to 500 mL with distilled water?
a. 1:5 or 1/5
b. 5
c. 50 or 1/50
d. 10

5. What is the value in mg/dL for a solution containing 10 mg of CaCl2 made with 100 mL of distilled water?
a. 10
b. 100
c. 50
d. Cannot determine without additional information

6. What is the molarity of a solution containing 10 mg of CaCl2 made with 100 mL of distilled water? Assume a gram molecular weight from the periodic table of approximately 111 g.
a. 9 × 10⁻⁴
b. 1.1 × 10⁻³
c. 11.1
d. 90

7. You must make 1 L of 0.2 M acetic acid (CH3COOH). All you have available is concentrated glacial acetic acid (assay value, 98%; specific gravity, 1.05 g/mL). It will take ____ milliliters of acetic acid to make this solution. Assume a gram molecular weight of 60.05 g.
a. 11.7
b. 1.029
c. 3.42
d. 12.01

8. What is the hydrogen ion concentration of an acetate buffer having a pH of 3.85?
a. 1.41 × 10⁻⁴
b. 3.90 × 10⁻¹
c. 0.048
d. 0.15 × 10⁻⁶

9. Using the Henderson-Hasselbalch equation, give the ratio of salt to weak acid for a Veronal buffer with a pH of 8.6 and a pKa of 7.43.
a. 14.7/1
b. 1/8.6
c. 1.17/1
d. 1/4.3

10. The pKa for acetic acid is 4.76. If the concentration of salt is 2 mmol/L and that of acetic acid is 6 mmol/L, what is the expected pH?
a. 4.43
b. 6.19
c. 104
d. 56

11. The hydrogen ion concentration of a solution is 0.000439. What is the pH?
a. 3.36
b. 4.39 × 10⁻⁵
c. 4.39
d. 8.03

12. Perform the following conversions:
a. 4 × 10⁴ mg = ____ g
b. 1.3 × 10² mL = ____ dL
c. 0.02 mL = ____ μL
d. 5 × 10⁻³ mL = ____ μL
e. 5 × 10⁻² L = ____ mL
f. 4 cm = ____ mm

13. What volume of 14 N H2SO4 is needed to make 250 mL of a 3.2 M H2SO4 solution? Assume a gram molecular weight of 98.08 g.
a. 114 mL
b. 1.82 mL
c. 1.75 mL
d. 7 mL

14. A 24-hour urine has a total volume of 1,200 mL. A 1:200 dilution of the urine specimen gives a creatinine result of 0.8 mg/dL. The serum value is 1.2 mg/dL. What is the final value of creatinine in mg/dL in the undiluted urine sample?
a. 160
b. 0.8
c. 960
d. 860

15. A 24-hour urine has a total volume of 1,200 mL. A 1:200 dilution of the urine specimen gives a creatinine result of 0.8 mg/dL. The serum value is 1.2 mg/dL. What is the result in terms of grams per 24 hours?
a. 1.92
b. 0.08
c. 80
d. 19

16. A new medical technologist was selecting analyte standards to develop a standard curve for a high-performance liquid chromatography (HPLC) procedure. This analyte must have a 100% purity level and must be suitable for HPLC. Which of the following labels would be most appropriate for this procedure?
a. ACS with no impurities listed
b. USP
c. NF
d. CP
e. ACS with impurities listed

17. When selecting quality control reagents for measuring an analyte in urine, the medical technologist should select:
a. A quality control reagent prepared in a urine matrix.
b. A quality control reagent prepared in a serum matrix.
c. A quality control reagent prepared in deionized water.
d. The matrix does not matter; any quality control reagent is acceptable as long as the analyte of measure is chemically pure.

18. A patient's serum sample was placed on the chemistry analyzer and the output indicated “out of range” for the measurement of creatine kinase (CK) enzyme. A dilution of the patient serum was required. Which of the following should be used to prepare a dilution of patient serum?
a. Deionized water
b. Tap water
c. Another patient's serum with confirmed, low levels of CK
d. Type III water
e. Type I water

19. True or False? Laboratory liquid-in-glass thermometers should be calibrated against an NIST-certified thermometer.

20. Which of the following containers is calibrated to hold only one exact volume of liquid?
a. Volumetric flask
b. Erlenmeyer flask
c. Griffin beaker
d. Graduated cylinder

21. Which of the following does NOT require calibration in the clinical laboratory?
a. Electronic balance
b. Liquid-in-glass thermometer
c. Centrifuge
d. Volumetric flask
e. Air-displacement pipette

22. Which of the following errors is NOT considered a preanalytical error?
a. During a phlebotomy procedure, the patient is opening and clenching his fist multiple times.
b. The blood was not permitted to clot and was spun in a centrifuge after 6 minutes of collection.
c. The patient was improperly identified, leading to a mislabeled blood sample.
d. The serum sample was diluted with tap water.
e. During phlebotomy, the EDTA tube was collected prior to the red clot tube.

REFERENCES

1. National Institute of Standards and Technology (NIST). Reference on Constants, Units, and Uncertainty. Washington, DC: U.S. Department of Commerce; 1991/1993. http://physics.nist.gov/cuu/Units/introduction.html. Accessed December 2, 2015. [Adapted from Special Publications (SP) Nos. 811 and 330.]
2. Miller WG, Tate JR, Barth J, et al. Harmonization: the sample, the measurement, and the report. Annals of Laboratory Medicine. 2014;34(3):187–197. doi: 10.33431/alm.2014.34.3.
3. American Chemical Society (ACS). Reagent Chemicals. 9th ed. Washington, DC: ACS Press; 2000.
4. Department of Labor, Occupational Safety and Health Administration (OSHA). Occupational Exposure to Hazardous Chemicals in Laboratories. Washington, DC: OSHA; 2011. (Federal Register, 29 CFR, Part 1910.1450.)
5. National Institute of Standards and Technology (NIST). Standard Reference Materials Program. Washington, DC: U.S. Department of Commerce; 2008. www.nist.gov/srm/definitons.cfm. Accessed December 10, 2015.
6. IUPAC. Compendium of Chemical Terminology. 2nd ed. (the “Gold Book”). Compiled by A.D. McNaught and A. Wilkinson. Oxford: Blackwell Scientific Publications; 1997. XML on-line corrected version: http://goldbook.iupac.org (2006-) created by M. Nic, J. Jirat, B. Kosata; updates compiled by A. Jenkins. ISBN 0-9678550-9-8. doi: 10.1351/goldbook. Last update 2014-02-24; version 2.3.3.
7. International Organization for Standardization, European Committee for Standardization (ISO). In Vitro Diagnostic Medical Devices—Measurement of Quantities in Samples of Biological Origin—Metrological Traceability of Values Assigned to Calibrators and Control Material. Geneva, Switzerland: ISO; 2003. (Publication No. ISO/TC 212/WG2 N65/EN 17511.)
8. Carreiro-Lewandowski E. Basic principles and practice of clinical chemistry. In: Bishop M, Schoeff L, Fody P, eds. Clinical Chemistry: Principles, Procedures, and Correlations. 7th ed. Baltimore, MD: Lippincott Williams & Wilkins; 2010:3–32.
9. Clinical and Laboratory Standards Institute/National Committee for Clinical Laboratory Standards. Preparation and Testing of Reagent Water in the Clinical Laboratory; Approved Guideline. 4th ed. Wayne, PA: CLSI; 2006. (Publication No. C3-A4.) Amended June 2012.
10. National Institutes of Health. Laboratory Water: Its Importance and Applications. NIH. http://orf.od.nih.gov/Policies and Guidelines/Documents. Published March 2013. Accessed December 11, 2015.
11. College of American Pathologists (CAP). Laboratory General Checklist: Quality of Water and Glassware Washing. Northfield, IL; September 25, 2012.
12. National Institute of Standards and Technology (NIST). Calibration Uncertainties of Liquid-in-Glass Thermometers Over the Range from −20°C to 400°C. Gaithersburg, MD: U.S. Department of Commerce; 2003.
13. NIH, National Institute of Allergy and Infectious Diseases. DAIDS Guidelines for Good Clinical Laboratory Practice Standards. https://www.niaid.nih.gov/LabsAndResources/resources/DAIDSClinRsrch/Documents/gclp.pdf. Published July 2013. Accessed December 8, 2015.
14. American Society for Testing and Materials (ASTM). E969-02 (2012). Standard Specification for Volumetric (Transfer) Pipets. West Conshohocken, PA: ASTM International; 2012. doi: 10.1520/E0969-02R12.
15. American Society for Testing and Materials (ASTM). E288-10. Standard Specification for Glass Volumetric Flasks. West Conshohocken, PA: ASTM International; 2010. doi: 10.1520/E0288-10.
16. American Society for Testing and Materials (ASTM). E438-92 (2011). Standard Specification for Glasses in Laboratory Apparatus. West Conshohocken, PA: ASTM International; 2011. doi: 10.1520/E0438-92R11.
17. Eppendorf. Standard Operating Procedure for Pipettes. AESOP13640-08 05213_en_online. Hamburg: Eppendorf AG; 2013. Accessed October 2015.
18. Gilson. Verification Procedure for Accuracy and Precision. Middleton, WI; Procedure LT 8022921C; 2003. http://www.sipoch.cz/download/overeni_pipety.pdf. Accessed December 15, 2015.
19. McCall RE, Tankersley CM. Phlebotomy Essentials. 6th ed. Philadelphia, PA: LWW; 2016.
20. American Society for Testing and Materials (ASTM). E617-13. Standard Specification for Laboratory Weights and Precision Mass Standards. West Conshohocken, PA: ASTM; 2013. www.astm.org. doi: 10.1520/E0617.
21. Doucette LJ. Mathematics for the Clinical Laboratory. 2nd ed. Philadelphia, PA: Saunders; 2010.
22. Clinical and Laboratory Standards Institute (CLSI). Procedures for the Handling and Processing of Blood Specimens. 4th ed. Wayne, PA: CLSI; 2010. (Publication No. H18-A4, Vol. 30.)
23. Centers for Disease Control and Prevention, Division of Laboratory Systems. Clinical Laboratory Improvement Amendments. CFR Part 493, Laboratory Requirements. http://wwwn.cdc.gov/clia/default.aspx. Accessed November 2015.
24. Clinical and Laboratory Standards Institute (CLSI). Procedures and Devices for the Collection of Diagnostic Blood Specimens by Skin Puncture. 6th ed. Wayne, PA: CLSI; 2010. (Publication No. H3-A6, Vol. 27.)
25. Young DS. Effects of Drugs on Clinical Laboratory Tests. 5th ed. Washington, DC: AACC Press; 2000.
26. College of American Pathologists, Commission on Laboratory Accreditation. Laboratory Accreditation Program, Laboratory General Checklist. http://www.cap.org/apps/docs/laboratory_accreditation/checklists/laboratory. Accessed November 2015.

2 Laboratory Safety and Regulations
TOLMIE E. WACHTER

Chapter Outline
Laboratory Safety and Regulations
  Occupational Safety and Health Act
  Other Regulations and Guidelines
Safety Awareness for Clinical Laboratory Personnel
  Safety Responsibility
  Signage and Labeling
Safety Equipment
  Chemical Fume Hoods and Biosafety Cabinets
  Chemical Storage Equipment
  PPE and Hygiene
Biologic Safety
  General Considerations
  Spills
  Bloodborne Pathogens
  Airborne Pathogens
  Shipping
Chemical Safety
  Hazard Communication
  Safety Data Sheet
  OSHA Laboratory Standard
  Toxic Effects from Hazardous Substances
  Storage and Handling of Chemicals
Radiation Safety
  Environmental Protection
  Personal Protection
  Nonionizing Radiation
Fire Safety
  The Chemistry of Fire
  Classification of Fires
  Types and Applications of Fire Extinguishers
Control of Other Hazards
  Electrical Hazards
  Compressed Gas Hazards
  Cryogenic Materials Hazards
  Mechanical Hazards
  Ergonomic Hazards
Disposal of Hazardous Materials
  Chemical Waste
  Radioactive Waste
  Biohazardous Waste
Accident Documentation and Investigation
Questions
Bibliography and Suggested Reading

Chapter Objectives
Upon completion of this chapter, the clinical laboratorian should be able to do the following:
Discuss safety awareness for clinical laboratory personnel.
List the responsibilities of employer and employee in providing a safe workplace.
Identify hazards related to handling chemicals, biologic specimens, and radiologic materials.
Choose appropriate personal protective equipment when working in the clinical laboratory.
Identify the classes of fires and the types of fire extinguishers to use for each.
Describe steps used as precautionary measures when working with electrical equipment, cryogenic materials, and compressed gases and avoiding mechanical hazards associated with laboratory equipment.
Select the correct means for disposal of waste generated in the clinical laboratory.
Outline the steps required in documentation of an accident in the workplace.

For additional student resources, please visit thePoint at http://thepoint.lww.com

Key Terms
Airborne pathogens
Biohazard
Bloodborne pathogens
Carcinogens
Chemical hygiene plan
Corrosive chemicals
Cryogenic material
Fire tetrahedron
Hazard Communication Standard
Hazardous materials
High-efficiency particulate air (HEPA) filters
Laboratory standard
Mechanical hazards
Medical waste
National Fire Protection Association (NFPA)
Occupational Safety and Health Act (OSHA)
Radioactive materials
Reactive chemicals
Safety Data Sheets (SDSs)
Teratogens
Universal precautions

LABORATORY SAFETY AND REGULATIONS

Clinical laboratory personnel, by the nature of the work they perform, are exposed daily to a variety of real or potential hazards: electric shock, toxic vapors, compressed gases, flammable liquids, radioactive material, corrosive substances, mechanical trauma, poisons, and the inherent risks of handling biologic materials, to name a few. Each clinician should develop an understanding of the risks associated with these hazards and must be “safety conscious” at all times. Laboratory safety necessitates the effective control of all hazards that exist in the clinical laboratory at any given time. Safety begins with the recognition of hazards and is achieved through the application of common sense, a safety-focused attitude, good personal behavior, good housekeeping in all laboratory work and storage areas, and, above all, the continual practice of good laboratory technique. In most cases, accidents can be traced directly to two primary causes: unsafe acts (not always recognized by personnel) and unsafe environmental conditions. This chapter discusses laboratory safety as it applies to the clinical laboratory.

Occupational Safety and Health Act

Public Law 91-596, better known as the Occupational Safety and Health Act (OSHA), was enacted by the U.S. Congress in 1970. The goal of this federal regulation was to provide all employees (clinical laboratory personnel included) with a safe work environment. Under this legislation, the Occupational Safety and Health Administration (also known as OSHA) is authorized to conduct onsite inspections to determine whether an employer is complying with the mandatory standards. Safety is no longer only a moral obligation but also a federal law. In about half of the states, this law is administered by individual state agencies rather than by the federal OSHA. These states still fall within delineated OSHA regions, but otherwise they bear all administrative, consultation, and enforcement responsibilities. The state regulations must be at least as stringent as the federal ones, and many states incorporate large sections of the federal regulations verbatim. OSHA standards that regulate safety in the laboratory include the Bloodborne Pathogen Standard, Formaldehyde Standard, Laboratory Standard, Hazard Communication Standard, Respiratory Protection Standard, Air Contaminants Standard, and Personal Protective Equipment Standard. Because laws, codes, and ordinances are updated frequently, current reference materials should be reviewed. Assistance can be obtained from local libraries, the Internet, and federal, state, and local regulatory agencies. The primary standards applicable to clinical laboratory safety are summarized next.

Bloodborne Pathogens [29 CFR 1910.1030]
This standard applies to all exposure to blood or other potentially infectious materials in any occupational setting. It defines terminology relevant to such exposures and mandates the development of an exposure control plan. This plan must cover specific preventative measures including exposure evaluation, engineering controls, work practice controls, and administrative oversight of the program. Universal precautions and personal protective equipment (PPE) are foremost among these infection control measures. The universal precautions concept is basically an approach to infection control in which all human blood, tissue, and most fluids are handled as if known to be infectious for the human immunodeficiency virus (HIV), hepatitis B virus (HBV), and other bloodborne pathogens. The standard also provides fairly detailed directions for decontamination and the safe handling of potentially infectious laboratory supplies and equipment, including practices for managing laundry and infectious wastes. Employee information and training are covered regarding recognition of hazards and risk of infection. There is also a requirement for HBV vaccination or

formal declination within 10 days of assuming duties that present exposure. In the event of an actual exposure, the standard outlines the procedure for postexposure medical evaluation, counseling, and recommended testing or postexposure prophylaxis.

Hazard Communication [29 CFR 1910.1200]
This subpart to OSHA's Toxic and Hazardous Substances regulations is intended to ensure that the hazards of all chemicals used in the workplace have been evaluated and that this hazard information is successfully transmitted to employers and their employees who use the substances. Informally referred to as the OSHA “HazCom Standard,” it defines hazardous substances and provides guidance for evaluating and communicating identified hazards. The primary means of communication are through proper labeling, the development and use of safety data sheets (SDSs), and employee education.

Occupational Exposure to Hazardous Chemicals in Laboratories [29 CFR 1910.1450]
This second subpart to OSHA's Toxic and Hazardous Substances regulations is also known as the “OSHA Lab Standard.” It was intended to address the shortcomings of the Hazard Communication Standard regarding its application peculiar to the handling of hazardous chemicals in laboratories, whose multiple small-scale manipulations differ from the industrial volumes and processes targeted by the original HazCom Standard. The Lab Standard requires the appointment of a chemical hygiene officer and the development of a chemical hygiene plan to reduce or eliminate occupational exposure to hazardous chemicals. This plan is required to describe the laboratory's methods of identifying and controlling physical and health hazards presented by chemical manipulations, containment, and storage. The chemical hygiene plan must detail engineering controls, PPE, safe work practices, and administrative controls, including provisions for medical surveillance and consultation, when necessary.

Other Regulations and Guidelines
There are other federal regulations relating to laboratory safety, such as the Clean Water Act, the Resource Conservation and Recovery Act (RCRA), and the Toxic Substances Control Act. In addition, clinical laboratories are required to comply with applicable local and state laws, such as fire and building codes. The Clinical and Laboratory Standards Institute (CLSI, formerly National Committee

for Clinical Laboratory Standards [NCCLS]) provides excellent general laboratory safety and infection control guidelines in their documents GP17-A3 (Clinical Laboratory Safety; Approved Guideline, Second Edition) and M29-A4 (Protection of Laboratory Workers from Occupationally Acquired Infections; Approved Guideline, Third Edition). Safety is also an important part of the requirements for initial and continued accreditation of health care institutions and laboratories by voluntary accrediting bodies such as The Joint Commission (TJC; formerly the Joint Commission on Accreditation of Health Care Organizations [JCAHO]) and the Commission on Laboratory Accreditation of the College of American Pathologists (CAP). TJC publishes a yearly accreditation manual for hospitals and the Accreditation Manual for Pathology and Clinical Laboratory Services, which includes a detailed section on safety requirements. CAP publishes an extensive inspection checklist (Laboratory General Checklist) as part of their Laboratory Accreditation Program, which includes a section dedicated to laboratory safety. Over the past decade, several new laws and directives have been emplaced regarding enhanced security measures for particular hazardous substances with potential for nefarious use in terrorist activities. These initiatives are typically promulgated by the Department of Homeland Security in cooperation with the respective agency regulating chemical, nuclear, or biological agents of concern. Although most laboratories do not store or use the large volumes of chemicals required to trigger chemical security requirements, many laboratories do surpass the thresholds for radiological and biological agents. Management and employees must be cognizant of security requirements for substances in quantities qualifying them for regulation under enhanced security measures for chemical (Chemical Facilities Anti-Terrorism Standards, 6 CFR 27), radiological (Nuclear Regulatory Commission [NRC] Security Orders and Increased Controls for licensees holding sources above Quantities of Concern), and biological (Select Agents and Toxins, 42 CFR 73) agents. Most security measures involve restriction of access to only approved or authorized individuals, assessment of security vulnerabilities, secure physical containment of the agents, and inventory monitoring and tracking.

SAFETY AWARENESS FOR CLINICAL LABORATORY PERSONNEL

Safety Responsibility
The employer and the employee share safety responsibility. While the individual employee has an obligation to follow safe work practices and be attentive to potential hazards, the employer has the ultimate responsibility for safety and delegates authority for safe operations to laboratory managers and supervisors. In order to ensure clarity and consistency, safety management in the laboratory should start with a written safety policy. Laboratory supervisors, who reflect the attitudes of management toward safety, are essential members of the safety program.

Employer's Responsibilities
Establish laboratory work methods and safety policies.
Provide supervision and guidance to employees.
Provide safety information, training, PPE, and medical surveillance to employees.
Provide and maintain equipment and laboratory facilities that are free of recognized hazards and adequate for the tasks required.

The employee also has a responsibility for his or her own safety and the safety of coworkers. Employee conduct in the laboratory is a vital factor in the achievement of a workplace without accidents or injuries.

Employee's Responsibilities
Know and comply with the established laboratory safe work practices.
Have a positive attitude toward supervisors, coworkers, facilities, and safety training.
Be alert and give prompt notification of unsafe conditions or practices to the immediate supervisor and ensure that unsafe conditions and practices are corrected.
Engage in the conduct of safe work practices and use of PPE.

Signage and Labeling
Appropriate signs to identify hazards are critical, not only to alert laboratory personnel to potential hazards but also to identify specific hazards that may arise because of emergencies such as fire or explosion. The National Fire Protection

Association (NFPA) developed a standard hazard identification system (diamond-shaped, color-coded symbol), which has been adopted by many clinical laboratories. At a glance, emergency personnel can assess health hazards (blue quadrant), flammable hazards (red quadrant), reactivity/stability hazards (yellow quadrant), and other special information (white quadrant). In addition, each quadrant shows the magnitude of severity, graded from a low of 0 to a high of 4, of the hazards within the posted area. (Note the NFPA hazard code symbol in Fig. 2.1.)

FIGURE 2.1 Sample chemical label: (1) statement of hazard, (2) hazard class, (3) safety precautions, (4) NFPA hazard code, (5) fire extinguisher type, (6) safety instructions, (7) formula weight, and (8) lot number. Color of the diamond in the NFPA label indicates hazard:
Red = flammable. Store in an area segregated for flammable reagents.
Blue = health hazard. Toxic if inhaled, ingested, or absorbed through the skin. Store in a secure area.
Yellow = reactive and oxidizing reagents. May react violently with air, water, or other substances. Store away from flammable and combustible materials.
White = corrosive. May harm skin, eyes, or mucous membranes. Store away from red-, blue-, and yellow-coded reagents.
Gray = presents no more than moderate hazard in any of the categories. For general chemical storage.
Exception = reagent incompatible with other reagents of same color bar. Store separately.
Hazard code (4): Following the NFPA use, each diamond shows a red segment (flammability), a blue segment (health; i.e., toxicity), and a yellow segment (reactivity). Printed over each color-coded segment is a black number showing the degree of hazard involved. The fourth segment, as stipulated by the NFPA, is left blank. It is reserved for special warnings, such as radioactivity. The numeric ratings indicate degree of hazard: 4 = extreme, 3 = severe, 2 = moderate, 1 = slight, and 0 = none according to present data. (Courtesy of Baxter International Inc.)
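To show how the 0-to-4 NFPA ratings described above might be captured in a chemical inventory record, the following Python sketch stores the three numeric quadrants and the special-hazard field and checks that the ratings stay in range. The record layout and the example substance are illustrative assumptions only; authoritative ratings for any reagent come from its SDS or an official NFPA listing.

```python
from dataclasses import dataclass

@dataclass
class NFPADiamond:
    """NFPA 704-style hazard ratings: 0 (none) through 4 (extreme)."""
    health: int        # blue quadrant
    flammability: int  # red quadrant
    reactivity: int    # yellow quadrant
    special: str = ""  # white quadrant, reserved for special warnings

    def __post_init__(self):
        for value in (self.health, self.flammability, self.reactivity):
            if not 0 <= value <= 4:
                raise ValueError("NFPA ratings must fall between 0 and 4")

# Hypothetical inventory entry, for illustration only; look up the actual
# ratings of a real reagent on its SDS before recording them.
example = NFPADiamond(health=3, flammability=3, reactivity=0)
print(example)
```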

Manufacturers of laboratory chemicals also provide precautionary labeling information for users. Information indicated on the product label includes statement of the hazard, precautionary measures, specific hazard class, first aid instructions for internal/external contact, the storage code, the safety code, and personal protective gear and equipment needed. This information is in addition to specifications on the actual lot analysis of the chemical constituents and other product notes (Fig. 2.1). Over the last two decades, there has been an effort to standardize hazard terminology and classification under an internationally recognized guideline, titled the Globally Harmonized System of Classification and Labeling of Hazardous Chemicals (GHS). This system incorporates universal definitions and symbols to clearly communicate specific hazards in a single concise label format (Fig. 2.2). OSHA has since aligned the existing Hazard Communication Standard with provisions of the GHS, with the final requirements phased in by June 2016 (see Hazard Communication under Chemical Safety, later in this chapter).

FIGURE 2.2 Example of a GHS inner container label (e.g., bottle inside a shipping box).

All in-house prepared reagents and solutions should be labeled in a standard manner and include the chemical identity, concentration, hazard warning, special handling, storage conditions, date prepared, expiration date (if applicable), and preparer's initials.

SAFETY EQUIPMENT

Safety equipment has been developed specifically for use in the clinical laboratory. The employer is required by law to have designated safety equipment available, but it is also the responsibility of the employee to comply with all safety rules and to use safety equipment. All laboratories are required to have safety showers, eyewash stations, and fire extinguishers and to periodically test and inspect the equipment for proper operation. It is recommended that safety showers deliver 30 to 50 gallons of water per minute at 20 to 50 pounds per square inch (psi) and be located in areas where corrosive liquids are stored or used. Eyewash stations must be accessible (i.e., within 100 ft or 10 s travel) in laboratory areas presenting chemical or biological exposure hazards. Other items that must be available for personnel include fire blankets, spill kits, and first aid supplies. Mechanical pipetting devices must be used for manipulating all types of liquids in the laboratory, including water. Mouth pipetting is strictly prohibited.

Chemical Fume Hoods and Biosafety Cabinets

Fume Hoods
Fume hoods are required to contain and expel noxious and hazardous fumes from chemical reagents. Fume hoods should be visually inspected for blockages. A piece of tissue paper placed at the hood opening will indicate airflow direction. The hood should never be operated with the sash fully opened, and a maximum operating sash height should be established and conspicuously marked. Containers and equipment positioned within hoods should not block airflow. Periodically, ventilation should be evaluated by measuring the face velocity with a calibrated velocity meter. The velocity at the face of the hood (with the sash in normal operating position) should be 100 to 120 ft per minute and fairly uniform across the entire opening. Smoke testing is also recommended to locate no flow or turbulent areas in the working space. As an added precaution, personal air monitoring should be conducted in accordance with the chemical hygiene plan of the facility.
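The face-velocity check described above is typically performed by averaging readings taken across a grid of points at the hood opening. The Python sketch below assumes a simple list of velocity-meter readings and flags averages outside the 100 to 120 ft/min range quoted in the text; the number of grid points and the uniformity criterion are illustrative assumptions, not a validated procedure.

```python
# Illustrative face-velocity check for a chemical fume hood.
# Readings (ft/min) are assumed to come from a calibrated velocity meter moved
# across a grid of points at the hood face, with the sash at its normal height.

readings_fpm = [104, 110, 118, 96, 112, 109]  # hypothetical grid readings

average = sum(readings_fpm) / len(readings_fpm)
spread = max(readings_fpm) - min(readings_fpm)

print(f"Average face velocity: {average:.0f} ft/min")
if not 100 <= average <= 120:
    print("Average is outside the 100-120 ft/min target; take the hood out of service.")
if spread > 20:  # assumed uniformity criterion, for illustration only
    print("Face velocity is not uniform across the opening; smoke-test for turbulence.")
```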

Biosafety Cabinets
Biological safety cabinets (BSCs) remove particles that may be harmful to the employee who is working with potentially infectious biologic specimens. The Centers for Disease Control and Prevention (CDC) and the National Institutes of Health have described four levels of biosafety, which consist of combinations of laboratory practices and techniques, safety equipment, and laboratory facilities. The biosafety level of a laboratory is based on the operations performed, the routes of transmission of the infectious agents, and the laboratory function or activity. Accordingly, biosafety cabinets are designed to offer various levels of protection, depending on the biosafety level of the specific laboratory (Table 2.1). BSCs should be periodically recertified to ensure continued optimal performance, as filter occlusion or rupture can compromise their effectiveness.

TABLE 2.1 Comparison of Biosafety Cabinet Characteristics

BSC, biological safety cabinet; HEPA, high-efficiency particulate air; lfm, linear feet per minute. Adapted from Centers for Disease Control and Prevention, National Institutes of Health. Biosafety in Microbiological and Biomedical Laboratories. 5th ed. Washington, DC: U.S. Government Printing Office; 2009.

Chemical Storage Equipment
Safety equipment is available for the storage and handling of hazardous chemicals and compressed gases. Safety carriers should always be used to transport glass bottles of acids, alkalis, or organic solvents in volumes larger than 500 mL, and approved safety cans should be used for storing, dispensing, or disposing of flammables in volumes greater than 1 quart. Steel safety cabinets with self-closing doors are required for the storage of flammable liquids, and only specially designed, explosion-proof refrigerators may be used to store flammable materials. Only the amount of chemical needed for that day should be available at the bench. Gas cylinder supports or clamps must be used at all times, and larger cylinders should be transported with valve caps on, using handcarts.

PPE and Hygiene
The parts of the body most frequently subject to injury in the clinical laboratory are the eyes, skin, and respiratory and digestive tracts. Hence, the use of PPE and proper hygiene is very important. Safety glasses, goggles, visors, or work shields protect the eyes and face from splashes and impact. Contact lenses do not offer

eye protection; it is strongly recommended that they not be worn in the clinical chemistry laboratory, unless additional protective eyewear is also utilized. If any solution is accidentally splashed into the eye(s), thorough irrigation is required. Gloves and rubberized sleeves protect the hands and arms when using caustic chemicals. Gloves are required for routine laboratory use; however, polyvinyl or other nonlatex gloves are an acceptable alternative for people with latex allergies. Certain glove materials offer better protection against particular reagent formulations. Nitrile gloves, for example, offer a wider range of compatibility with organic solvents than do latex gloves. Laboratory coats, preferably with knit-cuffed sleeves, should be full length and buttoned and made of liquid-resistant material. When performing manipulations prone to splash hazards, the laboratory coat should be supplemented with an impermeable apron and/or sleeve garters, constructed of suitable material to guard against the substances. Proper footwear is required; shoes constructed of porous materials, open-toed shoes, and sandals are considered ineffective against spilled hazardous liquids.

Respirators may be required for various procedures in the clinical laboratory. Whether used for biologic or chemical hazards, the correct type of respirator must be used for the specific hazard. Respirators with high-efficiency particulate air (HEPA) filters must be worn when engineering controls are not feasible, such as when working directly with patients with tuberculosis (TB) or when performing procedures that may aerosolize specimens of patients with a suspected or confirmed case of TB. Training, maintenance, and written protocol for use of respirators are required according to the respiratory protection standard.

Each employer must provide (at no charge) laboratory coats, gloves, or other protective equipment to all employees who may be exposed to biologic or chemical hazards. It is the employer's responsibility to clean and maintain any PPE used by more than one person. All contaminated PPE must be removed and properly cleaned or disposed of before leaving the laboratory.

Hand washing is a crucial component of both infection control and chemical hygiene. After removing gloves, hands should be washed thoroughly with soap and warm water, even if glove breakthrough or contamination is not suspected. The use of antimicrobial soap is not as important as the physical action of washing the hands with water and any mild soap. After any work with highly toxic or carcinogenic chemicals, the face should also be washed.

BIOLOGIC SAFETY

General Considerations
All blood samples and other body fluids should be collected, transported, handled, and processed using universal precautions (i.e., presumed to be infectious). Gloves, gowns, and face protection must be used during manipulations or transfers when splashing or splattering is most likely to occur. Consistent and thorough hand washing is an essential component of infection control. Antiseptic gels and foams may be used at waterless stations between washes, but they should not take the place of an actual hand wash. Centrifugation of biologic specimens produces finely dispersed aerosols that are a high-risk source of infection. Ideally, specimens should remain capped during centrifugation, or several minutes should be allowed to elapse after centrifugation is complete before opening the lid. As a preferred option, the use of a sealed-cup centrifuge is recommended. These sealed vessels can then be brought to a biosafety cabinet to be opened.

Spills
Any blood, body fluid, or other potentially infectious material spill must be promptly cleaned up, and the area or equipment must be disinfected immediately. Safe cleanup includes the following recommendations:
Alert others in the area of the spill.
Wear appropriate protective equipment.
Use mechanical devices to pick up broken glass or other sharp objects.
Absorb the spill with paper towels, gauze pads, or tissue.
Clean the spill site using a common aqueous detergent.
Disinfect the spill site using an approved disinfectant or 10% bleach, using the appropriate contact time (see the dilution sketch below).
Rinse the spill site with water.
Dispose of all materials in appropriate biohazard containers.
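The 10% bleach noted in the cleanup steps is commonly prepared as a 1:10 dilution of household bleach. The short Python sketch below illustrates only the arithmetic; the batch volume is an arbitrary example, and local policy on contact time and how often fresh dilutions must be prepared should always govern.

```python
# Illustrative 1:10 (10%) bleach dilution calculation for surface disinfection.
# The total volume is an arbitrary example; follow local policy for contact time
# and for how frequently fresh working solutions must be prepared.

def bleach_dilution(total_ml: float, dilution_factor: int = 10) -> tuple[float, float]:
    """Return (mL of stock bleach, mL of water) for a 1:dilution_factor mixture."""
    bleach_ml = total_ml / dilution_factor
    water_ml = total_ml - bleach_ml
    return bleach_ml, water_ml

bleach, water = bleach_dilution(1000)  # prepare 1 L of working solution
print(f"Mix {bleach:.0f} mL bleach with {water:.0f} mL water.")
```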

Bloodborne Pathogens
In December 1991, OSHA issued the final rule for occupational exposure to bloodborne pathogens. To minimize employee exposure, each employer must have a written exposure control plan. The plan must be available to all

employees whose duties may result in reasonably anticipated occupational exposure to blood or other potentially infectious materials. The exposure control plan must be discussed with all employees and be available to them while they are working. The employee must be provided with adequate training in all techniques described in the exposure control plan at initial work assignment and annually thereafter. All necessary safety equipment and supplies must be readily available and inspected on a regular basis. Clinical laboratory personnel are knowingly or unknowingly in frequent contact with potentially biohazardous materials. In recent years, new and serious occupational hazards to personnel have arisen, and this problem has been complicated because of the general lack of understanding of the epidemiology, mechanisms of transmission of the disease, or inactivation of the causative agent. Special precautions must be taken when handling all specimens because of the continual increase in the proportion of infectious samples received in the laboratory. Therefore, in practice, specimens from patients with confirmed or suspected hepatitis, acquired immunodeficiency syndrome (AIDS), or other potentially infectious diseases should be handled no differently than other routine specimens. Adopting a universal precautions policy, which considers blood and other body fluids from all patients as potentially infective, is required.

Airborne Pathogens
Because of a global resurgence of TB, OSHA issued a statement in 1993 that the agency would enforce CDC Guidelines for Preventing the Transmission of Tuberculosis in Health Care Facilities. The purpose of the guidelines is to encourage early detection, isolation, and treatment of active cases. A TB exposure control program must be established, and risks to laboratory workers must be assessed. In 1997, a proposed standard (29 CFR 1910.1035, Tuberculosis) was issued by OSHA, only to be withdrawn when it was determined that existing CDC guidelines could be enforced by OSHA through its “general duty” clause and Respiratory Protection Standard. The CDC guidelines require the development of a tuberculosis infection control program by any facility involved in the diagnosis or treatment of cases of confirmed infectious TB. TB isolation areas with specific ventilation controls must be established in health care facilities. Those workers in high-risk areas may be required to wear a respirator for protection. All health care workers considered to be at risk must be screened for TB infection. Other specific pathogens, including viruses, bacteria, and fungi, may be

considered airborne transmission risks. Protective measures in the clinical laboratory generally involve work practice and engineering controls focused on prevention of aerosolization, containment/isolation, and respiratory protection of N-95 (filtration of 95% of particles >0.3 μm) or better.

Shipping
Clinical laboratories routinely ship regulated material. The U.S. Department of Transportation (DOT) and the International Air Transport Association (IATA) have specific requirements for carrying regulated materials. There are two types of specimen classifications. Known or suspect infectious specimens are labeled infectious substances if the pathogen can be readily transmitted to humans or animals. Diagnostic specimens are those tested as routine screening or for initial diagnosis. Each type of specimen has rules and packaging requirements. The DOT guidelines are found in the Code of Federal Regulations, Title 49, Subchapter C; IATA publishes its own manual, Dangerous Goods Regulations.

CHEMICAL SAFETY

Hazard Communication
In the August 1987 issue of the Federal Register, OSHA published the new Hazard Communication Standard (Right to Know Law, 29 CFR 1910.1200). The Right to Know Law was developed for employees who may be exposed to hazardous chemicals in the workplace. Employees must be informed of the health risks associated with those chemicals. The intent of the law is to ensure that health hazards are evaluated for all chemicals that are produced and that this information is relayed to employees. To comply with the regulation, clinical laboratories must:
Plan and implement a written hazard communication program.
Obtain SDSs for each hazardous compound present in the workplace and have the SDSs readily accessible to employees.
Educate all employees annually on how to interpret chemical labels, SDSs, and health hazards of the chemicals and how to work safely with the chemicals.
Maintain hazard warning labels on containers received or filled on-site.

In 2012, OSHA adopted significant changes to the Hazard Communication

Standard to facilitate standardization of international hazard communication programs. This new initiative was titled the Globally Harmonized System of Classification and Labelling of Chemicals, or GHS. The primary improvements to the program involved more specific criteria for the classification of chemicals; a uniform system of chemical labeling, including intuitive pictographs; and replacement of the existing Material Safety Data Sheet (MSDS) program with the new SDS format. These changes were phased in over a 3-year period, with the final requirements effective in June 2016.

Safety Data Sheet
The SDS is a major source of safety information for employees who may use hazardous materials in their occupations. Employers are responsible for obtaining the SDS from the chemical manufacturer or developing an SDS for each hazardous agent used in the workplace. The information contained in the SDS must follow a specific format, addressing each of the following 16 items:
Section 1: Identification
Section 2: Hazard identification
Section 3: Ingredients information
Section 4: First aid procedures
Section 5: Fire-fighting procedures
Section 6: Accidental-release measures
Section 7: Handling and storage
Section 8: Exposure controls and personal protection
Section 9: Physical and chemical properties
Section 10: Stability and reactivity
Section 11: Toxicological information
Section 12: Ecological information
Section 13: Disposal considerations
Section 14: Transport information
Section 15: Regulatory information
Section 16: Other information, including date of preparation or last revision

The SDS must provide the specific compound identity, together with all common names. All information sections must be completed, and the date that the SDS was printed must be indicated. Copies of the SDS must be readily accessible to employees during all shifts.

OSHA Laboratory Standard
Occupational Exposure to Hazardous Chemicals in Laboratories (29 CFR 1910.1450), also known as the laboratory standard, was enacted in May 1990 to provide laboratories with specific guidelines for handling hazardous chemicals. This OSHA standard requires each laboratory that uses hazardous chemicals to have a written chemical hygiene plan. This plan provides procedures and work practices for regulating and reducing exposure of laboratory personnel to hazardous chemicals. Hazardous chemicals are those that pose a physical or health hazard from acute or chronic exposure. Procedures describing how to protect employees against teratogens (substances that affect cellular development in a fetus or embryo), carcinogens, and other toxic chemicals must be described in the plan. Training in the use of hazardous chemicals must be provided to all employees and must include recognition of signs and symptoms of exposure, location of SDS, the chemical hygiene plan, and how to protect themselves against hazardous chemicals. A chemical hygiene officer must be designated for any laboratory using hazardous chemicals. The protocol must be reviewed annually and updated when regulations are modified or chemical inventory changes. Remember that practicing consistent and thorough hand washing is an essential component of preventative chemical hygiene.

Toxic Effects from Hazardous Substances
Toxic substances have the potential of producing deleterious effects (local or systemic) by direct chemical action or interference with the function of body systems. They can cause acute or chronic effects related to the duration of exposure (i.e., short-term, or single contact, versus long-term, or prolonged, repeated contact). Almost any substance, even the most benign seeming, can pose risk of damage to a worker's lungs, skin, eyes, or mucous membranes following long- or short-term exposure and can be toxic in excess. Moreover, some chemicals are toxic at very low concentrations. Exposure to toxic agents can be through direct contact (absorption), inhalation, ingestion, or inoculation/injection. In the clinical chemistry laboratory, personnel should be particularly aware of toxic vapors from chemical solvents, such as acetone, chloroform, methanol, or carbon tetrachloride, that do not give explicit sensory irritation warnings, as do bromide, ammonia, and formaldehyde. Air sampling or routine monitoring may be necessary to quantify dangerous levels. Mercury is another frequently

disregarded source of poisonous vapors. It is highly volatile and toxic and is rapidly absorbed through the skin and respiratory tract. Mercury spill kits should be available in areas where mercury thermometers are used. Most laboratories are phasing out the use of mercury and mercury-containing compounds. Laboratories should have a policy on mercury reduction or elimination and a method for legally disposing of mercury. Several compounds, including formaldehyde and methylene chloride, have substance-specific OSHA standards, which require periodic monitoring of air concentrations. Laboratory engineering controls, PPE, and procedural controls must be adequate to protect employees from these substances.

Storage and Handling of Chemicals
To avoid accidents when handling chemicals, it is important to develop respect for all chemicals and to have a complete knowledge of their properties. This is particularly important when transporting, dispensing, or using chemicals that, when in contact with certain other chemicals, could result in the formation of substances that are toxic, flammable, or explosive. For example, acetic acid is incompatible with other acids such as chromic and nitric acid, carbon tetrachloride is incompatible with sodium, and flammable liquids are incompatible with hydrogen peroxide and nitric acid. Arrangements for the storage of chemicals will depend on the quantities of chemicals needed and the nature or type of chemicals. Proper storage is essential to prevent and control laboratory fires and accidents. Ideally, the storeroom should be organized so that each class of chemicals is isolated in an area that is not used for routine work. An up-to-date inventory should be kept that indicates location of chemicals, minimum/maximum quantities required, and shelf life. Some chemicals deteriorate over time and become hazardous (e.g., many ethers and tetrahydrofuran form explosive peroxides). Storage should not be based solely on alphabetical order because incompatible chemicals may be stored next to each other and react chemically. They must be separated for storage, as shown in Table 2.2.

TABLE 2.2 Storage Requirements

Flammable/Combustible Chemicals
Flammable and combustible liquids, which are used in numerous routine procedures, are among the most hazardous materials in the clinical chemistry laboratory because of possible fire or explosion. They are classified according to flash point, which is the temperature at which sufficient vapor is given off to form an ignitable mixture with air. A flammable liquid has a flash point below 37.8°C (100°F), and combustible liquids, by definition, have a flash point at or above 37.8°C (100°F). Some commonly used flammable and combustible solvents are acetone, benzene, ethanol, heptane, isopropanol, methanol, toluene, and xylene. It is important to remember that flammable or combustible chemicals also include certain gases, such as hydrogen, and solids, such as paraffin.
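Because the flammable/combustible distinction above turns on a single cutoff (a flash point of 37.8°C, or 100°F), it lends itself to a simple check, for example when screening a chemical inventory. The Python sketch below and its example flash points are illustrative only; authoritative values should always be taken from the SDS.

```python
# Illustrative classification of liquids by flash point, per the cutoff in the text:
# flammable below 37.8°C (100°F), combustible at or above 37.8°C.
# Example flash points are approximate and for illustration only; use SDS values.

FLASH_POINT_CUTOFF_C = 37.8

def classify_liquid(flash_point_c: float) -> str:
    return "flammable" if flash_point_c < FLASH_POINT_CUTOFF_C else "combustible"

examples_c = {
    "acetone": -20.0,                 # approximate
    "ethanol": 13.0,                  # approximate
    "xylene": 27.0,                   # approximate
    "hypothetical solvent X": 60.0,   # made-up value to show the other branch
}
for solvent, fp in examples_c.items():
    print(f"{solvent}: flash point ~{fp}°C -> {classify_liquid(fp)}")
```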

Corrosive Chemicals
Corrosive chemicals are injurious to the skin or eyes by direct contact or to the tissue of the respiratory and gastrointestinal tracts if inhaled or ingested. Typical examples include acids (acetic, sulfuric, nitric, and hydrochloric) and bases (ammonium hydroxide, potassium hydroxide, and sodium hydroxide). External exposures to concentrated corrosives can cause severe burns and require immediate flushing with copious amounts of clean water.

Reactive Chemicals
Reactive chemicals are substances that, under certain conditions, can spontaneously explode or ignite or that evolve heat or flammable or explosive gases. Some strong acids or bases react with water to generate heat (exothermic reactions). Hydrogen is liberated if alkali metals (sodium or potassium) are mixed with water or acids, and spontaneous combustion also may occur. The mixture of oxidizing agents, such as peroxides, and reducing agents, such as hydrogen, generates heat and may be explosive.

Carcinogenic Chemicals
Carcinogens are substances that have been determined to be cancer-causing agents. OSHA has issued lists of confirmed and suspected carcinogens and detailed standards for the handling of these substances. Benzidine is a common example of a known carcinogen. If possible, a substitute chemical or different procedure should be used to avoid exposure to carcinogenic agents. For regulatory (OSHA) and institutional safety requirements, the laboratory must maintain an accurate inventory of carcinogens.

Chemical Spills
Strict attention to good laboratory technique can help prevent chemical spills. However, emergency procedures should be established to handle any accidents. If a spill occurs, the first step should be to assist/evacuate personnel, and then confinement and cleanup of the spill can begin. There are several commercial spill kits available for neutralizing and absorbing spilled chemical solutions (Fig. 2.3). However, no single kit is suitable for all types of spills. Emergency procedures for spills should also include a reporting system.

FIGURE 2.3 Spill cleanup kit.

RADIATION SAFETY

Environmental Protection
A radiation safety policy should include environmental and personnel protection. All areas where radioactive materials are used or stored must be posted with caution signs, and traffic in these areas should be restricted to essential personnel only. Regular and systematic monitoring must be emphasized, and decontamination of laboratory equipment, glassware, and work areas should be scheduled as part of routine procedures. Records must be maintained as to the quantity of radioactive material on hand as well as the quantity that is disposed. An NRC or agreement state license is required if the amount of radioactive material exceeds a certain level. The laboratory safety officer must consult with the institutional safety officer about these requirements.

Personal Protection
It is essential that only properly trained personnel work with radioisotopes. Good work practices must consistently be employed to ensure that contamination and inadvertent internalization are avoided. Users should be monitored to ensure that the maximal permissible dose of radiation is not exceeded. Radiation monitors must be evaluated regularly to detect degree of exposure for the laboratory employee. Records must be maintained for the length of employment plus 30 years.

Nonionizing Radiation
Nonionizing forms of radiation are also a concern in the clinical laboratory. Equipment often emits a variety of wavelengths of electromagnetic radiation that must be protected against through engineered shielding or use of PPE (Table 2.3). These energies have varying biologic effects, depending on wavelength, power intensity, and duration of exposure. Laboratorians must be knowledgeable regarding the hazards presented by their equipment to protect themselves and ancillary personnel.

TABLE 2.3 Examples of Nonionizing Radiation in Clinical Laboratories

FIRE SAFETY

The Chemistry of Fire
Fire is basically a chemical reaction that involves the rapid oxidation of a combustible material or fuel, with the subsequent liberation of heat and light. In the clinical chemistry laboratory, all the elements essential for fire to begin are

present—fuel, heat or ignition source, and oxygen (air). However, recent research suggests that a fourth factor is present. This factor has been classified as a reaction chain in which burning continues and even accelerates. It is caused by the breakdown and recombination of the molecules from the material burning with the oxygen in the atmosphere. The fire triangle has been modified into a three-dimensional pyramid known as the fire tetrahedron (Fig. 2.4). This modification does not contradict established procedures in dealing with a fire but does provide additional means by which fires may be prevented or extinguished. A fire will extinguish if any of the three basic elements (heat, air, or fuel) are removed.

FIGURE 2.4 Fire tetrahedron.

Classification of Fires
Fires have been divided into four classes based on the nature of the combustible material and requirements for extinguishment:
Class A: ordinary combustible solid materials, such as paper, wood, plastic, and fabric
Class B: flammable liquids/gases and combustible petroleum products
Class C: energized electrical equipment
Class D: combustible/reactive metals, such as magnesium, sodium, and potassium

Types and Applications of Fire Extinguishers
Just as fires have been divided into classes, fire extinguishers are divided into classes that correspond to the type of fire to be extinguished. Be certain to choose the right type—using the wrong type of extinguisher may be dangerous. For example, do not use water on burning liquids or electrical equipment. Pressurized water extinguishers, as well as foam and multipurpose dry chemical types, are used for Class A fires. Multipurpose dry-chemical and carbon dioxide extinguishers are used for Class B and C fires. Halogenated hydrocarbon extinguishers are particularly recommended for use with computer equipment. Class D fires present special problems, and extinguishment is left to trained firefighters using special dry chemical extinguishers (Fig. 2.5). Generally, all that can be done for a Class D fire in the laboratory is to try to isolate the burning metal from combustible surfaces with sand or a ceramic barrier material. Personnel should know the location and type of portable fire extinguisher near their work area and know how to use an extinguisher before a fire occurs. In the event of a fire, first evacuate all personnel, patients, and visitors who are in immediate danger and then activate the fire alarm, report the fire, and, if possible, attempt to extinguish the fire. Personnel should work as a team to carry out emergency procedures. Fire drills must be conducted regularly and with appropriate documentation. Fire extinguishers must be inspected monthly to ensure that they are mounted, visible, accessible, and charged.

FIGURE 2.5 Proper use of fire extinguishers. (Adapted from the Clinical and Laboratory Safety Department, The University of Texas Health Science Center at Houston.)

CONTROL OF OTHER HAZARDS
Electrical Hazards
Most individuals are aware of the potential hazards associated with the use of electrical appliances and equipment. Direct hazards of electrical energy can result in death, shock, or burns. Indirect hazards can result in fire or explosion. Therefore, there are many precautionary procedures to follow when operating or working around electrical equipment:
Use only explosion-rated (intrinsically wired) equipment in hazardous atmospheres.
Be particularly careful when operating high-voltage equipment, such as electrophoresis apparatus.
Use only properly grounded equipment (three-prong plug).
Check for frayed electrical cords.
Promptly report any malfunctions or equipment producing a "tingle" for repair.
Do not work on "live" electrical equipment.
Never operate electrical equipment with wet hands.
Know the exact location of the electrical control panel for the electricity to your work area.
Use only approved extension cords in temporary applications and do not overload circuits. (Some local regulations prohibit the use of any extension cord.)
Have ground, polarity, and leakage checks and other periodic preventive maintenance performed on outlets and equipment.

Compressed Gas Hazards
Compressed gases, which serve a number of functions in the laboratory, present a unique combination of hazards in the clinical laboratory: danger of fire, explosion, asphyxiation, or mechanical injuries. There are several general requirements for safely handling compressed gases:
Know the gas that you will use.
Store tanks in a vertical position.
Keep cylinders secured at all times.
Never store flammable liquids and compressed gases in the same area.
Use the proper regulator, tubing, and fittings for the type of gas in use.
Do not attempt to control or shut off gas flow with the pressure relief regulator.
Keep removable protection caps in place until the cylinder is in use.
Make certain that acetylene tanks are properly piped (the gas is incompatible with copper tubing).
Do not force a "frozen" or stuck cylinder valve.
Use a hand truck to transport large cylinders.
Always check cylinders on receipt and then periodically for any problems such as leaks.
Make certain that the cylinder is properly labeled to identify the contents. Empty tanks should be marked "empty."

Cryogenic Materials Hazards Liquid nitrogen is probably one of the most widely used cryogenic fluids (liquefied gases) in the laboratory. There are, however, several hazards associated with the use of any cryogenic material: fire or explosion, asphyxiation, pressure buildup, embrittlement of materials, and tissue damage similar to that of thermal burns. Only containers constructed of materials designed to withstand ultralow temperatures should be used for cryogenic work. In addition to the use of eye/face protection, hand protection to guard against the hazards of touching supercooled surfaces is recommended. The gloves, of impermeable material, should fit loosely so that they can be taken off quickly if liquid spills on or into them. Also, to minimize violent boiling/frothing and splashing, specimens to be frozen should always be inserted into the coolant very slowly. Cryogenic fluids should be stored in well-insulated but loosely stoppered containers that minimize loss of fluid resulting from evaporation by boil-off and that prevent plugging and pressure buildup.

Mechanical Hazards In addition to physical hazards such as fire and electric shock, laboratory personnel should be aware of the mechanical hazards of equipment such as centrifuges, autoclaves, and homogenizers. Centrifuges, for example, must be balanced to distribute the load equally. The operator should never open the lid until the rotor has come to a complete stop. Safety interlocks on equipment should never be rendered inoperable. Laboratory glassware itself is another potential hazard. Agents, such as glass beads or boiling chips, should be added to help eliminate bumping/boilover when liquids are heated. Tongs or insulated gloves should be used to remove hot glassware from ovens, hot plates, or water baths. Glass pipettes should be handled with extra care, as should sharp instruments such as cork borers, needles, scalpel blades, and other tools. A glassware inspection program should be in place to detect signs of wear or fatigue that could contribute to breakage or injury. All infectious sharps must be disposed in OSHA-approved containers to reduce the risk of injury and infection.

Ergonomic Hazards
Although increased mechanization and automation have made many tedious and repetitive manual tasks obsolete, laboratory processes often require repeated manipulation of instruments, containers, and equipment. These physical actions can, over time, contribute to repetitive strain disorders such as tenosynovitis, bursitis, and ganglion cysts. The primary contributing factors associated with repetitive strain disorders are position/posture, applied force, and frequency of repetition. Remember to consider the design of hand tools (e.g., ergonomic pipettes), adherence to ergonomically correct technique, and equipment positioning when engaging in any repetitive task. Chronic symptoms of pain, numbness, or tingling in extremities may indicate the onset of repetitive strain disorders. Other hazards include acute musculoskeletal injury. Remember to lift heavy objects properly, keeping the load close to the body and using the muscles of the legs rather than the back. Gradually increase force when pushing or pulling, and avoid pounding actions with the extremities.

DISPOSAL OF HAZARDOUS MATERIALS
The safe handling and disposal of chemicals and other materials require a thorough knowledge of their properties and hazards. Generators of hazardous wastes have a moral and legal responsibility, as defined in applicable local, state, and federal regulations, to protect both the individual and the environment when disposing of waste. There are four basic waste disposal techniques: flushing down the drain to the sewer system, incineration, landfill burial, and recycling.

Chemical Waste
In some cases, it is permissible to flush water-soluble substances down the drain with copious quantities of water. However, strong acids or bases should be neutralized before disposal. The laboratory must adhere to institutional, local, and state regulations regarding the disposal of strong acids and bases. Foul-smelling chemicals should never be disposed of down the drain. Possible reaction of chemicals in the drain and potential toxicity must be considered when deciding if a particular chemical can be dissolved or diluted and then flushed down the drain. For example, sodium azide, which is used as a preservative, forms explosive salts with metals, such as the copper in pipes. Many institutions ban the use of sodium azide due to this hazard. In all cases, check with the local water reclamation district or publicly owned treatment works for specific limitations before utilizing sewer disposal. Other liquid wastes, including flammable solvents, must be collected in approved containers and segregated into compatible classes. If practical, solvents such as xylene and acetone may be filtered or redistilled for reuse. If recycling is not feasible, disposal arrangements should be made by specifically trained personnel. Flammable material can also be burned in specially designed incinerators with afterburners and scrubbers to remove toxic products of combustion. Also, before disposal, hazardous substances that are explosive (e.g., peroxides) and carcinogens should be transformed to less hazardous forms whenever feasible. Solid chemical wastes that are unsuitable for incineration may be amenable to other treatments or buried in an approved, permitted landfill. Note that certain chemical wastes are subject to strict "cradle to grave" tracking under the RCRA, and severe penalties are associated with improper storage, transportation, and disposal.

Radioactive Waste The manner of use and disposal of isotopes is strictly regulated by the NRC or NRC agreement states and depends on the type of waste (soluble or nonsoluble), its level of radioactivity, and the radiotoxicity and half-life of the isotopes involved. The radiation safety officer should always be consulted about policies dealing with radioactive waste disposal. Many clinical laboratories transfer radioactive materials to a licensed receiver for disposal.

Biohazardous Waste
On November 2, 1988, President Reagan signed into law The Medical Waste Tracking Act of 1988. Its purpose was to (1) charge the Environmental Protection Agency with the responsibility to establish a program to track medical waste from generation to disposal, (2) define medical waste, (3) establish acceptable techniques for treatment and disposal, and (4) establish a department with jurisdiction to enforce the new laws. Several states have implemented the federal guidelines and incorporated additional requirements. Some entities covered by the rules are any health care–related facility including, but not limited to, ambulatory surgical centers; blood banks and blood drawing centers; clinics, including medical, dental, and veterinary; clinical, diagnostic, pathologic, or biomedical research laboratories; emergency medical services; hospitals; long-term care facilities; minor emergency centers; occupational health clinics and clinical laboratories; and professional offices of physicians and dentists.
Medical waste is defined as special waste from health care facilities and is further defined as solid waste that, if improperly treated or handled, "may transmit infectious diseases." (For additional information, see the TJC Web site: http://www.jointcommission.org/). It comprises animal waste, bulk blood and blood products, microbiologic waste, pathologic waste, and sharps. The approved methods for treatment and disposition of medical waste are incineration, steam sterilization, burial, thermal inactivation, chemical disinfection, or encapsulation in a solid matrix. Generators of medical waste must implement the following procedures:
Employers of health care workers must establish and implement an infectious waste program.
All biomedical waste should be placed in a bag marked with the biohazard symbol and then placed into a leakproof container that is puncture resistant and equipped with a solid, tight-fitting lid. All containers must be clearly marked with the word biohazard or its symbol.
All sharp instruments, such as needles, blades, and glass objects, should be placed into special puncture-resistant containers before placing them inside the bag and container. Needles should not be transported, recapped, bent, or broken by hand.
All biomedical waste must then be disposed of according to one of the recommended procedures. Highly pathogenic waste should undergo preliminary treatment on-site.
Potentially biohazardous material, such as blood or blood products and contaminated laboratory waste, cannot be directly discarded. Contaminated combustible waste can be incinerated. Contaminated noncombustible waste, such as glassware, should be autoclaved before being discarded. Special attention should be given to the discarding of syringes, needles, and broken glass that could also inflict accidental cuts or punctures. Appropriate containers should be used for discarding these sharp objects.

ACCIDENT DOCUMENTATION AND INVESTIGATION

Any accidents involving personal injuries, even minor ones, should be reported immediately to a supervisor. Manifestation of occupational illnesses and exposures to hazardous substances should also be reported. Serious injuries and illnesses, including those resulting in hospitalization, disability, or death, must be reported to OSHA or the state-administered program within 8 hours. Under

OSHA regulations, employers are required to maintain records of occupational injuries and illnesses for the length of employment plus 30 years. The recordkeeping requirements include a first report of injury, an accident investigation report, and an annual summary that is recorded on an OSHA injury and illness log (Form 300). The first report of injury is used to notify the insurance company and the human resources or safety department that a workplace injury has occurred. The employee and the supervisor usually complete the report, which contains information on the employer and injured person, as well as the time and place, cause, and nature of the injury. The report is signed and dated; then, it is forwarded to the institution's risk manager or insurance representative. The investigation report should include information on the injured person, a description of what happened, the cause of the accident (environmental or personal), other contributing factors, witnesses, the nature of the injury, and actions to be taken to prevent a recurrence. This report should be signed and dated by the person who conducted the investigation. Annually, a log and summary of occupational injuries and illnesses should be completed and forwarded to the U.S. Department of Labor, Bureau of Labor Statistics' OSHA injury and illness log (Form 300). The standardized form requests depersonalized information similar to the first report of injury and the accident investigation report. Information about every occupational death, nonfatal occupational illness, biologic or chemical exposure, and nonfatal occupational injury that involved loss of consciousness, restriction of work or motion, transfer to another job, or medical treatment (other than first aid) must be reported. Because it is important to determine why and how an accident occurred, an accident investigation should be conducted. Most accidents can be traced to one of two underlying causes: environmental (unsafe conditions) or personal (unsafe acts). Environmental factors include inadequate safeguards, use of improper or defective equipment, hazards associated with the location, or poor housekeeping. Personal factors include improper laboratory attire, lack of skills or knowledge, specific physical or mental conditions, and attitude. The employee's positive motivation is important in all aspects of safety promotion and accident prevention. It is particularly important that the appropriate authority be notified immediately if any individual sustains a contaminated needle puncture during blood collection or a cut during subsequent specimen processing or handling.

For a summary of recommendations for the protection of laboratory workers, refer to Protection of Laboratory Workers from Occupationally Acquired Infections; Approved Guideline, Third Edition, M29-A3 (CLSI).
For additional student resources, please visit thePoint at http://thepoint.lww.com

questions
1. Which of the following standards requires that SDSs are accessible to all employees who come in contact with a hazardous compound?
a. Hazard Communication Standard
b. Bloodborne Pathogen Standard
c. CDC Regulations
d. Personal Protection Equipment Standard
2. Chemicals should be stored
a. According to their chemical properties and classification
b. Alphabetically, for easy accessibility
c. Inside a safety cabinet with proper ventilation
d. Inside a fume hood, if toxic vapors can be released when opened
3. Proper PPE in the chemistry laboratory for routine testing includes
a. Impermeable lab coat with eye/face protection and appropriate disposable gloves
b. Respirators with HEPA filter
c. Gloves with rubberized sleeves
d. Safety glasses for individuals not wearing contact lenses
4. A fire caused by a flammable liquid should be extinguished using which type of extinguisher?
a. Class B
b. Halogen
c. Pressurized water
d. Class C
5. Which of the following is the proper means of disposal for the type of waste?
a. Microbiologic waste by steam sterilization
b. Xylene into the sewer system
c. Mercury by burial
d. Radioactive waste by incineration
6. What are the major contributing factors to repetitive strain injuries?
a. Position/posture, applied force, and frequency of repetition
b. Inattention on the part of the laboratorian
c. Temperature and vibration
d. Fatigue, clumsiness, and lack of coordination
7. Which of the following are examples of nonionizing radiation?
a. Ultraviolet light and microwaves
b. Gamma rays and x-rays
c. Alpha and beta radiation
d. Neutron radiation
8. One liter of 4 N sodium hydroxide (strong base) in a glass 1 L beaker accidentally fell and spilled on the laboratory floor. The first step is to:
a. Call 911
b. Alert and evacuate those in the immediate area out of harm's way
c. Throw some kitty litter on the spill
d. Squirt water on the spill to dilute the chemical
e. Neutralize with absorbing materials in a nearby spill kit
9. Of the following, which is NOT reportable to the Department of Labor?
a. A laboratorian with a persistent cough that is only triggered at work
b. A laboratorian that experienced a chemical burn
c. A laboratorian that tripped in the lab and hit her head on the lab bench rendering her unconscious
d. A laboratorian that was stuck by a contaminated needle after performing phlebotomy on a patient
e. A laboratorian that forgot to wear his lab coat and gloves while diluting patient serum

bibliography and suggested reading

1. Allocca JA, Levenson HE. Electrical and Electronic Safety. Reston, VA: Reston Publishing Company; 1985.
2. American Chemical Society, Committee on Chemical Safety. Smith GW, ed. Safety in Academic Chemistry Laboratories. Washington, DC: American Chemical Society; 1985.
3. Boyle MP. Hazardous chemical waste disposal management. Clin Lab Sci. 1992;5:6.
4. Brown JW. Tuberculosis alert: an old killer returns. MLO Med Lab Obs. 1993;25:5.
5. Bryan RA. Recommendations for handling specimens from patients with confirmed or suspected Creutzfeldt-Jakob disease. Lab Med. 1984;15:50.
6. Centers for Disease Control and Prevention, National Institutes of Health. Biosafety in Microbiological and Biomedical Laboratories. 5th ed. Washington, DC: U.S. Government Printing Office; 2009.
7. Chervinski D. Environmental awareness: it's time to close the loop on waste reduction in the health care industry. Adv Admin Lab. 1994;3:4.
8. Clinical Laboratory Standards Institute (CLSI). Clinical Laboratory Waste Management (Approved Guideline, GP05-A3). Wayne, PA: CLSI; 2011.
9. Clinical Laboratory Standards Institute (CLSI). Clinical Laboratory Safety (Approved Guideline, GP17-A3). Wayne, PA: CLSI; 2012.
10. Clinical Laboratory Standards Institute (CLSI). Protection of Laboratory Workers from Occupationally Acquired Infection (Approved Guideline, M29-A4). Wayne, PA: CLSI; 2014.
11. Committee on Hazardous Substances in the Laboratory, Assembly of Mathematical and Physical Sciences, National Research Council. Prudent Practices for Handling Hazardous Chemicals in Laboratories. Washington, DC: National Academies Press; 1981.
12. Furr AK. Handbook of Laboratory Safety. 5th ed. Boca Raton, FL: CRC Press; 2000.
13. Gile TJ. Hazard-communication program for clinical laboratories. Clin Lab. 1988;1:2.
14. Gile TJ. An update on lab safety regulations. MLO Med Lab Obs. 1995;27:3.
15. Gile TJ. Complete Guide to Laboratory Safety. Marblehead, MA: HCPro, Inc.; 2004.
16. Hayes DD. Safety considerations in the physician office laboratory. Lab Med. 1994;25:3.
17. Hazard communication. Federal Register. 1994;59:27.
18. Karcher RE. Is your chemical hygiene plan OSHA proof? MLO Med Lab Obs. 1993;25:7.
19. Le Sueur CL. A three-pronged attack against AIDS infection in the lab. MLO Med Lab Obs. 1989;21:37.
20. Miller SM. Clinical safety: dangers and risk control. Clin Lab Sci. 1992;5:6.
21. National Institute for Occupational Safety and Health (NIOSH). NIOSH Pocket Guide to Chemical Hazards. Atlanta, GA: Center for Disease Control; 2007.
22. National Institutes of Health, Radiation Safety Branch. Radiation: The National Institutes of Health Safety Guide. Washington, DC: U.S. Government Printing Office; 1979.
23. National Regulatory Committee, Committee on Hazardous Substances in the Laboratory. Prudent Practices for the Handling of Hazardous Chemicals in Laboratories. Washington, DC: National Academies Press; 1981.
24. National Regulatory Committee, Committee on Hazardous Substances in the Laboratory. Prudent Practices for Disposal of Chemicals from Laboratories. Washington, DC: National Academies Press; 1983.
25. National Regulatory Committee, Committee on the Hazardous Biological Substances in the Laboratory. Biosafety in the Laboratory: Prudent Practices for the Handling and Disposal of Infectious Materials. Washington, DC: National Academies Press; 1989.
26. National Safety Council (NSC). Fundamentals of Industrial Hygiene. 5th ed. Chicago, IL: NSC; 2002.
27. Occupational exposure to bloodborne pathogens; final rule. Federal Register. 1991;56:235.
28. Occupational Safety and Health Administration. Subpart Z 29CFR 1910.1000-1450.
29. Otto CH. Safety in health care: prevention of bloodborne diseases. Clin Lab Sci. 1992;5:6.
30. Pipitone DA. Safe Storage of Laboratory Chemicals. New York, NY: Wiley; 1984.
31. Rose SL. Clinical Laboratory Safety. Philadelphia, PA: JB Lippincott; 1984.
32. Rudmann SV, Jarus C, Ward KM, et al. Safety in the student laboratory: a national survey of university-based programs. Lab Med. 1993;24:5.
33. Stern A, Ries H, Flynn D, et al. Fire safety in the laboratory: part I. Lab Med. 1993;24:5.
34. Stern A, Ries H, Flynn D, et al. Fire safety in the laboratory: part II. Lab Med. 1993;24:6.
35. United Nations. Globally Harmonized System of Classification and Labelling of Chemicals. New York, NY; Geneva: United Nations; 2003.
36. Wald PH, Stave GM. Physical and Biological Hazards of the Workplace. New York, NY: Van Nostrand Reinhold; 1994.

3 Method Evaluation and Quality Control
MICHAEL W. ROGERS, CINDI BULLOCK LETSOS, MATTHEW P. A. HENDERSON, MONTE S. WILLIS, and CHRISTOPHER R. McCUDDEN

Chapter Outline
Basic Concepts
  Descriptive Statistics: Measures of Center, Spread, and Shape
  Descriptive Statistics of Groups of Paired Observations
  Inferential Statistics
Method Evaluation
  Regulatory Aspects of Method Evaluation (Alphabet Soup)
  Method Selection
  Method Evaluation
  First Things First: Determine Imprecision and Inaccuracy
  Measurement of Imprecision
  Interference Studies
  COM Studies
  Allowable Analytical Error
  Method Evaluation Acceptance Criteria
Quality Control
  QC Charts
  Operation of a QC System
  Multirules RULE!
  Proficiency Testing
Reference Interval Studies
  Establishing Reference Intervals
  Selection of Reference Interval Study Individuals
  Preanalytic and Analytic Considerations
  Determining Whether to Establish or Transfer and Verify Reference Intervals
  Analysis of Reference Values
  Data Analysis to Establish a Reference Interval
  Data Analysis to Transfer and Verify a Reference Interval
Diagnostic Efficiency
  Measures of Diagnostic Efficiency
Summary
Practice Problems
  Problem 3-1. Calculation of Sensitivity and Specificity
  Problem 3-2. A Quality Control Decision
  Problem 3-3. Precision (Replication)
  Problem 3-4. Recovery
  Problem 3-5. Interference
  Problem 3-6. Sample Labeling
  Problem 3-7. QC Program for POCT Testing
  Problem 3-8. QC Rule Interpretation
  Problem 3-9. Reference Interval Study Design

Questions
Online Resources
References

Chapter Objectives
Upon completion of this chapter, the clinical laboratorian should be able to do the following:
Define the following terms: quality control, accuracy, precision, descriptive statistics, reference interval, random error, sensitivity, specificity, systematic error, and confidence intervals.
Calculate the following: sensitivity, specificity, efficiency, predictive value, mean, median, range, variance, and standard deviation.
Understand why statistics are needed for effective quality management.
Read a descriptive statistics equation without fear.
Understand the types, uses, and requirements for reference intervals.
Understand the basic protocols used to verify or establish a reference interval.
Appreciate how the test cutoff affects diagnostic performance.
Evaluate laboratory data using multirules for quality control.
Graph laboratory data and determine significant constant or proportional errors.
Determine if there is a trend or a shift, given laboratory data.
Discuss the processes involved in method selection and evaluation.
Discuss proficiency testing programs in the clinical laboratory.
Describe how a process can be systematically improved.

For additional student resources, please visit thePoint at http://thepoint.lww.com

Key Terms
Accuracy
AMR
Bias
CLIA
Constant error
CRR
Descriptive statistics
Dispersion
Histograms
Levey-Jennings control chart
Limit of detection
Linear regression
Precision
Predictive values
Proficiency testing
Proportional error
Quality control
Random error
Reference interval
Reference method
SDI
Sensitivity
Shifts
Specificity
Systematic error
Trends

It is widely accepted that the majority of medical decisions are supported using laboratory data. It is therefore critical that results generated by the laboratory are accurate and provide the appropriate laboratory diagnostic information to assist the health care provider in treating patients. Determining and maintaining accuracy requires considerable effort and cost, entailing the use of a series of approaches depending on the complexity of the test. To begin, one must appreciate what quality is and how quality is measured and managed. To this end, it is vital to understand basic statistical concepts that enable the laboratorian to measure quality. Before implementing a new test, it is important to determine if the test is capable of performing acceptably by meeting or exceeding defined quality criteria; method evaluation is used to verify the acceptability of new methods prior to reporting patient results. Once a method has been implemented, it is essential that the laboratory ensures it remains valid over time; this is achieved by a process known as quality control (QC) and Quality Improvement/Assurance (see Chapter 4). This chapter describes basic statistical concepts and provides an overview of the procedures necessary to implement a new method and ensure its continued accuracy.

BASIC CONCEPTS
Each day, high-volume clinical laboratories generate thousands of results. This wealth of clinical laboratory data must be summarized and critically reviewed to monitor test performance. The foundation for monitoring performance (known as QC) is descriptive statistics.

Descriptive Statistics: Measures of Center, Spread, and Shape
When examined closely, a collection of seemingly similar things always has at least slight differences for any given characteristic (e.g., smoothness, size, color, weight, volume, and potency). Similarly, laboratory data will have at least slight measurement differences. For example, if glucose on a given specimen is measured 100 times in a row, there would be a range of values obtained. Such differences in laboratory values can result from a variety of sources. Although measurements will differ, their values form patterns that can be visualized and analyzed collectively. Laboratorians view and describe these patterns using graphical representations and descriptive statistics (Fig. 3.1).

FIGURE 3.1 Basic measures of data include the center, spread, and shape.

When comparing and analyzing collections or sets of laboratory data, patterns can be described by their center, spread, and shape. Although comparing the center of data is most common, comparing the spread can be even more powerful. Assessment of data dispersion, or spread, allows laboratorians to assess the predictability (or lack thereof) of a laboratory test or measurement.

Measures of Center
The three most commonly used descriptions of the center of a data set (Fig. 3.2) are the mean, the median, and the mode. The mean is most commonly used and often called the average. The median is the "middle" point and is often used with skewed data because its calculation is not significantly affected by outliers. The mode is rarely used as a measure of the data's center but is more often used to describe data that seem to have two centers (i.e., bimodal). The mean is calculated by summing the observations and dividing by the number of observations (Box 3.1).

FIGURE 3.2 The center can be defined by the mean (x̄), median, or mode.

BOX 3.1 Sources of Analytic Variability

Mean Equation:

$\bar{x} = \dfrac{\sum_{i=1}^{n} x_i}{n}$  (Eq. 3-1)

The summation sign, Σ, is an abbreviation for (x1 + x2 + x3 + ··· + xn) and is used in many statistical formulas. Often, the mean of a specific data set is called x̄ or "x bar." The median is the middle of the data after the data have been rank ordered. It is the value that divides the data in half. To determine the median, values are rank ordered from least to greatest and the middle value is selected. For example, given a sample of 5, 4, 6, 5, 3, 7, 5, the rank order of the points is 3, 4, 5, 5, 5, 6, 7. Because there are an odd number of values in the sample, the

middle value (median) is 5; the value 5 divides the data in half. Given another sample with an even number of values 5, 4, 6, 8, 9, 7, the rank order of the points is 4, 5, 6, 7, 8, 9. The two “middle” values are 6 and 7. Adding them yields 13 (6 + 7 = 13); division by 2 provides the median (13/2 = 6.5). The median value of 6.5 divides the data in half. The mode is the most frequently occurring value in a data set. Although it is seldom used to describe data, it is referred to when in reference to the shape of data, a bimodal distribution, for example. In the sample 3, 4, 5, 5, 5, 6, 7, the value that occurs most often is 5. The mode of this set is then 5. The data set 3, 4, 5, 5, 5, 6, 7, 8, 9, 9, 9 has two modes, 5 and 9. After describing the center of the data set, it is very useful to indicate how the data are distributed (spread). The spread represents the relationship of all the data points to the mean (Fig. 3.3). There are four commonly used descriptions of spread: (1) range, (2) standard deviation (SD), (3) coefficient of variation (CV), and (4) standard deviation index (SDI). The easiest measure of spread to understand is the range. The range is simply the largest value in the data minus the smallest value, which represents the extremes of data one might encounter. Standard deviation (also called “s,” SD, or σ) is the most frequently used measure of variation. Although calculating SD can seem somewhat intimidating, the concept is straightforward; in fact, all of the descriptive statistics and even the inferential statistics have a combination of mathematical operations that are by themselves no more complex than a square root. The SD and, more specifically, the variance represent the “average” distance from the center of the data (the mean) and every value in the data set. The CV allows a laboratorian to compare SDs with different units and reflects the SDs in percentages. The SDI is a calculation to show the number of SDs a value is from the target mean. Similar to CV, it is a way to reflect ranges in a relative manner regardless of how low or high the values are.
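For readers who want to check these worked examples, the short sketch below (not from the original text) uses Python's standard statistics module; the samples are the same ones used above.

```python
import statistics

odd_sample = [5, 4, 6, 5, 3, 7, 5]    # odd number of values
even_sample = [5, 4, 6, 8, 9, 7]      # even number of values

print(statistics.mean(odd_sample))     # 5 -- sum of the values divided by n
print(statistics.median(odd_sample))   # 5 -- middle value after rank ordering
print(statistics.median(even_sample))  # 6.5 -- average of the two middle values
print(statistics.mode(odd_sample))     # 5 -- most frequently occurring value
```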

FIGURE 3.3 Spread is defined by the standard deviation and coefficient of variation.
Range is one description of the spread of data. It is simply the difference between the highest and lowest data points: range = high − low. For the sample 5, 4, 6, 5, 3, 7, 5, the range is 7 − 3 = 4. The range is often a good measure of dispersion for small samples of data. It does have a serious drawback; the range is susceptible to extreme values or outliers (i.e., wrong quality control material or wrong data entry). To calculate the SD of a data set, it is easiest to first determine the variance (s²). Variance is similar to the mean in that it is an average. Variance is the average of the squared distances of all values from the mean:

$s^2 = \dfrac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n - 1}$  (Eq. 3-2)

As a measure of dispersion, variance represents the difference between each value and the average of the data. Given the values 5, 4, 6, 5, 3, 7, 5, variance can be calculated as shown below:

$s^2 = \dfrac{(5-5)^2 + (4-5)^2 + (6-5)^2 + (5-5)^2 + (3-5)^2 + (7-5)^2 + (5-5)^2}{7 - 1} = \dfrac{10}{6} \approx 1.67$  (Eq. 3-3)

To calculate the SD (or "s"), simply take the square root of the variance:

$s = \sqrt{s^2} = \sqrt{1.67} \approx 1.29$  (Eq. 3-4)

Although it is important to understand how these measures are calculated, most analyzers/instruments, laboratory information systems, and laboratory statistical software packages determine these automatically. SD describes the distribution of all data points around the mean. Another way of expressing SD is in terms of the CV. The CV is calculated by dividing the SD by the mean and multiplying by 100 to express it as a percentage:

$CV = \dfrac{s}{\bar{x}} \times 100\%$  (Eq. 3-5)

The CV simplifies comparison of SDs of test results expressed in different units and concentrations. As shown in Table 3.1, analytes measured at different concentrations can have a drastically different SD but a comparable CV. The CV is used extensively to summarize QC data. The CV of highly precise analyzers can be lower than 1%. For values at the low end of the analytical measurement range, the acceptable CV can range as high as 50%, as allowed by CLIA.

TABLE 3.1 Comparison of SD and CV for Two Different Analytes

SD, standard deviation; CV, coefficient of variation; FSH, follicle-stimulating hormone; βhCG, β-human chorionic gonadotropin.
Another way of expressing SD is in terms of the SDI. The SDI is calculated by dividing the difference between the measured laboratory value (or mean) and the target (or group) mean by the assigned (or group) SD. The SDI can be a negative or positive value.

$SDI = \dfrac{x_{measured} - \bar{x}_{target}}{s_{target}}$  (Eq. 3-6)
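The measures of spread, including the SDI of Eq. 3-6, can be checked with a short sketch (not part of the original text). It assumes the sample (n − 1) formulas reconstructed above; the QC target and the two-analyte comparison at the end use invented numbers chosen only to illustrate the ideas behind Eq. 3-6 and Table 3.1, not actual values from the table.

```python
import statistics

values = [5, 4, 6, 5, 3, 7, 5]

data_range = max(values) - min(values)     # 7 - 3 = 4
s2 = statistics.variance(values)           # sample variance (n - 1 denominator), ~1.67
sd = statistics.stdev(values)              # standard deviation, ~1.29
cv = 100 * sd / statistics.mean(values)    # coefficient of variation, in percent
print(f"range={data_range}  s2={s2:.2f}  SD={sd:.2f}  CV={cv:.1f}%")

# SDI (Eq. 3-6) for a hypothetical QC result against a target mean and SD.
def sdi(measured, target_mean, target_sd):
    return (measured - target_mean) / target_sd

print(sdi(108, 100, 4))   # +2.0 -> two SDs above the target
print(sdi(96, 100, 4))    # -1.0 -> one SD below the target

# Invented illustration of the idea behind Table 3.1: analytes measured at very
# different concentrations can have very different SDs yet comparable CVs.
for analyte, level_mean, level_sd in [("analyte A", 4.0, 0.2), ("analyte B", 40000.0, 2000.0)]:
    print(analyte, f"CV = {100 * level_sd / level_mean:.1f}%")
```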

Measures of Shape Although there are hundreds of different “shapes”— distributions—that data sets can exhibit, the most commonly discussed is the Gaussian distribution (also called normal distribution; Fig. 3.4). The Gaussian distribution describes many continuous laboratory variables and shares several unique characteristics: the mean, median, and mode are identical; the distribution is symmetric—meaning half the values fall to the left of the mean and the other half fall to the right with the peak of the curve representing the average of the data. This symmetrical shape is often called a “bell curve.”

FIGURE 3.4 Shape is defined by how the distribution of data relates to the center. This is an example of data that have a "normal" or Gaussian distribution. The total area under the Gaussian curve is 1.0, or 100%. Much of the area—68.3%—under the "normal" curve is between ±1 SD (μ ± 1σ) (Fig. 3.5A). Most of the area—95.4%—under the "normal" curve is between ±2 SDs (μ ± 2σ; Fig. 3.5B). And almost all of the area—99.7%—under the "normal" curve is between ±3 SDs (μ ± 3σ) (Fig. 3.5C). (Note that μ represents the average of the total population, whereas the mean of a specific data set is x̄ or "x bar.")

FIGURE 3.5 A normal distribution contains (A) ≈68% of the results within ±1 SD (1s or 1σ), (B) 95% of the results within ±2s (2σ), and (C) ≈99% of the results within ±3σ. The “68–95–99 Rule” summarizes the above relationships between the area under a Gaussian distribution and the SD. In other words, given any Gaussian distributed data, ≈68% of the data fall between ±1 SD from the mean; ≈95% of the data fall between ±2 SDs from the mean; and ≈99% fall between ±3 SDs from the mean. Likewise, if you selected a value in a data set that is Gaussian distributed, there is a 0.68 chance of it lying between ±1 SD from the mean; there is a 0.95 likelihood of it lying between ±2 SDs; and there is a 0.99 probability of it lying between ±3 SDs. (Note: the terms “chance,” “likelihood,” and “probability” are synonymous in this example.) As will be discussed in the reference interval section, most patient data are

not normally distributed. These data may be skewed or exhibit multiple centers (bimodal, trimodal, etc.) as shown in Figure 3.6. Plotting data in histograms as shown in the figure is a useful and easy way to visualize distribution. However, there are also mathematical analyses (e.g., normality tests) that can confirm if data fit a given distribution. The importance of recognizing whether data are or are not normally distributed is related to the way they can be statistically analyzed.

FIGURE 3.6 Examples of normal (Gaussian), skewed, and bimodal distributions. The type of statistical analysis that is performed to analyze the data depends on the distribution (shape).
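The 68–95–99 relationship described above can be verified numerically. The sketch below (an illustration, not part of the original text) computes the theoretical coverage of a Gaussian distribution from the error function and then checks it empirically on simulated Gaussian data; skewed or bimodal data, like many patient distributions, would not reproduce these proportions.

```python
import math
import random

# Theoretical fraction of a Gaussian distribution lying within +/- k SD of the
# mean: P(|Z| < k) = erf(k / sqrt(2)).
for k in (1, 2, 3):
    print(f"expected within +/-{k} SD: {math.erf(k / math.sqrt(2)):.3f}")
# ~0.683, ~0.954, ~0.997 -- the "68-95-99 Rule"

# Empirical check on simulated Gaussian data (mean 100, SD 10).
random.seed(1)
data = [random.gauss(100, 10) for _ in range(100_000)]
for k in (1, 2, 3):
    frac = sum(abs(x - 100) < k * 10 for x in data) / len(data)
    print(f"observed within +/-{k} SD: {frac:.3f}")
```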

Descriptive Statistics of Groups of Paired Observations
While the use of basic descriptive statistics is satisfactory for examining a single method, laboratorians frequently need to compare two different methods. This is most commonly encountered in comparison of methods (COM) experiments. A COM experiment involves measuring patient specimens by both an existing (reference) method and a new (test) method (described in the Reference Interval and Method Evaluation sections later). The data obtained from these comparisons consist of two measurements for each patient specimen. It is easiest to visualize and summarize the paired-method comparison data graphically (Fig. 3.7). By convention, the values obtained by the reference method are plotted on the x-axis, and the values obtained by the test method are plotted on the y-axis.

FIGURE 3.7 A generic example of a linear regression. A linear regression compares two tests and yields important information about systematic and random error. Systematic error is indicated by changes in the y-intercept (constant error) and the slope (proportional error). Random error is indicated by the standard error of the estimate (Sy/x); Sy/x basically represents the distance of each point from the regression line. The correlation coefficient indicates the strength of the relationship between the tests.
In Figure 3.7, the agreement between the two methods is estimated from the straight line that best fits the points. Whereas visual estimation may be used to draw the line, a statistical technique known as linear regression analysis provides an objective measure of the line of best fit for the data. Three factors are generated in a linear regression—the slope, the y-intercept, and the correlation coefficient (r). In Figure 3.7, there is a linear relationship between the two methods over the entire range of values. The linear regression is defined by

the equation y = mx + b. The slope of the line is described by m, and the value of the y-intercept (b) is determined by plugging x = 0 into the equation and solving for y. The correlation coefficient (r) is a measure of the strength of the relationship between the two methods. The correlation coefficient can have values from −1 to 1, with the sign indicating the direction of relationship between the two variables. A positive r indicates that both variables increase and decrease together, whereas a negative r indicates that as one variable increases, the other decreases. An r value of 0 indicates no relationship, whereas r = 1.0 indicates a perfect relationship. Although many equate high positive values of r (0.95 or higher) with excellent agreement between the test and comparative methods, most clinical chemistry comparisons should have correlation coefficients greater than 0.98. When r is less than 0.99, the regression formula can underestimate the slope and overestimate the y-intercept. The absolute value of the correlation coefficient can be increased by widening the concentration range of samples being compared. However, if the correlation coefficient remains less than 0.99, then alternate regression statistics or modified COM value sets should be used to derive more realistic estimates of the regression, slope, and y-intercept.1,2 Visual inspection of data is essential prior to drawing conclusions from the summary statistics as demonstrated by the famous Anscombe quartet (Fig. 3.8). In this data set, the slope, y-intercept, and correlation coefficients are all identical, yet visual inspection reveals that the underlying data are completely different.
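As a concrete illustration of these regression statistics, the following sketch runs an ordinary least-squares regression on a small, invented set of paired results using SciPy (assumed to be available); the slope, intercept, r, and Sy/x it prints correspond to the quantities described above.

```python
import numpy as np
from scipy import stats

# Invented paired results: reference method on x, test (new) method on y.
reference = np.array([50, 75, 100, 150, 200, 250, 300], dtype=float)
test = np.array([51, 74, 103, 148, 204, 252, 297], dtype=float)

# Ordinary least-squares linear regression: y = mx + b
result = stats.linregress(reference, test)
print(f"slope (m)     = {result.slope:.3f}")     # proportional error if it differs from 1
print(f"intercept (b) = {result.intercept:.2f}") # constant error if it differs from 0
print(f"r             = {result.rvalue:.4f}")    # strength of the relationship

# Standard error of the estimate (Sy/x): scatter of the points about the
# regression line, an estimate of random error.
predicted = result.slope * reference + result.intercept
sy_x = np.sqrt(np.sum((test - predicted) ** 2) / (len(test) - 2))
print(f"Sy/x          = {sy_x:.2f}")
```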

FIGURE 3.8 Anscombe's quartet demonstrates the need to visually inspect data. In each panel, y = 0.5 x + 3, r2 = 0.816, Sy/x = 4.1. An alternate approach to visualizing paired data is the difference plot, which is also known as the Bland-Altman plot (Fig. 3.9). A difference plot indicates either the percent or absolute bias (difference) between the reference and test method values over the average range of values. This approach permits simple comparison of the differences to previously established maximum limits. As is evident in Figure 3.9, it is easier to visualize any concentration-dependent differences than by linear regression analysis. In this example, the percent difference is clearly greatest at lower concentrations, which may not be obvious from a regression plot.
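A difference plot can be built from the same kind of paired data. The sketch below (values invented) computes the per-specimen differences and percent differences and summarizes them using the common Bland-Altman convention of a mean bias with ±1.96 SD limits of agreement, a convention not spelled out in the text.

```python
import numpy as np

# Invented paired results from a comparison-of-methods experiment.
reference = np.array([50, 75, 100, 150, 200, 250, 300], dtype=float)
test = np.array([54, 78, 102, 151, 203, 249, 298], dtype=float)

diff = test - reference                  # absolute difference per specimen
mean_pair = (test + reference) / 2       # value plotted on the x-axis of the plot
pct_diff = 100 * diff / mean_pair        # percent difference

bias = diff.mean()                       # average bias between the methods
loa = 1.96 * diff.std(ddof=1)            # half-width of the limits of agreement
print(f"mean bias = {bias:.2f} ({bias - loa:.2f} to {bias + loa:.2f})")
print("percent differences:", np.round(pct_diff, 1))
# The percent difference is largest at the low end (54 vs. 50 is ~7.7%),
# mirroring the concentration-dependent error described for Figure 3.9.
```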

FIGURE 3.9 An example of a difference (Bland-Altman) plot. Difference plots are a useful tool to visualize concentration-dependent error. The difference between test and reference method results is called error. There are two kinds of error measured in COM experiments: random and systematic. Random error is present in all measurements and can be either positive or negative, typically a combination of both positive and negative errors on both sides of the assigned target value. Random error can be a result of many factors including instrument, operator, reagent, and environmental variation. Random error is calculated as the SD of the points about the regression line (Sy/x). Sy/x essentially refers to average distance of the data from the regression line (Fig. 3.7). The higher the Sy/x, the wider is the scatter and the higher is the amount of random error. In Figure 3.7, the Sy/x is 5.0. If the points were perfectly in line with the linear regression, the Sy/x would equal 0.0, indicating there would not be any random error. Sy/x is also known as the standard error of the estimate SE (Box 3.2).

BOX 3.2 Types of Error in Laboratory Testing: a Preview of Things to Come

Systematic error influences observations consistently in one direction (higher or lower). The measures of slope and y-intercept provide estimates of the systematic error. Systematic error can be further broken down into constant error and proportional error. Constant systematic error exists when there is a continual difference between the test method and the comparative method values, regardless of the concentration. In Figure 3.7, there is a constant difference of 6.0 between the test method values and the comparative method values. This constant difference, reflected in the y-intercept, is called constant systematic error. Proportional error exists when the differences between the test method and the comparative method values are proportional to the analyte concentration. Proportional error is present when the slope differs from 1. In the example, the slope of 0.89 represents the proportional error, where samples will be underestimated in a concentration-dependent fashion by the test method compared with the reference method; the error is proportional, because it will increase with the analyte concentration.
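Using the slope (0.89) and y-intercept (6.0) quoted for Figure 3.7, the constant and proportional components of systematic error can be tabulated at a few concentrations; the decision levels in the sketch below are arbitrary examples, not values from the text.

```python
# Systematic error implied by the regression in Figure 3.7 (slope 0.89,
# y-intercept 6.0), evaluated at a few arbitrary concentrations.
slope, intercept = 0.89, 6.0

for x in (50, 100, 300):
    predicted = slope * x + intercept
    total_systematic = predicted - x      # what the test method reads minus the reference value
    constant = intercept                  # same at every concentration
    proportional = (slope - 1) * x        # grows with concentration
    print(f"x={x}: total={total_systematic:+.1f} "
          f"(constant {constant:+.1f}, proportional {proportional:+.1f})")
```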

Inferential Statistics
The next level of complexity beyond paired descriptive statistics is inferential statistics. Inferential statistics are used to draw conclusions (inferences) regarding the means or SDs of two sets of data. Inferential statistical analyses are most commonly encountered in research studies, but can also be used in COM studies. An important consideration for inferential statistics is the distribution of the data (shape). The distribution of the data determines what kind of inferential statistics can be used to analyze the data. Normally distributed (Gaussian) data are typically analyzed using what are known as "parametric" tests, which include a Student's t-test or analysis of variance (ANOVA). Data that are not normally distributed require a "nonparametric" analysis. Nonparametric tests are encountered in reference interval studies, where population data are often skewed. For reference interval studies, nonparametric methods typically rely on ranking or ordering the values in order to determine upper and lower percentiles. While many software packages are capable of performing either parametric or nonparametric analyses, it is important for the user to understand that the type of data (shape) dictates which statistical test is appropriate for the analysis. An inappropriate analysis of sound data can yield the wrong conclusion and lead to erroneous provider decisions, with adverse consequences for patient care.
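A brief sketch of the parametric/nonparametric choice is shown below, using SciPy's t test and Mann-Whitney U test on two invented groups of results; which p value to trust depends on whether the data are plausibly Gaussian.

```python
from scipy import stats

# Invented results from two groups of specimens.
group_a = [4.1, 4.5, 4.3, 4.8, 4.2, 4.6, 4.4]
group_b = [4.9, 5.2, 4.7, 5.5, 5.0, 5.3, 4.8]

# Parametric comparison of means (assumes roughly Gaussian data).
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Nonparametric, rank-based alternative (no Gaussian assumption).
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)

print(f"t test p = {t_p:.4f}; Mann-Whitney U p = {u_p:.4f}")
# A normality check (e.g., stats.shapiro) can help decide which result applies.
```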

METHOD EVALUATION The value of clinical laboratory service is based on its ability to provide reliable, accurate test results and optimal test information for patient management. At the heart of providing these services is the performance of a testing method. To maximize the usefulness of a test, laboratorians undergo a process in which a method is selected and evaluated for its usefulness to those who will be using the test. This process is carefully undertaken to produce results within medically acceptable error limits to help providers maximally manage/treat their patients. Currently, clinical laboratories more often select and evaluate methods that were commercially developed instead of developing their own. Most commercially developed tests have U.S. Food and Drug Administration (FDA) approval, only requiring a laboratory to provide a limited but regulatory mandated evaluation of a method to verify the manufacturer's performance claims and to see how well the method works specifically in the laboratory and patient population served.

Regulatory Aspects of Method Evaluation (Alphabet Soup)
The Centers for Medicare and Medicaid Services (CMS) and the Food and Drug Administration (FDA) are the primary government agencies that influence laboratory testing methods in the United States. The FDA regulates laboratory instruments and reagents, and the CMS regulates the Clinical Laboratory Improvement Amendments (CLIA).3 Most large laboratories in the United States are accredited by the College of American Pathologists (CAP) and The Joint Commission (TJC; formerly the Joint Commission on Accreditation of Healthcare Organizations [JCAHO]), which impacts how method evaluations need to be performed. Professional organizations such as the National Academy of Clinical Biochemistry (NACB) and the American Association for Clinical Chemistry (AACC) also contribute important guidelines and method evaluations that influence how method evaluations are performed. Future trends are seeing additional regulatory services being used by both larger and smaller laboratories (e.g., physician clinics and public health departments); the Commission of Office Laboratory Accreditation (COLA) is an example of additional regulatory services available to clinical laboratories regardless of their size, test menu, and test volume. Regardless of which regulatory agency is used, the acceptable and mandatory standards of compliance are set by CLIA, FDA, and TJC.
The FDA "Office of In Vitro Diagnostic Device Evaluation and Safety" (OIVD) regulates diagnostic tests. Tests are categorized into one of three groups: (1) waived, (2) moderate complexity, and (3) high complexity. Waived tests are cleared by the FDA to be so simple that they are most likely accurate and, most importantly, would pose negligible risk of harm to the patient if not performed correctly. A few examples of waived testing include dipstick tests, qualitative pregnancy testing, rapid strep, rapid HIV, rapid HCV, and glucose monitors. For waived testing, personnel requirements are not as stringent as for moderate-complexity or high-complexity testing, but there are still mandatory training, competency, and quality control requirements. Most automated methods are rated as moderate complexity, while manual methods and methods requiring more interpretation are rated as high complexity. The patient management/treatment impact is also a factor in determining whether a test is waived, moderate complexity, or high complexity. The CLIA final rule requires that waived tests simply follow the manufacturer's instructions. Both moderate- and high-complexity tests require in-laboratory validation. However, FDA-approved nonwaived tests may undergo a more basic validation process (Table 3.2), whereas a more extensive validation is required for tests developed by laboratories (Table 3.2). While the major requirements for testing validation are driven by the CLIA, TJC and CAP essentially require the same types of experiments to be performed, with a few additions. It is these rules that guide the way tests in the clinical chemistry laboratory are selected and validated.

TABLE 3.2 General CLIA Regulations of Method Validation

From Clinical Laboratory Improvement Amendments of 1988; final rule. Fed Regist. 7164 [42 CFR 493 1253]: Department of Health and Human Services, Centers for Medicare and Medicaid Services; 1992.

DEFINITIONS BOX
AACC American Association for Clinical Chemistry
CAP College of American Pathologists
CLIA Clinical Laboratory Improvement Amendments
CLSI Clinical and Laboratory Standards Institute (formerly NCCLS)
CMS Centers for Medicare and Medicaid Services
COLA Commission of Office Laboratory Accreditation
FDA Food and Drug Administration
NACB National Academy of Clinical Biochemistry
OIVD Office of In Vitro Diagnostic Device Evaluation and Safety
TJC The Joint Commission

Method Selection
Evaluating a method is a labor-intensive, costly process—so why select a new method? There are many reasons, including enhanced provider utilization in effectively treating/managing patients, reducing costs, staffing needs, improving the overall quality of results, increasing provider satisfaction, and improving overall efficiency. Selecting a test method starts with the collection of technical information about a particular test from colleagues, scientific presentations, the scientific literature, and manufacturer claims. Practical considerations should also be addressed at this point, such as the type and volume of specimen to be tested, the required throughput and turnaround time, your testing needs, testing volumes, cost, break-even financial analysis, calibration, Quality Control Plan, space needs, disposal needs, personnel requirements, safety considerations, alternative strategies to supply testing (i.e., referral testing laboratories), etc. Most importantly, the test should be able to meet the clinical task by having specific analytic performance standards that will accurately assist in the management and ultimately the treatment of patients. Specific information that should be discovered about a test you might bring into the laboratory includes analytic sensitivity, analytic specificity, linear range, interfering substances, estimates of precision and accuracy, and test-specific regulatory requirements. The process of method selection is the beginning of a process to bring in a new test for routine use (Fig. 3.10).

FIGURE 3.10 A flowchart on the process of method selection, evaluation, and monitoring. (Adapted from Westgard JO, Quam E, Barry T. Basic QC Practices: Training in Statistical Quality Control for Healthcare Laboratories. Madison, WI: Westgard Quality Corp.; 1998.)

Method Evaluation
A short, initial evaluation should be carried out before the complete method evaluation. This preliminary evaluation should include the analysis of a series of standards to verify the linear range and the replicate analysis (at least eight measurements) of two controls to obtain estimates of short-term precision. If any results fall short of the specifications published in the method's product information sheet (package insert), the method's manufacturer should be consulted. Without a successful initial evaluation, including these minimal precision and linear range studies, there is no opportunity to proceed with the method evaluation until the initial issues have been resolved. Also, without improvement in the method, more extensive evaluations are pointless.4
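A preliminary precision check of this kind reduces to simple arithmetic. The sketch below (hypothetical control results and a placeholder package-insert CV claim; names and numbers are illustrative only) computes the mean, SD, and CV of at least eight replicates for each control level and flags any level whose observed CV exceeds the claim.

```python
import statistics

claimed_cv = 2.0  # percent -- placeholder for the package-insert claim
controls = {
    "level 1": [48.2, 49.1, 48.7, 48.9, 49.4, 48.5, 49.0, 48.8],
    "level 2": [151.0, 149.5, 150.2, 151.4, 149.9, 150.8, 150.4, 149.7],
}

for level, results in controls.items():
    mean = statistics.mean(results)
    sd = statistics.stdev(results)
    cv = 100 * sd / mean
    verdict = "meets claim" if cv <= claimed_cv else "consult the manufacturer"
    print(f"{level}: mean={mean:.1f} SD={sd:.2f} CV={cv:.2f}% -> {verdict}")
```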

First Things First: Determine Imprecision and Inaccuracy The first determinations (estimates) to be made in a method evaluation are the precision and accuracy, which should be compared with the maximum allowable error budgets based on medical criteria and regulatory guidelines. If the precision or accuracy exceeds the maximum allowable error budgets, it is unacceptable and must be modified and reevaluated or rejected. Precision is the dispersion of repeated measurements around a mean (true level), as shown in Figure 3.11A with the mean represented as the bull's-eye. Random analytic error is the cause of lack of precision or the imprecision in a test. Precision is estimated from studies in which multiple aliquots of the same specimen (with a constant concentration) are analyzed repetitively.

FIGURE 3.11 Graphic representation of (A) imprecision and (B) inaccuracy on a dartboard configuration with bull's-eye in the center.

Inaccuracy, or the difference between a measured value and its actual (true) value, is due to the presence of systematic error, as represented in Figure 3.11B. Systematic error can be due to constant or proportional error and is estimated from three types of study: (1) recovery, (2) interference, and (3) a COM study.

DEFINITIONS BOX
Analytic sensitivity Ability of a method to detect small quantities or small changes in concentration of an analyte.
Analytic specificity Ability of a method to detect only the analyte it is designed to determine, also known as cross-reactivity.
AMR (analytic measurement range) Also known as linear or dynamic range. Range of analyte concentrations that can be directly measured without dilution, concentration, or other pretreatment.
CRR (clinically reportable range) Range of analyte that a method can quantitatively report, allowing for dilution, concentration, or other pretreatment used to extend the AMR.
LoD (limit of detection) Lowest amount of analyte accurately detected by a method.
LoQ (limit of quantitation) Lowest amount of analyte that can be reported while achieving a precision target (e.g., lowest concentration at which a CV of 10% may be achieved).
SDI (standard deviation index) Refers to the difference between the measured value and the mean expressed as a number of SDs. An SDI = 0 indicates the value is accurate or in 100% agreement; an SDI = 3 is 3 SDs away from the target (mean) and indicates inaccuracy. SDI may be positive or negative.

DEFINITIONS BOX Precision Dispersion of repeated measurements about the mean due to random analytic error. Accuracy Closeness of the measured value to the true value; inaccuracy is due to systematic error, which can be either constant or proportional. Bias Difference between the true value and the measured value. Systematic error Error always in one direction (may be constant or proportional). Constant error Type of systematic error that is the same in direction and magnitude; the magnitude of change is constant and not dependent on the amount of analyte. Proportional error Type of systematic error where the magnitude changes as a percentage of the analyte present; the error depends on analyte concentration. Random error Error that varies from sample to sample. Causes include instrument instability, temperature variations, reagent variation, handling techniques, and operator variables. Total error Random error plus systematic error. SDI Standard deviation index: ((Test method value) − (Reference method value))/(Reference method SD), analogous to a Z-score.

Measurement of Imprecision Method evaluation begins with a precision study. This estimates the random error associated with the test method and detects any problems affecting its reproducibility. It is recommended that this study be performed over a 10- to 20-day period, incorporating one or two analytic runs (runs with patient samples or QC materials) per day, preferably an AM run and a PM run.5,6 A common precision study is a 2 × 2 × 10 study, where two controls are run twice a day (AM and PM) for 10 days. The rationale for performing the evaluation of precision over many days is straightforward. Running multiple samples on the same day does a good job of estimating precision within a single day (simple precision) but underestimates long-term changes and testing variables that occur over time. By running multiple samples on different days, a better estimate of the random error over time is obtained. It is important that more than one concentration be tested in these studies, with materials ideally spanning the clinically appropriate and analytical measurement range of concentrations. For glucose, this might include samples in the hyperglycemic range (150 mg/dL) and the hypoglycemic range (50 mg/dL). After these data are collected, the mean, SD, CV, and SDI are calculated. An example of a precision study from our laboratory is shown in Figure 3.12.
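A minimal computational sketch of these summary statistics is shown below. The values are hypothetical and are not the vitamin B12 data plotted in Figure 3.12; the target mean and SD used for the SDI are likewise assumed for illustration.

```python
import statistics

# Hypothetical control results from a 2 x 2 x 10 study (2 runs/day x 10 days);
# only one control level is shown for brevity.
results = [74.1, 75.8, 76.3, 74.9, 77.0, 75.2, 76.8, 74.5, 75.9, 76.1,
           75.4, 74.8, 76.6, 75.1, 74.7, 76.9, 75.5, 76.2, 74.6, 75.7]

mean = statistics.mean(results)
sd = statistics.stdev(results)       # sample standard deviation
cv = 100 * sd / mean                 # coefficient of variation, %

# SDI of a single observation relative to a target (e.g., peer-group) mean and SD
target_mean, target_sd = 76.0, 1.0   # hypothetical target values
sdi = (results[0] - target_mean) / target_sd

print(f"mean={mean:.2f}  SD={sd:.2f}  CV={cv:.1f}%  SDI(first result)={sdi:.2f}")
```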

FIGURE 3.12 An example of a precision study for vitamin B12. The data represent analysis of a control sample run in duplicate twice a day (red and black circles) for 10 days (x-axis). Data are presented as standard deviation index (SDI). SDI refers to the difference between the measured value and the mean expressed as a number of SDs.

An imprecision study is designed to detect random error. The random error or imprecision associated with the test procedure is indicated by the SD and the CV. The within-run imprecision is indicated by the SD of the controls analyzed within one run. The total imprecision may be obtained from the SD of control data with one or two data points accumulated per day. The total imprecision is the most accurate assessment of the performance that affects the values a provider sees, because it reflects differences in operators, pipettes, and environmental variations such as temperature and reagent stability. In practice, however, within-run imprecision is used more commonly than total imprecision. An inferential statistical technique, ANOVA, is then used to analyze the available precision data and provide estimates of the within-run, between-run, and total imprecision.7
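A sketch of how ANOVA can partition precision data into within-run and between-run components is shown below, treating each run as a group and using the standard mean-squares estimators. The layout and numbers are hypothetical, and this is only one common way to perform the variance-component calculation, not necessarily the exact procedure of the cited reference.

```python
import math
import statistics

# Hypothetical layout: each inner list is one analytic run with duplicate control results.
runs = [[101.2, 100.8], [99.7, 100.3], [101.5, 100.9], [100.1, 99.5],
        [100.7, 101.1], [99.9, 100.4], [101.0, 100.6], [100.2, 99.8]]

n = len(runs[0])                     # replicates per run (assumed equal)
k = len(runs)                        # number of runs
grand_mean = statistics.mean(x for run in runs for x in run)

# Mean squares from one-way ANOVA with "run" as the grouping factor
ms_within = sum(statistics.variance(run) for run in runs) / k
ms_between = n * sum((statistics.mean(run) - grand_mean) ** 2 for run in runs) / (k - 1)

var_within = ms_within
var_between = max(0.0, (ms_between - ms_within) / n)   # negative estimates are set to zero
sd_total = math.sqrt(var_within + var_between)

print(f"within-run SD={math.sqrt(var_within):.2f}  "
      f"between-run SD={math.sqrt(var_between):.2f}  total SD={sd_total:.2f}")
```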

Acceptable Performance Criteria: Precision Studies During an evaluation of vitamin B12 in the laboratory, a precision study was performed for a new test method (Fig. 3.12). Several concentrations of vitamin B12 were run twice daily (in duplicate) for 10 days; for simplicity, only one concentration (≈76 pg/mL) is shown in the precision plot in Figure 3.12. The amount of variability between runs is represented by different colors over the 10 days (x-axis). The CV was then calculated within runs, between runs, and between days. The total SD, estimated at 2.3, is then compared with medical decision levels or medically required standards for the analyte (Table 3.3). The acceptability of analytic error is based on how the test is to be used to make clinical interpretations.8,9 In this case, the medically required SD limit is 4.8, based on mandated CLIA error budgets. Long-term precision is judged adequate when the total imprecision is less than one-third of the total allowable error (here, 1/3 × 4.8 = 1.6; the use of one-third of the total allowable error for imprecision is based on Westgard10). If the observed imprecision exceeds this criterion (1.79 in our example), the test can still pass as long as the difference between the observed imprecision and one-third of the total allowable error is not statistically significant. In our case, 1.79 was not statistically different from 1.6 (1/3 × 4.8), and the test passed our imprecision studies (Fig. 3.12). The one-third of total allowable error is a rule of thumb; some laboratories may choose one-fourth of the total allowable error for analytes that are very precise and accurate. It is not recommended to use all of the allowable error for imprecision (random error), as that leaves no room for systematic error (bias or inaccuracy). TABLE 3.3 Performance Standards for Common Clinical Chemistry Analytes as Defined by the CLIA

Reprinted from Centers for Disease Control and Prevention (CDC), Centers for Medicare and Medicaid Services (CMS), Health and Human Services. Medicare, Medicaid, and CLIA programs; laboratory requirements relating to quality systems and certain personnel qualifications. Final rule. Fed Regist. 2003;68:3639–3714, with permission.
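A minimal sketch of the precision acceptance check described above is shown below, using the allowable SD of 4.8 and the observed total SD of 1.79 quoted in the text. The one-third criterion is the Westgard rule of thumb; the statistical comparison used to decide whether a marginal excess is significant is not implemented here.

```python
# Precision acceptance check: observed total imprecision vs. one-third of allowable error.
allowable_sd = 4.8           # medically required (CLIA) limit for this analyte
observed_sd = 1.79           # total SD estimated from the precision study

criterion = allowable_sd / 3           # rule of thumb: imprecision <= 1/3 of allowable error
if observed_sd <= criterion:
    print(f"PASS: observed SD {observed_sd} <= 1/3 of allowable error ({criterion:.2f})")
else:
    # A marginal excess may still pass if it is not statistically different from the
    # criterion; that significance test is not shown in this sketch.
    print(f"REVIEW: observed SD {observed_sd} exceeds {criterion:.2f}; "
          "assess whether the difference is statistically significant")
```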

DEFINITIONS BOX Recovery Ability of an analytic test to measure a known amount of analyte; a known amount of analyte is added to real sample matrices. Interference Effect of a compound (or compounds) on the accuracy of detection of a particular analyte. Interferents Substances that cause interference. Matrix Body component (e.g., serum or urine) in which the analyte is to be measured.

Estimation of Inaccuracy Once method imprecision is estimated and deemed acceptable, the determination of accuracy can begin.6 Accuracy is estimated using three different types of studies: (1) recovery, (2) interference, and (3) patient sample comparison.

Recovery Studies Recovery studies show whether a method is able to measure an analyte accurately. In a recovery experiment, a small aliquot of concentrated analyte is added (spiked) into a patient sample (matrix) and then measured by the method being evaluated. The amount recovered is the difference between the measured concentrations of the spiked sample and the unmodified patient sample. The purpose of this type of study is to determine how much of the analyte can be detected (recovered) in the presence of all the other compounds in the matrix. The original patient sample (matrix) should not be diluted by more than 10% so that the physiological matrix (biological and protein components) is minimally affected. An actual example of a recovery study for total calcium is illustrated in Figure 3.13; the results are expressed as percentage recovered. The performance standard for calcium, defined by CLIA, is the target value ±1.0 mg/dL (see Table 3.3). Recovery of calcium in this example meets this standard at the two calcium levels tested and is therefore acceptable (Fig. 3.13).

FIGURE 3.13 An example of a sample recovery study for total calcium. A sample is spiked with known amounts of calcium in a standard matrix, and recovery is determined as shown. Recovery studies are designed to detect proportional error in an assay.
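A sketch of the recovery arithmetic described above is shown below. The concentrations and volumes are hypothetical and are not the values from Figure 3.13; the calculation simply corrects both the spike and the baseline for the small dilution introduced by the added standard.

```python
# Recovery calculation for a spiked calcium sample (hypothetical numbers).
sample_vol = 1.0          # mL of patient serum
spike_vol = 0.1           # mL of calcium standard (keeps dilution at ~10% or less)
spike_conc = 50.0         # mg/dL calcium in the standard

baseline = 9.2            # mg/dL measured in the unspiked patient sample
measured_spiked = 12.6    # mg/dL measured in the spiked sample

total_vol = sample_vol + spike_vol
conc_added = spike_conc * spike_vol / total_vol         # amount added, after dilution
baseline_diluted = baseline * sample_vol / total_vol    # baseline corrected for dilution

recovered = measured_spiked - baseline_diluted
pct_recovery = 100 * recovered / conc_added
print(f"added={conc_added:.2f} mg/dL  recovered={recovered:.2f} mg/dL  "
      f"recovery={pct_recovery:.1f}%")
```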

Interference Studies Interference studies are designed to determine if specific compounds affect the accuracy of laboratory tests. Common interferences encountered in the laboratory include hemolysis (broken red blood cells and their contents), icterus (high bilirubin), and turbidity (particulate matter or lipids), which can affect the measurement of many analytes. Interferents often affect tests by absorbing or scattering light, but they can also react with the reagents or affect the reaction rates used to measure a given analyte. Interference experiments are typically performed by adding the potential interferent to patient samples.11 If an effect is observed, the concentration of the interferent is lowered sequentially to determine the concentration at which test results are not clinically affected (or are minimally affected). It is common practice to flag results with unacceptably high levels of an interferent. Results may be reported with cautionary comments or not reported at all. An example of an interference study performed in one laboratory is shown in Figure 3.14. When designing a method validation study, potential interferents should be selected from literature reviews and specific references. Other excellent resources include Young12 and Siest and Galteau.13 Common interferences, such as hemolysis, lipemia, bilirubin, anticoagulants, and preservatives, are tested by the manufacturer. Glick and Ryder14,15 published "interferographs" for clinical chemistry instruments, which relate the measured analyte concentration to the interferent concentration. They have also demonstrated that considerable expense can be saved by the acquisition of instruments that minimize hemoglobin, triglyceride, and bilirubin interference.16 It is good laboratory practice and a regulatory requirement to consider interferences as part of any method validation. In the clinical laboratory, in vitro interference studies are often difficult to perform, and how well standardized experiments reflect what actually occurs in vivo is often questionable. Proper instrumentation and critical literature reviews are often the best options for determining interference acceptability guidelines.

FIGURE 3.14 An example of an interference study for troponin I (TnI). Increasing amounts of hemolysate (lysed red blood cells, a common interference) were added to a sample with elevated TnI of 0.295 ng/mL. Bias is calculated based on the difference between the baseline and hemolyzed samples. The data are corrected for the dilution with hemolysate.
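A sketch of the bias calculation outlined in the Figure 3.14 legend is shown below, correcting the hemolyzed result for the dilution introduced by the added hemolysate. Apart from the 0.295 ng/mL baseline quoted in the legend, the volumes and the measured spiked value are hypothetical.

```python
# Interference (hemolysis) bias calculation, as outlined in the Figure 3.14 legend.
baseline_tni = 0.295        # ng/mL, elevated troponin I sample before adding hemolysate
sample_vol = 0.95           # mL of the TnI sample (hypothetical)
hemolysate_vol = 0.05       # mL of hemolysate added (hypothetical)

measured_tni = 0.262        # ng/mL measured after adding hemolysate (hypothetical)

# Correct the measured result for the dilution caused by the added hemolysate
dilution_factor = (sample_vol + hemolysate_vol) / sample_vol
corrected_tni = measured_tni * dilution_factor

bias = corrected_tni - baseline_tni
pct_bias = 100 * bias / baseline_tni
print(f"corrected result={corrected_tni:.3f} ng/mL  bias={bias:+.3f} ng/mL ({pct_bias:+.1f}%)")
```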

COM Studies A method comparison experiment analyzes patient samples by both the method being evaluated (test method) and a comparative or reference method. It is used primarily to estimate systematic error in actual patient samples, and it may also indicate the type of systematic error (proportional vs. constant). Ideally, the test method is compared with a standardized reference method (gold standard), a method with acceptable accuracy relative to its imprecision. Reference methods are often laborious and time consuming, as is the case with the ultracentrifugation methods of determining cholesterol. Because most laboratories are understaffed or lack the technical expertise required to perform reference methods, most test methods are compared with the methods routinely used. These routine tests have their own particular inaccuracies, so it is important to review the literature for the inaccuracies that have been documented for them. If the new test method is to replace the routine method, differences between the two should be well characterized, and the change must be documented and communicated extensively to providers and staff before implementation.

To compare a test method with a comparative method, it is recommended by Westgard et al.6 and CLIA17 that 40 to 100 specimens be run by each method on the same day (preferably within 4 hours of each other) over 8 to 20 days, with specimens spanning the clinical range and representing a diversity of pathologic conditions. Samples should cover the full analytical measurement range, and it is recommended that 25% be lower than the reference range, 50% be within the reference range, and 25% be higher than the reference range. Additional samples at the medical decision points should be a priority during the COM. As an extra measure of QC, specimens should be analyzed in duplicate. Otherwise, experimental results must be critically reviewed by the laboratory director and evaluation staff, comparing test and comparative method results immediately after analysis. Samples with large differences should be repeated to rule out technical errors as the source of variation. Daily analysis of two to five patient specimens should be followed for at least 8 days if 40 specimens are compared and for 20 days if 100 specimens are compared in replication studies.17 A plot of the test method data (y-axis) versus the comparative method (x-axis) helps to visualize the data generated in a COM test (Fig. 3.15A).18 As described earlier, if the two methods correlate perfectly, the data pairs plotted as concentration values from the reference method (x) versus the evaluation method (y) will produce a straight line (y = mx + b), with a slope of 1.0, a y-intercept of 0, and a correlation coefficient (r) of 1. Data should be plotted daily and inspected for outliers so that original samples can be reanalyzed as needed. While linearity can be confirmed visually in most cases, it may be necessary to evaluate linearity more quantitatively.19

FIGURE 3.15 A comparison of methods experiment. (A) A model of a perfect method comparison. (B) An actual method comparison of beta-human chorionic gonadotropin (βhCG) between the Elecsys 2010 (Roche, Nutley, NJ) and the IMx (Abbott Laboratories, Abbott Park, IL). (Adapted from Shahzad K, Kim DH, Kang MJ. Analytic evaluation of the beta-human chorionic gonadotropin assay on the Abbott IMx and Elecsys2010 for its use in doping control. Clin Biochem. 2007;40:1259–1265.)

DEFINITIONS BOX Deming regression Linear regression analysis (orthogonal least squares) used to compare two methodologies using the best fit line through the data points (see Figs. 3.24 and 3.25). Passing-Bablok regression A nonparametric regression method for comparison studies. It is a robust method that is resistant to outliers.
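A minimal sketch of Deming regression as defined above is given below, assuming equal error variances in the two methods (ratio = 1, i.e., orthogonal regression). The paired results are hypothetical, and in practice the error-variance ratio would be estimated from replicate data rather than assumed.

```python
import math

# Hypothetical paired results: comparative method (x) vs. test method (y).
x = [4.9, 10.2, 25.1, 49.8, 75.3, 99.6, 151.0, 201.5]
y = [5.3, 10.0, 26.0, 50.9, 76.8, 101.2, 153.4, 205.1]

lam = 1.0                      # assumed ratio of error variances (y errors / x errors)
n = len(x)
xm, ym = sum(x) / n, sum(y) / n
sxx = sum((xi - xm) ** 2 for xi in x) / (n - 1)
syy = sum((yi - ym) ** 2 for yi in y) / (n - 1)
sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / (n - 1)

# Deming (errors-in-variables) slope and intercept
slope = (syy - lam * sxx + math.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
intercept = ym - slope * xm
print(f"Deming fit: y = {slope:.3f}x + {intercept:.3f}")
```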

Statistical Analysis of COM Studies While a visual inspection of method comparisons is essential, statistical analysis can be used to make objective decisions about the performance of a method. The first and most fundamental statistical analysis for COM studies is linear regression. Linear regression analysis yields the slope (b), the y-intercept (a), the SD of the points about the regression line (Sy/x), and the correlation coefficient (r); regression also yields the coefficient of determination (r2) (see DEFINITIONS BOX). An example of these calculations can be found in Figure 3.15, where β-human chorionic gonadotropin (βhCG) concentrations measured on an existing immunoassay system (reference method) are compared with those from a new system (test method). Statistics are calculated to determine the types and amounts of error that a method has, which is the basis for deciding whether the test is valid for making clinical decisions. Several types of error can be seen by looking at a plot of the test method versus the comparative method (Fig. 3.16). When random errors occur (Fig. 3.16A), points are randomly distributed around the regression line. Increases in the Sy/x statistic reflect random error. Constant error (Fig. 3.16B) is seen visually as a shift in the y-intercept; a t-test can be used to determine if two sets of data are significantly different from each other. Proportional error (Fig. 3.16C) is reflected in alterations in line slope and can also be analyzed with a t-test (see Fig. 3.1).

FIGURE 3.16 Examples of (A) random, (B) constant, and (C) proportional error using linear regression analysis. (Adapted from Westgard JO. Basic Method Evaluation. 2nd ed. Madison, WI: Westgard Quality Corp.; 2003.)
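A sketch of the ordinary least-squares statistics named above (slope, y-intercept, Sy/x, r, and r2) is shown below for a comparison-of-methods data set; the paired values are hypothetical.

```python
import math

# Hypothetical comparison-of-methods data: comparative method (x) vs. test method (y).
x = [52, 78, 95, 120, 160, 210, 260, 310, 390, 450]
y = [55, 80, 92, 125, 158, 215, 255, 318, 395, 447]

n = len(x)
xm, ym = sum(x) / n, sum(y) / n
sxx = sum((xi - xm) ** 2 for xi in x)
syy = sum((yi - ym) ** 2 for yi in y)
sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))

slope = sxy / sxx                     # proportional systematic error shows up here
intercept = ym - slope * xm           # constant systematic error shows up here
r = sxy / math.sqrt(sxx * syy)        # correlation coefficient

# Standard error of the estimate (Sy/x): scatter of points about the regression line,
# an index of random error between the methods.
residual_ss = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
sy_x = math.sqrt(residual_ss / (n - 2))

print(f"slope={slope:.3f}  intercept={intercept:.2f}  r={r:.4f}  "
      f"r^2={r*r:.4f}  Sy/x={sy_x:.2f}")
```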

DEFINITIONS BOX Correlation Coefficient (r) Defines the strength of the relationship between two variables. Ranges from −1 (perfect inverse correlation) to +1 (perfect positive correlation). A value of 0 indicates no correlation or a random relationship between variables. Coefficient of Determination (r2 or R2) Indicates the proportion of the variation in one variable that is explained by the other. Ranges from 0 to 1. Where r2 = 0.95, one variable explains 95% of the variation of the other. In the case of regression, it indicates how well the line represents the data.

Note that a statistical difference does not necessarily indicate that the method is not clinically usable, just that it is different. The size and nature (systematic and random error) of the differences determine whether a method is clinically usable.20 For this reason, linear regression analysis is more useful than the t-test for evaluating COM studies,20 as the constant systematic error can be determined from the y-intercept and the proportional systematic error from the slope. Random error can also be estimated from the standard error of the estimate (Sy/x). Importantly, if a nonlinear relationship occurs between the test and comparative methods, linear regression analysis can be used only over the values in the linear range. To make accurate conclusions about the relationship between two tests, it is important to confirm that outliers are true outliers and not the result of technical errors.21–23 So far, we have described how we estimate the error of test methods in terms of imprecision and inaccuracy. However, tests are performed to answer clinical questions, so to assess how this error might affect clinical judgments, it is assessed in terms of allowable (analytical) error (Ea).24 Allowable error is determined for each test based on the amount of error that will not negatively affect clinical decisions. If the combined random and systematic error (total error) is less than Ea, then the performance of the test is considered acceptable. However, if the error is larger than Ea, corrections (calibration, new reagents, or hardware improvements) must be made to reduce the error, or the method must be rejected. This process ensures that the laboratory provides accurate, clinically usable information to physicians to manage their patients effectively. To emphasize this point, laboratorians should consider that physicians are rarely aware of the imprecision, bias, and performance of a given test that they rely on to make decisions. It is the responsibility of the laboratory to ensure quality.
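A sketch of this acceptability decision is shown below, combining a bias estimate from a comparison or recovery study with the imprecision from the precision study and comparing the total error with Ea. The multiplier applied to the SD varies between criteria (values of 2, 3, or 4 are all in use), so the value chosen here, along with the glucose numbers, is only an assumption for illustration.

```python
# Total error vs. allowable error (Ea) check (hypothetical glucose values).
bias = 1.5          # mg/dL systematic error at the decision level, from a COM/recovery study
sd = 1.8            # mg/dL imprecision at the same level, from the precision study
z = 2               # multiplier on SD; 2, 3, or 4 are used depending on the criterion chosen
ea = 6.0            # mg/dL allowable error at this concentration (CLIA: 6 mg/dL or 10%)

total_error = abs(bias) + z * sd
verdict = "acceptable" if total_error <= ea else "modify or reject the method"
print(f"total error = |{bias}| + {z} x {sd} = {total_error:.1f} mg/dL vs Ea {ea} -> {verdict}")
```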

Allowable Analytical Error An important aspect of method evaluation is to determine if the random and systematic errors (total error) are less than the total allowable analytical error (Ea). In the past, several approaches have been used to establish Ea, including physiologic variation, multiples of the reference interval, and pathologist judgment.25, 26, 27, 28 The Clinical Laboratory Improvement Amendments of 1988 (CLIA 88) published allowable error (Ea) for an array of clinical tests.29 The Ea limits published by the CLIA specify the maximum error allowable by federally mandated proficiency testing (see examples in Table 3.3). These performance standards are now being used to determine the acceptability of clinical chemistry analyzer performance.30,31 The comparison with Ea is based on the types of studies described in the previous sections (Table 3.4). An example of calculations made for the single-value criteria is shown in Table 3.4. Here, estimates of random and systematic error are calculated and then compared with the published allowable error at critical concentrations of the analyte. If the test does not meet the allowable error criteria, it must be modified to reduce error or be rejected. TABLE 3.4 Single-Value Criteria of Westgard et al.

Based on National Committee for Clinical Laboratory Standards (NCCLS). Approved Guideline for Handling and Processing of Blood Specimens. 3rd ed. Villanova, PA: NCCLS; 2004 (Document no. H18A3).

Medically acceptable error (Ea) based on performance standards set by CLIA, as shown in Table 3.3. Comprehensive COM studies are demanding on personnel, time, and budgets. This has led to the description of abbreviated experiments that can be undertaken to estimate imprecision and inaccuracy.32 CLIA has published guidelines for such an abbreviated application, which can be used by a laboratory to confirm that the precision and accuracy performance is consistent with the manufacturer's reported claims. These studies can be completed in 5 working days, making it likely that laboratories will use these guidelines to set up new methodologies. The Clinical and Laboratory Standards Institute (CLSI) also has several documents that describe best practices for method evaluation with worked examples.

Method Evaluation Acceptance Criteria Collectively, the data gathered in precision, linearity, interference, recovery, and method comparison studies are used to guide test implementation decisions. The data alone do not define whether a test method is acceptable. Clinical judgment is required to determine if the analytical performance is acceptable for clinical use, with consideration for the nature and application of the analyte. For example, imprecision for a pregnancy test around the cutoff value of 5 or 10 mIU/mL is more of a concern than it would be at 15,000 mIU/mL. Likewise, proportional bias for troponin at a high concentration is of less concern than it is near the clinical decision point. Thus, method evaluation studies and statistical analysis are necessary, but not sufficient, to determine if a test is valid. The validation data and statistics serve as the basis on which to make the decision.

QUALITY CONTROL QC in the laboratory involves the systematic monitoring of analytic processes to detect analytic errors that occur during analysis and to ultimately prevent the reporting of incorrect patient test results. In the context of what we have discussed so far, QC is part of the performance monitoring that occurs after a test has been established (see Fig. 3.10). In general, monitoring of analytic methods is performed by assaying stable control materials and comparing their determined values with their expected values. The expected values are represented by intervals of acceptable values with upper and lower limits, known as control limits. When the observed values are within the control limits, the operator can be reasonably assured that the analytic method is properly reporting values as approved during the method validation. However, when observed values fall outside the control limits, the operator must be aware of possible problems and the need for further investigation before reporting potentially erroneous patient results. The principles of visualizing QC data were initially applied to the clinical laboratory in the 1950s by Levey and Jennings.33 Many important modifications have been made to these systems since that time, and they are discussed in general in this section.

DEFINITIONS BOX Control limits Thresholds beyond which a value is statistically unlikely. Control material Material analyzed only for QC purposes. Levey-Jennings control chart Graphical representation of observed values of a control material over time in the context of the upper and lower control limits. Multirule procedure Decision criteria to determine if an analytic run is in control; used to detect random and systematic errors over time (e.g., 1-3s, 2-2s, and R-4s in combination).

Specimens analyzed for QC purposes are known as QC materials. These materials must be available in sufficient quantity to last for extended periods of time, as determined by QC material stability and vendor-determined expiration dates. QC materials should be of the same matrix as the specimens actually to be tested. For example, a glucose assay performed on serum should have QC materials that are prepared in serum. Variation between vials should be minimal so that differences seen over time can be attributed to the analytic method itself and not to variation in the QC material. Control material concentrations should span the clinically important range of the analyte at appropriate decision levels. For example, sodium QC materials might be tested at 130 and 150 mmol/L, representing cutoff values for hyponatremia and hypernatremia, respectively. QC for general chemistry assays generally uses two levels of control, while immunoassays commonly use three. Today, laboratories more often purchase manufactured control materials for QC instead of preparing the materials themselves. These materials are often lyophilized (dehydrated to powder) for stability and can be reconstituted in specific diluents or matrices representing urine, blood, or cerebrospinal fluid (CSF). Control materials can be purchased with or without previously assayed ranges. Assayed materials give expected target ranges, often including the mean and SD obtained with common analytic methods. While these products are more expensive because of the additional characterization, they allow another external check of method accuracy. Some laboratories will prepare internal quality control materials using human serum (i.e., serum pools) as an attempt to address concerns about the artificial matrices and analyte values used in commercial materials. Because most commercially prepared control materials are lyophilized and require reconstitution before use, the diluent should be carefully added and mixed. Incomplete mixing yields a partition of supernatant liquid and underlying sediment and will result in incorrect control values. Frequently, the reconstituted material will be more turbid (cloudy) than the actual patient specimen. Stabilized frozen controls do not require reconstitution but may behave differently from patient specimens in some analytic systems. It is important to carefully evaluate these stabilized controls with any new instrument system.

Improper preparation and handling of QC materials is the main reason for QC failures in the laboratory, and staff education and material management are required to minimize it. For example, if one user mixes the QC material thoroughly after warming it for several hours, while another runs it cold with minimal mixing, the QC results can be expected to vary even though the analytic process itself may be perfectly stable. Manufactured liquid QC material is becoming more widely available; it is usually more expensive, but it removes any variation due to the preparation of the QC materials.

QC Charts A common method for assessing control material results over time is the Levey-Jennings control chart (Fig. 3.17). Control charts graphically represent the observed values of a control material over time in the context of the upper and lower control limits in relation to the target value. When the observed value falls within the control limits, it can be interpreted that the method is performing adequately. Points falling outside the control limits suggest that problems may be developing. Control limits are expressed as the mean ± SD using formulas previously described in this chapter. Control charts can detect errors in accuracy and imprecision over time (Fig. 3.17A). Analytic errors can be separated into random and systematic errors. The underlying rationale for running repeated assays is to detect random errors that affect precision (Fig. 3.17B, middle). Random errors may be caused by variations in technique. Systematic errors arise from factors that contribute to constant differences between measurements; these errors may be either positive or negative (Fig. 3.17B, right). Systematic errors may be due to several factors, including poorly made standards, reagents, instrumentation problems, poorly written procedures, or inadequate staff training.

FIGURE 3.17 Levey-Jennings control chart. Data are plotted over time to identify quality control failures. (Adapted from Westgard JO, Klee GG. Quality management. In: Tietz NW, Burtis CA, Ashwood ER, Bruns DE, eds. Tietz Textbook of Clinical Chemistry and Molecular Diagnostics. 4th ed. St. Louis, MO: Elsevier Saunders; 2006:485–529.)

Operation of a QC System The QC system in the clinical laboratory is used to monitor the analytic variations that can occur (refer to Chapter 4 for additional information on the Quality Control Plan). The QC program can be thought of as a three-stage process:
1. Establishing or verifying allowable statistical limits of variation for each analytic method
2. Using these limits as criteria for evaluating the QC data generated for each test
3. Taking action in real time to remedy errors when indicated
   a. Finding the cause(s) of error
   b. Taking corrective action
   c. Reanalyzing control and patient data

Establishing Statistical Quality Control Limits With a new instrument or with new lots of control material, the different levels of control material must be analyzed for 20 days. Exceptions include assays that are highly precise (CV < 1%), such as blood gases, where 5 days is adequate. Also, if a new lot of QC material with ranges similar to the current lot is being implemented for an existing analyte, a modified precision study can be done with as few as 10 values, followed by post-implementation monitoring. Repeat analysis of the control materials allows determination of the mean and SD of the control materials. Initial estimates of the mean and control limits may be somewhat inaccurate because of the low number of data points. To produce more reliable data, estimates of the mean and SDs should be reviewed and updated to include cumulative data. When changing to a new lot of similar material, laboratorians use the newly obtained mean (or the manufacturer's mean and SD) as the target mean but retain the previous SD. As more data are obtained, all data should be averaged to derive the best estimates of the mean and SD.34 The distribution of error is assumed to be symmetrical and bell shaped (Gaussian), as shown in Figure 3.18. Control limits are set to include most observed values (95% to 99.7%), corresponding to the mean ±2 or 3 SDs. Observation of values in the distribution tails should therefore be rare (about 1 in 20 beyond 2 SDs; about 3 in 1,000 beyond 3 SDs). Observations outside the control limits suggest changes in the analytic method. If the process is in control, no more than 0.3% of the points will be outside the 3 SD (3s) limits. Analytic methods are considered in control if a symmetrical distribution of control values about the mean is seen and few values fall outside the 2 SD (2s) control limits. Some laboratories define a method as out of control if a control value exceeds the 2s limits. Other laboratories use the 2s limit as a warning limit and the 3s limit as an error limit. In this case, a control point between 2s and 3s would alert the technologist to a potential problem, while a point greater than 3s would require corrective action. The selection of control rules and numbers should be related to the goals set by the laboratory.35
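A sketch of how these limits might be computed from the initial data is shown below; the control values are hypothetical, and, as noted above, the mean and SD should be re-estimated as cumulative data accrue.

```python
import statistics

# Hypothetical results for one control level collected once per day for 20 days.
qc = [138.2, 139.5, 140.1, 139.0, 140.8, 138.7, 139.9, 140.3, 139.4, 138.9,
      140.6, 139.2, 139.8, 140.0, 138.5, 139.6, 140.4, 139.1, 139.7, 140.2]

mean = statistics.mean(qc)
sd = statistics.stdev(qc)

limits = {
    "warning (2s)": (mean - 2 * sd, mean + 2 * sd),   # ~95% of in-control values
    "error (3s)":   (mean - 3 * sd, mean + 3 * sd),   # ~99.7% of in-control values
}
print(f"mean={mean:.1f}  SD={sd:.2f}")
for name, (lo, hi) in limits.items():
    print(f"{name}: {lo:.1f} - {hi:.1f}")
```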

FIGURE 3.18 Control chart showing the relationship of control limits to the Gaussian distribution. Daily control values are graphed, showing examples of a shift (an abrupt change in the analytic process) and a trend (a gradual change in the analytic process).

It is essential to recognize the distinction between QC limits and QC goals or specifications (Fig. 3.19). QC specifications are the criteria used to decide that a given method meets the clinical requirements. QC specifications may be derived from the total allowable error, biological variation, or other medical decision criteria. QC limits are defined by the process itself based on its natural variation. A good description of the difference is that QC specifications are the "voice of the customer," while QC limits are the "voice of the process." Thus, QC specifications are "the performance the health care provider needs," whereas QC limits are the "natural variation of a given test"; more simply, QC specifications are what you want and QC limits are what you have. A process tells the user its limits, and the user decides if those limits meet the specification.

FIGURE 3.19 QC specification limits are the criteria used to decide that a given method meets the clinical requirements based on the needs of the health care provider. QC control limits are defined by the process itself based on its natural variation.

From a practical perspective, QC limits define whether a process is in control on a daily basis, and QC specifications define whether the overall performance of the process meets the quality goals. Consider that a process may be out of control (e.g., have a 1-3s flag) but still meet the QC specification limits. Conversely, a poorly performing method might have natural-variation QC limits that exceed the QC specification. QC limits must be within the QC specifications or the laboratory will not be able to consistently report accurate results. Importantly, QC specifications do not have any effect on, or causal relationship with, QC limits; setting QC specifications to a CV of 0.1% will not make a method more precise, any more than widening specifications to a CV of 100% would make the method imprecise. However, the relationship between QC limits and QC specifications can be used to establish which multirules to use.

Multirules RULE! The use of the statistical process control chart (Levey-Jennings) was pioneered by Shewhart in the 1920s. Multirules were formalized by the Western Electric Company and later applied to the clinical laboratory by Westgard and Groth.36 Multirules establish a criterion for judging whether an analytic process is out of control. To simplify the various control rules, abbreviations are used to refer to them (Table 3.5). Control rules indicate the number of control observations per analytic run, followed by the control limit in subscript.37 For example, the 13s rule indicates that a data point cannot exceed 3 SDs (3s). If the 13s rule is not triggered, the analytic run will be accepted (i.e., results will be reported). If the QC results are more than 3 SDs from the mean (the 13s rule is violated), the run may be rejected and there will be additional investigation. The type of rule violated indicates what type of error exists. For example, a 13s rule violation may indicate a loss of precision or "random error" (Table 3.5). TABLE 3.5 Multirule Procedures

SD, standard deviation.

Analogous to overlapping diseased and healthy patient results, it is important to consider that not every rule violation indicates that a process is out of control. The 12s rule, for example, will be violated by chance in about 5% of runs with normal analytic variation (Fig. 3.19A). The 8X rule is violated if 8 consecutive control observations fall on one side or the other of the mean (Fig. 3.19B). The more levels of QC material analyzed, the higher the probability of a rule violation even in the absence of true error. When two controls are used, there is an approximately 10% chance that at least one control will be outside the 2s limits; when four controls are used, there is a 17% chance. For this reason, many laboratories use the 2s limit only as a warning, or not at all, in combination with other QC rules, rather than as a criterion for run rejection. QC rules should be selected based on the performance of the test relative to the QC specifications. If a test is very precise and well within the QC specification, fewer QC rules are needed. For example, if there are 6 standard deviations between the natural process variation and the QC specification limits, a simple 1-3s rule will easily detect that a process is out of control long before it exceeds the specification limits. Conversely, a test that just meets the QC specification would need multiple rules to ensure that error is detected before it exceeds the specification. By judiciously selecting QC rules based on method performance and specifications, error detection can be maximized while minimizing false rejections. A final practical concept related to QC materials is repeats. It is common practice to repeat a control when a QC rule is violated. In the context of imprecision, this practice does not make much sense. A repeat sample is likely to regress toward the mean and therefore fall within the limits, even if the process is out of control. QC repeats make sense, if ever, only in the context of a shift or bias, where the bias would persist between samples. Collectively, operators should review the Levey-Jennings chart and think about their process and what a given QC violation means before acting. The operators' job is not to get QC to pass, but to report accurate results.
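A simplified sketch of a few of the multirules named above is shown below. It evaluates a single control level against its established mean and SD; a full Westgard implementation would also apply rules across control materials and runs, and the R-4s check shown here is only a within-material approximation.

```python
def westgard_flags(values, mean, sd):
    """Return the multirule violations triggered by a sequence of control values
    (z-scores are computed against the established mean and SD)."""
    z = [(v - mean) / sd for v in values]
    flags = []
    if any(abs(zi) > 3 for zi in z):
        flags.append("1-3s")      # one value beyond 3 SD (often random error)
    if any((z[i] > 2 and z[i + 1] > 2) or (z[i] < -2 and z[i + 1] < -2)
           for i in range(len(z) - 1)):
        flags.append("2-2s")      # two consecutive values beyond 2 SD on the same side
    if any(abs(z[i] - z[i + 1]) > 4 for i in range(len(z) - 1)):
        flags.append("R-4s")      # range between consecutive values exceeds 4 SD
    if any(all(zi > 0 for zi in z[i:i + 8]) or all(zi < 0 for zi in z[i:i + 8])
           for i in range(len(z) - 7)):
        flags.append("8-x")       # eight consecutive values on one side of the mean
    return flags

# Hypothetical control values against an established mean of 100 and SD of 2
print(westgard_flags([101, 99, 107, 100.5, 98, 103, 99.5, 101.5], mean=100, sd=2))
```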

Proficiency Testing In addition to daily QC practices, laboratories are required to participate in external proficiency testing programs. Acceptable performance in proficiency testing programs is required by the CAP, the CLIA, and TJC to maintain laboratory accreditation. Even more important, proficiency testing is another tool in the ongoing process of monitoring test performance.

Indicators of Analytic Performance
Proficiency testing
Internal quality control
Laboratory inspections (accreditation)
Quality assurance monitoring (see Chapter 4)
Clinical utilization

DEFINITIONS BOX Proficiency test Method used to validate a particular measurement process. The results are compared with those of other external laboratories to give an objective indication of test accuracy. Proficiency samples Specimens that have known concentrations of an analyte for the test of interest. The testing laboratory does not know the target concentration when the samples are tested.

The majority of clinical laboratories subscribe to the proficiency program provided by the CAP. The CAP program has been in existence for 50 years, and it is the gold standard for clinical laboratory proficiency testing. In our laboratory, the majority of analytes are monitored with the CAP proficiency surveys. Other proficiency testing programs are often used when analytes of interest are not tested through CAP (e.g., esoteric tests) or as a means to supplement CAP proficiency testing programs. Additional proficiency programs used in our laboratory include the International Sirolimus Proficiency Testing Scheme (IST), the Binding Site, the American Proficiency Institute (API), and the Centers for Disease Control and Prevention (CDC). If there is no commercial proficiency testing program available for an analyte, the laboratory is required to implement a non–proficiency test scheme; this is reviewed at the end of this section. For a proficiency test, a series of unknown samples are sent several times per year to the laboratory from the program offering the analysis, such as CAP. Each testing event includes 2 to 5 sample challenges per analyte, depending on test complexity, spanning a range of target values within the AMR and including medical decision points. The samples are analyzed in the same manner as patient specimens as much as possible, and the results are reported to the proficiency program. The program then compiles the results from all of the laboratories participating in the survey and sends a performance report back to each participating laboratory. Each analyte has defined performance criteria (e.g., ±3 SDs from the peer mean), and laboratories using the same method are graded by comparison with the group. Some proficiency tests are not quantitative and are compared with other laboratories qualitatively or semiquantitatively (e.g., rapid pregnancy tests, RPR titers). Areas of pathology other than clinical chemistry are also subjected to mandatory qualitative/interpretive proficiency testing, including anatomic pathology, clinical microbiology, and clinical microscopy.

Example of Proficiency Test Results for βhCG
βhCG-08: CAP value = 75.58; SD = 4.80; CV = 6.4%; n = 47 peer laboratories
Evaluation criteria: Peer group ±3 SDs; acceptable range 65.7–85.2 mIU/mL
Testing laboratory value = 71.54; SDI = −0.84 (acceptable)

An example of a hypothetical survey is shown in the text box above. The βhCG survey was the eighth sample sent in that year (βhCG-08). The mean of all the laboratories using the same method was 75.58 mIU/mL. The SD and CV are indicated, as is the number of laboratories that participated in that survey (n = 47). The acceptance criterion for this test is established as within ±3 SDs (i.e., between 65.7 and 85.2 mIU/mL). The laboratory's result was 71.54 mIU/mL, which is −0.84 SD from the mean and is within the acceptable limits.

When a laboratory performs proficiency testing, there are strict requirements, as follows:
1. The laboratory must incorporate proficiency testing into its routine workflow as much as possible.
2. The test values/samples must not be shared with other laboratories until after the deadline for submission of results to the proficiency provider. Referral of proficiency samples to another laboratory is prohibited, as is acceptance of proficiency samples from another laboratory.
3. Proficiency samples are tested by the bench technical staff who normally conduct patient testing; there can be no unnecessary repeats or actions outside of how a patient sample would be tested and reported.
4. Testing should be completed within the usual time it would take for routine patient testing.
5. Proficiency samples are to be performed and submitted on the primary analyzer when there are multiple analyzers in the laboratory, following CLIA guidelines (D2006 493.801). In addition, at least twice per year, other analyzers within the laboratory are to be compared to the primary analyzer.
6. All proficiency failures and significant shifts and trends must be reviewed, investigated, and resolved within 30 days of final receipt of proficiency results. Emphasis must be placed on investigating potential patient impact during the time of proficiency testing.
7. The proficiency testing program must demonstrate dynamic, real-time review of all proficiency results by the laboratory director and delegated management personnel.

The bottom line is that the sample should be treated like a patient sample to yield a true indication of test accuracy. Proficiency samples are not to be analyzed more than once unless this is the standard policy for patient testing at your laboratory. The acceptability criteria for proficiency testing are provided by the proficiency program. For regulated analytes, these criteria are often the CLIA limits (see Table 3.3). For nonregulated analytes, acceptable criteria are often determined by the scientific community at large. For example, the acceptability criterion for lactate dehydrogenase is ±20% or 3 SDs (whichever is greater) based on peer group data. Proficiency testing allows each laboratory to compare its test results with those of peer laboratories that use the same or similar instruments and methods. Proficiency testing provides performance data for a given analyte at a specific point in time.

Comparison of performance to a robust, statistically valid peer group is essential to identify areas for improvement. Areas of improvement that may be identified in a single proficiency testing event or over multiple events include variation from peer group results, imprecision, and/or results that trend above or below the mean consistently or at specific analyte concentrations. Use of these data allows laboratories to continuously monitor and improve their test performance. The proficiency testing samples also can serve as valuable troubleshooting aids when investigating problem analytes. In our hospital, proficiency samples are also included in the technologist competency or new employee training program. Proficiency tests can also be beneficial in validating the laboratory's measurement method, technical training, and total allowable error limits for new tests. Proficiency testing programs require thorough investigation of discrepant results (i.e., failures) for any analyte. Laboratories may be asked to submit information that could include current and historical proficiency testing reports, QC and equipment monitoring, analysis of and corrective action for the problem that caused the failure, and the steps taken to ensure the reliability of patient test results (patient impact). If the laboratory cannot resolve analyte testing discrepancies over several testing events, the testing facility may be at risk of losing the authority to perform patient testing for the analyte(s) in question. Also, if the laboratory violates any of the mandatory proficiency requirements referenced earlier, it may be at risk of losing its accreditation and authority to perform patient testing for all tests. To develop and manage a successful proficiency testing program for the clinical laboratory, it is important to understand the documented requirements from the two main accreditation bodies. A large proficiency testing program often requires considerable personnel resources and costs for the laboratory and is one of several essential factors in providing a quality management system. As an example of the scale and volume of proficiency tests, our laboratory (for an 800-bed hospital with outreach clinics) performed ≈10,000 proficiency tests in a year for ≈500 individual analytes. Even a smaller laboratory with a limited test menu, 3 analyzers, and waived testing performs several hundred proficiency tests per year. Besides meeting required accreditation standards, proficiency testing allows the laboratory to objectively ensure patient results are accurate.
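A sketch of the grading arithmetic from the hypothetical βhCG-08 survey shown earlier in this section is given below; the ±3 SD window is the evaluation criterion quoted in that example.

```python
# Grading arithmetic for the hypothetical bhCG-08 proficiency challenge shown above.
peer_mean, peer_sd = 75.58, 4.80      # mIU/mL, peer-group statistics from the survey report
lab_result = 71.54                    # mIU/mL, this laboratory's reported result

sdi = (lab_result - peer_mean) / peer_sd
acceptable = abs(sdi) <= 3            # evaluation criterion: within +/- 3 SD of the peer mean
print(f"SDI = {sdi:+.2f} -> {'acceptable' if acceptable else 'unacceptable'}")
```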

REFERENCE INTERVAL STUDIES Laboratory test data are used to make medical diagnoses, assess physiologic function, and manage therapy. When interpreting laboratory data, clinicians compare the measured test result from a patient with a reference interval. Reference intervals include all the data points that define the range of observations (e.g., if the interval is 5 to 10, a patient result of 5 would be considered within the interval). The upper and lower reference limits are set to define a specified percentage (usually 95%) of the values for a population; this means that a percentage (usually 5%) of patients will fall outside the reference interval in the absence of any condition or disease. Reference intervals are sometimes erroneously called "normal ranges." While all normal ranges are in fact reference intervals, not all reference intervals are normal ranges. This is exemplified by the reference interval for therapeutic drug levels. In this case, a "normal" individual would not have any drug in their system, whereas a patient on therapy has a defined target range. Reference intervals are also sometimes called reference ranges; the preferred term is reference interval because range implies the absolute maximum and minimum values.

DEFINITIONS BOX Reference interval A pair of medical decision points that span the limits of results expected for a defined healthy population.

The theory for the development of reference intervals was the work of two main expert committees.38, 39, 40, 41 These committees established the importance of standardizing collection procedures, the use of statistical methods for analysis of reference values and estimation of reference intervals, and the selection of reference populations. Reference intervals are usually established by the scientific community or the manufacturers of reagents and new methodologies. Developing reference intervals often has a financial impact on vendors and marketing of the laboratory products. Laboratorians must be aware of these scientific and economic forces when reviewing vendor data and determining the need for reference interval studies. The two main types of reference interval studies that are reviewed in this section are (1) establishing a reference interval and (2) transferring and verifying a reference interval. The clinical laboratory is required by good laboratory practice and accreditation agencies (i.e., the CAP checklist) to either verify or establish reference intervals for any new tests or significant changes in methodology (Box 3.3).

BOX 3.3 Examples of CAP Checklist Questions Regarding Reference Intervals for Laboratory Inspection The Laboratory Establishes or Verifies Its Reference Intervals (Normal Values) NOTE: Reference intervals are important to allow a clinician to assess patient results against an appropriate population. The reference range must be established or verified for each analyte and specimen source (e.g., blood, urine, and CSF), when appropriate. For many analytes (e.g., therapeutic drugs and CSF total protein), literature references or a manufacturer's package insert information may be appropriate.

Evidence of Compliance Record of reference range study or records of verification of manufacturer's stated range when reference range study is not practical (e.g., unavailable normal population) or other methods approved by the laboratory director.47, 48, 49

The Laboratory Evaluates the Appropriateness of Its Reference Intervals and Takes Corrective Action if Necessary
Criteria for evaluation of reference intervals include the following:
1. Introduction of a new analyte to the test repertoire
2. Change of analytic methodology
3. Change in patient population
If it is determined that the range is no longer appropriate for the patient population, corrective action must be taken.42, 43, 44

Evidence of Compliance Records of evaluation and corrective action, if indicated Adapted from Sarewitz SJ, ed. Laboratory Accreditation Program Inspection Checklists. Northfield, IL: College of American Pathologists; 2009, with permission.

DEFINITIONS BOX Establishing a reference interval A new reference interval is established when there is no existing analyte or methodology in the clinical or reference laboratory with which to conduct comparative studies. It is a costly and labor-intensive study that will involve laboratory resources at all levels and may require from 120 to as many as ≈700 study individuals. Transference and validation of a reference interval In many cases, a reference interval will be transferred from a similar instrument or laboratory. The reference interval is then validated using the method comparison and patient population and/or using a smaller sample of healthy individuals. These are the most common reference interval studies performed in the clinical laboratory and often include as few as 40 study individuals. The core protocols for both establishing and verifying reference ranges are reviewed in this section. Other terms are used for values or ranges that help the clinician determine the relationship of patients' test results to statistically determined values or ranges for the clinical condition under treatment.

DEFINITIONS BOX Medical decision level Value for an analyte that represents the boundary between different therapeutic approaches.

Normal range Range of results between two medical decision points that correspond to the central 95% of results from a healthy patient population. Note: Of the results, 2.5% will be above the upper limit, and 2.5% will be below the lower limit of the normal range. Therapeutic range Reference interval applied to a therapeutic drug.

Reference intervals are needed for all tests in the clinical laboratory, and the provision of reliable reference intervals is an important task for clinical laboratories and test manufacturers. The review of existing reference intervals by the health care team (scientific community, manufacturers, and clinical laboratory) is crucial to meeting the challenges of providing optimal laboratory data for patient care. The application of reference intervals can be grouped into three main categories: diagnosis of a disease or condition (Table 3.6), monitoring of a physiologic condition (Table 3.7), or monitoring therapeutic drugs (Table 3.8). These different applications require different approaches for determination of a reference interval. Specifically, therapeutic drug targets are not derived from a healthy population, and unique physiologic conditions require the appropriate reference population. TABLE 3.6 Thyroid-Stimulating Hormone (TSH) Thyroid Disease

Based on Dugaw KA, Jack RM, Rutledge J. Pediatric reference ranges for TSH, free T4, total T4 total T3 and T3 uptake on the vitros ECi analyzer. Clin Chem. 2001;47:A108.

TABLE 3.7 hCG at Defined Gestational Age

TABLE 3.8 Therapeutic Management Targets for Digoxin

Tables 3.6, 3.7, and 3.8 also demonstrate the complexity of reference intervals when the clinician requires multiple levels (partitions) of reference intervals. The full framework for verifying or establishing reference intervals can be overwhelming and, for many valid reasons, not feasible for large and small clinical laboratories alike. The costs, personnel, and resource requirements mandate that the reference interval experiment be feasible, well defined, and structured in such a manner as to provide accurate and timely reference intervals for optimal clinical use. Where possible, the clinical laboratory director may determine that a review of literature references or the manufacturer's package insert is appropriate for assigning reference intervals for an analyte, or this additional information may allow a shorter reference interval verification study (i.e., 20 study individuals).

Establishing Reference Intervals The Clinical and Laboratory Standards Institute (CLSI, formerly National Committee for Clinical Laboratory Standards [NCCLS]) has published a preferred guideline/resource for establishing or verifying reference intervals (Box 3.4).45

BOX 3.4 To Establish a Reference Range Study
1. Define an appropriate list of biological variations and analytic interferences from medical literature.
2. Choose selection and partition (e.g., age or gender) criteria.
3. Complete a written consent form and questionnaire to capture selection criteria.
4. Categorize the potential reference individuals based on the questionnaire findings.
5. Exclude individuals from the reference sample group based on exclusion criteria.
6. Define the number of reference individuals in consideration of desired confidence limits and statistical accuracy.
7. Standardize collection and analysis of reference specimens for the measurement of a given analyte consistent with the routine practice of patients.
8. Inspect the reference value data and prepare a histogram to evaluate the distribution of data.
9. Identify possible data errors and/or outliers and then analyze the reference values.
10. Document all of the previously mentioned steps and procedures.

Selection of Reference Interval Study Individuals The selection of individuals who can be included in a reference interval study requires defining detailed inclusion/exclusion criteria. Inclusion criteria define what factors (e.g., age and gender) qualify individuals for the study, while exclusion criteria list factors that render individuals inappropriate for the study (Table 3.9). It is essential to select the appropriate individuals to obtain the optimal set of specimens with an acceptable level of confidence. Determination of the necessary inclusion and exclusion criteria for donor selection may require extensive literature searches and review with laboratory directors and clinicians. Initially, what constitutes a "healthy"/"normal" donor for the associated reference values must be exactly defined. For example, for a βhCG reference interval study, one would exclude pregnant women or those who may be pregnant, as well as individuals with βhCG-producing tumors. An important note is that laboratories are often challenged to locate donors outside the laboratory working environment, whose staff may be largely female and under 40 years of age. The use of donors who may not represent the population of interest has the potential to skew the evaluation data used to establish the reference interval. Inpatient samples generally cannot be used for reference interval studies that are designed to reflect a healthy population. Often, donors are recruited out of necessity from unlikely sources, such as medical or university students and public health patients. Practical considerations must be built into the study, including adequate sample procurement areas, staffing, legal disclaimers, and donor financial incentives (Box 3.5). TABLE 3.9 Examples of Possible Exclusion Factors for a Reference Interval Study

BOX 3.5 Example Questionnaire

Capturing the appropriate information for the inclusion and exclusion criteria, such as donor health status, often requires a well-written confidential questionnaire and consent form. Another consideration when selecting individuals for a reference interval study is additional factors that may require partitioning individuals into subgroups (Table 3.10). These subgroups may require separate reference interval studies. Fortunately, a large number of laboratory tests do not require partitioning and can be used with only one reference interval that is not dependent on a variety of factors (Table 3.11). These examples are testimony to the complexity and variability of conducting reference interval studies. The initial selection of individual donors is crucial to the successful evaluation of reference intervals. TABLE 3.10 Examples of Possible Subgroups Requiring Partitions for a Reference Interval Study

TABLE 3.11 Example of a Simple Reference Interval

Preanalytic and Analytic Considerations
Once individuals are selected for a reference interval study, it is important to consider both preanalytic and analytic variables that can affect specific laboratory tests (Table 3.12). Preanalytic and analytic variables must be controlled and standardized to generate a valid reference interval. To illustrate these points, consider establishing a reference interval for fasting glucose. An obvious preanalytic variable that should be addressed for this test is that individuals should not eat for at least 8 hours prior to sample collection. In terms of analytic factors, it is important to define acceptable levels of common interferences, such as hemolysis or lipemia. For fasting glucose, the laboratorian must define whether samples with excess hemolysis or lipemia will be included in the study; this depends, in part, on whether the interferences affect the method (in this case, glucose measurement). If specific interferences do affect the accuracy of the test, it is essential that affected samples can be flagged so that the results are handled and interpreted appropriately. Hemolysis and lipemia can be detected automatically by large chemistry analyzers. It is also worth considering that some methods are more sensitive to interferences than others. It is also necessary to consider what specific reagents are used in an assay. Changing to a new reagent lot in the middle of a reference study could widen the reference interval or change the data distribution (e.g., change from normal to bimodal). Thus, an effective reference interval study requires extensive knowledge of the analyte, analytic parameters, methodology, and instrumentation, along with management of all study variables.
TABLE 3.12 Preanalytic and Analytic Considerations for Reference Interval Studies

Determining Whether to Establish or Transfer and Verify Reference Intervals
Whether to transfer and verify a reference interval or establish an entirely new reference interval for a new method/analyte depends on several factors, such as the presence of an existing reference interval for the assay and the results of a statistical analysis comparing the test method with the reference method. The most basic method comparison involves plotting a reference method against a test method and fitting a linear regression (described in Fig. 3.7). If the correlation coefficient is 1.0, the slope is 1, and the intercept is 0, the two methods agree and may not require new reference ranges. In this case, a simple reference interval verification study is all that may be required. Conversely, if the two methods differ considerably, then a new reference interval needs to be established or external resources reviewed.
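This decision logic can be illustrated with a brief Python sketch. The glucose results, acceptance limits, and function names below are hypothetical assumptions introduced only to make the comparison concrete; they are not prescribed limits.

```python
# Minimal sketch: compare a test method with a reference method by ordinary
# least-squares regression and decide whether the existing reference interval
# might simply be transferred. All data and acceptance limits are hypothetical.

def linear_regression(x, y):
    """Return slope, intercept, and correlation coefficient (r) for y versus x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    syy = sum((yi - mean_y) ** 2 for yi in y)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Glucose (mg/dL) by the reference method (x) and the new test method (y)
reference = [72, 95, 110, 145, 180, 250, 310]
test = [70, 96, 108, 147, 183, 252, 307]

slope, intercept, r = linear_regression(reference, test)
print(f"slope={slope:.3f}, intercept={intercept:.2f}, r={r:.4f}")

# Hypothetical acceptance limits: near-unity slope, near-zero intercept, strong r
if 0.97 <= slope <= 1.03 and abs(intercept) <= 3 and r >= 0.99:
    print("Methods agree closely; verification of the existing interval may suffice.")
else:
    print("Methods differ; establish a new interval or review external resources.")
```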

Analysis of Reference Values
DEFINITIONS BOX
Nonparametric method Statistical test that makes no specific assumption about the distribution of the data. Nonparametric methods rank the reference data in order of increasing size. Because the majority of analytes are not normally (Gaussian) distributed (see Fig. 3.6), nonparametric tests are the recommended analysis for most reference intervals.
Parametric method Statistical test that assumes the observed values, or some mathematical transformation of those values, follow a normal (Gaussian) distribution (see Fig. 3.6).
Confidence interval Range of values that includes a specified probability, usually 90% or 95%. For example, consider a 95% confidence interval for the slope of 0.972 to 0.988 from a method comparison experiment. If this same experiment were conducted 100 times, the slope would fall between 0.972 and 0.988 in 95 of the 100 experiments. Confidence intervals convey and quantify the variability of estimates.

Data Analysis to Establish a Reference Interval
To establish a reference interval, it is recommended that the study include at least 120 individuals. This can be challenging and costly, but it may be necessary for esoteric and laboratory-developed tests. Once the acceptable raw data have been generated, the next step is to actually define the reference interval. The reference interval is calculated statistically using methods that depend on the distribution of the data. In the most basic sense, data may be either normally distributed (Gaussian) or skewed (non-Gaussian) (see Fig. 3.6). If reference data are normally distributed, the reference interval can be determined using a parametric method. A parametric method defines the interval by the mean ±1.96 SDs; by centering on the mean, this formula will include the central 95% of values, as given in the example in Figure 3.20A.
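As a simple illustration, the parametric calculation (mean ±1.96 SDs) can be carried out as follows. The values are hypothetical, and a real study would include at least 120 results.

```python
# Minimal sketch of a parametric reference interval: mean +/- 1.96 SD spans the
# central 95% of a Gaussian distribution. Values below are hypothetical.

import statistics

values = [6.8, 7.4, 8.1, 9.0, 7.9, 8.6, 7.2, 8.3, 9.4, 7.7,
          8.8, 8.0, 7.5, 9.1, 8.2, 7.0, 8.5, 8.9, 7.6, 8.4]  # hypothetical TT4-like results

mean = statistics.mean(values)
sd = statistics.stdev(values)          # sample standard deviation

lower = mean - 1.96 * sd
upper = mean + 1.96 * sd
print(f"Parametric reference interval: {lower:.1f} to {upper:.1f}")
```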

FIGURE 3.20 (A) Histogram of total thyroxine (TT4) levels in a real population illustrating a shape indicative of a Gaussian distribution, which is analyzed by parametric statistics. The reference interval is determined from the mean ±1.96 SDs. (B) Histogram of beta-human chorionic gonadotropin (βhCG) levels in a population of pregnant women demonstrating non-Gaussian data and nonparametric determination of the reference interval. The reference interval is determined from percentiles to include the central 95% of values. Although the selection of a wide range of gestational ages makes this a poor population for a reference interval study, it does demonstrate the application of nonparametric intervals.
Most analytes do not display a normal (Gaussian) distribution. For example, the distribution of βhCG in pregnant individuals over a range of gestational ages is skewed (Fig. 3.20B); although the selection of a wide range of gestational ages makes this a poor population for a reference interval study, it was selected as an example to emphasize the need for nonparametric intervals. Data that are not normally distributed (i.e., non-Gaussian) must be analyzed using nonparametric analyses. Nonparametric determination of the reference interval relies on percentiles, which do not depend on the distribution. The reference interval is determined by using the central 95% of values; the reference range is therefore defined by the 2.5th to the 97.5th percentiles, as demonstrated in Figure 3.20B. To calculate the interval, values are ranked from lowest to highest and the 2.5th and 97.5th percentiles are then calculated as follows:

(Eq. 3-7) Most reference intervals are determined using nonparametric analysis. This is because nonparametric analysis can also be used on Gaussian-distributed data and it is the CLSI-recommended method (Fig. 3.20B).45 With the development of statistical software packages such as Analyse-it, EP Evaluator, GraphPad Prism, JMP, MedCalc, Minitab, R, and SAS, reference intervals are rarely determined manually, although a basic reference range verification can be done in a spreadsheet with minimal effort by simply reviewing the proportion of patient results (>90%, or n ≥ 36 out of n = 40 total) that fall within the proposed reference range. However they are determined, it is important to understand how basic statistical concepts are used to establish reference intervals. For more information on these software programs, the interested reader can access the references and online resources listed at the end of the chapter.
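For readers who want to see the arithmetic, the sketch below computes a nonparametric interval. The data are hypothetical, and the rank positions 0.025 × (n + 1) and 0.975 × (n + 1) are one common convention assumed here; validated statistical software should be used for actual studies.

```python
# Minimal sketch of a nonparametric reference interval using the 2.5th and
# 97.5th percentiles of ranked data. Values are hypothetical; a real study
# would include at least 120 results.

def percentile_by_rank(sorted_values, fraction):
    """Percentile via rank = fraction*(n+1), with linear interpolation."""
    n = len(sorted_values)
    rank = fraction * (n + 1)
    rank = min(max(rank, 1.0), float(n))   # clamp to valid rank range
    low = int(rank)
    frac = rank - low
    high = min(low + 1, n)
    return sorted_values[low - 1] + frac * (sorted_values[high - 1] - sorted_values[low - 1])

results = sorted([4.1, 5.0, 3.8, 6.2, 4.7, 5.5, 4.9, 5.8, 4.4, 5.2,
                  6.0, 4.6, 5.1, 3.9, 5.7, 4.8, 5.3, 4.2, 5.9, 5.4])

lower = percentile_by_rank(results, 0.025)
upper = percentile_by_rank(results, 0.975)
print(f"Nonparametric reference interval: {lower:.1f} to {upper:.1f}")
```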

Data Analysis to Transfer and Verify a Reference Interval
When possible, clinical laboratories rely on assay manufacturers or on published primary literature as a source of reference intervals. This avoids the expensive and lengthy process of establishing a reference interval on a minimum of 120 healthy people. The CLSI allows less rigorous studies to verify a reference interval with as few as 40 subject specimens.45 Method verification studies can be used if the test method and study subjects are similar to the vendor's reference data and package insert information. The main assumption in using transference studies is that the reference method is of high quality and the subject populations are similar. The manufacturer's reported 95% reference limits may be considered valid if no more than 10% of the tested subjects fall outside the original reported limits. Figure 3.21 shows an example from our laboratory where we verify the manufacturer's reference range for free thyroxine (fT4). In this example, fewer than 10% of the values are outside the manufacturer's limits, enabling the reference interval to be adopted by the laboratory. If more than 10% of the values fall outside the proposed interval, an additional 20 or more specimens should be analyzed. If the second attempt at verification fails, the laboratorian should reexamine the analytic procedure and identify any differences between the laboratory's population and the population used by the manufacturer for their analysis. If no differences are identified, the laboratory may need to establish the reference interval using at least 120 individuals. Figure 3.22 demonstrates a simple algorithm to verify reference intervals.
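The verification rule lends itself to a very small script. Only the "no more than 10% outside" decision rule comes from the discussion above; the fT4-like results and the 0.8 to 1.8 ng/dL interval below are hypothetical.

```python
# Minimal sketch of a reference interval verification (transference) check:
# test a small set of healthy-donor results against the manufacturer's proposed
# interval and pass if no more than 10% fall outside. Data are hypothetical.

proposed_low, proposed_high = 0.8, 1.8   # manufacturer's reported 95% limits (hypothetical)

donor_results = [1.1, 0.9, 1.4, 1.0, 1.3, 1.7, 1.2, 0.8, 1.5, 1.6,
                 1.0, 1.2, 1.9, 1.1, 1.3, 0.9, 1.4, 1.2, 1.0, 1.6]  # n = 20

outside = [x for x in donor_results if x < proposed_low or x > proposed_high]
fraction_outside = len(outside) / len(donor_results)

print(f"{len(outside)} of {len(donor_results)} results ({fraction_outside:.0%}) fall outside")
if fraction_outside <= 0.10:
    print("Interval verified; it may be adopted by the laboratory.")
else:
    print("Verification failed; analyze ~20 more specimens or investigate population/method differences.")
```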

FIGURE 3.21 Reference verification test for free thyroxine (fT4). Only 4.3% of the values are outside the expected range (arrow). The test passes because this is less than the allowable number of outlying samples (10%) (underlined in red).

FIGURE 3.22 Algorithm to test whether a reference interval can be verified. A published reference interval (from a manufacturer or scientific literature) can be adopted if only a few samples fall outside the range. When possible, laboratory reference intervals are verified because of the time and expense of establishing a new interval.
Once a reference interval is determined and approved by the laboratory director, it needs to be documented and effectively communicated to the physicians and providers interpreting test results at the time the test results are reported. This is important given the slight variations in reference intervals seen even among testing facilities using similar methodologies. It is considered good laboratory practice to monitor reference intervals regularly. Some common problems that occur when determining reference intervals are given in Table 3.13. To help identify reference interval problems, the clinical laboratorian should be aware of common flags. These flags often come in the form of an event or communication that alerts the laboratory that there is a potential problem with a test. Based on our observations, flags for reference intervals can include vendor notifications, clinician queries about a particular test, and shifts or trends in patient result averages over time. Review of a large number of patient results over time requires a laboratory information system capable of retrieving patient result data. In addition, all studies need to be evaluated as to whether the patient population sampled represents the desired donor population. Any of these or other related factors may warrant a review of existing reference intervals.
TABLE 3.13 Common Problems Encountered when Monitoring Reference Intervals

Not all analytes use population/gender/age-based reference intervals. For example, therapeutic drugs are not found in healthy individuals, so their target values are based either on toxicity limits or on minimum effective concentrations. The target limits are derived from clinical or pharmacological studies rather than from the healthy population studies described above. In these situations, analytical accuracy is essential: one cannot apply a clinical cut point for a test if the results do not agree with the method used to determine the cut point. Cholesterol and HbA1c are also interpreted in the context of clinical outcomes rather than values in a healthy population. The quintessential example of this is vitamin D testing in extreme latitudes or smog-ridden large cities, where even an apparently healthy population will be deficient.

DIAGNOSTIC EFFICIENCY
Ideally, healthy patients would have completely distinct laboratory values from patients with disease (Fig. 3.23A). However, the reality is that laboratory values usually overlap significantly between these populations (Fig. 3.23B). To determine how good a given test is at detecting and predicting the presence of disease (or a physiologic condition), a number of different parameters are used. These parameters are broadly defined as diagnostic efficiency, which can be broken down into sensitivity, specificity, and predictive values.

FIGURE 3.23 Comparison of ideal and actual laboratory value distributions for healthy and abnormal populations. (A) In the ideal case, the healthy population is completely distinct from those with the condition. (B) In reality, values show significant overlap, which affects the diagnostic efficiency of the test.

DEFINITIONS BOX
Diagnostic sensitivity Ability of a test to detect a given disease or condition.
Diagnostic specificity Ability of a test to correctly identify the absence of a given disease or condition.
Positive predictive value Chance of an individual having a given disease or condition if the test is abnormal.
Negative predictive value Chance an individual does not have a given disease or condition if the test is within the reference interval.

Measures of Diagnostic Efficiency
Parameters of diagnostic efficiency are intended to quantify how useful a test is for a given disease or condition.46 For example, βhCG is used as a test to diagnose pregnancy. While βhCG is excellent for this purpose, there are instances where βhCG may be increased because of other causes, such as cancer (trophoblastic tumors), or below the cutoff, as is the case very early in pregnancy. It is important to recognize that there is both analytic and diagnostic (clinical) sensitivity. Analytic sensitivity refers to the lower limit of detection for a given analyte, whereas clinical sensitivity refers to the proportion of individuals with a given disease who test positive with the test. Sensitivity can be calculated from simple ratios (Fig. 3.24A). Patients with a condition who are correctly classified by a test as having the condition are called true positives (TPs). Patients with the condition who are classified by the test as not having the condition are called false negatives (FNs). Using the βhCG test as an example, sensitivity can be calculated as follows:

FIGURE 3.24 The sensitivity and specificity of beta-human chorionic gonadotropin (βhCG) for pregnancy. (A) Sensitivity refers to the ability to detect pregnancy. (B) Specificity refers to the ability of the test to correctly classify nonpregnant women. FN, false negative; FP, false positive; TN, true negative; TP, true positive.

FIGURE 3.25 Positive and negative predictive values using beta-human chorionic gonadotropin (βhCG) as a test for pregnancy. (A) Predictive value of a positive test (PPV) indicates the probability of being pregnant if the test is positive. (B) Predictive value of a negative test (NPV) refers to the probability of being nonpregnant if the test is negative. FN, false negative; FP, false positive; TN, true negative; TP, true positive.

(Eq. 3-8) Another measure of clinical performance is diagnostic specificity. Diagnostic specificity is defined as the proportion of individuals without a condition who have a negative test for that condition (Fig. 3.24B). Note that there is also an analytic specificity (described in the Method Evaluation section), which refers to cross-reactivity with other substances. Continuing with βhCG as an example, diagnostic specificity refers to the percentage of nonpregnant individuals that have a negative test compared with the number of nonpregnant

individuals tested. Patients who are not pregnant and have a negative βhCG test are called true negatives (TNs), whereas those who are incorrectly classified as pregnant by the test are called false positives (FPs). Clinical specificity can be calculated as follows:

(Eq. 3-9) For example, a sensitivity of 100% plus a specificity of 100% means that the test detects every patient with disease and that the test is negative for every patient without the disease. Because of the overlap in laboratory values between people with and without disease, this is, of course, almost never the case (see Fig. 3.23B). There are other measures of diagnostic efficiency such as predictive values. There are predictive values for both positive and negative test results. The predictive value of a positive (PPV) test refers to the probability of an individual having the disease if the result is abnormal (“positive” for the condition). Conversely, the predictive value of a negative (NPV) test refers to the probability that a patient does not have a disease if a result is within the reference range (test is negative for the disease) (Fig. 3.25). Predictive values are also calculated using ratios of TPs, TNs, FPs, and FNs as follows:

(Eq. 3-10)

(Eq. 3-11) Using the data from Figure 3.25, if the βhCG test is “positive,” there is a 72% chance the patient is pregnant; if the test is negative, then there is a 78% chance the patient is not pregnant. It is important to understand that unlike sensitivity and specificity, predictive values depend on the prevalence of the condition in the population studied. Prevalence refers to the proportion of individuals within a given population who have a particular condition. If one were testing for βhCG, the prevalence of pregnancy would be quite different

between female Olympic athletes and young women shopping for baby clothes (Table 3.14). Accordingly, the predictive values would change drastically, while the sensitivity and specificity of the test would remain unchanged. TABLE 3.14 Dependence of Predictive Value on Condition Prevalencea

aBased on a constant sensitivity of 80% and specificity of 70%. bHypothetical.

PPV, positive predictive value; NPV, negative predictive value.
Measures of diagnostic efficiency depend entirely on the distribution of test results for a population with and without the condition and the cutoff used to define abnormal levels. The laboratory does not have control of the overlap between populations but does have control of the test cutoff. Thus, we will consider what happens when the cutoff is adjusted. The test cutoff (also known as a "medical decision limit") is the analyte concentration that separates a "positive" test from a "negative" one. For qualitative tests, such as a urine βhCG, the cutoff is defined by the manufacturer and can be visualized directly (Fig. 3.26). For quantitative tests, the cutoff is a concentration; in the case of pregnancy, a serum βhCG concentration greater than 5 mIU/mL could be considered "positive." By changing the cutoff from 8 mIU/mL (Fig. 3.27A) to 5 mIU/mL (Fig. 3.27B) or 2 mIU/mL (Fig. 3.27C), it becomes apparent that the diagnostic efficiency changes. As the cutoff is lowered, the sensitivity of the test for pregnancy improves from 40% (Fig. 3.27A) to 90% (Fig. 3.27C). However, this occurs at the expense of specificity, which decreases from 80% (Fig. 3.27A) to 40% (Fig. 3.27C) over the same change in cutoff. The best test with the wrong cutoff would be clinically useless. Accordingly, it is imperative to use an appropriate cutoff for the testing purpose. In the most rudimentary sense, a high sensitivity is desirable for a screening test, whereas a high specificity is appropriate for confirmation testing.
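The calculations behind these measures can be illustrated with a short Python sketch. The 2 × 2 counts are hypothetical, chosen only to be consistent with the 80% sensitivity and 70% specificity discussed above; the second call shows how PPV falls when the same test is applied to a low-prevalence population.

```python
# Minimal sketch of diagnostic efficiency calculations from a 2x2 table of
# true/false positives and negatives. All counts are hypothetical.

def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # detect disease when present
    specificity = tn / (tn + fp)          # negative result when disease absent
    ppv = tp / (tp + fp)                  # probability of disease if test positive
    npv = tn / (tn + fn)                  # probability of no disease if test negative
    efficiency = (tp + tn) / (tp + fp + fn + tn)  # overall fraction correctly classified
    return sensitivity, specificity, ppv, npv, efficiency

# Hypothetical beta-hCG results: 100 pregnant and 100 nonpregnant women
sens, spec, ppv, npv, eff = diagnostic_metrics(tp=80, fp=30, fn=20, tn=70)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} PPV={ppv:.0%} NPV={npv:.0%} efficiency={eff:.0%}")

# Same 80%/70% test in a low-prevalence group (10 pregnant of 200 tested):
sens2, spec2, ppv2, npv2, _ = diagnostic_metrics(tp=8, fp=57, fn=2, tn=133)
print(f"low prevalence: PPV={ppv2:.0%} NPV={npv2:.0%} (sensitivity and specificity unchanged)")
```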

FIGURE 3.26 An example of a point-of-care device for beta-human chorionic gonadotropin. This is an example of a qualitative test, where the test line (T) represents whether the patient has a positive test, and the control (C) line is used to indicate that the test was successful.

FIGURE 3.27 The effect of adjusting the beta-human chorionic gonadotropin test cutoff on sensitivity and specificity for pregnancy. (A) Using a high cutoff, sensitivity is low and specificity is high. (B, C) As the cutoff is lowered, the sensitivity improves at the expense of specificity. The predictive values also change as the cutoff is adjusted. TP, true positive; FP, false positive; FN, false negative; TN, true negative.
To define an appropriate cutoff, laboratorians often use a graphical tool called the receiver operator characteristic (ROC).47 ROC curves are generated by plotting the true-positive rate against the false-positive rate (sensitivity vs. 1 – specificity; Fig. 3.28). Each point on the curve represents an actual cutoff concentration. ROC curves can be used to determine the most efficient cutoff for a test and are an excellent tool for comparing two different tests. The area under the curve represents the efficiency of the test, that is, how often the test correctly classifies individuals as having a condition or not. The higher the area under the

ROC curve, the greater the efficiency. Figure 3.28 shows a hypothetical comparison of two tests used to diagnose pregnancy. The βhCG test has a larger area under the curve and has an overall higher performance than test B. Based on these ROC curves, βhCG represents a superior test compared with hypothetical test B, for diagnosing pregnancy.
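A rudimentary ROC construction can be sketched as follows. The two small sets of βhCG-like values are hypothetical, and the cutoff-sweep plus trapezoidal-area approach shown is only one simple way to build the curve; dedicated statistical software is normally used.

```python
# Minimal sketch: build ROC points by sweeping a cutoff over measured values,
# then estimate the area under the curve (AUC) with the trapezoidal rule.
# The result sets below are hypothetical.

def roc_points(diseased, healthy):
    """Return (1-specificity, sensitivity) pairs for every candidate cutoff."""
    cutoffs = sorted(set(diseased) | set(healthy))
    points = [(1.0, 1.0)]                              # lowest cutoff: everything positive
    for c in cutoffs:
        tp = sum(1 for v in diseased if v >= c)        # "positive" means value >= cutoff
        fp = sum(1 for v in healthy if v >= c)
        points.append((fp / len(healthy), tp / len(diseased)))
    points.append((0.0, 0.0))                          # highest cutoff: everything negative
    return sorted(points)

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        area += (x2 - x1) * (y1 + y2) / 2
    return area

pregnant = [9, 12, 15, 22, 35, 50, 80, 120]     # hypothetical beta-hCG, mIU/mL
nonpregnant = [1, 2, 3, 4, 5, 6, 10, 14]

points = roc_points(pregnant, nonpregnant)
print(f"AUC = {auc(points):.2f}")
for fpr, tpr in points:
    print(f"1-specificity={fpr:.2f}  sensitivity={tpr:.2f}")
```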

FIGURE 3.28 A receiver operator characteristic curve for beta-human chorionic gonadotropin (βhCG) and a hypothetical "test B." The area under the βhCG curve is greater than "test B" at all points, indicating that it is a superior test for pregnancy. The thin dotted line represents a test of no value (equal to diagnosis by a coin toss). The maximum (optimal) efficiency is indicated by the arrow and corresponds to the βhCG cutoff concentration with the fewest incorrect patient classifications.
ROC curves can also be used to determine the optimal cutoff point for a test. The optimal cutoff maximizes the number of correct tests (i.e., fewest FPs and FNs). A perfect test would have an area under the curve of 1.0 and reach the top-left corner of the graph (where sensitivity and specificity equal 100%). Clinical evaluations of diagnostic tests frequently use ROC curves to establish optimal

cutoffs and compare different tests.47 As with the other statistical measures described, there are many software applications that can be used to generate ROC curves. In addition to sensitivity, specificity, and predictive values, there are a number of other measures of diagnostic efficiency. These include odds ratios, likelihood ratios, and multivariate analysis.48,49 While these have increasingly higher degrees of complexity, they all represent efforts to make clinical sense of data postanalytically. It is worth remembering the bigger picture, which is that laboratory values are not used in isolation. Laboratory tests are interpreted in the context of a patient's physical exam, symptoms, and clinical history to achieve a diagnosis.

SUMMARY
Collectively, method validation is the first step toward establishing quality for a new method. Once a method has an established acceptable performance, quality control, proficiency testing, and reference interval review are employed to ensure that quality is maintained. Underpinning all decisions about quality are allowable error limits and the clinical requirements of a given test. Given a set of clinical requirements, visual inspection and basic statistics are used to help decide if a method meets the clinical need. Beyond the analytical performance of a test is the diagnostic efficiency, the measure of how well a test can differentiate a diseased state from a healthy one. Diligent method evaluation and ongoing quality monitoring ensure that a test consistently meets analytical and clinical requirements to maximize diagnostic utility.

PRACTICE PROBLEMS
Problem 3-1. Calculation of Sensitivity and Specificity
Urine hCG tests are commonly used to determine if someone is pregnant. Urine pregnancy tests qualitatively detect the presence of hCG in the urine. While manufacturers often state 99.99% accuracy, they are referring to the accuracy of a test in a patient who has a highly elevated urine hCG, effectively testing a positive control. The following data are based on a population of women only a few weeks into pregnancy. Calculate the sensitivity, specificity, and efficiency of urine hCG for detecting pregnancy. Determine the predictive value of a positive urine hCG test.

Number of Pregnancies/Interpretation of hCG Findings

Problem 3-2. A Quality Control Decision

1. Calculate the mean and standard deviation for the above data set.
2. Plot these control data by day (one graph for each level, x-axis = day, y-axis = concentration). Indicate the mean and the upper and lower control limits (mean ±2 standard deviations) with horizontal lines (see Fig. 3.28).
3. You are working the night shift at a community hospital and are the only person in the laboratory. You are running glucose quality control and obtain the following: Low control value = 90; High control value = 230. Plot these controls on the process control chart (Levey-Jennings) you created above.
4. Are these values within the control limits?
5. What do you observe about these control data?
6. What might be a potential problem?

7. What is an appropriate next step?

Problem 3-3. Precision (Replication)
For the following precision data, calculate the mean, SD, and CV for each of the two control solutions A and B. These control solutions were chosen because their concentrations were close to medical decision levels (XC) for glucose: 120 mg/dL for control solution A and 300 mg/dL for control solution B. Control solution A was analyzed daily, and the following values were obtained: 118, 120, 121, 119, 125, 118, 122, 116, 124, 123, 117, 117, 121, 120, 120, 119, 121, 123, 120, and 122 mg/dL. Control solution B was analyzed daily and gave the following results: 295, 308, 296, 298, 304, 294, 308, 310, 296, 300, 295, 303, 305, 300, 308, 297, 297, 305, 292, and 300 mg/dL. Does the precision exceed the total allowable error defined by the CLIA?
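One possible way to work this problem is sketched below; the data are taken directly from the problem statement, and the comparison against the CLIA total allowable error is left to the reader.

```python
# Minimal sketch for Problem 3-3: mean, SD, and CV for the two glucose control
# solutions given in the problem statement.

import statistics

control_a = [118, 120, 121, 119, 125, 118, 122, 116, 124, 123,
             117, 117, 121, 120, 120, 119, 121, 123, 120, 122]
control_b = [295, 308, 296, 298, 304, 294, 308, 310, 296, 300,
             295, 303, 305, 300, 308, 297, 297, 305, 292, 300]

for name, data in (("A", control_a), ("B", control_b)):
    mean = statistics.mean(data)
    sd = statistics.stdev(data)              # sample standard deviation
    cv = 100 * sd / mean                     # coefficient of variation, %
    print(f"Control {name}: mean={mean:.1f} mg/dL, SD={sd:.2f} mg/dL, CV={cv:.2f}%")
```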

Problem 3-4. Recovery
For the following tacrolimus immunosuppressant data, calculate the percent recovery for each of the individual experiments and the average of all the recovery experiments. The experiments were performed by adding two levels of standard to each of five patient samples (A through E) with the following results:

What do the results of this study indicate?
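Because the data table is not reproduced here, the sketch below uses hypothetical tacrolimus values purely to illustrate the percent recovery calculation.

```python
# Minimal sketch of the recovery calculation pattern for Problem 3-4.
# The concentrations are hypothetical stand-ins for the problem's data table.

def percent_recovery(baseline, added, measured):
    """Recovery (%) = (measured - baseline) / added * 100."""
    return (measured - baseline) / added * 100

# (label, baseline ng/mL, amount added ng/mL, measured ng/mL) for each spiked sample
experiments = [
    ("A + low spike", 5.0, 5.0, 9.8),
    ("A + high spike", 5.0, 15.0, 19.4),
    ("B + low spike", 8.0, 5.0, 12.7),
    ("B + high spike", 8.0, 15.0, 22.6),
]

recoveries = []
for label, baseline, added, measured in experiments:
    r = percent_recovery(baseline, added, measured)
    recoveries.append(r)
    print(f"{label}: {r:.1f}% recovery")

print(f"Average recovery: {sum(recoveries) / len(recoveries):.1f}%")
```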

Problem 3-5. Interference
For the interference data that follow for glucose, calculate the concentration of ascorbic acid added, the interference for each individual sample, and the average interference for the group of patient samples. The experiments were performed by adding 0.1 mL of a 150 mg/dL ascorbic acid standard to 0.9 mL of five different patient samples (A through E). A similar dilution was prepared for each patient sample using water as the diluent. The results follow:

What do the results of this study indicate?
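Again using hypothetical glucose results in place of the missing data table, the dilution and interference arithmetic can be laid out as follows.

```python
# Minimal sketch of the interference calculation pattern for Problem 3-5.
# The glucose results are hypothetical; only the arithmetic is the point.

# 0.1 mL of a 150 mg/dL interferent standard added to 0.9 mL of sample
added_conc = 150 * (0.1 / (0.1 + 0.9))        # concentration of interferent added
print(f"Interferent added: {added_conc:.1f} mg/dL")

# (glucose with interferent, glucose with water diluent) for samples A-E, mg/dL
results = {
    "A": (92, 98), "B": (110, 117), "C": (85, 90), "D": (130, 138), "E": (101, 107),
}

interferences = []
for sample, (with_interferent, with_water) in results.items():
    interference = with_interferent - with_water   # negative value = suppression of the result
    interferences.append(interference)
    print(f"Sample {sample}: interference = {interference} mg/dL")

print(f"Average interference: {sum(interferences) / len(interferences):.1f} mg/dL")
```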

Problem 3-6. Sample Labeling
You receive a urine specimen in the laboratory with a request for a complete urinalysis. The cup is labeled, and you begin your testing. You finish the testing and report the results to the ward. Several minutes later, you receive a telephone call from the ward informing you that the urine was reported on the wrong patient. You are told that the cup was labeled incorrectly before it was brought to the laboratory.
1. What is the problem in this case, and where did it occur?
2. Would your laboratory's QC system be able to detect or prevent this type of problem?

Problem 3-7. QC Program for POCT Testing

Your laboratory is in charge of overseeing the QC program for the glucometers (POCT) in use at your hospital. You notice that the ward staff is not following proper procedure for running QC. For example, in this case, the glucometer QC was rerun three times in a row in an effort to have the results in control. The first two runs were both 3 SD high. The last run did return to less than 2 SDs. Explain the correct follow-up procedure for dealing with the out-of-control results.

Problem 3-8. QC Rule Interpretation
Explain the R4s rule, including what type of error it detects.

Problem 3-9. Reference Interval Study Design
You are asked to design a reference interval study for a new test. The results are known to differ between men and women and are affected by the consumption of aspirin, age, and time of day. Create a questionnaire to collect the appropriate information needed to perform a reference interval study. How would you account for these variables in the data collection?
For additional student resources, please visit thePoint at http://thepoint.lww.com

QUESTIONS
1. A Gaussian distribution is usually a. Bell shaped b. Rectangular c. Uniform d. Skewed
2. The following chloride (mmol/L) results were obtained using a new analyzer:

What is the mean? a. 108 b. 105 c. 109 d. 107
3. The following chloride (mmol/L) results were obtained using a new analyzer:

What is the median? a. 108.5 b. 105 c. 112 d. 107
4. For a data value set that is Gaussian distributed, what is the likelihood (%) that a data point will be within ±1 SD from the mean? a. 68% b. 99% c. 95% d. 100%
5. The correlation coefficient a. Indicates the strength of relationship in a linear regression b. Determines the regression type used to derive the slope and y-intercept c. Is always expressed as "b" d. Expresses method imprecision
6. If two methods agree perfectly in a method comparison study, the slope equals ____ and the y-intercept equals ____. a. 1.0, 0.0 b. 0.0, 1.0 c. 1.0, 1.0 d. 0.0, 0.0 e. 0.5, 0.5
7. Systematic error can best be described as consisting of a. Constant and proportional error b. Constant error c. Proportional error d. Random error e. Syntax error
8. Examples of typical reference interval data distribution plots include all of the following except a. ROC b. Nonparametric c. Parametric d. Bimodal
9. A reference range can be verified by a. Testing as few as 20 normal donor specimens b. Literature and vendor material review c. Using samples from previously tested hospital patients d. Using pharmacy-provided Plasmanate spiked with target analyte concentrations
10. Reference interval transference studies a. Are used to verify a reference interval b. Are used to establish a reference interval c. Require as many as 120 normal donors d. Use a 68% reference limit for acceptability
11. Diagnostic specificity is the a. Ability of a test to correctly identify the absence of a given disease or condition b. Chance an individual does not have a given disease or condition if the test is within the reference interval c. Chance of an individual having a given disease or condition if the test is abnormal d. Ability of a test to detect a given disease or condition

12. To evaluate a moderately complex laboratory test, all of the following must be done except: a. Analytical sensitivity and specificity b. Verification of the reference interval c. Accuracy and precision d. Reportable range 13. An ROC includes all of the following except: a. Perfect test = an area under the curve 320 mOsm/dL). The gross elevation in glucose and osmolality, the elevation in BUN, and the absence of ketones distinguish this condition from diabetic ketoacidosis. Other forms of impaired glucose metabolism that do not meet the criteria for diabetes mellitus include impaired fasting glucose and impaired glucose tolerance. These forms are discussed in the following section.

CASE STUDY 14.2
A 58-year-old obese man with frequent urination was seen by his primary care physician. The following laboratory work was performed, and the following results were obtained:

Questions
1. What is the probable diagnosis of this patient?
2. What other test(s) should be performed to confirm this? Which is the preferred test?
3. What values from no. 2 would confirm the diagnosis of diabetes?
4. After diagnosis, what test(s) should be performed to monitor his condition?

Criteria for Testing for Prediabetes and Diabetes
The testing criteria for asymptomatic adults for type 2 diabetes mellitus were modified by the ADA Expert Committee to allow for earlier detection of the disease. According to the ADA recommendations, all adults beginning at the age of 45 years should be tested for diabetes every 3 years using the hemoglobin A1c (HbA1c), fasting plasma glucose, or a 2-hour 75 g oral glucose tolerance test (OGTT) unless the individual has otherwise been diagnosed with diabetes.4 Testing should be carried out at an earlier age or more frequently in individuals who are overweight, that is, BMI greater than or equal to 25 kg/m2 (at-risk BMI may be lower in some ethnic groups, i.e., Asian Americans ≥23 kg/m2), and who have additional risk factors, as follows:
Habitually physically inactive
Family history of diabetes in a first-degree relative
In a high-risk minority population (e.g., African American, Latino, Native American, Asian American, and Pacific Islander)
History of GDM or delivering a baby weighing more than 9 lb (4.1 kg)
Hypertension (blood pressure ≥ 140/90 mm Hg)
Low high-density lipoprotein (HDL) cholesterol concentration (<35 mg/dL [0.90 mmol/L]) and/or a triglyceride level >250 mg/dL (2.82 mmol/L)
A1C ≥ 5.7% (39 mmol/mol), IGT, or IFG on previous testing
History of impaired fasting glucose/impaired glucose tolerance
Women with polycystic ovarian syndrome (PCOS)
Other clinical conditions associated with insulin resistance (e.g., severe obesity and acanthosis nigricans)
History of cardiovascular disease
In the absence of the above criteria, testing for prediabetes and diabetes should begin at the age of 45 years. If results are normal, testing should be repeated at least at 3-year intervals, with consideration of more frequent testing depending on initial results and risk status. As the incidence of adolescent type 2 diabetes has risen dramatically in the past few years, criteria for the testing for type 2 diabetes in asymptomatic children have been developed. These criteria include initiation of testing at the age of 10 years or at the onset of puberty, if puberty occurs at a younger age, with follow-up testing every 2 years. Testing should be carried out on children who display the following characteristics: overweight (BMI >85th percentile for age and sex, weight for height >85th percentile, or weight >120% of ideal for height) plus any two of the following risk factors:
Family history of type 2 diabetes in a first- or second-degree relative
Race/ethnicity (e.g., Native American, African American, Latino, Asian American, and Pacific Islander)
Signs of insulin resistance or conditions associated with insulin resistance (e.g., acanthosis nigricans, hypertension, dyslipidemia, and PCOS)
Maternal history of diabetes or GDM

CASE STUDY 14.3
A 14-year-old male student was seen by his physician. His chief complaints were fatigue, weight loss, and increases in appetite, thirst, and frequency of urination. For the past 3 to 4 weeks, he had been excessively thirsty and had to urinate every few hours. He began to get up three to four times a night to urinate. The patient has a family history of diabetes mellitus.

Questions
1. Based on the preceding information, can this patient be diagnosed with diabetes?
2. What further tests might be performed to confirm the diagnosis?
3. According to the ADA, what criteria are required for the diagnosis of diabetes?
4. Assuming this patient has diabetes, which type would be diagnosed?

Criteria for the Diagnosis of Diabetes Mellitus
Four methods of diagnosis are suggested: (1) an HbA1c greater than or equal to 6.5% using a National Glycohemoglobin Standardization Program (NGSP)-certified method, (2) a fasting plasma glucose greater than or equal to 126 mg/dL (7.0 mmol/L), (3) an OGTT with a 2-hour postload (75 g glucose load) level greater than or equal to 200 mg/dL (11.1 mmol/L), or (4) symptoms of diabetes plus a random plasma glucose level greater than or equal to 200 mg/dL (11.1 mmol/L), each of which should be confirmed on a subsequent day by any one of the first three methods (Tables 14.5, 14.6, 14.7). Any of the first three methods is considered appropriate for the diagnosis of diabetes. The choice of method is left to the health care provider and depends on various patient factors. Point-of-care (POC) assay methods for either plasma glucose or HbA1c are not recommended for diagnosis.
TABLE 14.6 Categories of Fasting Plasma Glucose

FPG, fasting plasma glucose. aMust be confirmed.

TABLE 14.7 Categories of Oral Glucose Tolerance

PG, plasma glucose. aMust be confirmed.

An intermediate group of individuals who do not meet the criteria for diabetes mellitus but who have glucose levels above normal is placed into three categories for the risk of developing diabetes. First, those individuals with fasting glucose levels greater than or equal to 100 mg/dL but less than 126 mg/dL are placed in the impaired fasting glucose category. Another set of individuals who have 2-hour OGTT levels greater than or equal to 140 mg/dL but less than 200 mg/dL are placed in the impaired glucose tolerance category. Additionally, individuals with an HbA1c of 5.7% to 6.4% are placed in the third at-risk category. Individuals in these three categories are referred to as having "prediabetes," indicating the relatively high risk for the development of diabetes in these patients.
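The cut points described above can be summarized in a short classification sketch. The example values and function names are hypothetical, and a laboratory result alone, without confirmation on a subsequent day and clinical context, does not establish a diagnosis.

```python
# Minimal sketch applying the fasting glucose, 2-hour OGTT, and HbA1c cut points
# described above. Results flagged as diabetes still require confirmation.

def classify_fasting_glucose(fpg_mg_dl):
    if fpg_mg_dl >= 126:
        return "provisional diabetes (confirm)"
    if fpg_mg_dl >= 100:
        return "impaired fasting glucose (prediabetes)"
    return "normal"

def classify_ogtt_2h(pg_mg_dl):
    if pg_mg_dl >= 200:
        return "provisional diabetes (confirm)"
    if pg_mg_dl >= 140:
        return "impaired glucose tolerance (prediabetes)"
    return "normal"

def classify_hba1c(hba1c_percent):
    if hba1c_percent >= 6.5:
        return "provisional diabetes (confirm)"
    if hba1c_percent >= 5.7:
        return "increased risk (prediabetes)"
    return "normal"

print(classify_fasting_glucose(118))   # impaired fasting glucose (prediabetes)
print(classify_ogtt_2h(210))           # provisional diabetes (confirm)
print(classify_hba1c(6.0))             # increased risk (prediabetes)
```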

Criteria for the Testing and Diagnosis of GDM
The diagnostic criteria for gestational diabetes were revised by the International Association of the Diabetes and Pregnancy Study Groups. The revised criteria recommend that all nondiabetic pregnant women should be screened for GDM at 24 to 28 weeks of gestation.

CASE STUDY 14.4
A 13-year-old girl collapsed on a playground at school. When her mother was contacted, she mentioned that her daughter had been losing weight and making frequent trips to the bathroom during the night. The emergency squad noticed a fruity odor on her breath. On entrance to the emergency department, her vital

signs were as follows:

Stat lab results included:

Questions
1. Identify this patient's most likely type of diabetes.
2. Based on your identification, circle the common characteristics associated with that type of diabetes in the case study above.
3. What is the cause of the fruity breath?

The approach for screening and diagnosis is either a one-step or a two-step approach. The one-step approach is the performance of a 2-hour OGTT using a 75 g glucose load. Glucose measurements should be taken at fasting, 1 hour, and 2 hours. A fasting plasma glucose value greater than or equal to 92 mg/dL (5.1 mmol/L), a 1-hour value greater than or equal to 180 mg/dL (10 mmol/L), or a 2-hour glucose value greater than or equal to 153 mg/dL (8.5 mmol/L) is diagnostic of GDM if any one of the three criteria is met. This test should be performed in the morning after an overnight fast of at least 8 hours (Table 14.8).

In the two-step approach, an initial measurement of plasma glucose at 1-hour postload (50 g glucose load) is performed. A plasma glucose value greater than or equal to 140 mg/dL (≥7.8 mmol/L) indicates the need to perform a 3-hour OGTT using a 100 g glucose load. GDM is diagnosed when any two of the following four values are met or exceeded: fasting, greater than or equal to 95 mg/dL (5.3 mmol/L); 1 hour, greater than or equal to 180 mg/dL (10.0 mmol/L); 2 hours, greater than or equal to 155 mg/dL (8.6 mmol/L); or 3 hours, greater than or equal to 140 mg/dL (7.8 mmol/L). This test should be performed in the morning after an overnight fast of between 8 and 14 hours, after at least 3 days of unrestricted diet (≥150 g carbohydrate per day) and unlimited physical activity. NOTE: The National Diabetes Data Group's levels are slightly higher than those listed above.
TABLE 14.8 Diagnostic Criteria for Gestational Diabetes
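The one-step and two-step decision rules can be expressed compactly as follows. The example results are hypothetical; the thresholds (in mg/dL) are those listed above.

```python
# Minimal sketch applying the one-step (75 g OGTT) and two-step (100 g OGTT)
# gestational diabetes criteria described above. Example values are hypothetical.

def gdm_one_step(fasting, one_hour, two_hour):
    """One-step 75 g OGTT: GDM if any single threshold is met or exceeded."""
    return fasting >= 92 or one_hour >= 180 or two_hour >= 153

def gdm_two_step(fasting, one_hour, two_hour, three_hour):
    """Two-step 100 g OGTT: GDM if any two of the four thresholds are met or exceeded."""
    exceeded = [fasting >= 95, one_hour >= 180, two_hour >= 155, three_hour >= 140]
    return sum(exceeded) >= 2

print(gdm_one_step(fasting=90, one_hour=185, two_hour=140))                    # True (1-hour criterion met)
print(gdm_two_step(fasting=96, one_hour=170, two_hour=158, three_hour=120))    # True (2 of 4 criteria met)
```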

HYPOGLYCEMIA
Hypoglycemia involves decreased plasma glucose levels and can have many causes—some are transient and relatively insignificant, but others can be life threatening. Hypoglycemia causes brain fuel deprivation, which can result in impaired judgment and behavior, seizures, coma, functional brain failure, and death. Hypoglycemia is the result of an imbalance between the rate of glucose appearance in and disappearance from the circulation. This imbalance may be caused by treatment, such as diabetic drugs, or by biological factors. The plasma glucose concentration at which glucagon and other glycemic factors are released is between 65 and 70 mg/dL (3.6 to 3.9 mmol/L); at about 50 to 55 mg/dL (2.8 to 3.1 mmol/L), observable symptoms of hypoglycemia appear. The warning signs and symptoms of hypoglycemia are all related to the central nervous system. The release of epinephrine into the systemic circulation and of norepinephrine at nerve endings of specific neurons acts in unison with glucagon to increase plasma glucose. Glucagon is released from the islet cells of the pancreas and inhibits insulin. Epinephrine is released from the adrenal gland, increases glucose metabolism, and inhibits insulin. In addition, cortisol and growth hormone are released and increase glucose metabolism. Historically, hypoglycemia was classified as postabsorptive (fasting) and postprandial (reactive) hypoglycemia. Postprandial hypoglycemia describes the timing of hypoglycemia (within 4 hours after meals). Current approaches suggest classifying postprandial hypoglycemia based on the severity of symptoms and measured plasma glucose levels. This approach is especially important for individuals with diabetes, who are at high risk for hypoglycemic episodes (Table 14.9).5 The ADA and the Endocrine Society recommend that a plasma glucose concentration of less than or equal to 70 mg/dL (≤3.9 mmol/L) be used as a cutoff as well as an alert value to prevent a clinical hypoglycemic episode. This value also allows for a margin of error for self-monitoring glucose devices. Patients with diabetes who present with iatrogenic hypoglycemia may potentially be harmed by the low glucose level.
TABLE 14.9 Classification of Hypoglycemia

Source: Seaquist ER, Anderson J, Childs B, et al. Hypoglycemia and diabetes: a report of a workgroup of the American Diabetes Association and the Endocrine Society. Diabetes Care. 2013;36:1384–1395.

Hypoglycemia is rare in individuals with normal glucose metabolism. In individuals without diabetes, a diagnosis of hypoglycemia should be made only in those who demonstrate the Whipple triad: (1) hypoglycemic symptoms; (2)

plasma glucose concentration is low (30 mg/dL) are now known to increase the risk of premature CHD and stroke. Because the kringle domains of Lp(a) have a high level of homology with plasminogen, a precursor of plasmin that promotes clot lysis via fibrin cleavage, it has been proposed that Lp(a) may compete with plasminogen for binding sites on endothelium and on fibrin, thereby promoting clotting. Clinical studies have demonstrated increasing risk of both myocardial infarction and stroke with increasing Lp(a) concentration; however, the measurement of Lp(a) is often underutilized in clinical practice. Part of the reason is that accurate measurement of Lp(a) is difficult and specific therapies for reducing its concentration in blood are limited, although several are in development. Measuring Lp(a) is recommended for patients with a strong family history of CHD, particularly in the absence of other known risk factors such as increased LDL-C, for patients who develop CHD on statin therapy, and for patients with premature aortic stenosis, which has been shown to be caused by high Lp(a) levels.

High-Density Lipoproteins HDL, the smallest and most dense lipoprotein particle, is synthesized by both the liver and the intestine (Table 15.1). HDL can exist as either disk-shaped particles or, more commonly, spherical particles.4 Discoidal HDL typically contains two molecules of apo A-I, which form a ring around a central lipid bilayer of phospholipid and cholesterol. Discoidal HDL is believed to represent nascent or newly secreted HDL and is the most active form in removing excess cholesterol from peripheral cells. The ability of HDL to remove cholesterol from cells, called reverse cholesterol transport, is one of the main mechanisms proposed to explain the antiatherogenic property of HDL. When discoidal HDL has acquired an additional lipid, cholesteryl esters and triglycerides form a core region between its phospholipid bilayer, which transforms discoidal HDL into spherical HDL. HDL is highly heterogeneous in size and lipid and protein composition and is separable into as many as 13 or 14 different subfractions. There are two major types of spherical HDL based on density differences: HDL2 (1.063 to 1.125 g/mL) and HDL3 (1.125 to 1.21 g/mL). HDL2 particles are larger in size, less dense, and richer in lipid than HDL3 and may be more efficient in the

delivery of lipids to the liver.

Lipoprotein X Lipoprotein X is an abnormal lipoprotein present in patients with biliary cirrhosis or cholestasis and in patients with mutations in lecithin–cholesterol acyltransferase (LCAT), the enzyme that esterifies cholesterol. Lipoprotein X is different from other lipoproteins in the endogenous pathway due to the lack of apo B-100. Phospholipids and nonesterified cholesterol are its main lipid components (~90% by weight) and albumin and apo C are the main protein components (500 mg/dL of hemoglobin) can cause an increase of up to 30%.2

Determination of Potassium
Specimen
Serum, plasma, and urine may be acceptable for analysis. Hemolysis must be avoided because of the high K+ content of erythrocytes. Heparin is the anticoagulant of choice. Whereas serum and plasma generally give similar K+ levels, serum reference intervals tend to be slightly higher. Significantly elevated platelet counts may result in the release of K+ from rupture of these cells during clotting, causing a spurious hyperkalemia. In this case, plasma is preferred. Whole blood samples may be used with some analyzers; however, one should consult the instrument's operations manual for acceptability. Urine specimens should be collected over a 24-hour period to eliminate the influence of diurnal variation.
Methods
As with Na+, the current method of choice is ISE. For ISE measurements, a valinomycin membrane, with KCl as the inner electrolyte solution, is used to selectively bind K+, causing an impedance change that can be correlated to K+ concentration.

Reference Ranges
See Table 16.9.3
TABLE 16.9 Reference Ranges for Potassium

Chloride
Chloride (Cl−) is the major extracellular anion, and its precise function in the body is not well understood; however, it is involved in maintaining osmolality, blood volume, and electric neutrality. In most processes, Cl− shifts secondarily to a movement of Na+ or HCO3−. Cl− ingested in the diet is almost completely absorbed by the intestinal tract.

Cl− is then filtered out by the glomerulus and passively reabsorbed, in conjunction with Na+, by the proximal tubules. Excess Cl− is excreted in the urine and sweat. Excessive sweating stimulates aldosterone secretion, which acts on the sweat glands to conserve Na+ and Cl−. Cl− maintains electrical neutrality in two ways. First, Na+ is reabsorbed along with Cl− in the proximal tubules. In effect, Cl− acts as the rate-limiting component, in that Na+ reabsorption is limited by the amount of Cl− available. Electroneutrality is also maintained by Cl− through the chloride shift. In this process, CO2 generated by cellular metabolism within the tissue diffuses out into both the plasma and the red cell. In the red cell, CO2 forms carbonic acid (H2CO3), which splits into H+ and HCO3−(bicarbonate). Deoxyhemoglobin buffers H+, whereas the HCO3− diffuses out into the plasma and Cl− diffuses into the red cell to maintain the electric balance of the cell (Fig. 16.4).

FIGURE 16.4 Chloride shift mechanism. See text for details. (Reprinted from Burtis CA, Ashwood ER, eds. Tietz Textbook of Clinical Chemistry. 2nd ed. Philadelphia, PA: WB Saunders; 1994, with permission.)

Clinical Applications

Cl− disorders are often a result of the same causes that disturb Na+ levels because Cl− passively follows Na+. There are a few exceptions. Hyperchloremia may also occur when there is an excess loss of HCO3− as a result of GI losses, RTA, or metabolic acidosis. Hypochloremia may also occur with excessive loss of Cl− from prolonged vomiting, diabetic ketoacidosis, aldosterone deficiency, or salt-losing renal diseases such as pyelonephritis. A low serum level of Cl− may also be encountered in conditions associated with high serum HCO3− concentrations, such as compensated respiratory acidosis or metabolic alkalosis.

Determination of Chloride
Specimen
Serum or plasma may be used, with lithium heparin being the anticoagulant of choice. Hemolysis does not cause a significant change in serum or plasma values because intracellular Cl− levels are low. However, with marked hemolysis, levels may be decreased as a result of a dilutional effect. Whole blood samples may be used with some analyzers; however, one should consult the instrument's operation manual for acceptability. The specimen of choice for urine Cl− analyses is a 24-hour collection because of the large diurnal variation. Sweat is also suitable for analysis. Sweat collection and analysis are discussed in Chapter 29.
Methods
There are several methodologies available for measuring Cl−, including ISEs, amperometric-coulometric titration, mercurimetric titration, and colorimetry. The most commonly used is ISE. For ISE measurement, an ion-exchange membrane is used to selectively bind Cl− ions. Amperometric–coulometric titration is a method using coulometric generation of silver ions (Ag+), which combine with Cl− to quantitate the Cl− concentration:
Ag+ + Cl− → AgCl (Eq. 16-2)
When all Cl− in the sample is bound to Ag+, excess or free Ag+ is used to indicate the endpoint. As Ag+ accumulates, the coulometric generator and timer are turned off. The elapsed time is used to calculate the concentration of Cl− in the sample. The digital (Cotlove) chloridometer (Labconco Corporation) uses this principle in Cl− analysis.

Reference Ranges
See Table 16.10.3
TABLE 16.10 Reference Ranges for Chloride

Bicarbonate
Bicarbonate is the second most abundant anion in the ECF. Total CO2 comprises the bicarbonate ion (HCO3−), H2CO3, and dissolved CO2, with HCO3− accounting for more than 90% of the total CO2 at physiologic pH. Because HCO3− composes the largest fraction of total CO2, the total CO2 measurement is essentially an index of the HCO3− concentration. HCO3− is the major component of the buffering system in the blood. Carbonic anhydrase in RBCs converts CO2 and H2O to H2CO3, which dissociates into H+ and HCO3−:
CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3− (CA) (Eq. 16-3)
where CA is carbonic anhydrase. HCO3− diffuses out of the cell in exchange for Cl− to maintain ionic charge neutrality within the cell (chloride shift; see Fig. 16.4). This process converts potentially toxic CO2 in the plasma to an effective buffer: HCO3−. HCO3− buffers excess H+ by combining with the acid and then eventually dissociating into H2O and CO2 in the lungs, where the acidic gas CO2 is eliminated.

Regulation
Most of the HCO3− in the kidneys (85%) is reabsorbed by the proximal tubules, with 15% being reabsorbed by the distal tubules. Because tubules are only slightly permeable to HCO3−, it is usually reabsorbed as CO2. This happens as HCO3−, after filtering into the tubules, combines with H+ to form H2CO3, which then dissociates into H2O and CO2. The CO2 readily diffuses back into the ECF. Normally, nearly all the HCO3− is reabsorbed from the tubules, with little lost in the urine. When HCO3− is filtered in excess of H+ available, almost all excess HCO3− flows into the urine. In alkalosis, with a relative increase in HCO3− compared with CO2, the kidneys increase excretion of HCO3− into the urine, carrying along a cation such as Na+. This loss of HCO3− from the body helps correct pH. Among the responses of the body to acidosis is an increased excretion of H+ into the urine. In addition, HCO3− reabsorption is virtually complete, with 90% of the filtered HCO3− reabsorbed in the proximal tubule and the remainder in the distal tubule.1

Clinical Applications
Acid–base imbalances cause changes in HCO3− and CO2 levels. A decreased HCO3− may occur from metabolic acidosis as HCO3− combines with H+ to produce CO2, which is exhaled by the lungs. The typical response to metabolic acidosis is compensation by hyperventilation, which lowers pco2. Elevated total CO2 concentrations occur in metabolic alkalosis as HCO3− is retained, often with increased pco2 as a result of compensation by hypoventilation. Typical causes of metabolic alkalosis include severe vomiting, hypokalemia, and excessive alkali intake.

Determination of CO2
Specimen
This chapter deals specifically with venous serum or plasma determinations. For discussion of arterial and whole blood pco2 measurements, refer to Chapter 17. Serum or lithium heparin plasma is suitable for analysis. Although specimens should be anaerobic for the highest accuracy, many current analyzers (excluding blood gas analyzers) do not permit anaerobic sample handling. In most instances, the sample is kept capped until the serum or plasma is separated, and the sample is analyzed immediately. If the sample is left uncapped before analysis, CO2 escapes; levels can decrease by 6 mmol/L/h.2
CO2 measurements may be obtained in several ways; however, the actual portion of the total CO2 being measured may vary with the method used. Two common methods are ISE and an enzymatic method. One type of ISE for measuring total CO2 uses an acid reagent to convert all the forms of CO2 to CO2 gas, which is measured by a pco2 electrode (see Chapter 5). The enzymatic method alkalinizes the sample to convert all forms of CO2 to HCO3−. HCO3− is used to carboxylate phosphoenolpyruvate (PEP) in the presence of PEP carboxylase, which catalyzes the formation of oxaloacetate:

PEP + HCO3− → oxaloacetate + phosphate (Eq. 16-4)
This is coupled to the following reaction, in which NADH is consumed as a result of the action of malate dehydrogenase:
oxaloacetate + NADH + H+ → malate + NAD+ (Eq. 16-5)
The rate of change in absorbance of NADH is proportional to the concentration of HCO3−.

Reference Ranges
CO2, venous: 23 to 29 mmol/L (plasma, serum).3

Magnesium
Magnesium Physiology

Magnesium (Mg2+) is the fourth most abundant cation in the body and second most abundant intracellular ion. The average human body (70 kg) contains 1 mol (24 g) of Mg2+. Approximately 53% of Mg2+ in the body is found in bone and 46% in muscle, other organs, and soft tissue, and less than 1% is present in serum and RBCs.15 Of the Mg2+ present in serum, about one-third is bound to protein, primarily albumin. Of the remaining two-thirds, 61% exists in the free or ionized state, and about 5% is complexed with other ions, such as PO4− and citrate. Similar to Ca2+, it is the free ion that is physiologically active in the body.16 The role of Mg2+ in the body is widespread. It is an essential cofactor of more than 300 enzymes, including those important in glycolysis; transcellular ion transport; neuromuscular transmission; synthesis of carbohydrates, proteins, lipids, and nucleic acids; and the release of and response to certain hormones. The clinical usefulness of serum Mg2+ levels has greatly increased in the past 10 years as more information about the analyte has been discovered. The most significant findings are the relationship between abnormal serum Mg2+ levels and cardiovascular, metabolic, and neuromuscular disorders. Although serum levels may not reflect total body stores of Mg2+, they are useful in determining acute changes in the ion.

Regulation
Rich sources of Mg2+ in the diet include raw nuts, dry cereal, and “hard” drinking water; other sources include vegetables, meats, fish, and fruit.15 Processed foods, an ever-increasing part of the average US diet, have low levels of Mg2+, which may lead to an inadequate intake. This in turn may increase the likelihood of Mg2+ deficiency. The small intestine may absorb 20% to 65% of the dietary Mg2+, depending on the need and intake. The overall regulation of body Mg2+ is controlled largely by the kidney, which can reabsorb Mg2+ in deficiency states or readily excrete excess Mg2+ in overload states. Of the nonprotein-bound Mg2+ that is filtered by the glomerulus, only 25% to 30% is reabsorbed by the proximal convoluted tubule (PCT), unlike Na+, of which 60% to 75% is reabsorbed in the PCT. Henle's loop is the major renal regulatory site, where 50% to 60% of filtered Mg2+ is reabsorbed in the ascending limb. In addition, 2% to 5% is reabsorbed in the distal convoluted tubule.17 The renal threshold for Mg2+ is approximately 0.60 to 0.85

mmol/L (≈1.46 to 2.07 mg/dL). Because this is close to normal serum concentration, slight excesses of Mg2+ in serum are rapidly excreted by the kidneys. Normally, only about 6% of filtered Mg2+ is excreted in the urine per day.15 Mg2+ regulation appears to be related to that of Ca2+ and Na+. Parathyroid hormone (PTH) increases the renal reabsorption of Mg2+ and enhances the absorption of Mg2+ in the intestine. However, changes in ionized Ca2+ have a far greater effect on PTH secretion. Aldosterone and thyroxine apparently have the opposite effect of PTH in the kidney, increasing the renal excretion of Mg2+.16

Clinical Applications

Hypomagnesemia

Hypomagnesemia is most frequently observed in hospitalized individuals in intensive care units (ICUs) or those receiving diuretic or digitalis therapy (e.g., for CHF or atrial fibrillation). These patients most likely have an overall tissue depletion of Mg2+ as a result of severe illness or loss, which leads to low serum levels. Hypomagnesemia is rare in nonhospitalized individuals.16 There are many causes of hypomagnesemia; they can be grouped into general categories (Table 16.11). Reduced intake is least likely to cause severe deficiencies in the United States. A Mg2+-deficient diet as a result of starvation, chronic alcoholism, or Mg2+-deficient IV therapy can cause a loss of the ion.

TABLE 16.11 Causes of Hypomagnesemia

Adapted from Polancic JE. Magnesium: metabolism, clinical importance, and analysis. Clin Lab Sci. 1991;4(2):105–109.

Various GI disorders may cause decreased absorption by the intestine, which can result in an excess loss of Mg2+ via the feces. Malabsorption syndromes; intestinal resection or bypass surgery; nasogastric suction; pancreatitis; and prolonged vomiting, diarrhea, or laxative use may lead to an Mg2+ deficiency. Neonatal hypomagnesemia has been reported as a result of various surgical procedures. A primary deficiency has also been reported in infants as a result of a selective malabsorption of the ion.16 A chronic congenital hypomagnesemia with secondary hypocalcemia (autosomal recessive disorder) has also been reported; molecular studies have revealed a specific transport protein defect in the intestine.18 Mg2+ loss due to increased excretion by way of urine can occur as a result of various renal and endocrine disorders or the effects of certain drugs on the kidneys. Renal tubular disorders and other select renal disorders may result in excess amounts of Mg2+ being lost through the urine because of decreased tubular reabsorption. Several endocrine disorders can cause a loss of Mg2+. Hyperparathyroidism and hypercalcemia may cause increased renal excretion of Mg2+ as a result of excess Ca2+ ions. Excess serum Na+ levels caused by hyperaldosteronism may also cause increased renal excretion of Mg2+. A pseudohypomagnesemia may also be the result of hyperaldosteronism caused by increased water reabsorption. Hyperthyroidism may result in an increased renal excretion of Mg2+ and may also cause an intracellular shift of the ion. In persons with diabetes, excess urinary loss of Mg2+ is associated with glycosuria; hypomagnesemia can aggravate the neuromuscular and vascular complications commonly found in this disease. Some studies have shown a relationship between Mg2+ deficiency and insulin resistance; however, Mg2+ is not thought to play a role in the pathophysiology of diabetes mellitus. The American Diabetes Association has issued a statement regarding dietary intake of Mg2+ and measurement of serum Mg2+ in patients with diabetes.19 Several drugs, including diuretics, gentamicin, cisplatin, and cyclosporine, increase renal loss of Mg2+ and frequently result in hypomagnesemia. The loop diuretics, such as furosemide, are especially effective in increasing renal loss of Mg2+. Thiazide diuretics require a longer period of use to cause

hypomagnesemia. Cisplatin has a nephrotoxic effect that inhibits the ability of the renal tubule to conserve Mg2+. Cyclosporine, an immunosuppressant, severely inhibits the renal tubular reabsorption of Mg2+ and has many adverse effects, including nephrotoxicity, hypertension, hepatotoxicity, and neurologic symptoms such as seizures and tremors. Cardiac glycosides, such as digoxin and digitalis, can interfere with Mg2+ reabsorption; the resulting hypomagnesemia is a significant finding because the decreased level of Mg2+ can amplify the symptoms of digitalis toxicity.16 Excess lactation has been associated with hypomagnesemia as a result of increased use and loss through milk production. Mild deficiencies have been reported in pregnancy, which may cause a hyperexcitable uterus, anxiety, and insomnia.

Symptoms of hypomagnesemia

A patient who is hypomagnesemic may be asymptomatic until serum levels fall below 0.5 mmol/L.16 A variety of symptoms can occur; the most frequent involve cardiovascular, neuromuscular, psychiatric, and metabolic abnormalities (Table 16.12). The cardiovascular and neuromuscular symptoms result primarily from the ATPase enzyme's requirement for Mg2+. Mg2+ loss leads to decreased intracellular K+ levels because of a faulty Na+/K+ pump (ATPase). This change in the cellular resting membrane potential (RMP) causes increased excitability that may lead to cardiac arrhythmias. This condition may also lead to digitalis toxicity.

TABLE 16.12 Symptoms of Hypomagnesemia

Adapted from Polancic JE. Magnesium: metabolism, clinical importance, and analysis. Clin Lab Sci. 1991;4(2):105–109.

Muscle contraction also requires Mg2+ and ATPase for normal Ca2+ uptake following contraction. Normal nerve and muscle cell stimulation requires Mg2+ to assist with the regulation of acetylcholine, a potent neurotransmitter. Hypomagnesemia can cause a variety of symptoms from weakness to tremors, tetany (irregular muscle spasms), paralysis, or coma. The CNS can also be affected, resulting in psychiatric disorders that range from subtle changes to depression or psychosis. Metabolic disorders are also associated with hypomagnesemia. Studies have indicated that approximately 40% of hospitalized patients with hypokalemia are also hypomagnesemic.17 In addition, 20% to 30% of patients with hyponatremia, hypocalcemia, or hypophosphatemia are also hypomagnesemic.17 Mg2+ deficiency can impair PTH release and target tissue response, resulting in hypocalcemia. Replenishing any of these deficient ions alone often does not remedy the disorder unless Mg2+ therapy is provided. Mg2+ therapy alone may

restore both ion levels to normal; serum levels of the ions must be monitored during treatment.

Treatment of hypomagnesemia

The preferred form of treatment is oral intake of magnesium lactate, magnesium oxide, or magnesium chloride, or of an antacid that contains Mg2+. In severely ill patients, an MgSO4 solution is given parenterally. Before initiation of therapy, renal function must be evaluated to avoid inducing hypermagnesemia during treatment.17

Hypermagnesemia

Hypermagnesemia is observed less frequently than hypomagnesemia.16 Causes of elevated serum Mg2+ levels are summarized in Table 16.13; the most common is renal failure (GFR < 15 mL/min). The most severe elevations are usually a result of the combined effects of decreased renal function and increased intake of commonly prescribed Mg2+-containing medications, such as antacids, enemas, or cathartics. Nursing home patients are at greatest risk for this occurrence.16

TABLE 16.13 Causes of Hypermagnesemia

Adapted from Polancic JE. Magnesium: metabolism, clinical importance, and analysis. Clin Lab Sci. 1991;4(2):105–109.

CASE STUDY 16.2

A 60-year-old man entered the emergency department after 2 days of "not feeling so well." History revealed a myocardial infarction 5 years ago, at which time he was prescribed digoxin. Two years ago, he was prescribed a diuretic after periodic bouts of edema. An ECG at the time of admission indicated a cardiac arrhythmia. Admitting laboratory results are shown in Case Study Table 16.2.1.

CASE STUDY TABLE 16.2.1 Laboratory Results

Questions

1. Because the digoxin level is within the therapeutic range, what may be the cause of the arrhythmia?
2. What is the most likely cause of the hypomagnesemia?
3. What is the most likely cause of the decreased potassium and ionized calcium levels?
4. What type of treatment would be helpful?

Hypermagnesemia has been associated with several endocrine disorders. Thyroxine and growth hormone cause a decrease in tubular reabsorption of Mg2+, and a deficiency of either hormone may cause a moderate elevation in serum Mg2+. Adrenal insufficiency may cause a mild elevation as a result of decreased renal excretion of Mg2+.16 MgSO4 may be used therapeutically in preeclampsia, cardiac arrhythmia, or myocardial infarction; Mg2+ is a vasodilator and can decrease uterine hyperactivity in eclamptic states and increase uterine blood flow. This therapy can lead to maternal hypermagnesemia, as well as neonatal hypermagnesemia due to the immature kidney of the newborn. Premature infants are at greater risk of developing actual symptoms.16 Studies have shown that IV Mg2+ therapy in myocardial infarction patients may reduce early mortality.15 Dehydration can cause a pseudohypermagnesemia, which can be corrected with rehydration. Because of increased bone loss, mild serum Mg2+ elevations can occur in individuals with multiple myeloma or bone metastases.

Symptoms of hypermagnesemia

Symptoms of hypermagnesemia typically do not occur until the serum level exceeds 1.5 mmol/L.16 The most frequent symptoms involve cardiovascular, dermatologic, GI, neurologic, neuromuscular, metabolic, and hemostatic abnormalities (Table 16.14). Mild to moderate symptoms, such as hypotension, bradycardia, skin flushing, increased skin temperature, nausea, vomiting, and lethargy, may occur when serum levels are 1.5 to 2.5 mmol/L.16 Life-threatening symptoms, such as ECG changes, heart block, asystole, sedation, coma, respiratory depression or arrest, and paralysis, can occur when serum levels reach 5.0 mmol/L.16

TABLE 16.14 Symptoms of Hypermagnesemia

Adapted from Polancic JE. Magnesium: metabolism, clinical importance, and analysis. Clin Lab Sci. 1991;4(2):105–109.

Elevated Mg2+ levels may inhibit PTH release and target tissue response. This may lead to hypocalcemia and hypercalciuria.16 Normal hemostasis is a Ca2+-dependent process that may be inhibited as a result of competition between increased levels of Mg2+ and Ca2+ ions. Thrombin generation and platelet adhesion are two processes in which interference may occur.16

Treatment of hypermagnesemia

Treatment of Mg2+ excess associated with increased intake is to discontinue the source of Mg2+. Severe symptomatic hypermagnesemia requires immediate supportive therapy for cardiac, neuromuscular, respiratory, or neurologic abnormalities. Patients with renal failure require hemodialysis. Patients with normal renal function may be treated with a diuretic and IV fluid.

Determination of Magnesium

Specimen

Nonhemolyzed serum or lithium heparin plasma may be analyzed. Because the Mg2+ concentration inside erythrocytes is 10 times greater than that in the ECF, hemolysis must be avoided, and the serum should be separated from the cells as soon as possible. Oxalate, citrate, and ethylenediaminetetraacetic acid (EDTA) anticoagulants are unacceptable because they will bind with Mg2+. A 24-hour urine sample is preferred for analysis because of a diurnal variation in excretion. The urine must be acidified with HCl to avoid precipitation.

Methods

The three most common methods for measuring total serum Mg2+ are colorimetric: calmagite, formazan dye, and methylthymol blue. In the calmagite method, Mg2+ binds with calmagite to form a reddish-violet complex that may be read at 532 nm. In the formazan dye method, Mg2+ binds with the dye to form a colored complex that may be read at 660 nm. In the methylthymol blue method, Mg2+ binds with the chromogen to form a colored complex. Most methods use a Ca2+ chelator to prevent interference from this divalent cation. The reference method for measuring Mg2+ is AAS. Although the measurement of total Mg2+ concentration in serum remains the usual diagnostic test for detection of Mg2+ abnormalities, it has limitations. First, because approximately 25% of Mg2+ is protein bound, total Mg2+ may not reflect the physiologically active, free ionized Mg2+. Second, because Mg2+ is primarily an intracellular ion, serum concentrations will not necessarily reflect the status of intracellular Mg2+; even when tissue and cellular Mg2+ is depleted by as much as 20%, serum Mg2+ concentrations may remain normal.
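The endpoint colorimetric methods above reduce to a simple proportionality against a calibrator. A minimal sketch of that calculation follows (Python; not part of the text); the absorbances and calibrator value are hypothetical, and real analyzers use reagent blanks and their own calibration schemes.

```python
# Minimal sketch of a single-point, endpoint colorimetric calculation
# (e.g., calmagite read at 532 nm). All values are hypothetical.

def magnesium_mg_dl(abs_sample: float, abs_blank: float,
                    abs_calibrator: float, calibrator_mg_dl: float) -> float:
    """Concentration by proportionality of blank-corrected absorbances."""
    return ((abs_sample - abs_blank) / (abs_calibrator - abs_blank)) * calibrator_mg_dl

print(round(magnesium_mg_dl(abs_sample=0.420, abs_blank=0.050,
                            abs_calibrator=0.450, calibrator_mg_dl=2.0), 2))
# -> 1.85 mg/dL
```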

Reference Ranges

See Table 16.15.3

TABLE 16.15 Reference Range for Magnesium

Calcium

Calcium Physiology

In 1883, Ringer20 showed that Ca2+ was essential for myocardial contraction. While attempting to study how bound and free forms of Ca2+ affected frog heart contraction, McLean and Hastings21 showed that the ionized/free Ca2+ concentration was proportional to the amplitude of frog heart contraction, whereas protein-bound and citrate-bound Ca2+ had no effect. From this observation, they developed the first assay for ionized/free Ca2+ using isolated frog hearts. Although the method had poor precision by today's standards, the investigators were able to show that blood-ionized Ca2+ was closely regulated and had a mean concentration in humans of about 1.18 mmol/L. Because decreased ionized Ca2+ impairs myocardial function, it is important to maintain ionized Ca2+ at a near-normal concentration during surgery and in critically ill patients. Decreased ionized Ca2+ concentrations in blood can cause neuromuscular irritability, which may become clinically apparent as tetany.

Regulation

Three hormones, PTH, vitamin D, and calcitonin, are known to regulate serum Ca2+ by altering their secretion rate in response to changes in ionized Ca2+. The actions of these hormones are shown in Figure 16.5.

FIGURE 16.5 Hormonal response to hypercalcemia and hypocalcemia. PTH, parathyroid hormone; 25-OH vit D, 25-hydroxyvitamin D; 1,25(OH)2 vit D, 1,25-dihydroxyvitamin D.

PTH secretion in blood is stimulated by a decrease in ionized Ca2+, and conversely, PTH secretion is stopped by an increase in ionized Ca2+. PTH exerts three major effects on bone and kidney. In bone, PTH activates a process known as bone resorption, in which activated osteoclasts break down bone and subsequently release Ca2+ into the ECF. In the kidneys, PTH conserves Ca2+ by increasing tubular reabsorption of Ca2+ ions. PTH also stimulates renal production of active vitamin D. Vitamin D3 (cholecalciferol) is obtained from the diet (seafood, dairy, egg yolks) or from exposure of skin to sunlight. Vitamin D3 is then converted in the liver to 25-hydroxycholecalciferol (25-OH-D3), still an inactive form of vitamin D. In the kidney, 25-OH-D3 is specifically hydroxylated to form 1,25-dihydroxycholecalciferol (1,25-[OH]2-D3), the biologically active form. This active form of vitamin D increases Ca2+ absorption in the intestine and enhances the effect of PTH on bone resorption. Calcitonin, which originates in the medullary cells of the thyroid gland, is secreted when the concentration of Ca2+ in blood increases. Calcitonin exerts its

Ca2+-lowering effect by inhibiting the actions of both PTH and vitamin D. Although calcitonin is apparently not secreted during normal regulation of the ionized Ca2+ concentration in blood, it is secreted in response to a hypercalcemic stimulus.

Distribution

About 99% of Ca2+ in the body is part of bone; the remaining 1% is mostly in the blood and other ECF, because very little is in the cytosol of most cells. In fact, the concentration of ionized Ca2+ in blood is 5,000 to 10,000 times higher than in the cytosol of cardiac or smooth muscle cells. Maintenance of this large gradient is vital to preserving the essential rapid inward flux of Ca2+. Ca2+ in blood is distributed among several forms. About 45% circulates as free Ca2+ ions (referred to as ionized Ca2+); 40% is bound to protein, mostly albumin; and 15% is bound to anions, such as HCO3−, citrate, and lactate. Clearly, this distribution can change in disease. It is noteworthy that concentrations of citrate, HCO3−, lactate, and albumin can change dramatically during surgery or critical care. This is why ionized Ca2+ cannot be reliably calculated from total Ca2+ measurements, especially in acutely ill individuals.

Clinical Applications

Tables 16.16 and 16.17 summarize causes of hypocalcemic and hypercalcemic disorders. Although both total Ca2+ and ionized Ca2+ measurements are available in many laboratories, ionized Ca2+ is usually a more sensitive and specific marker for Ca2+ disorders.

TABLE 16.16 Causes of Hypocalcemia

TABLE 16.17 Causes of Hypercalcemia

Hypocalcemia

When PTH is not present, as with primary hypoparathyroidism, serum Ca2+ levels are not properly regulated. Bone tends to “hang on” to its storage pool, and the kidney increases excretion of Ca2+. Because PTH is also required for normal vitamin D metabolism, the lack of vitamin D's effects also leads to a decreased level of Ca2+. Parathyroid gland aplasia, destruction, and removal are obvious reasons for primary hypoparathyroidism.

CASE STUDY 16.3

An 84-year-old nursing home resident was seen in the emergency department with the following symptoms: nausea, vomiting, decreased respiration, hypotension, and a low pulse rate (46 bpm). Physical examination showed that the skin was warm to the touch and flushed. Admission laboratory data are found in Case Study Table 16.3.1.

CASE STUDY TABLE 16.3.1 Laboratory Results

Questions

1. What is the most likely cause of the patient's symptoms?
2. What is the most likely cause of the hypermagnesemia?
3. What could be the cause of the hypocalcemia?

Because hypomagnesemia has become more frequent in hospitalized patients, chronic hypomagnesemia has also become recognized as a frequent cause of hypocalcemia. Hypomagnesemia may cause hypocalcemia by three mechanisms: (1) it inhibits the glandular secretion of PTH across the parathyroid gland membrane, (2) it impairs PTH action at its receptor site on bone, and (3) it causes vitamin D resistance.15 Elevated Mg2+ levels may inhibit PTH release and target tissue response, perhaps leading to hypocalcemia and hypercalciuria.16 When total Ca2+ is the only result reported, hypocalcemia can appear with hypoalbuminemia. Common causes are associated with chronic liver disease, nephrotic syndrome, and malnutrition. In general, for each 1 g/dL decrease in serum albumin, there is a 0.2 mmol/L (0.8 mg/dL) decrease in total Ca2+ levels.22 About one-half of the patients with acute pancreatitis develop hypocalcemia. The most consistent cause appears to be a result of increased intestinal binding of Ca2+ as increased intestinal lipase activity occurs.22 Vitamin D deficiency and malabsorption can cause decreased absorption, which leads to increased PTH production or secondary hyperparathyroidism. Patients with renal disease caused by glomerular failure often have altered concentrations of Ca2+, PO4−, albumin, Mg2+, and H+ (pH). In chronic renal disease, secondary hyperparathyroidism frequently develops as the body tries to compensate for hypocalcemia caused either by hyperphosphatemia (PO4− binds and lowers ionized Ca2+) or altered vitamin D metabolism. Monitoring and controlling ionized Ca2+ concentrations may avoid problems due to hypocalcemia, such as osteodystrophy, unstable cardiac output or blood pressure, or problems arising from hypercalcemia, such as renal stones and other calcifications. Rhabdomyolysis, as with major crush injury and muscle damage,

may cause hypocalcemia as a result of increased PO4− release from cells, which bind to Ca2+ ions.22 Pseudohypoparathyroidism is a rare hereditary disorder in which PTH target tissue response is decreased (end organ resistance). PTH production responds normally to loss of Ca2+; however, without normal response (decreased cAMP [cyclic adenosine 3′,5′-phosphate] production), Ca2+ is lost in the urine or remains in the bone storage pool. Patients often have common physical features, including short stature, obesity, shortened metacarpals and metatarsals, and abnormal calcification.
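The albumin adjustment described above (each 1 g/dL fall in albumin lowering total Ca2+ by about 0.8 mg/dL) is often applied as a simple correction. A minimal sketch follows (Python; not part of the text); the reference albumin of 4.0 g/dL is a commonly used assumption rather than a value taken from the text.

```python
# Albumin-adjusted total calcium: add back ~0.8 mg/dL for each 1 g/dL
# that albumin falls below a reference value (assumed 4.0 g/dL here).

def corrected_calcium_mg_dl(total_ca_mg_dl: float, albumin_g_dl: float,
                            reference_albumin_g_dl: float = 4.0) -> float:
    return total_ca_mg_dl + 0.8 * (reference_albumin_g_dl - albumin_g_dl)

# Example: total Ca 7.8 mg/dL with albumin 2.5 g/dL
print(corrected_calcium_mg_dl(7.8, 2.5))  # 9.0 mg/dL -> low total Ca explained by hypoalbuminemia
```

As noted in the Distribution section, such estimates are unreliable in acutely ill patients, in whom ionized Ca2+ should be measured directly.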

Surgery and intensive care

Because appropriate Ca2+ concentrations promote good cardiac output and maintain adequate blood pressure, the maintenance of a normal ionized Ca2+ concentration in blood is beneficial to patients in either surgery or intensive care. Controlling Ca2+ concentrations may be critical in open heart surgery when the heart is restarted and during liver transplantation, because large volumes of citrated blood are given. Because these patients may receive large amounts of citrate, HCO3−, Ca2+ salts, or fluids, the greatest discrepancies between total Ca2+ and ionized Ca2+ concentrations may be seen during major surgical operations. Consequently, ionized Ca2+ measurements are the Ca2+ measurement of greatest clinical value. Hypocalcemia occurs commonly in critically ill patients, such as those with sepsis, thermal burns, renal failure, or cardiopulmonary insufficiency. These patients frequently have abnormalities of acid–base regulation and losses of protein and albumin, so their Ca2+ status is best monitored by ionized Ca2+ measurements. Normalization of ionized Ca2+ may have beneficial effects on cardiac output and blood pressure.

Neonatal monitoring

Typically, blood-ionized Ca2+ concentrations in neonates are high at birth and then rapidly decline by 10% to 20% after 1 to 3 days. After about 1 week, ionized Ca2+ concentrations in the neonate stabilize at levels slightly higher than in adults.23 The concentration of ionized Ca2+ may decrease rapidly in the early neonatal

period because the infant may lose Ca2+ rapidly and not readily reabsorb it. Several possible etiologies have been suggested: abnormal PTH and vitamin D metabolism, hypercholesterolemia, hyperphosphatemia, and hypomagnesemia.

Symptoms of hypocalcemia

Neuromuscular irritability and cardiac irregularities are the primary groups of symptoms that occur with hypocalcemia. Neuromuscular symptoms include paresthesia, muscle cramps, tetany, and seizures. Cardiac symptoms may include arrhythmia or heart block. Symptoms usually occur with severe hypocalcemia, in which total Ca2+ levels are below 1.88 mmol/L (7.5 mg/dL).22

Treatment of hypocalcemia

Oral or parenteral Ca2+ therapy may be given, depending on the severity of the decreased level and the cause. Vitamin D may sometimes be administered in addition to oral Ca2+ to increase absorption. If hypomagnesemia is a concurrent disorder, Mg2+ therapy should also be provided.

Hypercalcemia

Primary hyperparathyroidism is the main cause of hypercalcemia.22 Hyperparathyroidism, or excess secretion of PTH, may show obvious clinical signs or may be asymptomatic. The patient population seen most frequently with primary hyperparathyroidism is older women.22 Although either total or ionized Ca2+ measurements are elevated in serious cases, ionized Ca2+ is more frequently elevated in subtle or asymptomatic hyperparathyroidism. In general, ionized Ca2+ measurements are elevated in 90% to 95% of cases of hyperparathyroidism, whereas total Ca2+ is elevated in 80% to 85% of cases. The second leading cause of hypercalcemia is associated with various types of malignancy, with hypercalcemia sometimes being the sole biochemical marker for disease.22 Many tumors produce PTH-related peptide (PTH-rP), which binds to normal PTH receptors and causes increased Ca2+ levels. Specific assays to measure PTH-rP are available because this abnormal protein is not detected by most PTH assays. Because of the proximity of the parathyroid gland to the thyroid gland, hyperthyroidism can sometimes cause hyperparathyroidism. A rare, benign,

familial hypocalciuric hypercalcemia has also been reported. Thiazide diuretics increase Ca2+ reabsorption, leading to hypercalcemia. Prolonged immobilization may cause increased bone resorption. Hypercalcemia associated with immobilization is further compounded by renal insufficiency.

Symptoms of hypercalcemia

A mild hypercalcemia (2.62 to 3.00 mmol/L [10.5 to 12 mg/dL]) is often asymptomatic.22 Moderate or severe Ca2+ elevations produce neurologic, GI, and renal symptoms. Neurologic symptoms may include mild drowsiness or weakness, depression, lethargy, and coma. GI symptoms may include constipation, nausea, vomiting, anorexia, and peptic ulcer disease. Hypercalcemia may cause renal symptoms of nephrolithiasis and nephrocalcinosis. Hypercalciuria can result in nephrogenic diabetes insipidus, which causes polyuria that results in hypovolemia, which further aggravates the hypercalcemia.22 Hypercalcemia can also cause symptoms of digitalis toxicity.

Treatment of hypercalcemia

Treatment of hypercalcemia depends on the level of hypercalcemia and the cause. Often, people with primary hyperparathyroidism are asymptomatic. Estrogen deficiency in postmenopausal women has been implicated in primary hyperparathyroidism in older women,22 and estrogen replacement therapy often reduces Ca2+ levels. Parathyroidectomy may be necessary in some hyperparathyroid patients. Patients with moderate to severe hypercalcemia are treated to reduce Ca2+ levels. Salt and water intake is encouraged to increase Ca2+ excretion and to avoid dehydration, which can compound the hypercalcemia. Thiazide diuretics should be discontinued. Bisphosphonates (derivatives of pyrophosphate) are the main drug class used to lower Ca2+ levels; they bind to bone and prevent bone resorption.22

Determination of Calcium

Specimen

The preferred specimen for total Ca2+ determinations is either serum or lithium heparin plasma collected without venous stasis. Because anticoagulants such as EDTA or oxalate bind Ca2+ tightly and interfere with measurement, they are

unacceptable for use. The proper collection of samples for ionized Ca2+ measurements requires greater care. Because loss of CO2 will increase pH, samples must be collected anaerobically. Although heparinized whole blood is the preferred sample, serum from sealed evacuated blood collection tubes may be used if clotting and centrifugation are done quickly.

Macroadenomas are large pituitary tumors (>1 cm in diameter) that show invasion into surrounding structures such as the cavernous sinuses. They may or may not be hormonally active and, interestingly, tend to stain commonly for GH or ACTH even though they may not produce clinically evident syndromes.10 MIB-1 is a monoclonal antibody that is used to detect the Ki-67 antigen, a marker of cell proliferation, and a high "proliferation index" suggests a higher degree of atypia.11 Physiologic enlargement of the pituitary can be seen during puberty and pregnancy. The enlargement seen during pregnancy is due to lactotroph hyperplasia. Thyrotroph and lactotroph or gonadotroph hyperplasia can also be seen in long-standing primary thyroidal or gonadal failure, respectively.

GROWTH HORMONE

The pituitary is vital for normal growth. Growth ceases if the pituitary is removed, and if only the hormonal products of the other endocrine glands acted on by the pituitary are replaced (thyroxine, adrenal steroids, and gonadal steroids), growth is not restored until GH is administered. However, if GH is given in isolation without the other hormones, growth is not promoted. Therefore, complete functioning of the pituitary is required to establish conditions ripe for growth of the individual. It also takes adequate nutrition, normal levels of insulin, and overall good health to achieve a person's genetic growth potential. GH, also called somatotropin, is structurally related to prolactin and human placental lactogen. A single peptide with two intramolecular disulfide bridges, it belongs to the direct effector class of anterior pituitary hormones. The somatotrophs, pituitary cells that produce GH, constitute over one-third of normal pituitary weight. Release of somatotropin from the pituitary is stimulated by the hypothalamic peptide growth hormone–releasing hormone (GHRH); somatotropin's secretion is inhibited by SS.12 GH is secreted in pulses, with an average interpulse interval of 2 to 3 hours and with the most reproducible peak occurring at the onset of sleep.13 Between these pulses, the level of GH may fall below the detectable limit, which makes clinical evaluation of GH deficiency based on a single measurement challenging. Ghrelin, an enteric hormone

that plays important roles in nutrient sensing, appetite, and glucose regulation, is also a potent stimulator of GH secretion.14 No other hypothalamic–hypophyseal system more vividly illustrates the concept of an open-loop paradigm than that seen with GH. The on-and-off functions of GHRH/SS and the basic pattern of secretory pulses of GH are heavily modulated by other factors (Table 20.3).

TABLE 20.3 Other Modifiers of Growth Hormone Secretion

Actions of GH

GH has many diverse effects on metabolism; it is considered an amphibolic hormone because it directly influences both anabolic and catabolic processes. One major effect of GH is that it allows an individual to effectively transition from a fed state to a fasting state without experiencing a shortage of substrates required for normal intracellular oxidation. GH directly antagonizes the effect of insulin on glucose metabolism, promotes hepatic gluconeogenesis, and stimulates lipolysis.15,16 From a teleologic viewpoint, this makes perfect sense: enhanced lipolysis provides oxidative substrate for peripheral tissue, such as skeletal muscle, and yet conserves glucose for the central nervous system by stimulating the hepatic delivery of glucose and opposing insulin-mediated glucose disposal. Indeed, isolated GH deficiency in children may be accompanied by hypoglycemia; however, hypoglycemia is more likely to occur if both GH and ACTH are deficient.17 The anabolic effects of GH are reflected by enhanced protein synthesis in skeletal muscle and other tissues. This is translated into a positive nitrogen balance and phosphate retention. Although GH has direct effects on many tissues, it also has indirect effects that are mediated by factors initially called somatomedins. In early experiments, it became apparent that GH supplementation in

hypophysectomized animals induced the production of an additional factor that stimulated the incorporation of sulfate into cartilage.18,19 As this "protein" was purified, it was evident that there was more than one somatomedin, and because of their structural homology to proinsulin, the nomenclature shifted to insulin-like growth factor (IGF).20,21 For example, somatomedin C, the major growth factor induced by GH, is now IGF-I.22 IGFs also have cell surface receptors that are distinct from the insulin receptor; however, supraphysiologic levels of IGF-II can "bleed" over onto the insulin receptor and cause hypoglycemia,23 and hyperinsulinemia can partially activate IGF-I receptors.24 GH stimulates the production of IGF-I by the liver, and as a result, IGF-I becomes a biologic amplifier of GH levels. IGFs are complexed to specific serum binding proteins that have been shown to affect the actions of IGFs in multifaceted ways.25 IGF-binding protein 3 (IGFBP-3) is perhaps the best studied member of the IGFBP family. Recently, IGFBPs, and specifically IGFBP-3, have been shown to play a direct role in the pathophysiology of several human cancers (this may be independent of IGF-1 and IGF-1 receptor–mediated pathways).26 The tumor suppressor gene P53 has been shown to upregulate active IGFBP-3 secretion, which in turn inhibits IGF-1–signaled mitogenesis and, thus, inhibits neoplastic cell proliferation.27 Low levels of IGFBP-3 were correlated with higher colorectal cancer risk (in men) in a nested case–control study from the Physicians' Health Study cohort.28

Testing

As noted above, a single, random measurement of GH is rarely diagnostic. The current testing paradigms for GH are soundly based on the dynamic physiology of the GH axis. For example, circulating levels of IGF-I and, perhaps, IGFBP-3 reasonably integrate the peaks of GH secretion, and elevated levels of both are consistent with, but may not be diagnostic of, a sustained excess of GH. Other conditions such as hepatoma can be associated with high levels of IGF-I, and levels of IGFBP-3 may be inappropriately normal in some people with active acromegaly. Conversely, low IGF-I levels may reflect inadequate production of GH; however, low IGF levels are also seen in patients with poorly controlled diabetes, malnutrition, or other chronic illnesses.29 The inherent high biologic variability and assay performance issues further confound the use of IGF-I measurements in the clinical setting.30 Recently, new recommendations to improve the assay performance for both IGF-I and GH have been published.31 Definitive testing for determining the autonomous production of GH relies

upon the normal suppressibility of GH by oral glucose loading.22,32,33 This test is performed after an overnight fast, and the patient is given a 100 g oral glucose load. GH is measured at time zero and at 60 and 120 minutes after glucose ingestion. Following oral glucose loading, GH levels are undetectable in normal individuals; however, in patients with acromegaly, GH levels fail to suppress and may even paradoxically rise. Testing patients for suspected GH deficiency is more complicated. There are several strategies to stimulate GH, and new protocols are currently evolving. Once considered the gold standard, insulin-induced hypoglycemia is being replaced by less uncomfortable testing schemes.34 Combination infusions of GHRH and the amino acid L-arginine or an infusion of L-arginine coupled with oral L-DOPA are the most widely used although acquiring some of the medications can be problematic.35 If GH levels rise above 3 to 5 ng/mL, it is unlikely that the patient is GH deficient34; however, a lower threshold may be adopted because of improved sensitivity of the newer two-site GH assays.35 On the other hand, several studies have shown that provocative GH testing may not be necessary in patients with low IGF-1 levels and otherwise documented panhypopituitarism.29
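The decision points described above can be summarized in a small sketch (Python; not part of the text). The detection limit and the 5 ng/mL stimulation cutoff are illustrative placeholders; actual cutoffs depend on the assay and protocol used, as the text emphasizes.

```python
# Minimal sketch of interpreting dynamic GH testing as described above.
# Cutoffs are illustrative placeholders, not fixed decision limits.

def glucose_suppression_abnormal(gh_nadir_ng_ml: float,
                                 undetectable_limit: float = 0.05) -> bool:
    """After a 100 g oral glucose load, GH should become undetectable;
    failure to suppress is consistent with autonomous GH production."""
    return gh_nadir_ng_ml > undetectable_limit

def stimulation_excludes_deficiency(gh_peak_ng_ml: float,
                                    cutoff_ng_ml: float = 5.0) -> bool:
    """A stimulated GH peak above roughly 3 to 5 ng/mL makes deficiency unlikely."""
    return gh_peak_ng_ml >= cutoff_ng_ml

print(glucose_suppression_abnormal(2.4))      # True: consistent with acromegaly
print(stimulation_excludes_deficiency(1.8))   # False: deficiency not excluded
```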

Acromegaly

Acromegaly results from pathologic or autonomous GH excess and, in the vast majority of patients, is the result of a pituitary tumor. There have been isolated case reports of tumors causing acromegaly as a result of the ectopic production of GHRH,36–38 and although exceedingly interesting and instructive, the ectopic production of GHRH or GH remains rare.37 Recent reports have documented mutations in the aryl hydrocarbon receptor–interacting protein gene (AIP)39 in cases of familial acromegaly and polymorphisms in the SS receptor type 5 gene in sporadic cases.40 If a GH-producing tumor occurs before epiphyseal closure, the patient develops gigantism41 and may grow to an impressive height; otherwise, the patient develops classical, but insidious, features of bony and soft tissue overgrowth.42 These features include progressive enlargement of the hands and feet as well as growth of facial bones, including the mandible and bones of the skull. In advanced cases, the patient may develop significant gaps between the teeth. Diffuse (not longitudinal, if the condition occurred following puberty) overgrowth of the ends of long bones or the spine can produce a debilitating form of arthritis.43 Because GH is an insulin antagonist, glucose intolerance or overt diabetes can occur. Hypertension; accelerated

atherosclerosis; and proximal muscle weakness, resulting from acquired myopathy,44 may be seen late in the illness. Sleep apnea is common. Organomegaly, especially thyromegaly, is common, but hyperthyroidism is exceedingly rare unless the tumor cosecretes TSH. GH excess is also a hypermetabolic condition, and as a result, acromegalic patients may complain of excessive sweating or heat intolerance. The features of acromegaly develop slowly over time, and the patient (or even their family) may be oblivious that changes in physiognomy have occurred. In these cases, the patient's complaints may center on the local effects of the tumor (headache or visual complaints) or symptoms related to the loss of other anterior pituitary hormones (hypopituitarism). A careful, retrospective review of older photographs may be crucial in differentiating coarse features due to inheritance from the classical consequences of acromegaly. If left untreated, acromegaly shortens life expectancy because of increased risk of heart disease, resulting from the combination of hypertension, coronary artery disease, and diabetes/insulin resistance. Because patients with acromegaly also have a greater lifetime risk of developing cancer, cancer surveillance programs (especially regular colonoscopy) are recommended.22

CASE STUDY 20.1

A 48-year-old man seeks care for evaluation of muscle weakness, headaches, and excessive sweating. He has poorly controlled hypertension and, on questioning, admits to noticing a gradual increase in both glove and shoe size, as well as a reduction in libido. A review of older photographs of the man documents coarsening of facial features, progressive prognathism, and broadening of the nose. Acromegaly is suspected.

Questions

1. What screening tests are available?
2. What is the definitive test for autonomous growth hormone production?
3. Because the patient complains of reduced libido, hypogonadism is suspected. What evaluation is appropriate?
4. How would your thinking change if he had galactorrhea but normal levels of prolactin?

Cosecretion of prolactin can be seen in up to 40% of patients with acromegaly.45 Only a few TSH/GH-secreting tumors have been reported.46 Confirming the diagnosis of acromegaly is relatively easy; however, some patients with acromegaly have normal random levels of GH. An elevated level of GH that does not suppress normally with glucose loading equates to an easy diagnosis. In those patients with normal, but inappropriately sustained, random levels of GH, elevated levels of IGF-I are helpful; however, nonsuppressibility of GH with glucose loading is the definitive test.32,33 Treatment of acromegaly can be challenging. The goal of treatment is tumor ablation, with continued function of the remainder of the pituitary. Transsphenoidal adenomectomy is the procedure of choice.22 If normal GH levels and kinetics (normal suppressibility to glucose) are restored following surgery, the patient is likely cured. Unfortunately, GH-producing tumors may be too large or may invade local structures in a way that precludes complete surgical extirpation, leaving the patient with a smaller, but hormonally active, tumor. External beam or focused irradiation is frequently used at this point, but it may take several years before GH levels decline.47,48 In the interim, efforts are made to suppress GH. Three different classes of agents may be employed for GH suppression: SS analogs (octreotide, pasireotide, and lanreotide), dopaminergic agonists (cabergoline and bromocriptine), and GH receptor antagonists (pegvisomant).49,50

GH Deficiency

GH deficiency occurs in both children and adults. In children, it may be genetic or it may be due to tumors, such as craniopharyngiomas. In adults, it is a result of structural or functional abnormalities of the pituitary (see the section on hypopituitarism in this chapter); however, a decline in GH production is an inevitable consequence of aging, and the significance of this phenomenon is poorly understood.51,52 Although GH deficiency in children is manifested by growth failure, not all patients with short stature have GH deficiency53 (see above). Several genetic defects have been identified in the GH axis. The most common type is a recessive mutation in the GHRH gene that causes a failure of GH secretion. A rarer mutation, loss of the GH gene itself, has also been observed. Mutations that

result in GH insensitivity have also been reported. These mutations may involve the GH receptor, IGF-I biosynthesis, IGF-I receptors, or defects in GH signal transduction. As a result, patients with GH insensitivity do not respond normally to exogenously administered GH. Finally, structural lesions of the pituitary or hypothalamus may also cause GH deficiency and may be associated with other anterior pituitary hormone deficiencies. An adult GH deficiency syndrome has been described in patients who have complete or even partial failure of the anterior pituitary. The symptoms of this syndrome are extremely vague and include social withdrawal, fatigue, loss of motivation, and a diminished feeling of well-being,54 but several studies have documented increased mortality in children who are GH deficient although this relationship is less clear in adults.55 Osteoporosis and alterations in body composition (i.e., reduced lean body mass) are frequent concomitants of adult GH deficiency.56 GH replacement therapy has become relatively simple with the advent of recombinant human GH.57 Currently, the cost of GH is the major limiting factor for replacement. GH has been employed by athletes as a performance-enhancing substance and as an aid in injury recovery; however, the effectiveness of GH for these purposes is controversial.58

PROLACTIN

Prolactin is structurally related to GH and human placental lactogen. Considered a stress hormone, it has vital functions in relation to reproduction. Prolactin is classified as a direct effector hormone (as opposed to a tropic hormone) because it has diffuse target tissues and lacks a single endocrine end organ. Prolactin is unique among the anterior pituitary hormones because its major mode of hypothalamic regulation is tonic inhibition rather than intermittent stimulation. Prolactin inhibitory factor (PIF) was once considered a polypeptide hormone capable of inhibiting prolactin secretion; dopamine, however, is the only neuroendocrine signal that inhibits prolactin and is now considered to be the elusive PIF. Any compound that affects dopaminergic activity in the median eminence of the hypothalamus will also alter prolactin secretion. Examples of medications that cause hyperprolactinemia include phenothiazines, butyrophenones, metoclopramide, reserpine, tricyclic antidepressants, α-methyldopa, and antipsychotics that antagonize the dopamine D2 receptor. Any disruption of the pituitary stalk (e.g., tumors, trauma, or

inflammation) causes an elevation in prolactin as a result of interruption of the flow of dopamine from the hypothalamus to the lactotrophs, the pituitary prolactin-secreting cells. TRH directly stimulates prolactin secretion and increases in TRH (as seen in primary hypothyroidism) elevate prolactin levels.59 Estrogens also directly stimulate lactotrophs to synthesize prolactin. Pathologic stimulation of the neural suckling reflex is the likely explanation of hyperprolactinemia associated with chest wall injuries. Hyperprolactinemia may also be seen in renal failure and polycystic ovary syndrome. Physiologic stressors, such as exercise and seizures, also elevate prolactin. The feedback effector for prolactin is unknown. Although the primary regulation of prolactin secretions is tonic inhibition (e.g., dopamine), it is also regulated by several hormones, including GnRH, TRH, and vasoactive intestinal polypeptide. Stimulation of breasts, as in nursing, causes the release of prolactin-secreting hormones from the hypothalamus through a spinal reflex arc. As mentioned, the physiologic effect of prolactin is lactation. The usual consequence of prolactin excess is hypogonadism, either by suppression of gonadotropin secretion from the pituitary or by inhibition of gonadotropin action at the gonad.60 The suppression of ovulation seen in lactating postpartum mothers is related to this phenomenon.

Prolactinoma

A prolactinoma is a pituitary tumor that directly secretes prolactin, and it represents the most common type of functional pituitary tumor. The clinical presentation of a patient with a prolactinoma depends on the age and gender of the patient and the size of the tumor. Premenopausal women most frequently complain of menstrual irregularity/amenorrhea, infertility, or galactorrhea; men or postmenopausal women generally present with symptoms of a pituitary mass, such as headaches or visual complaints. Occasionally, a man may present with reduced libido or complaints of erectile dysfunction. The reason(s) for the varied presentations of a prolactinoma are somewhat obscure but likely relate to the dramatic, noticeable alteration in menses or the abrupt onset of a breast discharge in younger women. By contrast, the decline in reproductive function in older patients may be overlooked as an inexorable consequence of "aging." One recently recognized complication of prolactin-induced hypogonadism is osteoporosis.61

CASE STUDY 20.2

A 23-year-old woman has experienced recent onset of a spontaneous, bilateral breast discharge and gradual cessation of menses. She reports normal growth and development and has never been pregnant.

Questions

1. What conditions could be causing her symptoms?
2. What medical conditions (other than a prolactinoma) are associated with hyperprolactinemia?
3. Which medications raise prolactin?

Other Causes of Hyperprolactinemia

There are many physiologic, pharmacologic, and pathologic causes of hyperprolactinemia, and a common error by clinicians is to ascribe any elevation in prolactin to a "prolactinoma." Generally, substantial elevations in prolactin (>150 ng/mL) indicate prolactinoma, and the degree of elevation in prolactin is correlated with tumor size.62 Modest elevations in prolactin (25 to 100 ng/mL) may be seen with pituitary stalk interruption, use of dopaminergic antagonist medications, or other medical conditions such as primary thyroidal failure, renal failure, or polycystic ovary syndrome. Breast or genital stimulation may also modestly elevate prolactin. Significant hyperprolactinemia is also encountered during pregnancy. Under most circumstances, the principal form of prolactin is a 23-kD peptide; however, a 150-kD form may also be secreted. This larger prolactin molecule has a markedly reduced biologic potency and does not share the reproductive consequences of the 23-kD variety. If the 150-kD form of prolactin predominates, this is called macroprolactinemia; the clinical consequences are unclear, but most patients are relatively asymptomatic.62 The prevalence of macroprolactinemia has been estimated at 10% to 22% of hyperprolactinemic samples,63 and it can be excluded by precipitating serum samples with polyethylene glycol prior to measuring prolactin.
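A common way to act on the polyethylene glycol (PEG) precipitation step mentioned above is to calculate the percent recovery of prolactin after precipitation. The sketch below (Python; not part of the text) illustrates the arithmetic; the <40% recovery cutoff is a commonly cited convention rather than a value from the text, and laboratories validate their own limits.

```python
# Percent recovery of prolactin after PEG precipitation. Low recovery
# suggests that macroprolactin (the 150-kD form) predominates.

def peg_recovery_percent(prolactin_pre_peg: float,
                         prolactin_post_peg: float) -> float:
    return 100.0 * prolactin_post_peg / prolactin_pre_peg

recovery = peg_recovery_percent(prolactin_pre_peg=85.0,   # ng/mL, hypothetical
                                prolactin_post_peg=25.0)  # ng/mL, hypothetical
print(f"Recovery: {recovery:.0f}%")  # ~29%: macroprolactin likely predominates (commonly <40%)
```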

Clinical Evaluation of Hyperprolactinemia

A careful history and physical examination are usually sufficient to exclude most

common, nonendocrine causes of hyperprolactinemia. It is essential to obtain TSH and free T4 to eliminate primary hypothyroidism as a cause for the elevated prolactin. If a pituitary tumor is suspected, a careful assessment of other anterior pituitary function (basal cortisol, LH, FSH, and gender-specific gonadal steroid [either estradiol or testosterone]) and an evaluation of sellar anatomy with a high-resolution MRI should be obtained.

Management of Prolactinoma

The therapeutic goals are correction of symptoms that result from local invasion or extension of the tumor by reducing tumor mass, restoration of normal gonadal function and fertility, prevention of osteoporosis, and preservation of normal anterior and posterior pituitary function. The different therapeutic options include simple observation, surgery, radiotherapy, or medical management with dopamine agonists.64 However, the management of prolactinoma also depends on the size of the tumor (macroadenomas [tumor size >10 mm] are less likely to be "cured" than are microadenomas [tumor size <10 mm])65 and the preferences of the patient. Dopamine agonists are the most commonly used therapy for microprolactinomas. Tumor shrinkage is noted in more than 90% of patients treated with either bromocriptine mesylate (Parlodel) or cabergoline (Dostinex), both dopamine receptor agonists. Both drugs also shrink prolactin-secreting macroadenomas.64 A resumption of menses and restoration of fertility are also frequently seen during medical therapy. The adverse effects of bromocriptine include orthostatic hypotension, dizziness, and nausea. The gastrointestinal adverse effects of bromocriptine can be ameliorated through intravaginal administration, and its efficacy is otherwise uncompromised. Cabergoline has fewer adverse effects and may be administered twice weekly because of its longer duration of action. By virtue of its ability to interact with the 5-hydroxytryptamine (5-HT)2B serotonergic receptor, cabergoline has been linked to the development of valvular heart disease,66 although the doses of cabergoline required to elicit the risk of valvular damage vastly exceed the doses used in the management of prolactinomas. Either agent should be discontinued during pregnancy unless tumor regrowth has been documented. Neurosurgery is not a primary mode of prolactinoma management. The indications for neurosurgical intervention include pituitary tumor apoplexy (hemorrhage), acute visual loss due to macroadenoma, cystic prolactinoma, intolerance to medical therapy, or tumor resistance to dopaminergic agonists.

Surgical cure rates are inversely proportional to tumor size and the degree of prolactin elevation. Radiotherapy is generally reserved for high surgical risk patients with locally aggressive macroadenomas who are unable to tolerate dopamine agonists.

Idiopathic Galactorrhea

Lactation occurring in women with normal prolactin levels is defined as idiopathic galactorrhea. This condition is usually seen in women who have been pregnant several times and has no pathologic implication, but it may be a manifestation of a localized increased sensitivity to prolactin in breast tissue. It should be remembered that this is a diagnosis of exclusion.

HYPOPITUITARISM

The failure of either the pituitary or the hypothalamus results in the loss of anterior pituitary function. Complete loss of function is termed panhypopituitarism; however, there may be a loss of only a single pituitary hormone, which is referred to as a monotropic hormone deficiency. The loss of a tropic hormone (ACTH, TSH, LH, and FSH) is reflected in cessation of function of the affected endocrine gland. Loss of the direct effectors (GH and prolactin) may not be readily apparent. This section concentrates on the causes of hypopituitarism and certain subtleties involved in the therapy of panhypopituitarism; more detailed descriptions of various hormone deficiency states are covered in other chapters.

CASE STUDY 20.3

A 60-year-old man presented with intractable headaches. MRI was requested to evaluate this complaint, and a 2.5-cm pituitary tumor was discovered. In retrospect, he noted an unexplained 20-kg weight loss, cold intolerance, fatigue, and loss of sexual desire.

Questions

1. How would you approach the evaluation of his anterior pituitary function?
2. What additional testing may be required to confirm a loss in anterior pituitary function?

The laboratory diagnosis of hypopituitarism is relatively straightforward. In contrast to the primary failure of an endocrine gland, which is accompanied by dramatic increases in circulating levels of the corresponding pituitary tropic hormone, secondary failure (hypopituitarism) is associated with low or normal levels of the tropic hormone. In primary hypothyroidism, for example, the circulating levels of thyroxine are low and TSH levels may exceed 200 μU/mL (normal, 0.4 to 5.0 μU/mL). In hypothyroidism resulting from pituitary failure, by contrast, TSH levels are inappropriately low, typically less than 1.0 μU/mL, in association with low free thyroxine levels. There are several important issues in distinguishing between primary and secondary hormone deficiency states. To differentiate between primary and secondary deficiencies, both tropic and target hormone levels should be measured when there is any suspicion of pituitary failure or as part of the routine evaluation of gonadal or adrenal function. If one secondary deficiency is documented, it is essential to search for other deficiency states and the cause of pituitary failure. For example, failure to recognize secondary hypoadrenalism may have catastrophic consequences if the patient is treated with thyroxine. Similarly, initially overlooking a pituitary or hypothalamic lesion could preclude early diagnosis and treatment of a potentially aggressive tumor.
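The primary-versus-secondary logic described above lends itself to a simple decision sketch (Python; not part of the text). The TSH limits follow the text; the free T4 judgment is passed in as a flag because reference intervals are laboratory specific.

```python
# Minimal sketch of distinguishing primary from secondary (central)
# hypothyroidism from TSH and free T4, as described above.

def classify_hypothyroidism(tsh_uu_per_ml: float, free_t4_low: bool) -> str:
    if not free_t4_low:
        return "free T4 not low: hypothyroidism not supported by these results"
    if tsh_uu_per_ml > 5.0:          # above the quoted reference interval
        return "primary hypothyroidism (target gland failure; TSH markedly elevated)"
    return "secondary (central) hypothyroidism: TSH inappropriately low or normal"

print(classify_hypothyroidism(220.0, free_t4_low=True))  # primary
print(classify_hypothyroidism(0.8, free_t4_low=True))    # secondary
```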

Etiology of Hypopituitarism

The many causes of hypopituitarism are listed in Table 20.4. Direct effects of pituitary tumors, or the sequelae of treatment of tumors, are the most common causes of pituitary failure. Pituitary tumors may cause panhypopituitarism by compressing or replacing normal tissue or interrupting the flow of hypothalamic hormones by destroying the pituitary stalk. Large, nonsecretory pituitary tumors (chromophobe adenomas or null cell tumors) or macroprolactinomas are most commonly associated with this phenomenon. Parasellar tumors (meningiomas and gliomas), metastatic tumors (breast and lung), and hypothalamic tumors (craniopharyngiomas or dysgerminomas) can also cause hypopituitarism through similar mechanisms. Hemorrhage into a pituitary tumor (pituitary tumor apoplexy) is rare; however, when it occurs, it frequently causes complete pituitary failure.67 Postpartum ischemic necrosis of the pituitary following a complicated delivery (Sheehan's syndrome) typically presents as profound,

unresponsive shock or as failure to lactate in the puerperium. Infiltrative diseases, such as hemochromatosis, sarcoidosis, or histiocytosis, can also affect pituitary function. Fungal infections, tuberculosis, and syphilis can involve the pituitary or hypothalamus and may cause impairment of function. Lymphocytic hypophysitis,68 an autoimmune disease of the pituitary, may affect only a single cell type in the pituitary, resulting in a monotropic hormone deficiency, or can involve all cell types, yielding total loss of function. Ipilimumab, a monoclonal antibody that blocks cytotoxic T-lymphocyte–associated antigen 4 (CTLA-4) and has been proven to increase survival in melanoma patients, has been associated with lymphocytic hypophysitis in up to 5% of treated patients.69 Severe head trauma may shear the pituitary stalk or may interrupt the portal circulation. Similarly, surgery involving the pituitary may compromise the stalk and/or blood supply to the pituitary or may iatrogenically diminish the mass of functioning pituitary tissue. Panhypopituitarism can result from radiotherapy used to treat a primary pituitary tumor or a pituitary that was inadvertently included in the radiation port; loss of function, however, may be gradual and may occur over several years. There have been rare instances of familial panhypopituitarism70 or monotropic hormone deficiencies. In Kallmann's syndrome, for example, GnRH is deficient, and the patient presents with secondary hypogonadism. Finally, there may be no identifiable cause for the loss of pituitary function, and the patient is classified as having idiopathic hypopituitarism, although it is always prudent to continue the search for an underlying cause.

TABLE 20.4 Causes of Hypopituitarism

CASE STUDY 20.4

An 18-year-old woman was admitted to the neurologic intensive care unit following a severe closed head injury. Her course stabilized after 24 hours, but the nursing staff noticed a dramatic increase in the patient's urine output, which exceeded 1,000 mL/h.

Questions

1. What caused her increased urine production?
2. How could you prove your suspicions?
3. Could she have other possible endocrinologic problems?

Treatment of Panhypopituitarism

In the average patient, replacement therapy for panhypopituitarism is the same as for primary target organ failure. Patients are treated with thyroxine, glucocorticoids, and gender-specific sex steroids. The role of GH replacement in adults is less clear, and additional studies are needed to clarify this issue. Replacement becomes more complicated in panhypopituitary patients who desire fertility. Pulsatile GnRH infusions have induced puberty and restored fertility in patients with Kallmann's syndrome,71 and gonadotropin preparations have restored ovulation/spermatogenesis in people with gonadotropin deficiency.72

POSTERIOR PITUITARY HORMONES

The posterior pituitary is an extension of the forebrain and represents the storage region for vasopressin (also called ADH) and oxytocin. Both of these small peptide hormones are synthesized in the supraoptic and paraventricular nuclei of the hypothalamus and transported to the neurohypophysis via their axons in the hypothalamoneurohypophyseal tract. This tract transits the median eminence of the hypothalamus and continues into the posterior pituitary through the pituitary stalk. The synthesis of each of these hormones is tightly linked to the production of neurophysin,73 a larger protein whose function is poorly understood. Both hormones are synthesized outside of the hypothalamus in various tissues, and it is plausible they have an autocrine or a paracrine function.

Oxytocin

Oxytocin is a cyclic nonapeptide, with a disulfide bridge connecting amino acid residues 1 and 6. As a posttranslational modification, the C-terminus is amidated. Oxytocin has a critical role in lactation74 and plays a major role in labor and parturition.75 Oxytocin is also unique because its secretion responds to a positive feedback loop, meaning that circulating levels of oxytocin actually perpetuate further hormone secretion, instead of suppressing further hormone secretion as is the case with most anterior pituitary hormones. In this way, uterine contractions propagate oxytocin release, which causes further uterine contractions, which cause further oxytocin release until parturition occurs. Synthetic oxytocin, Pitocin, is used in obstetrics to induce labor. Recent studies have linked oxytocin to a variety of biosocial behaviors, including maternal nurturing and mother–infant bonding.76 In addition to its reproductive and prosocial effects, oxytocin has been shown to have effects on pituitary, renal,

cardiac, metabolic, and immune function.

Vasopressin

Structurally similar to oxytocin, vasopressin is a cyclic nonapeptide with an identical disulfide bridge; it differs from oxytocin by only two amino acids. Vasopressin's major action is to regulate renal free water excretion; it therefore has a central role in water balance. The vasopressin receptors in the kidney (V2) are concentrated in the renal collecting tubules and the ascending limb of the loop of Henle. They are coupled to adenylate cyclase, and once activated, they induce insertion of aquaporin-2, a water channel protein, into the tubular luminal membrane.77 Vasopressin is also a potent pressor agent and affects blood clotting78 by promoting factor VIII release from hepatocytes and von Willebrand factor release from the endothelium. These vasopressin receptors (V1a and V1b) are coupled to phospholipase C. Hypothalamic osmoreceptors and vascular baroreceptors regulate the release of vasopressin from the posterior pituitary. The osmoreceptors are extremely sensitive to even small changes in plasma osmolality, with an average osmotic threshold for vasopressin release in humans of 284 mOsm/kg. As plasma osmolality increases, vasopressin secretion increases. The consequence is a reduction in renal free water clearance, a lowering of plasma osmolality, and a return to homeostasis. The vascular baroreceptors (located in the left atrium, aortic arch, and carotid arteries) initiate vasopressin release in response to a fall in blood volume or blood pressure. A 5% to 10% fall in arterial blood pressure in normal humans will trigger vasopressin release; however, in contrast to an osmotic stimulus, the vasopressin response to a baroreceptor-induced stimulus is exponential. In fact, baroreceptor-induced vasopressin secretion will override the normal osmotic suppression of vasopressin secretion. Diabetes insipidus (DI), characterized by copious production of urine (polyuria) and intense thirst (polydipsia), is a consequence of vasopressin deficiency. However, total vasopressin deficiency is unusual, and the typical patient presents with a partial deficiency. The causes of hypothalamic DI include apparent autoimmunity to vasopressin-secreting neurons, trauma, diseases affecting pituitary stalk function, and various central nervous system or pituitary tumors. A sizable percentage of patients (up to 30%) will have idiopathic DI.79 Depending on the degree of vasopressin deficiency, diagnosis of DI can be readily apparent or may require extensive investigation. Documenting an inappropriately low vasopressin level with an elevated plasma osmolality would yield a reasonably secure diagnosis of DI. In less obvious cases, the patient may require a water deprivation test in which fluids are withheld from the patient and serial determinations of serum and urine osmolality are performed in an attempt to document the patient's ability to conserve water. Under selected circumstances, a health care provider may simply offer a therapeutic trial of vasopressin or a synthetic analog such as desmopressin (dDAVP) and assess the patient's response. In this circumstance, amelioration of both polyuria and polydipsia would be considered a positive response, and a presumptive diagnosis of DI is made. However, if the patient has primary polydipsia (also known as compulsive water drinking), a profound hypo-osmolar state (water intoxication) can ensue due to the continued ingestion of copious amounts of fluids and a reduced renal excretion of free water. This scenario illustrates the importance of carefully evaluating each patient prior to therapy. Vasopressin excess may also occur and is much more difficult to treat. Since excess vasopressin leads to the pathologic retention of free water, restricting free water intake to small amounts each day has been a historical cornerstone of treatment. Recently, conivaptan and tolvaptan,80 vasopressin V2 receptor antagonists, have been approved for the management of euvolemic hyponatremia due to vasopressin excess.

For additional student resources, please visit http://thepoint.lww.com
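The first-pass reasoning of the DI screen described above lends itself to a short worked example. The sketch below is illustrative only: the 295 mOsm/kg cutoff for "elevated" plasma osmolality and the urine-versus-plasma osmolality comparison are assumptions for demonstration (only the approximately 284 mOsm/kg osmotic threshold for vasopressin release is quoted above), and cutoffs in practice are laboratory and context dependent.

```python
def screen_for_di(plasma_osm, urine_osm, vasopressin_low):
    """Illustrative first-pass interpretation of a polyuria workup.

    plasma_osm, urine_osm: osmolality in mOsm/kg
    vasopressin_low: True if the vasopressin level is inappropriately low
    """
    # Assumed demonstration cutoff; the chapter states only that an elevated
    # plasma osmolality with a low vasopressin level supports DI.
    PLASMA_OSM_HIGH = 295

    if plasma_osm > PLASMA_OSM_HIGH and vasopressin_low:
        return "Consistent with diabetes insipidus (DI)"
    if plasma_osm < 284 and urine_osm < plasma_osm:
        # Dilute plasma with dilute urine suggests excessive water intake
        return "Consider primary polydipsia; avoid empiric dDAVP"
    return "Indeterminate; consider a supervised water deprivation test"


print(screen_for_di(plasma_osm=302, urine_osm=150, vasopressin_low=True))
```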

questions 1. Open-loop negative feedback refers to the phenomenon of a. Negative feedback with a modifiable set point b. Blood flow in the hypothalamic–hypophyseal portal system c. Blood flow to the pituitary via dural-penetrating vessels d. Negative feedback involving an unvarying, fixed set point 2. The specific feedback effector for FSH is a. Inhibin b. Activin


c. Progesterone d. Estradiol 3. Which anterior pituitary hormone lacks a stimulatory hypophysiotropic hormone? a. Prolactin b. Growth hormone c. Vasopressin d. ACTH 4. The definitive suppression test to prove autonomous production of growth hormone is a. Oral glucose loading b. Somatostatin infusion c. Estrogen priming d. Dexamethasone suppression 5. Which of the following is influenced by growth hormone? a. All of these b. IGF-I c. IGFBP-III d. Lipolysis 6. What statement concerning vasopressin secretion is NOT true? a. All of these. b. Vasopressin secretion is closely tied to plasma osmolality. c. Changes in blood volume also alter vasopressin secretion. d. A reduction in effective blood volume overrides the effects of plasma osmolality in regulating vasopressin secretion. 7. What are the long-term sequelae of untreated or partially treated acromegaly? a. An increased risk of colon and lung cancer b. A reduced risk of heart disease c. Enhanced longevity d. Increased muscle strength 8. TRH stimulates the secretion of a. Prolactin and TSH b. Prolactin

c. Growth hormone d. TSH 9. Estrogen influences the secretion of which of the following hormones? a. All of these b. Growth hormone c. Prolactin d. Luteinizing hormone 10. What is the difference between a tropic hormone and a direct effector hormone? a. Tropic and direct effector hormones are both similar in that both act directly on peripheral tissue. b. Tropic and direct effector hormones are both similar in that both act directly on another endocrine gland. c. Tropic hormones act on peripheral tissue, while direct effector hormones act on endocrine glands. d. Tropic hormones act on endocrine glands, while direct effector hormones act on peripheral tissues. 11. A deficiency in vasopressin can lead to which of the following? a. Euvolemic hypokalemia b. Euvolemic hyponatremia c. Diabetes insipidus d. Primary hypothyroidism 12. Which of the following hormones stimulate prolactin secretion? a. Dopamine b. GnRH c. TRH d. TSH 13. Which hormone most directly stimulates testosterone secretion? a. LH b. FSH c. GnRH d. TRH 14. Which of the following is NOT likely to be present in an “atypical pituitary

tumor” as defined by the World Health Organization (WHO)? a. Invasion into surrounding structures such as the cavernous sinus b. MIB-1 proliferative index greater than 3% c. Excessive p53 immunoreactivity d. Decreased mitotic activity 15. Concerning secretion of growth hormone, which of the following is NOT true? a. Secretion is stimulated by GHRH. b. Secretion is pulsatile, occurring usually 2 to 3 times daily. c. The most reproducible surge is at the onset of sleep. d. It is secreted from somatotrophs that constitute over one-third of normal pituitary weight 16. Familial acromegaly is most likely caused by a mutation in which gene? a. GNAS b. AIP c. SS receptor type 5 d. GHRH gene 17. Cosecretion of which hormone is most commonly seen with acromegaly? a. Prolactin b. TSH c. ACTH d. FSH 18. Which hormone is not secreted from the anterior pituitary? a. Prolactin b. Oxytocin c. FSH d. TSH 19. Which of the following is most suggestive of a diagnosis of diabetes insipidus? a. Low sodium in a patient who reports polydipsia and polyuria b. Persistent complaint of polydipsia and polyuria in a patient without diabetes mellitus c. Elevated serum osmolarity in the setting of decreased urine osmolarity,

in the presence of hypernatremia d. Hyponatremia after a therapeutic trial of dDAVP 20. Vasopressin release is regulated by which of the following? a. Hypothalamic osmoreceptors b. Vascular baroreceptors c. V2 receptors in the kidney d. a and b 21. Replacement of thyroxine is potentially dangerous in the setting of which other hormonal abnormality? a. GH deficiency b. Hyperprolactinemia c. Perimenopausal state d. ACTH deficiency 22. Which of the following is NOT generally considered to be a function of oxytocin? a. Uterine contraction during labor b. Milk “letdown” for breast-feeding c. Enhancement of insulin sensitivity in smooth muscle d. Enhancement of mother–infant bonding 23. Which clinical presentation is consistent with Kallmann's syndrome? a. Hypothyroidism and intermittent severe weakness or paralysis b. ACTH deficiency together with GH excess c. Hyperprolactinemia in the setting of pregnancy d. Hypogonadism with the absence of smell 24. Which drug may cause panhypopituitarism? a. Ipilimumab b. Risperdal c. Pitocin d. Cabergoline 25. Which of the following is unlikely to be a cause of hyperprolactinemia? a. Metoclopramide b. Primary hypothyroidism

c. Primary hypogonadism d. Pituitary stalk interruption

references 1. Kelberman D, Rizzoti K, Lovell-Badge R, et al. Genetic regulation of pituitary gland development in human and mouse. Endocr Rev. 2009;30(7):790–829. 2. Adler SM, Wartofsky L. The nonthyroidal illness syndrome. Endocrinol Metab Clin North Am. 2007;36(3):657–672. 3. Veldhuis JD, Keenan DM, Pincus SM. Regulation of complex pulsatile and rhythmic neuroendocrine systems: the male gonadal axis as a prototype. Prog Brain Res. 2010;181:79–110. 4. Nunemaker CS, Satin LS. Episodic hormone secretion: a comparison of the basis of pulsatile secretion of insulin and GnRH. Endocrine. 2014;47:49–63. 5. Dijk DJ, Duffy JF, Silva EJ, et al. Amplitude reduction and phase shifts of melatonin, cortisol and other circadian rhythms after a gradual advance of sleep and light exposure in humans. PLoS One. 2012;7(2):e30037. 6. Spiga F, Waite EJ, Liu Y, et al. ACTH-dependent ultradian rhythm of corticosterone secretion. Endocrinology. 2011;152:1448–1457. 7. Tonsfeldt KJ, Chappell PE. Clocks on top: the role of the circadian clock in the hypothalamic and pituitary regulation of endocrine physiology. Mol Cell Endocrinol. 2012;349:3–12. 8. Valassi E, Biller BM, Klibanski A, et al. Clinical features of non-pituitary sellar lesions in a large surgical series. Clin Endocrinol (Oxf). 2010;73:798–807. 9. Couldwell WT, Altay T, Krisht K, et al. Sellar and parasellar metastatic tumors. Int J Surg Oncol. 2012;2012:647256. 10.Zada G, Woodmansee WW, Ramkissoon S, et al. Atypical pituitary adenomas: incidence, clinical characteristics, and implications. J Neurosurg. 2011;114:336–344. 11. Marrelli D, Pinto E, Nari A, et al. Mib-1 proliferation index is an independent predictor of lymph node metastasis in invasive breast cancer: a prospective study on 675 patients. Oncol Rep. 2006;15(2):425–429. 12.Córdoba-Chacón J, Gahete MD, Castaño JP, et al. Somatostatin and its receptors contribute in a tissue specific manner to the sex-dependent metabolic (fed/fasting) control of growth hormone axis in mice. Am J Physiol Endocrinol Metab. 2011;300:E46–E54. 13.Ribeiro-Oliveira A, Barkan AL. Growth hormone pulsatility and its impact on growth and metabolism in humans. Growth Hormone Relat Dis Ther Contemp Endocrinol. 2011;(pt 1):33–56. 14.Pradhan G, Samson SL, Sun X. Ghrelin: much more than a hunger hormone. Curr Opin Clin Nutr Metab Care. 2013;16:619–624. 15.Chia DJ. Minireview: mechanisms of growth hormone- mediated gene regulation. Mol Endocrinol. 2014;28:1012–1025. 16.Møller N, Jørgensen JO. Effects of growth hormone on glucose, lipid, and protein metabolism in human subjects. Endocr Rev. 2009;30(2):152–177. 17.Cambiaso P, Schiaffini R, Pontrelli G, et al. Nocturnal hypoglycemia in ACTH and GH deficient children: role of continuous glucose monitoring. Clin Endocrinol (Oxf). 2013;79:232–237. 18.Salmon WD, Daughaday WH. A hormonally controlled serum factor which stimulates sulfate incorporation by cartilage in vitro. J Lab Clin Med. 1957;49:825–826. 19.Daughaday WH, Hall K, Raben MS, et al. Somatomedin: proposed designation for sulphation factor. Nature. 1972;235:107. 20.Klapper DG, Svoboda ME, Van Wyk JJ. Sequence analysis of somatomedin-C: confirmation of identity with insulin-like growth factor I. Endocrinology. 1983;112:2215–2217. 21.Rinderknecht E, Humbel RE. The amino acid sequence of human insulin-like growth factor I and its

structural homology with proinsulin. J Biol Chem. 1978;253:2769–2776. 22.Melmed S, Colao A, Barkan A, et al. Guidelines for acromegaly management: an update. J Clin Endocrinol Metab. 2009;94(5):1509–1517. 23.Khowaja A, Johnson-Rabbett B, Bantle J, et al. Hypoglycemia mediated by paraneoplastic production of insulin-like growth factor-2 from a malignant renal solitary fibrous tumor—clinical case and literature review. BMC Endocr Disord. 2014;14:49. 24.Li G, Barrett EJ, Ko S-H, et al. Insulin and insulin-like growth factor-I receptors differentially mediate insulin-stimulated adhesion molecule production by endothelial cells. Endocrinology. 2009;150:3475–3482. 25.Forbes BE, McCarthy P, Norton RS. Insulin-like growth factor binding proteins: a structural perspective. Front Endocrinol (Lausanne). 2012;3:38. 26.Jogie-Brahim S, Feldman D, Oh Y. Unraveling insulin-like growth factor binding protein-3 actions in human disease. Endocr Rev. 2009;30(5):417–437. 27.Torng, P-L, Lin C-W, Chan MWY, et al. Promoter methylation of IGFBP-3 and p53 expression in ovarian endometrioid carcinoma. Mol Cancer. 2009;8:120. 28.Jing M, Pollack MN, Giovannucci E, et al. Prospective study of colorectal cancer risk in men and plasma levels of insulin-like growth factor (IGF)-I and IGF-binding protein-3. J Natl Cancer Inst. 1999;91(7):620–625. 29.Kwan AYM, Hartman ML. IGF-I measurements in the diagnosis of adult growth hormone deficiency. Pituitary. 2007;10(2):151–157. 30.Bancos I, Algeciras-Schimnich A, Grebe S, et al. Evaluation of variables influencing the measurement of insulin-like growth factor-I. Endocr Pract. 2014;20:421–426. 31.Clemmons, D.R..Consensus statement on the standardization and evaluation of growth hormone and insulin-like growth factor assays. Clin Chem. 2011;57:555–559. 32.Ben-Shlomo A, Melmed S. Acromegaly. Endocrinol Metab Clin North Am. 2008;37(1):101–122. 33.Melmed S. Medical progress: acromegaly. N Engl J Med. 2006;355(24):2558–2573. 34.Molitch ME, Clemmons DR, Malozowski S, et al. Evaluation and treatment of adult growth hormone deficiency: an Endocrine Society clinical practice guideline. J Clin Endocrinol Metab. 2011;96(6):1587–1609. 35.Bidlingmaier M, Freda PU. Measurement of human growth hormone by immunoassays: current status, unsolved problems and clinical consequences. Growth Horm IGF Res. 2010;20(1):19–25. 36.Faglia G, Arosio M, Bazzoni N. Ectopic acromegaly. Endocrinol Metab Clin North Am. 1992;21:575–596. 37.Ezzat S, Ezrin C, Yamashita S, et al. Recurrent acromegaly resulting from ectopic growth hormone gene expression by a metastatic pancreatic tumor. Cancer. 1993;71:66–70. 38.Gola M, Doga M, Bonadonna S, et al. Neuroendocrine tumors secreting growth hormone-releasing hormone: pathophysiological and clinical aspects. Pituitary. 2006;9(3):221–229. 39.Chahal HS, Stals K, Unterländer M, et al. AIP mutation in pituitary adenomas in the 18th century and today. N Engl J Med. 2011;364:43–50. 40.Ciganoka D, Balcere I, Kapa I, et al. Identification of somatostatin receptor type 5 gene polymorphisms associated with acromegaly. Eur J Endocrinol. 2011;165:517–525. 41.de Herder WW. Acromegaly and gigantism in the medical literature. Case descriptions in the era before and the early years after the initial publication of Pierre Marie (1886). Pituitary. 2009;12(3):236–244. 42.Reddy R, Hope S, Wass J. Acromegaly. BMJ. 2010;341:c4189. 43.Killinger Z, Rovenský J. Arthropathy in acromegaly. Rheum Dis Clin North Am. 2010;36(4):713–720. 44.McNab TL, Khandwala HM. 
Acromegaly as an endocrine form of myopathy: case report and review of literature. Endocr Pract. 2005;11(1):18–22. 45.Lopes MB. Growth hormone-secreting adenomas: pathology and cell biology. Neurosurg Focus. 2010;29(4):E2. 46.Johnston PC, Hamrahian AH, Prayson RA, et al. Thyrotoxicosis with the absence of clinical features

of acromegaly in a TSH- and GH-secreting, invasive pituitary macroadenoma. Endocrinol Diabetes Metab Case Rep. 2015;2015:140070. 47.Del Porto LA, Liubinas SV, Kaye AH. Treatment of persistent and recurrent acromegaly. J Clin Neurosci. 2011;18(2):181–190. 48.Rowland NC, Aghi MK. Radiation treatment strategies for acromegaly. Neurosurg Focus. 2010;29(4):E12. 49.Sherlock M, Woods C, Sheppard MC. Medical therapy in acromegaly. Nat Rev Endocrinol. 2011;7(5):291–300. 50.Brue T, Castinetti F, Lundgren F, et al. Which patients with acromegaly are treated with pegvisomant? An overview of methodology and baseline data in ACROSTUDY. Eur J Endocrinol. 2009;161(suppl 1):S11–S17. 51.Mullis PE. Genetics of isolated growth hormone deficiency. J Clin Res Pediatr Endocrinol. 2010;2:52–62. 52.Bartke A. Growth hormone and aging: a challenging controversy. Clin Interv Aging. 2008;3(4):659–665. 53.Allen DB, Cuttler L. Clinical practice. Short stature in childhood—challenges and choices. N Engl J Med. 2013;368:1220–1228. 54.Reed ML, Merriam GR, Kargi AY. Adult growth hormone deficiency—benefits, side effects, and risks of growth hormone replacement. Front Endocrinol (Lausanne) 2013;4:64. 55.Friedrich N, Schneider H, Dörr M, et al. All-cause mortality and serum insulin-like growth factor I in primary care patients. Growth Horm IGF Res. 2011;21(2):102–106. 56.Woodhouse LJ, Mukerjee A, Shalet SM, et al. The influence of growth hormone status on physical impairments, functional limitations, and health-related quality of life in adults. Endocr Rev. 2006;27(3):287–317. 57.Svensson J, Bengtsson BA. Safety aspects of GH replacement. Eur J Endocrinol. 2009;161(suppl 1):S65-S74. 58.Erotokritou-Mulligan I, Holt RIG, Sönksen PH. Growth hormone doping: a review. J Sports Med. 2011;2:99–111. 59.Hekimsoy Z, Kafesçiler S, Güçlü F, et al. The prevalence of hyperprolactinemia in overt and subclinical hypothyroidism. Endocr J. 2010;57(12):1011–1015. 60.Shibli-Rahhal A, Schlechte J. Hyperprolactinemia and infertility. Endocrinol Metab Clin North Am. 2011;40(4):837–846. 61.Shibli-Rahhal A, Schlechte J. The effects of hyperprolactinemia on bone and fat. Pituitary. 2009;12(2):96–104. 62.Chahal J, Schlechte J. Hyperprolactinemia. Pituitary. 2008;11(2):141–146. 63.Gibney J, Smith TP, McKenna TJ. The impact on clinical practice of routine screening for macroprolactin. J Clin Endocrinol Metab. 2005;90:3927–3932. 64.Melmed S, Montori VM, Schlechte JA, et al.; Endocrine Society. Diagnosis and treatment of hyperprolactinemia: an Endocrine Society clinical practice guideline. J Clin Endocrinol Metab. 2011;96(2):273–288. 65.Dekkers OM, Lagro J, Burman P, et al. Recurrence of hyperprolactinemia after withdrawal of dopamine agonists: systematic review and meta-analysis. J Clin Endocrinol Metab. 2010;95(1):43–51. 66.Delgado V, Biermasz NR, van Thiel SW, et al. Changes in heart valve structure and function in patients treated with dopamine agonists for prolactinomas, a 2-year follow-up study. Clin Endocrinol (Oxf). 2012;77(1):99–105. 67.Nawar RN, Abdelmannan D, Selman WR, et al. Pituitary tumor apoplexy: a review. J Intensive Care Med. 2008;23:75–90. 68.Caturegli P, Lupi I, Landek-Salgado M, et al. Pituitary autoimmunity: 30 years later. Autoimmun Rev. 2008;7:631–637. 69.Hodi FS, O'Day SJ, McDermott DF, et al. Improved survival with ipilimumab in patients with metastatic melanoma. N Engl J Med. 2010;363:711–723.

70.Romero CJ, Nesi-Franca S, Radovick S. Molecular basis of hypopituitarism. Trends Endocrinol Metab. 2009;20:506–516. 71.Fechner A, Fong S, McGovern P. A review of Kallmann syndrome: genetics, pathophysiology, and clinical management. Obstet Gynecol Surv. 2008;63(3):189–194. 72.Du X, Yuan Q, Yao Y, et al. Hypopituitarism and successful pregnancy. Int J Clin Exp Med. 2014;7:4660–4665. 73.Elphick MR. NG peptides: a novel family of neurophysin-associated neuropeptides. Gene. 2010;458(1–2):20–26. 74.Lee H-J, Macbeth AH, Pagani J, et al. Oxytocin: the great facilitator OF LIFE. Prog Neurobiol. 2009;88:127–151. 75.Vrachnis N, Iliodromiti Z. The oxytocin-oxytocin receptor system and its antagonists as tocolytic agents. Int J Endocrinol. 2011;2011:350546. 76.Ross HE, Young LJ. Oxytocin and neural mechanisms regulating social cognition and affiliative behavior. Front Neuroendocrinol. 2009;30:534–547. 77.Wilson JLL, Miranda CA, Knepper MA. Vasopressin and the regulation of aquaporin-2. Clin Exp Nephrol. 2013;17:751–764. 78.Ozier Y, Bellamy L. Pharmacological agents: antifibrinolytics and desmopressin. Best Pract Res Clin Anaesthesiol. 2010;24(1):107–119. 79.Jane JA Jr, Vance ML, Laws ER. Neurogenic diabetes insipidus. Pituitary. 2006;9(4):327–329. 80.Cassagnol M, Shogbon AO, Saad M. The therapeutic use of vaptans for the treatment of dilutional hyponatremia. J Pharm Pract. 2011;24(4):391–399. 81.Clemmons DR. Value of insulin-like growth factor system markers in the assessment of growth hormone status. Endocrinol Metab Clin North Am. 2007;36(1):109–129. 82.Murad MH, Elamin MB, Malaga G, et al. The accuracy of diagnostic tests for GH deficiency in adults: a systematic review and meta-analysis. Eur J Endocrinol. 2011;165:841–849.

21 Adrenal Function VISHNU SUNDARESH and DEEPIKA S. REDDY

Chapter Outline The Adrenal Gland: An Overview Embryology and Anatomy

The Adrenal Cortex by Zone Cortex Steroidogenesis Congenital Adrenal Hyperplasia

Primary Aldosteronism Overview Etiology Diagnosis Treatment Isolated Hypoaldosteronism Adrenal Cortical Physiology

Adrenal Insufficiency Overview Symptoms Diagnosis Treatment

Hypercortisolism (Cushing's Syndrome) Overview Etiology Diagnosis Treatment

Adrenal Androgens Androgen Excess Diagnosis Treatment

The Adrenal Medulla Embryology Biosynthesis, Storage, and Secretion of Catecholamines Metabolism and Excretion of Catecholamines

Pheochromocytoma and Paraganglioma Overview Epidemiology Clinical Presentation Diagnosis Interfering Medications Biochemical Testing Plasma-Free Metanephrines 24-Hour Urine Fractionated Metanephrines and Catecholamines Normal Results Case Detection 24-Hour Urine Fractionated Metanephrines and Catecholamines Plasma-Fractionated Metanephrines Radiographic Localization Treatment Outcome, Prognosis, and Follow-up Genetic Testing

Adrenal Incidentaloma
Case Studies
Questions
References

Chapter Objectives

Upon completion of this chapter, the clinical laboratorian should be able to do the following:
Explain how the adrenal gland functions to maintain blood pressure, potassium, and glucose homeostasis.
Describe steroid biosynthesis, regulation, and actions according to anatomic location within the adrenal gland.
Discuss the pathophysiology of adrenal cortex disorders, namely, Cushing's syndrome and Addison's disease.
List the appropriate laboratory tests to differentially diagnose primary and secondary Cushing's syndrome and Addison's disease.
Differentiate the adrenal enzyme deficiencies and their blocking pathways in establishing a diagnosis.
Describe the synthesis, storage, and metabolism of catecholamines.
State the most useful measurements in supporting the diagnosis of pheochromocytoma.
List the clinical findings associated with hypertension that suggest an underlying adrenal etiology is causing high blood pressure.

Key Terms

Adrenocorticotropic hormone (ACTH)
Aldosterone
Angiotensin II (AT II)
Atrial natriuretic peptide (ANP)
Cardiovascular disease
Corticotropin-releasing hormone (CRH)
Dehydroepiandrosterone (DHEA)
Dehydroepiandrosterone sulfate (DHEA-S)
Dopamine (DA)
5-Dihydrotestosterone
Epinephrine (EPI)
Homovanillic acid
Hypertension
Monoamine oxidase (MAO)
Norepinephrine (NE)
Phenylethanolamine N-methyltransferase
Pheochromocytoma–Paraganglioma (PPGL)
Primary Aldosteronism (PA)
Vasoactive inhibitory peptide
Vesicular monoamine transporters (VMATs)
Zona fasciculata (F-zone)
Zona glomerulosa (G-zone)
Zona reticularis (R-zone)

For additional student resources, please visit http://thepoint.lww.com

THE ADRENAL GLAND: AN OVERVIEW

The adrenal gland is a multifunctional organ that produces steroid hormones and neuropeptides essential for life. Despite the complex actions of adrenal hormones, most pathological conditions of the adrenal gland are manifested by their impact on blood pressure, electrolyte balance, and androgen excess.1 An adrenal etiology should be considered in the differential diagnosis when a patient presents with (i) hypertension, hypokalemia, and metabolic alkalosis (suspect hyperaldosteronism); (ii) hypertension, spells of anxiety, palpitations, dizziness, and diaphoresis (pheochromocytoma); (iii) hypertension, rapid unexplained weight gain, red/purple stretch marks, and proximal muscle weakness (Cushing's syndrome); (iv) inappropriate hirsutism/virilization and inability to conceive (congenital adrenal hyperplasia); and (v) loss of appetite, unintentional weight loss, and pigmented skin (primary adrenal insufficiency). In clinical practice, patients often present with underproduction or overproduction of one or more adrenal hormones. Adrenal nodules are also discovered incidentally on abdominal imaging performed for other reasons. Hypofunction is treated with hormone replacement, and hyperfunction is treated with pharmacologic suppression and/or surgery.

EMBRYOLOGY AND ANATOMY

The adrenal gland is composed of two embryologically distinct tissues—the outer adrenal cortex and the inner adrenal medulla. The cortex is derived from mesenchymal cells located near the urogenital ridge that differentiate into three structurally and functionally distinct zones (Fig. 21.1). The medulla arises from neural crest cells that invade the cortex during the 2nd month of fetal development. By adulthood, the medulla contributes 10% of total adrenal weight.

FIGURE 21.1 Adrenal gland by layer.

Adult adrenal glands are shaped like pyramids, located superior and medial to the upper pole of the kidneys in the retroperitoneal space (hence, also known as suprarenal glands). On cross section, both regions remain distinct; the cortex appears yellow, while the medulla is dark mahogany.2 Adrenal arterial supply is symmetric. Small arterioles branch to form a dense subcapsular plexus that drains into the sinusoidal plexus of the cortex. There is no direct arterial blood supply to the middle and inner zones. In contrast, venous drainage from the central vein displays laterality. After crossing the medulla, the right adrenal vein empties into the inferior vena cava, and the left adrenal vein drains into the left renal vein. There is a separate capillary sinusoidal network from the medullary arterioles that also drains into the central vein and limits the exposure of cortical cells to medullary venous blood. Glucocorticoids from the cortex are carried directly to the adrenal medulla via the portal system, where they stimulate enzymatic production of epinephrine (EPI) (Fig. 21.15). Sympathetic and parasympathetic axons reach the medulla through the cortex. En route, these axons release neurotransmitters (e.g., catecholamines and neuropeptide Y) that modulate cortex blood flow, cell growth, and function. Medullary projections into the cortex have been found to contain cells that also synthesize and release neuropeptides, such as vasoactive inhibitory peptide (VIP), adrenomedullin, and atrial natriuretic peptide (ANP), and potentially influence cortex function.

THE ADRENAL CORTEX BY ZONE

The major adrenal cortical hormones, aldosterone, cortisol, and dehydroepiandrosterone sulfate (DHEA-S), are uniquely synthesized from a common precursor, cholesterol, by cells located in one of three functionally distinct zonal layers of the adrenal cortex. These zonal layers are the zona glomerulosa, zona fasciculata, and zona reticularis, respectively (Fig. 21.1). Zona glomerulosa (G-zone) cells (outer 10%) synthesize aldosterone, a mineralocorticoid critical for sodium retention, potassium excretion, acid–base homeostasis, and regulation of blood pressure. They have low cytoplasmic-to-nuclear ratios and small nuclei with dense chromatin and intermediate lipid inclusions. Zona fasciculata (F-zone) cells (middle 75%) synthesize glucocorticoids, such as cortisol and cortisone, critical for glucose homeostasis and blood pressure. Fasciculata cells are cords of clear cells, with a high cytoplasmic-to-nuclear ratio and lipid-laden, "foamy" cytoplasm. The fasciculata cells also generate androgen precursors such as dehydroepiandrosterone (DHEA), which is sulfated in the innermost zona reticularis (R-zone). Subcapsular adrenal cortex remnants can regenerate into fasciculate adrenals, metastasize, and survive in ectopic locations such as the liver, gallbladder wall, broad ligaments, celiac plexus, ovaries, scrotum, and cranium. Zona reticularis cells (inner 15%) sulfate DHEA to DHEA-S, which is the main adrenal androgen. The zone is sharply demarcated with lipid-deficient cords of irregular, dense cells with lipofuscin deposits. Adrenal cell types are presumed to arise from stem cells. A proposed tissue layer between the zona glomerulosa and fasciculata may serve as a site for progenitor cells to regenerate zonal cells.3

Cortex Steroidogenesis

Control of steroid hormone biosynthesis is complex, involving adrenocorticotropic hormone (ACTH) and angiotensin II (AT II). It occurs via substrate availability, enzyme activity, and inhibitory feedback loops that are layer specific. All adrenal steroids are derived by sequential enzymatic conversion of a common substrate, cholesterol. Adrenal parenchymal cells accumulate and store circulating low-density lipoproteins (LDLs). The adrenal gland can also synthesize additional cholesterol from acetyl-CoA, ensuring that adrenal steroidogenesis remains normal in patients with variable lipid disorders and in patients on lipid-lowering agents. Only free cholesterol can enter steroidogenic pathways in response to ACTH. The availability of free intracellular cholesterol is metabolically regulated by ACTH (stimulatory) and LDL (inhibitory) through multiple mechanisms. Corticotropin-releasing hormone (CRH) is secreted from the hypothalamus in response to circadian signals, low serum cortisol levels, and stress. CRH stimulates release of stored ACTH from the anterior pituitary gland, which stimulates transport of free cholesterol into adrenal mitochondria, initiating steroid production. Conversion of cholesterol to pregnenolone is a rate-limiting step in steroid biosynthesis: six carbon atoms are removed from cholesterol by a cytochrome P450 (CYP450) enzyme present in the mitochondrial membrane (Fig. 21.2). Newly synthesized pregnenolone is then returned to the cytosol for subsequent zonal conversion by microsomal enzymes in each layer, to glucocorticoids by F-zone enzymes and/or to androgens by enzymes in the R-zone (Fig. 21.3).

FIGURE 21.2 Conversion of cholesterol to pregnenolone. ACTH, adrenocorticotropic hormone; CRH, corticotropin-releasing hormone.

FIGURE 21.3 Adrenocortical hormone synthesis by zone.

High serum glucocorticoids suppress release of CRH and ACTH via a negative feedback mechanism. Cortisol is the primary feedback regulator of ACTH-stimulated hormone production in the adrenal cortex. ACTH generally does not impact G-zone aldosterone synthesis, although cortisol has mineralocorticoid action. Decreased activity of any enzyme required for biosynthesis can occur as an acquired or inherited (autosomal recessive) trait. Defects that decrease the production of cortisol cause increases in ACTH and CRH secretion in an attempt to stimulate cortisol levels and lead to adrenal hyperplasia or overproduction of androgens, depending on the affected enzyme. Evaluation of adrenal function requires measuring relevant adrenal hormones, metabolites, and regulatory secretagogues. Diagnosis is based on the correlation of clinical and laboratory findings.4

Congenital Adrenal Hyperplasia

Congenital adrenal hyperplasia refers to a group of clinical entities that arise from absent or diminished activity of enzymes involved in steroidogenesis. The mineralocorticoid, glucocorticoid, and androgen production pathways can be affected to varying degrees based on the enzyme affected. Blocks in one pathway result in upstream substrate buildup and potential upregulation of another pathway. The most common enzyme affected is 21-hydroxylase. Deficiency of this enzyme results in decreased glucocorticoid production, in some cases decreased mineralocorticoid production, and increased adrenal androgen production.5 A very high serum concentration of 17-hydroxyprogesterone, the normal substrate for 21-hydroxylase, is diagnostic of classic 21-hydroxylase deficiency. The "classic" presentation is seen in infants, who present with features such as failure to thrive and low blood pressure. These infants need both glucocorticoid and mineralocorticoid replacement. A second "nonclassic" form is seen in adults. Women with this form present in their reproductive years with complaints of hirsutism, menstrual irregularities, and infertility. They may need steroids during pregnancy. The different enzyme defects, along with their clinical and biochemical abnormalities, are summarized in Figure 21.4.
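As a rough illustration of how the 17-hydroxyprogesterone (17-OHP) result described above might be flagged, the sketch below uses a hypothetical, laboratory-dependent cutoff; the chapter states only that a "very high" 17-OHP is diagnostic of classic 21-hydroxylase deficiency, so the numeric threshold here is an assumption for demonstration, not a value from the text.

```python
def flag_17ohp(serum_17ohp_ng_dl, cutoff_ng_dl=1000):
    """Flag a markedly elevated 17-OHP concentration.

    cutoff_ng_dl is a hypothetical demonstration value; real decision limits
    are laboratory-, age-, and assay-specific and often require confirmatory
    testing (e.g., cosyntropin-stimulated 17-OHP).
    """
    if serum_17ohp_ng_dl >= cutoff_ng_dl:
        return "Markedly elevated 17-OHP: consistent with classic 21-hydroxylase deficiency"
    return "Not markedly elevated: this value alone does not support classic 21-hydroxylase deficiency"


print(flag_17ohp(2500))
```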

FIGURE 21.4 Congenital adrenal hyperplasia syndromes.

PRIMARY ALDOSTERONISM Overview

The clinical entity in which excessive secretion of aldosterone cannot be suppressed with salt or volume replacement is termed primary aldosteronism (PA). PA results in hypertension, hypokalemia, metabolic alkalosis, and an increased risk of vascular disease such as stroke, as depicted in Figure 21.5. It is estimated that 5% to 10% of patients with hypertension have PA.6 Endocrine Society guidelines on PA suggest that hypokalemia is found in only 37% of patients with PA.7 The guidelines recommend that more aggressive screening may be appropriate in those with SBP greater than 160 mm Hg or DBP greater than 100 mm Hg, drug-resistant hypertension, hypokalemia associated with hypertension, presence of an adrenal mass, family history of early hypertension or stroke, and first-degree relatives of patients with PA.

FIGURE 21.5 G-zone function and pathology. HTN, hypertension.

Etiology

The most common causes of PA are:
Aldosterone-producing adrenal adenoma
Unilateral or bilateral adrenal hyperplasia

Other causes are as follows:
Familial hyperaldosteronism such as glucocorticoid remediable aldosteronism (GRA)
Adrenocortical carcinomas that secrete aldosterone
Ectopic aldosterone secretion

Diagnosis

Plasma aldosterone concentration (PAC) and plasma renin activity (PRA) are used in the diagnosis of PA.8 The presence of both a PAC greater than 15 ng/dL and a PAC/PRA ratio of 30 or greater is suggestive of PA. The 2008 Endocrine Society guidelines recommend that the labs be checked in the morning after the patient has been up for at least 2 hours. The patient should be seated for 5 to 15 minutes before the draw, and sodium intake should be unrestricted. Most antihypertensive medications do not need to be stopped prior to testing, except mineralocorticoid antagonists. If clinical suspicion is high in a patient on an ACE-I and testing is negative, the ACE-I should be discontinued and testing repeated. If the initial screening is suggestive of PA, confirmatory tests include aldosterone measurement following either oral salt loading or IV saline infusion. Oral salt loading requires oral consumption of about 5,000 mg NaCl per day. On day 3, urine sodium, creatinine, and aldosterone are measured in a 24-hour urine collection. The presence of urine sodium greater than 200 mEq and urine aldosterone greater than 12 μg/24 hours is suggestive of PA. The saline infusion test involves IV infusion of 2 L NaCl in 2 hours. Following this, PAC normally will be less than 5 ng/dL, and in those with PA, the value is often greater than 10 ng/dL. Biochemical evaluation is followed by adrenal imaging (CT, MRI). The presence of an adrenal mass should still prompt further testing before surgical intervention since adrenal adenomas are often nonfunctional, and aldosterone secretion may be due to bilateral hyperplasia. Adrenal venous sampling (AVS) is recommended to distinguish between hyperplasia and an aldosterone-secreting adenoma.9 Some authors suggest that in persons less than 35 years of age, one may be able to proceed to surgery without AVS. The study requires technical expertise and should be performed by experienced practitioners. Adrenal vein to IVC cortisol ratios and aldosterone levels corrected for cortisol are used to determine appropriate placement of sampling catheters. If there is asymmetry in aldosterone release between the adrenal glands (greater than 4:1), it is likely to be unilateral excess aldosterone production, and the patient would benefit from surgical intervention. If the ratio is less than 3:1, it is likely to be secondary to adrenal hyperplasia, and the patient should be treated medically.
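The screening and confirmation cutoffs quoted in this section can be collected into a small decision sketch. This is a minimal, illustrative summary of the numbers stated above (PAC > 15 ng/dL with a PAC/PRA ratio of 30 or greater for screening, post-saline PAC < 5 versus > 10 ng/dL, and AVS lateralization ratios of greater than 4:1 versus less than 3:1); it is not a validated clinical algorithm, and the function and variable names are hypothetical. PRA units follow the usual ng/mL/h convention, which is not stated in the text.

```python
def pa_screen(pac_ng_dl, pra_ng_ml_h):
    """Screening per the thresholds quoted in the text (example values below are hypothetical)."""
    ratio = pac_ng_dl / pra_ng_ml_h
    positive = pac_ng_dl > 15 and ratio >= 30
    return positive, ratio


def saline_infusion_interpretation(post_infusion_pac_ng_dl):
    """Confirmatory saline infusion test interpretation using the text's cutoffs."""
    if post_infusion_pac_ng_dl < 5:
        return "Normal suppression: PA unlikely"
    if post_infusion_pac_ng_dl > 10:
        return "Failure to suppress: consistent with PA"
    return "Indeterminate"


def avs_lateralization(aldo_cortisol_ratio_side_a, aldo_cortisol_ratio_side_b):
    """Adrenal venous sampling: asymmetry of cortisol-corrected aldosterone between sides."""
    hi = max(aldo_cortisol_ratio_side_a, aldo_cortisol_ratio_side_b)
    lo = min(aldo_cortisol_ratio_side_a, aldo_cortisol_ratio_side_b)
    asymmetry = hi / lo
    if asymmetry > 4:
        return "Unilateral excess: surgical candidate"
    if asymmetry < 3:
        return "Bilateral hyperplasia: treat medically"
    return "Indeterminate asymmetry"


positive, ratio = pa_screen(pac_ng_dl=22, pra_ng_ml_h=0.4)
print(positive, round(ratio, 1))             # True 55.0 -> proceed to confirmatory testing
print(saline_infusion_interpretation(12))    # consistent with PA
print(avs_lateralization(18.0, 3.0))         # asymmetry 6:1 -> unilateral excess
```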

Treatment

Surgery is the treatment of choice for aldosterone-producing adenoma. Hypertension is controlled in 30% to 60% of the patients treated surgically.10 Mineralocorticoid antagonists such as spironolactone or eplerenone should be used for adrenal hyperplasia. If GRA is suspected in patients with multiple family members with early onset of hypertension, steroids such as prednisone are the treatment of choice.

Isolated Hypoaldosteronism

Insufficient aldosterone secretion is seen with adrenal gland destruction, with chronic heparin therapy, following unilateral adrenalectomy (transient), and with G-zone enzyme deficiencies. Most hypoaldosteronism occurs in patients with mild renal insufficiency, such as persons with diabetes, who present with mild metabolic acidosis, high serum potassium, low urinary potassium excretion (urine K+ < urine Na+), and low renin. Treatment is with dietary changes and fludrocortisone (a synthetic mineralocorticoid), which enhances salt retention and the secretion of potassium and hydrogen ions. Types of aldosteronism are diagrammed in Figure 21.6 according to their plasma aldosterone (y-axis) and PRA (x-axis) values.

FIGURE 21.6 Types of aldosteronism according to plasma aldosterone:plasma renin activity (PRA) ratio. HTN, hypertension.

ADRENAL CORTICAL PHYSIOLOGY Cortisol synthesis (8 to 15 mg/d) is critical to hemodynamics and glucose homeostasis. F-zone disorders manifest with blood pressure and glucose abnormalities (Fig. 21.7). Glucocorticoids maintain blood glucose by inducing lipolysis and causing amino acid release from muscle for conversion into glucose (gluconeogenesis) and storage as liver glycogen.

FIGURE 21.7 F-zone function and pathology. ACTH, adrenocorticotropic hormone.

ADRENAL INSUFFICIENCY Overview Adrenal insufficiency is a term that describes inadequate hormone secretion from the adrenal cortex. This may be a primary adrenal problem in which case there is inadequate release of glucocorticoids (such as cortisol), mineralocorticoids (such as aldosterone), and adrenal androgens. In primary adrenal insufficiency, there is reduced production of adrenal hormones despite adequate stimulation. In secondary adrenal insufficiency, adrenal gland function is preserved but the stimulus for hormone release is insufficient or absent. This would occur if the pituitary gland failed to release adequate adrenocorticotropic hormone (ACTH). The most common cause of adrenal insufficiency is autoimmune destruction of the adrenal gland.11 Other causes of adrenal insufficiency include infections (tuberculosis and histoplasmosis), tumors, bilateral adrenal hemorrhage, etc.

Symptoms

Clinical symptoms of chronic adrenal insufficiency can be nonspecific with most patients complaining of fatigue, decreased appetite, weight loss, and nausea. Patients may also present with more severe findings such as low blood pressure, low blood sugar, low serum sodium, and high potassium levels as depicted in Figure 21.8.

FIGURE 21.8 Signs and symptoms of adrenal insufficiency.

Diagnosis

The diagnosis of adrenal insufficiency can be suspected if the 8 AM serum cortisol is low with concomitant elevation in ACTH levels. A cortisol level less than 3 μg/dL in the morning is highly suggestive of adrenal insufficiency. The diagnosis of primary adrenal insufficiency is made by performing an ACTH stimulation test. The test is performed at 8 AM in the fasting state.12 Baseline cortisol and ACTH levels are obtained. The patient is then given 250 μg of cosyntropin (synthetic ACTH) intravenously, and the cortisol level is checked at 30 and 60 minutes post ACTH administration. A cortisol of 18 μg/dL or greater at either the 30- or 60-minute post-ACTH time point suggests normal adrenal function. In secondary adrenal insufficiency, the ACTH stimulation test may be normal or abnormal based on the duration of the disease. If secondary adrenal insufficiency is suspected, metyrapone suppression testing can be done.13 Metyrapone blocks certain enzymes in the steroidogenesis pathway. When metyrapone is administered orally at midnight, in normal individuals, it will block 11β-hydroxylase, increasing 11-deoxycortisol (>7 μg/dL) while cortisol decreases.

Classic milk-alkali syndrome (MAS) resulted from the ingestion of a large (>20 g) amount of calcium and large amounts of milk to control gastric acid levels in patients with peptic ulcer disease. The hallmark of this disorder includes hypercalcemia, alkalosis, and renal impairment. Serum phosphate is often elevated. It is very rare to see the typical form of MAS now because of the use of proton pump inhibitors and H2 blockers. The reappearance of MAS in the 1990s followed supplementation with lower amounts (even 1.5 to 2 g) of calcium carbonate for the treatment of osteoporosis in elderly patients with renal insufficiency. The presenting features are hypercalcemia, alkalosis, dehydration, and renal impairment with mental status changes. Serum phosphate level is low (the administered salt is carbonate or citrate). Therapy includes intravenous and oral hydration and cessation of excessive calcium supplementation.
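Returning to the adrenal insufficiency workup described at the start of this Diagnosis section, the cortisol cutoffs quoted there (a morning cortisol below 3 μg/dL and a stimulated cortisol of 18 μg/dL or greater) can be expressed as a short interpretive sketch. It is illustrative only; the function name and the structure of the logic are assumptions, not a published algorithm.

```python
def interpret_cosyntropin_test(baseline_cortisol, cortisol_30min, cortisol_60min,
                               baseline_acth_elevated):
    """Illustrative reading of a 250-microgram cosyntropin stimulation test.

    Cortisol values are in micrograms/dL; thresholds are those quoted in the text.
    """
    if max(cortisol_30min, cortisol_60min) >= 18:
        return "Adequate adrenal response"
    if baseline_cortisol < 3:
        note = "baseline cortisol highly suggestive of adrenal insufficiency"
    else:
        note = "subnormal stimulated cortisol"
    if baseline_acth_elevated:
        return f"Consistent with primary adrenal insufficiency ({note})"
    return f"Consistent with adrenal insufficiency; consider a secondary cause ({note})"


print(interpret_cosyntropin_test(2.1, 7.4, 9.0, baseline_acth_elevated=True))
```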

Medications That Cause Hypercalcemia

HCTZ and lithium are the most common drugs that induce hypercalcemia. HCTZ enhances distal tubular calcium absorption. Lithium affects the formation of intracellular inositol triphosphate, thereby upsetting CSR function (similar to FHH). Hypervitaminosis D is a condition that may result from excessive intake of vitamin D. It may also result from aberrant production of 1,25(OH)2D as a result of extrarenal 1α-hydroxylation of 25-hydroxy vitamin D. Granulomatous diseases such as sarcoidosis and tuberculosis and certain lymphomas are capable of this effect. The granulomas or lymphoid tissue that hydroxylate vitamin D are autonomous, and this hydroxylation is not regulated by the normal feedback mechanism; therefore, calcium elevation can be significant. Hypercalcemia results from increased GI absorption of calcium, without an increase in bone resorption; therefore, bisphosphonates are not useful to control blood calcium levels. Also, the normal parathyroid glands respond to the elevation of calcium with reduction of PTH production, resulting in low PTH levels. These subjects also have elevated urine calcium levels.18

A variety of cancers may lead to hypercalcemia as a result of production and release of cytokines or PTH-like substances. These tumor-produced humoral substances are not responsive to negative feedback by calcium; therefore, these subjects can have massive elevation of serum calcium levels with low serum PTH levels.18 Parathyroid hormone–related protein (PTHrP) is a substance secreted by cancers that shares structural similarity with the N-terminal portion of the human PTH molecule. Therefore, it retains functional features of PTH, with some critical differences. Both PTH and PTHrP bind to the same receptor in the kidney and bone as well as a variety of other tissues. In humans, PTHrP functions in the paracrine regulation of cartilage, skin, brain, and lactating breast tissue. In healthy humans, circulating levels of PTHrP are very low or immeasurable. Normal lactating breast tissue is capable of producing PTHrP, which can result in hypercalcemia that resolves when breastfeeding stops. In most clinical situations of hypercalcemia due to PTHrP secretion, an underlying cancer is the root cause.19 PTHrP can be secreted by a variety of cancers as described earlier in this chapter. The secretion is not regulated by even very high blood calcium levels, which results in severe hypercalcemia (Fig. 24.7). When humoral hypercalcemia of malignancy is suspected, PTHrP can be measured by specific immunoassay. Also, the intact PTH assay does not cross-react with circulating PTHrP. Normal parathyroid glands sense the elevated calcium levels and markedly reduce the secretion of PTH; therefore, PTH is very low. One salient difference in the biological functions between PTH and PTHrP is the inability of the latter to facilitate renal hydroxylation of 25-hydroxy vitamin D. This dichotomy in function between these otherwise functionally similar protein hormones remains unexplained.
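The biochemical pattern described above (a suppressed intact PTH with an elevated PTHrP in the setting of hypercalcemia) is what distinguishes humoral hypercalcemia of malignancy from PTH-mediated hypercalcemia, and it is illustrated by the values in Case Study 24.2. The sketch below uses the reference intervals quoted in that case (PTH 11 to 54 pg/mL; PTHrP 0.0 to 1.5 pmol/L); it is only a demonstration of the reasoning, not a diagnostic algorithm.

```python
def classify_hypercalcemia(calcium_mg_dl, pth_pg_ml, pthrp_pmol_l):
    """Pattern recognition for hypercalcemia, using intervals from Case Study 24.2."""
    if calcium_mg_dl <= 10.2:
        return "Not hypercalcemic by the quoted reference interval"
    if pth_pg_ml >= 11:
        # PTH elevated or inappropriately normal (not suppressed) despite hypercalcemia
        return "PTH-mediated hypercalcemia (e.g., primary hyperparathyroidism)"
    if pthrp_pmol_l > 1.5:
        return "Suppressed PTH with elevated PTHrP: humoral hypercalcemia of malignancy"
    return "Suppressed PTH, normal PTHrP: consider vitamin D-mediated or other causes"


# Values approximating Case Study 24.2 (PTH reported as less than 1 pg/mL)
print(classify_hypercalcemia(16.8, 0.9, 18.3))
```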

FIGURE 24.7 PTHrP endocrine pathophysiology. This demonstrates the effect of tumors that overproduce PTHrP. The pathophysiologic effect is via the same organ systems used by PTH to increase blood calcium. The difference between PTH and PTHrP is that PTH is subject to feedback regulation, whereas PTHrP is not subject to any feedback regulation by calcium (compare with Fig. 24.4).

There are a variety of medications that can cause hypercalcemia. Thiazide diuretics are used in the treatment of hypertension. These agents reduce the renal excretion of calcium, leading to calcium retention and hypercalcemia. At doses routinely used to treat hypertension, hypercalcemia is uncommon in subjects with a normal calcium axis. However, when thiazide diuretics are used in subjects with other conditions associated with hypercalcemia such as PHPT, hypercalcemia can be worsened. In patients with subclinical PHPT, hypercalcemia can be precipitated by thiazide diuretics, leading to "unmasking" of PHPT. Lithium carbonate, when used to treat bipolar disorder, can cause hypercalcemia. Lithium reduces the intracellular formation of inositol triphosphate, altering the "set point" of CSRs, a mechanism similar to FHH mentioned earlier. High doses of vitamin A, or vitamin A analogs/metabolites in the retinoic acid family, may cause hypercalcemia. Vitamin A is believed to activate osteoclasts and enhance bone resorption, elevating blood calcium. In this condition, both PTH and 1,25(OH)2D are suppressed.

CASE STUDY 24.2

A 58-year-old man has been a smoker since childhood. He has been smoking three packs per day since he can remember but insists his cigarettes "don't hurt me none, doc." He has been feeling ill recently, however, with loss of appetite, malaise, and weight loss. His mental ability has been dulled recently, and he can't remember from "one minute to the next," especially notable during his work as a cowboy. His cigarettes have not been as enjoyable for him as they used to be. His baseline cough has worsened, and he has noticed blood streaking his sputum when he clears sputum from his throat. He has no significant past medical history other than his tobacco abuse. He takes no medications. His family history is only notable for his father dying of lung cancer at age 63 and his mother dying of emphysema at age 68. On physical examination, he is a thin man who looks much older than his chronologic age and appears unwell. When he produces some sputum at the physician's request, it does indeed have a pink tinge and is streaked with blood. Chest examination reveals some scattered wheezing and some rales in the right upper lung region. He is diffusely weak on muscle strength testing. Labs are notable for calcium 16.8 mg/dL (normal, 8.5 to 10.2 mg/dL), albumin 3.4 g/dL (normal, 3.5 to 4.8 g/dL), BUN 27 mg/dL, and creatinine 1.3 mg/dL. Chest radiograph reveals a 3-cm proximal right hilar mass with distal streaking. Further testing is prompted and reveals a PTH of less than 1 pg/mL (normal, 11 to 54 pg/mL) and PTHrP elevated at 18.3 pmol/L (normal, 0.0 to 1.5 pmol/L).

questions 1. Do you think this patient's smoking is related to his hypercalcemia? 2. What other laboratory results are abnormal? 3. What is this patient's diagnosis? His prognosis?
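The case above reports total calcium alongside albumin. Because a substantial fraction of circulating calcium is protein bound, total calcium is often adjusted for albumin before interpretation; the commonly used correction of 0.8 mg/dL per 1 g/dL of albumin below 4.0 g/dL is not stated in this chapter, so treat the sketch below as a conventional rule of thumb rather than the text's own method.

```python
def albumin_corrected_calcium(total_ca_mg_dl, albumin_g_dl):
    """Conventional albumin correction for total serum calcium (rule of thumb)."""
    return total_ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)


# Case Study 24.2: calcium 16.8 mg/dL with albumin 3.4 g/dL
print(round(albumin_corrected_calcium(16.8, 3.4), 1))  # 17.3 mg/dL, still markedly elevated
```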

HYPOCALCEMIA

Hypocalcemia refers to low blood calcium levels. It can result from a wide variety of conditions, from organ system dysfunction to lack of hormone effect to acid–base disturbances. The signs and symptoms of hypocalcemia are described below:

Neuromuscular: Tetany (involuntary muscle contraction) affecting primarily the muscles in the hands, feet, legs, and back may be seen. Percussion on cranial nerve VII (facial nerve) just anterior to the ear may elicit twitching in the ipsilateral corner of the mouth (Chvostek's sign). Numbness and tingling in the face, hands, and feet may be seen. Inflation of a blood pressure cuff to 20 mm Hg above the patient's systolic blood pressure to induce a state of ischemia in the arm (metabolic acidosis) may cause spasm in the muscles of the wrist and hand (Trousseau's sign).

CNS: Irritability, seizures, personality changes, and impaired intellectual functioning may be seen.

Cardiovascular: Calcium plays a crucial role not only in the slow inward calcium current of the QRS complex of ventricular depolarization but also in electromechanical coupling. In hypocalcemia, QT prolongation may be seen on the ECG. In the extreme, electromechanical dissociation may be seen. Cardiac contractile dysfunction is rare but in the extreme can result in congestive heart failure. Cardiac dysfunction from hypocalcemia should be treated with emergent intravenous calcium.

Causes of Hypocalcemia

1. Endocrine causes: hypoparathyroidism (lack of PTH): postoperative; autoimmune (isolated or part of polyglandular autoimmune syndrome); congenital (mutations of CSR, PTH, and parathyroid aplasia); pseudohypoparathyroidism, types 1a, 1b, and 2
2. Deficiency of vitamin D: malnutrition, malabsorption due to celiac sprue, weight loss, gastric surgeries, short bowel syndrome, abdominal irradiation, pancreatic disease, liver disease, and renal disease
3. Hypomagnesemia: magnesium depletion leads to lack of PTH synthesis and release

The causes of hypocalcemia will be discussed in the setting of both endocrine and organ system dysfunction. However, one key concept should be emphasized when considering hypocalcemia: when functioning properly, the parathyroid glands will not only correct falling blood calcium but also prevent it, by increasing PTH secretion. The compensatory rise in PTH secretion, in response to factors that would lower blood calcium, is known as secondary hyperparathyroidism. Thus, an individual may have an elevated PTH level—for example, in response to low 25-hydroxy vitamin D—and thus maintain normocalcemia. Secondary hyperparathyroidism is notable for the normal response of parathyroid glands with appropriate and vigorous secretion of PTH. The biochemical constellation includes low blood calcium, elevated PTH, low serum phosphate, elevated alkaline phosphatase, hypocalciuria, phosphaturia, and vitamin D deficiency or lack of vitamin D effect.20 Treatment of secondary hyperparathyroidism is directed toward correcting the process inducing hypocalcemia and/or vitamin D deficiency.21
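The biochemical constellation of secondary hyperparathyroidism listed above contrasts with the hypercalcemic patterns earlier in the chapter, and the contrast can be summarized in a short sketch. The reference limits used here are the ones quoted in the case studies of this chapter (calcium 8.5 to 10.2 mg/dL, PTH upper limit 54 pg/mL); the function is a simplified illustration, not a reporting rule.

```python
def calcium_pth_pattern(calcium_mg_dl, pth_pg_ml):
    """Simplified calcium/PTH pattern recognition using case-study reference intervals."""
    ca_low, ca_high = 8.5, 10.2
    pth_high = 54

    if calcium_mg_dl < ca_low and pth_pg_ml > pth_high:
        return "Low calcium with elevated PTH: secondary hyperparathyroidism pattern"
    if calcium_mg_dl < ca_low:
        return "Low calcium without a PTH rise: consider hypoparathyroidism"
    if calcium_mg_dl > ca_high and pth_pg_ml > pth_high:
        return "High calcium with elevated PTH: primary hyperparathyroidism pattern"
    if calcium_mg_dl > ca_high:
        return "High calcium with non-elevated PTH: consider PTHrP- or vitamin D-mediated causes"
    return "Calcium within the quoted interval"


# Case Study 24.4: calcium 8.2 mg/dL, PTH 181 pg/mL
print(calcium_pth_pattern(8.2, 181))
```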

CASE STUDY 24.3

A 26-year-old man presents to his physician 3 weeks after having his thyroid surgically removed for thyroid cancer. His doctor is certain that she "got it all." However, since the time he went home from the hospital, he has noticed painful, involuntary muscle cramping. He also feels numbness and tingling around his mouth and in his hands and feet. His girlfriend says he has been irritable for the last couple of weeks. His past medical history is notable only for the recent diagnosis of thyroid cancer and its resection 3 weeks prior to this visit. His only medication is levothyroxine. Family history contributes no relevant information. On physical examination, he has a well-healing thyroidectomy scar. Tapping on the face anterior to the ears causes twitching in the ipsilateral corner of the mouth (Chvostek's sign). There are no palpable masses in the thyroid bed. A blood pressure cuff inflated above the systolic pressure induces involuntary muscle contracture in the ipsilateral hand after 60 seconds (Trousseau's sign). Labs are notable for calcium 5.6 mg/dL (normal, 8.5 to 10.2 mg/dL), albumin 4.1 g/dL, BUN 20 mg/dL, and creatinine 1.0 mg/dL. PTH is undetectable at less than 1 pg/mL.

questions 1. Which laboratory results are abnormal? 2. What condition is he experiencing since his thyroidectomy? 3. What is the cause of this symptomatic condition? 4. What is the treatment for this patient, in addition to thyroxine medication?

Endocrine Causes of Hypocalcemia

Because proper PTH secretion and action are necessary to maintain normocalcemia, any inadequacy of parathyroid gland function will cause hypocalcemia. The most common cause of hypoparathyroidism is neck surgery, especially after thyroidectomy with lymph node dissection. This results from accidental removal of these glands due to their small size. The most common outcome is temporary hypoparathyroidism following surgery, due to damage of the delicate parathyroid blood supply, which results in full recovery in most subjects. Parathyroid glands can be reimplanted into pouches created in skeletal muscles (deltoid, sternomastoid, or forearm muscles). Parathyroid glands also survive careful cryopreservation and reimplantation back into their original owner. Additional causes of hypoparathyroidism include autoimmune destruction of parathyroid tissue. This condition is often associated with other autoimmune diseases such as type 1 diabetes, Hashimoto's thyroiditis, and Addison's disease. Magnesium deficiency can inhibit the secretion of PTH and also blunt its actions on target tissues. Following the correction of hypomagnesemia, PTH secretion and function are re-established. Depending on the cause, hypoparathyroidism can usually be treated with relatively high doses of vitamin D and calcium. In the absence of PTH, even the small fraction of passively absorbed calcium may simply be excreted in the urine. Hypercalciuria increases the risk of development of kidney stones in these subjects. The use of HCTZ reduces urine calcium losses and elevates serum calcium levels in PTH-deficient subjects. Pseudohypoparathyroidism is a heritable disorder resulting in a lack of responsiveness to PTH in the target tissue. This results from uncoupling of the PTH receptor from adenylate cyclase, due to a mutant stimulatory G protein (Gs). PTH binds its receptor but cannot activate the second messenger, cAMP, and thus, there is no response. Hypocalcemia develops, although unlike the other forms of hypoparathyroidism mentioned, those with pseudohypoparathyroidism have markedly elevated levels of PTH. This is an example of a hormone resistance syndrome. Treatment is with calcium and vitamin D supplementation. Hypovitaminosis D describes a collection of conditions, including low vitamin D availability, defective metabolism of vitamin D, or mutations in the vitamin D receptor, all of which predispose to hypocalcemia.

Organ System Causes of Hypocalcemia A variety of intestinal disorders can result in malabsorption of calcium or vitamin D resulting in hypocalcemia. Causes include short bowel syndrome, abdominal irradiation, weight loss surgeries, celiac sprue, and bowel fistulation. Treatment end points are normalization of urine calcium excretion and normalization of PTH.

METABOLIC BONE DISEASES A variety of disease states can affect skeletal architecture, strength, and integrity. Only rickets, osteomalacia, and osteoporosis are described here.

Rickets and Osteomalacia

Rickets and osteomalacia are diseases caused by abnormal mineralization of bone. They result from vitamin D deficiency. Rickets refers to the disease state affecting growing bones (in children); therefore, permanent skeletal deformity can be seen. Osteomalacia refers to the abnormal mineralization of bone in adults or after completion of skeletal maturation. Rickets is associated with bony deformities because of bending of long bones under weight loading and the effects of gravity. Bone deformity is not seen in adults. Both conditions are associated with similar biochemical findings of secondary hyperparathyroidism. Fractures may result in either case because of poor bone structure. Hypocalcemia may be seen when the response of secondary hyperparathyroidism is inadequate to counteract the threat of hypocalcemia posed by the vitamin D deficiency. Fortification of food with vitamin D is intended to reduce the risk of developing vitamin D deficiency. Despite these efforts, deficiency states do occur because of poor consumption of dairy products and avoidance of sun exposure. Those of any age who live indoors, with minimal or no sun exposure, or who lack dietary vitamin D are at risk for developing this condition. As mentioned in the discussion on vitamin D physiology, adequacy of vitamin D in the body can be assessed by measuring the blood level of 25-hydroxy vitamin D. Because secondary hyperparathyroidism is also expected in the setting of rickets or osteomalacia, PTH and calcium should be obtained to further confirm the suspected diagnosis.
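A small sketch can tie together the two laboratory observations this passage relies on: a low 25-hydroxy vitamin D and the expected compensatory (secondary) rise in PTH. The 20 to 50 ng/mL interval for 25-hydroxy vitamin D and the PTH upper limit of 54 pg/mL are taken from the reference ranges quoted in the case studies of this chapter; the logic itself is an illustrative simplification, not a diagnostic rule.

```python
def vitamin_d_assessment(vit_d_25oh_ng_ml, pth_pg_ml):
    """Illustrative check for vitamin D deficiency with secondary hyperparathyroidism."""
    deficient = vit_d_25oh_ng_ml < 20      # below the quoted reference interval
    secondary_hpt = pth_pg_ml > 54         # above the quoted PTH upper limit

    if deficient and secondary_hpt:
        return "Low 25-hydroxy vitamin D with elevated PTH: consistent with osteomalacia/rickets physiology"
    if deficient:
        return "Low 25-hydroxy vitamin D; check PTH and calcium to assess compensation"
    return "25-hydroxy vitamin D not low by the quoted interval"


# Case Study 24.4: 25-hydroxy vitamin D 6 ng/mL, PTH 181 pg/mL
print(vitamin_d_assessment(6, 181))
```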

CASE STUDY 24.4 An 82-year-old woman living in a nursing home feels unsteady on her feet and does not wander outside anymore. She has lactose intolerance and has never been able to drink milk. She does not take any dietary supplements. She says, “I feel my age, doc,” but otherwise has no specific complaints. On her annual laboratory assessment, her calcium level is found to be slightly low at 8.2 mg/dL (normal, 8.5 to 10.2 mg/dL) with albumin 3.5 g/dL (normal, 3.5 to 4.8 g/dL), BUN 28 mg/dL, and creatinine 1.1 mg/dL. This prompts further evaluation, which reveals PTH elevated at 181 pg/mL and 25-hydroxy vitamin D low at 6 ng/mL (normal, 20 to 50 ng/mL).

questions

1. What diagnostic possibility is suggested from the initial laboratory results of her annual assessment?
2. This patient's differential diagnosis includes two diseases. What are they?
3. Is her renal function related to either of these two diseases?
4. How should this patient be treated?

Rickets can, however, develop despite adequate amounts of vitamin D. This unique situation may arise from genetic defects in vitamin D metabolism or in the vitamin D receptor. Although the 25-hydroxy vitamin D level is often normal, the 1,25(OH)2D level may be low, normal, or high depending on the genetic defect. Defects in vitamin D metabolism are best treated by supplying the metabolically active compound, 1,25(OH)2D (calcitriol). A wide variety of vitamin D receptor defects have been described, including abnormal ligand binding, abnormal DNA binding, and abnormal transactivation of transcriptional machinery at the regulatory site of vitamin D–responsive genes. The type of defect present determines how well the patient will respond to pharmacologic doses of calcitriol.

Osteoporosis Osteoporosis is the most prevalent metabolic bone disease in adults. Osteoporosis affects an estimated 20 to 25 million Americans, with roughly a 4:1 female-to-male predominance. It is believed to cause approximately 1.5 million fractures annually in the United States. A recent study estimates that 4.5 million women older than 50 years have osteoporosis of the hip.22 The most devastating consequence of osteoporosis is a hip fracture. While as many as half of vertebral compression fractures may be asymptomatic, a hip fracture carries with it significant morbidity as well as increased mortality. Most hip fractures require surgery at the very least. Mortality is increased by about 20% in the first year following a hip fracture, and it is estimated that the number of deaths related to hip fracture is now on par with the number from breast cancer.

Multiple additional conditions have been identified as significant risk factors for reduced bone mass and a consequent increased risk of fracture, although risk factor assessment alone is generally not sufficient to characterize or quantify bone mass and diagnose osteoporosis. The following are validated risk factors for the prediction of fracture, independent of formal bone density evaluation: decreased bone mass due to previous fracture, advanced age, family history of osteoporosis or fracture, body weight less than 127 lb, long-term glucocorticoid therapy, cigarette smoking, or excess alcohol intake. Other conditions known to alter calcium metabolism and increase fracture risk include Cushing's syndrome, hyperparathyroidism, disorders of vitamin D metabolism, hyperthyroidism, and certain malignancies (mast cell disease).

Several medications have negative effects on the skeleton, leading to low bone mass and an increased risk of fractures. The most notable are the glucocorticoids. They are widely used to treat a variety of inflammatory conditions such as asthma, rheumatoid arthritis, and lupus, as well as to prevent rejection after organ transplantation. First, they limit bone formation by inhibiting the action of osteoblasts while also inducing osteoblast apoptosis. They also increase bone breakdown by stimulating the formation and action of osteoclasts. The net loss of bone mass is therefore compounded. “Glucocorticoid-induced osteoporosis” is a major source of morbidity associated with pharmacologic doses of glucocorticoids. Two bisphosphonates, alendronate and risedronate, are approved by the Food and Drug Administration (FDA) for the treatment of glucocorticoid-induced bone loss. Other medications that induce low bone mass include anticonvulsants (particularly phenytoin) and cyclosporin A.

CASE STUDY 24.5 A 6-year-old girl is brought to a pediatrician by her parents, who report that her height is not progressing as they think it should (or as it did for her 8-year-old sister) and that her legs look bowed. The patient drinks milk, and other than her shorter stature and bowed legs, she has the normal characteristics of her 6-year-old friends. She takes no medications. Family history is notable for some cousins on the father's side with a similar problem back in the Appalachian hill country along the Virginia/Tennessee border where the family hails from. The pediatrician obtains lab studies that are notable for a calcium level of 7.2 mg/dL (normal, 8.5 to 10.2 mg/dL) with albumin 4.1 g/dL (normal, 3.5 to 4.8 g/dL). Lower extremity radiographs show bowing of the long bones and generalized demineralization. This prompts the measurement of several other laboratory tests, which reveal intact PTH elevated at 866 pg/mL (normal, 11 to 54 pg/mL), 25-hydroxy vitamin D normal at 35 ng/mL (normal, 20 to 57 ng/mL), and 1,25(OH)2D undetectable at less than 1 pg/mL (normal, 20 to 75 pg/mL).

questions
1. What condition do the preliminary lab tests indicate?
2. What is the significance of the 25-hydroxy vitamin D and 1,25(OH)2D levels in the follow-up laboratory tests?
3. Describe the inborn error of metabolism in this patient.
4. What secondary condition will recur if vitamin D treatment is discontinued later in her life?

Diagnosis of Osteoporosis Several laboratory tests are available and useful for the evaluation of a subject with osteoporosis and other bone disorders, but there is no specific laboratory test that can be used to diagnose osteoporosis. The vast majority of osteoporotic subjects will have normal biochemical tests. The diagnosis rests on clinical risk factors and the dual-energy x-ray absorptiometry (DEXA) scan. In 2008, the WHO created a well-validated fracture assessment tool, which enables clinicians to estimate a given individual's 10-year fracture risk from simple clinical data, even in the absence of DEXA scan results. The hallmark of osteoporosis is skeletal fragility; therefore, it can be diagnosed without any additional testing when a fracture occurs following trivial trauma, or no trauma at all. This is termed a fragility fracture (Fig. 24.8). The occurrence of one fragility fracture predicts further fragility fractures.

Currently, osteoporosis is also diagnosed based upon calculated bone density using DEXA of the lumbar spine and the hip. This imaging technique, commonly referred to as bone mineral densitometry, measures grams of calcium per square centimeter of cross-sectional area of bone (g/cm2). True density is mass/volume, or g/cm3, so the term bone density is a misnomer. Peak bone mass usually occurs at about the age of 30 years for both men and women and is associated with the lowest risk of fracture. The bone “density,” as calculated by DEXA, is then compared with that expected in a healthy 30-year-old individual of the same race and gender. This is reported as the number of SDs above or below the mean peak bone mass, using a “T-score.” A positive or “+” T-score equates to above the average peak bone mass, and a negative or “–” T-score equates to below the average peak bone mass. Normal bone density is defined as within 1 SD of the mean, or a T-score between −1.0 and +1.0. Osteoporosis is defined as a T-score of −2.5 or below. An intermediate between normal and osteoporosis, termed osteopenia, is diagnosed as a T-score between −1.0 and −2.5. In summary, osteoporosis is diagnosed by either the occurrence of a fragility fracture or a T-score of −2.5 or below by DEXA imaging.
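As a rough worked example of how a DEXA result maps onto these categories, the sketch below computes a T-score and applies the cutoffs just described. It is illustrative only; the reference mean and SD shown are hypothetical stand-ins for an instrument's young-adult reference database, not published values.

```python
def t_score(bmd_patient, bmd_young_adult_mean, sd_young_adult):
    """Number of SDs the measured BMD lies above (+) or below (-) the mean
    peak bone mass of a healthy 30-year-old of the same race and gender."""
    return (bmd_patient - bmd_young_adult_mean) / sd_young_adult

def dexa_category(t):
    """WHO densitometric categories as described in the text."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal bone density"

# Hypothetical lumbar spine values in g/cm^2 (illustration only)
t = t_score(bmd_patient=0.75, bmd_young_adult_mean=1.05, sd_young_adult=0.11)
print(round(t, 1), dexa_category(t))   # -2.7 osteoporosis
```

Remember that a fragility fracture alone establishes the diagnosis regardless of what such a calculation returns.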

FIGURE 24.8 Bone mass as a function of age; perturbations that can affect bone mass. Shown is the accrual of bone mass with age to a peak that occurs in the mid-twenties to early thirties (for both sexes) and the bone loss that occurs throughout life after the peak. Fracture risk increases as bone mass is lost. The “theoretical fracture threshold” is an artificial notion but may be useful in demonstrating that, at a given degree of trauma (varying from the simple act of weight bearing to increasing levels of impact), factors associated with low bone mass increase fracture risk, and that this threshold is reached at a relatively younger age when a lower bone mass is reached earlier.

Treatment of Osteoporosis Treatment of osteoporosis is directed at prevention of the primary consequence of this disease: fracture. All treatment plans should include modification of preventable risk factors such as smoking and alcohol consumption. They should also include an evaluation of fall risk and consideration of walkers, handrails, night-lights, hip pads, etc. Further, all patients with osteoporosis—or those at high risk for developing osteoporosis—should have adequate dietary calcium (usually 1,200 to 1,500 mg daily) and vitamin D (usually 400 to 800 IU daily).

The most commonly used medications for the prevention and treatment of osteoporosis are the antiresorptive agents, chief among them the bisphosphonates. Examples of commonly used bisphosphonates include alendronate, risedronate, ibandronate, and zoledronate. The bisphosphonate agents have unusual side effects. They can cause upper GI ulceration if a pill is retained in the esophagus; therefore, patients are instructed to stay upright after swallowing the pills with water to prevent retention. Chronic use is associated with osteonecrosis of the jaw and with low–bone turnover fractures of the femur (thigh bone).22,23 The most commonly used selective estrogen receptor modulator (SERM) is raloxifene. Testosterone is commonly used in males with hypogonadism, and estrogen ± progestin in women.

Teriparatide is an anabolic agent: unlike the bisphosphonates, it builds bone rather than blocking resorption. It is very effective for the treatment of those with osteoporosis. Teriparatide stimulates the formation of both cortical and trabecular bone without increasing bone resorption. Furthermore, it can be used sequentially with an antiresorptive drug, which can further potentiate the positive effects on the skeleton. Because it is a peptide hormone, it requires subcutaneous injection. Currently, it is approved only for the treatment of severe osteoporosis, and its use for the treatment of hypoparathyroidism is being evaluated. This agent is used for a period of 2 years. A unique side effect in experimental animals is the formation of bone tumors (osteosarcoma), which has not been observed in human subjects.

CASE STUDY 24.6 A 74-year-old woman slipped while mopping the kitchen floor, fell, and sustained a hip fracture. The hip fracture was treated with open reduction and internal fixation. After discharge from the hospital, she presents to her physician and asks if she has osteoporosis and, if so, what should be done. She only drinks milk on her cereal and takes no dietary supplements of calcium or vitamin D. She has asthma and has been treated with prednisone bursts and tapers about six times in her life (as best she can recall). She went through menopause at age 49 and never took hormone replacement. Other than her hip, she reports no other fractures in adulthood but does report that she thinks she has lost about 2.5 in. in height. She thinks her mom had osteoporosis, because she had a “dowager's hump.” The physician orders bone densitometry, which shows a posteroanterior spine T-score of –3.8 and a hip T-score of –3.1 (done on the nonfractured hip!). Labs revealed normal calcium, albumin, renal function, thyroid function, and CBC. Alkaline phosphatase is slightly elevated, but she has a recent fracture.

questions
1. What is this patient's diagnosis?
2. Name four or five risk factors for this diagnosis.
3. In addition to adequate calcium and vitamin D supplements, this patient would be a candidate for which new therapeutic drug?

SECONDARY HYPERPARATHYROIDISM IN RENAL FAILURE The kidney's central role in the regulation of bone and mineral metabolism was discussed earlier. Chronic kidney disease (CKD) results in striking bone mineral and skeletal changes. The diseased kidneys fail to excrete phosphate; this, along with the impaired formation of 1,25(OH)2D, leads to a vicious cycle of events resulting in parathyroid gland stimulation and hyperplasia. The bone mineral metabolic changes are progressive, are proportional to the severity of renal dysfunction, and are eventually ubiquitous in patients with end-stage renal disease. In severe cases, the ensuing parathyroid hyperplasia results in autonomy of the parathyroid gland leading to hypercalcemia; this is also referred to as tertiary hyperparathyroidism. The difference between secondary and tertiary hyperparathyroidism is the development of sustained hypercalcemia; a renal failure patient who develops only transient, iatrogenic hypercalcemia should not be given the diagnosis of tertiary hyperparathyroidism. The term tertiary is not widely used nowadays. In early stages of CKD, in response to low blood calcium and elevation of phosphate levels, compensatory elevation of PTH and FGF23 maintains near-normal calcium and phosphorus levels.24 As kidney disease becomes severe, the compensatory mechanisms are overwhelmed, leading to permanent abnormalities of calcium, phosphorus, PTH, vitamin D, bone mineralization, and vascular and soft tissue calcification. These biochemical changes completely resolve following renal transplantation; renal replacement therapies such as hemodialysis and peritoneal dialysis induce only partial correction of these changes.

One of the earliest changes in CKD is reduction of urinary phosphorus excretion, leading to an increase in serum phosphorus levels. A compensatory increase in the phosphaturic hormones such as FGF23 and PTH helps maintain near-normal serum phosphate levels for a period. When the glomerular filtration rate declines to lower than 30 mL/min, hyperphosphatemia is observed. Hyperphosphatemia also stimulates PTH secretion and inhibits 1,25(OH)2D production. Low 1,25(OH)2D leads to poor GI absorption of calcium. Hyperphosphatemia and elevation of FGF23 levels are independent predictors of survival; higher levels are associated with poorer outcomes.24 Hyperphosphatemia is treated with a combination of dietary restriction of phosphorus and oral phosphate binders. Commonly used phosphate binders include calcium carbonate, calcium acetate, and noncalcium binders such as sevelamer and lanthanum.

Patients with CKD have low blood calcium levels. The imbalance in calcium and phosphate levels leads to extracellular deposition of calcium, producing vascular and tissue calcification, a condition called calciphylaxis, which increases mortality. Most patients with CKD have low 1,25(OH)2D levels. As phosphorus levels increase, FGF23 levels increase as a compensatory mechanism, which in turn inhibits 1α-hydroxylation of 25-hydroxy vitamin D. The decreased 1,25(OH)2D leads to impaired calcium absorption, and the lower calcium levels stimulate PTH release. This triggers a cascade of bone mineral metabolic events leading to poor bone mineralization. PTH is secreted in response to hypocalcemia, hyperphosphatemia, and/or low 1,25(OH)2D levels, and PTH levels trend upward with increasing severity of renal disease. Secondary hyperparathyroidism is treated with vitamin D analogs such as calcitriol, doxercalciferol, and paricalcitol.25 Calcimimetics are allosteric activators of the extracellular calcium-sensing receptor (CaSR); they sensitize the parathyroid gland to extracellular calcium and decrease PTH secretion independent of vitamin D. Calcimimetics such as cinacalcet have been shown to decrease PTH, calcium, and phosphorus.26

For additional student resources, please visit thePoint at http://thepoint.lww.com


questions
1. True or false? PTH and 1,25(OH)2D (vitamin D) are the principal hormones involved in the normal physiologic regulation of calcium homeostasis.
2. The primary organs involved in the maintenance of calcium homeostasis are the intestine, _____, and kidney.
3. Skin, _____, and kidneys are involved in the production of the active metabolite of vitamin D.
4. True or false? Cod liver oil (ugh!) is a source of vitamin D.
5. True or false? 1,25(OH)2D is the best blood test for determining adequacy of vitamin D stores in the body.
6. True or false? PTHrP is produced by some cancers and often leads to cancer-associated hypercalcemia.
7. True or false? 1,25(OH)2D, due to 1-hydroxylase activity in macrophages, may be produced to excess in granulomatous diseases and lymphoid disorders, leading to hypercalcemia.
8. In PHPT, the defect primarily lies in _____. In secondary hyperparathyroidism, the defect primarily lies with the threat of _____ to the body.
9. Development of _____ _____ is the primary complication of hypercalciuria (increased urinary excretion of calcium).
10. _____ _____ is the most common cause of hypoparathyroidism.
11. _____ is a type of bone most rapidly lost in response to hypogonadism and glucocorticoid therapy.
12. _____ cells in bone are responsible for bone resorption, and _____ cells are responsible for bone formation.
13. _____ is the most prevalent metabolic bone disease in the United States.
14. True or false? Hormone replacement does not inhibit bone resorption in osteoporotic patients.
15. True or false? Teriparatide is the only drug currently approved by the FDA for the treatment of osteoporosis that directly stimulates bone formation (i.e., it is not an antiresorptive drug).

references 1. Neer R, Berman M, Fisher L, et al. Multicompartmental analysis of calcium kinetics in normal adult males. J Clin Invest. 1967;46:1364–1379.

2. Norman AW, Roth J, Orci L. The vitamin D endocrine system: steroid metabolism, hormone receptors and biological response. Endocrine Rev. 1982;3:331–366.
3. Sheikh MD, Ramirez A, Emmett M, et al. Role of vitamin D dependent and vitamin D independent mechanisms in absorption of food calcium. J Clin Invest. 1988;81:126–132.
4. Gallagher JC, Riggs BL, Eisman J, et al. Intestinal calcium absorption and serum vitamin D metabolites in normal subjects and osteoporotic patients. J Clin Invest. 1979;64:729–736.
5. Owen R. On the anatomy of the Indian Rhinoceros (Rh. Unicornis L.). Trans Zool Soc Lond. 1862;4:31–58.
6. Gilmore JR. The gross anatomy of parathyroid glands. J Pathol. 1938;46:133.
7. Alveryd A. Parathyroid glands in thyroid surgery. Acta Chir Scand. 1968;389:1.
8. Block GA, Klassen PS, Lazarus JM, et al. Mineral metabolism, mortality, and morbidity in maintenance hemodialysis. J Am Soc Nephrol. 2004;15(8):2208–2218.
9. Sakou T. Bone morphogenetic proteins: from basic studies to clinical approaches. Bone. 1998;22:591–603.
10. Heath H, Hodgson SR, Kennedy MA. Primary hyperparathyroidism: incidence, morbidity and economic impact in a community. N Engl J Med. 1980;302:189–193.
11. Albright E, Reifenstein EC. The Parathyroid Glands and Metabolic Bone Disease. Baltimore, MD: Williams and Wilkins; 1948.
12. Yeh MW, Ituarte PHG, Zhou HC, et al. Incidence and prevalence of primary hyperparathyroidism in a racially mixed population. J Clin Endocrinol Metab. 2013;98(3):1122–1129.
13. Abdulla AG, Ituarte P, Harari A, et al. Trends in the frequency and quality of parathyroid surgery: analysis of 17,082 cases over 10 years. Ann Surg. 2015;261(4):746–750.
14. Eastell R, Brandi ML, Costa AG, et al. Diagnosis of asymptomatic primary hyperparathyroidism: Proceedings of the Fourth International Workshop. J Clin Endocrinol Metab. 2014;99:3570–3579.
15. Abraham D, Sharma PK, Bentz J, et al. The utility of ultrasound guided FNA of parathyroid adenomas for pre-operative localization prior to minimally invasive parathyroidectomy. Endocr Pract. 2007;13(4):333–337.
16. Brown EM, Gamba G, Riccardi D, et al. Cloning and characterization of an extracellular calcium sensing receptor from bovine parathyroid. Nature. 1993;366:575.
17. Hendy GN, D'Souza-Li L, Yang B, et al. Mutations of the calcium-sensing receptor (CASR) in familial hypocalciuric hypercalcemia, neonatal severe hyperparathyroidism, and autosomal dominant hypocalcemia. Hum Mutat. 2000;16:281.
18. Stewart AF, Horst F, Deftos LJ, et al. Biochemical evaluation of patients with cancer-associated hypercalcemia: evidence for humoral and non-humoral groups. N Engl J Med. 1980;303:1377–1383.
19. Lips P, van Schoor NM, Bravenboer N. Vitamin D-related disorders. In: Rosen CJ, ed. Primer on the Metabolic Bone Diseases and Disorders of Mineral Metabolism. 7th ed. Washington, DC: American Society of Bone and Mineral Research; 2008:329.
20. Holick MF, Binkley NC, Bischoff-Ferrari HA, et al. Evaluation, treatment, and prevention of vitamin D deficiency: an Endocrine Society clinical practice guideline. J Clin Endocrinol Metab. 2011;96:1911.
21. Looker AC, Melton LJ III, Harris TB, et al. Prevalence and trends in low femur bone density among older US adults: NHANES 2005–2006 compared with NHANES III. J Bone Miner Res. 2010;25(1):64–71.
22. Watts NB, Diab DL. Long-term use of bisphosphonates in osteoporosis. J Clin Endocrinol Metab. 2010;95:1555.
23. Neer RM, Arnaud CD, Zanchetta JR, et al. Effect of parathyroid hormone (1-34) on fractures and bone mineral density in postmenopausal women with osteoporosis. N Engl J Med. 2001;344:1434–1441.
24. Gutierrez OM, Mannstadt M, Isakova T, et al. Fibroblast growth factor 23 and mortality among patients undergoing hemodialysis. N Engl J Med. 2008;359(6):584–592.
25. Sprague SM, Coyne D. Control of secondary hyperparathyroidism by vitamin D receptor agonists in chronic kidney disease. Clin J Am Soc Nephrol. 2010;5(3):512–518.

26.Block GA, Martin KJ, de Francisco AL, et al. Cinacalcet for secondary hyperparathyroidism in patients receiving hemodialysis. N Engl J Med. 2004;350(15):1516–1525.

25 Liver Function
JANELLE M. CHIASERA and XIN XU

Chapter Outline
Anatomy
  Gross Anatomy
  Microscopic Anatomy
Biochemical functions
  Excretory and Secretory
  Metabolism
  Detoxification and Drug Metabolism
Liver function alterations during disease
  Jaundice
  Cirrhosis
  Tumors
  Reye's Syndrome
  Drug- and Alcohol-Related Disorders
Assessment of liver function/liver function tests
  Bilirubin
  Methods
  Urobilinogen in Urine and Feces
  Serum Bile Acids
  Enzymes
  Tests Measuring Hepatic Synthetic Ability
  Tests Measuring Nitrogen Metabolism
  Hepatitis

Questions
References

Chapter Objectives Upon completion of this chapter, the clinical laboratorian will be able to do the following:
Diagram the anatomy of the liver.
Explain the following functions of the liver: bile secretion, synthetic activity, and detoxification.
List two important cell types associated with the liver and state the function of each.
Define jaundice and classify the three different types of jaundice.
Discuss the basic disorders of the liver and which laboratory tests may be performed to diagnose them.
Evaluate liver-related data and correlate those data with normal or pathologic states.
Compare and contrast how total and direct bilirubin measurements are performed.
List the enzymes most commonly used to assess hepatocellular and hepatobiliary disorders.
Describe the various types of hepatitis, including cause, transmission, occurrence, alternate name, physiology, diagnosis, and treatment.

Key Terms
Bile
Bilirubin
Cirrhosis
Conjugated bilirubin
Crigler-Najjar
delta bilirubin
hepatic jaundice
Hepatitis
hepatocellular carcinoma
Hepatoma
Icterus
jaundice
Kupffer cells
Lobules
posthepatic jaundice
prehepatic jaundice
Sinusoids
unconjugated bilirubin
uridine diphosphate glucuronosyltransferase
Urobilinogen

For additional student resources, please visit thePoint at http://thepoint.lww.com

The liver is a very large and complex organ responsible for performing vital tasks that impact all body systems. Its complex functions include metabolism of carbohydrates, lipids, proteins, and bilirubin; detoxification of harmful substances; storage of essential compounds; and excretion of substances to prevent harm. The liver is unique in the sense that it is a relatively resilient organ that can regenerate cells that have been destroyed by some short-term injury or disease or have been removed. However, if the liver is damaged repeatedly over a long period of time, it may undergo irreversible changes that permanently interfere with its essential functions. If the liver becomes completely nonfunctional for any reason, death will occur within approximately 24 hours due to hypoglycemia. This chapter focuses on the normal structure and function of the liver, the pathology associated with it, and the laboratory tests used to aid in the diagnosis of liver disorders.

ANATOMY Gross Anatomy Understanding the function and dysfunction of the liver depends on understanding its gross and microscopic structure. The liver is a large and complex organ weighing approximately 1.2 to 1.5 kg in the healthy adult. It is located beneath and attached to the diaphragm, is protected by the lower rib cage, and is held in place by ligamentous attachments. Despite the functional complexity of the liver, it is relatively simple in structure. It is divided unequally into two lobes by the falciform ligament. The right lobe is approximately six times larger than the left lobe. The lobes are functionally insignificant; however, communication flows freely between all areas of the liver (Fig. 25.1).

FIGURE 25.1 Gross anatomy of the liver.

Unlike most organs, which have a single blood supply, the liver is an extremely vascular organ that receives its blood supply from two major sources: the hepatic artery and the portal vein. The hepatic artery, a branch of the celiac trunk arising from the aorta, supplies oxygen-rich blood from the heart to the liver and provides approximately 25% of the total blood supply to the liver. The portal vein supplies nutrient-rich blood (collected as food is digested) from the digestive tract and provides approximately 75% of the total blood supply to the liver. The two blood supplies eventually merge into the hepatic sinusoids, which are lined with hepatocytes capable of removing potentially toxic substances from the blood. From the sinusoids, blood flows to the central canal (central vein) of each lobule. It is through the central canal that blood leaves the liver. Approximately 1,500 mL of blood passes through the liver per minute. The liver is drained by a collecting system of veins that empties into the hepatic veins and ultimately into the inferior vena cava (Fig. 25.2).
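Using the figures quoted above (roughly 1,500 mL of blood per minute, about 25% arterial and 75% portal), the two contributions work out as follows; this is simply the arithmetic restated, with nothing assumed beyond the numbers already given.

```python
total_hepatic_flow_ml_min = 1500      # approximate total flow quoted in the text
hepatic_artery_fraction = 0.25        # oxygen-rich arterial supply
portal_vein_fraction = 0.75           # nutrient-rich portal supply

hepatic_artery_flow = total_hepatic_flow_ml_min * hepatic_artery_fraction
portal_vein_flow = total_hepatic_flow_ml_min * portal_vein_fraction
print(hepatic_artery_flow, portal_vein_flow)   # 375.0 1125.0 (mL/min)
```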

FIGURE 25.2 Blood supply to the liver. The excretory system of the liver begins at the bile canaliculi. The bile canaliculi are small spaces between the hepatocytes that form intrahepatic ducts, where excretory products of the cells can drain. The intrahepatic ducts join to form the right and left hepatic ducts, which drain the secretions from the liver. The right and left hepatic ducts merge to form the common hepatic duct, which is eventually joined with the cystic duct of the gallbladder to form the common bile duct. Combined digestive secretions are then expelled into the duodenum (Fig. 25.3).

FIGURE 25.3 Excretory system of the liver.

Microscopic Anatomy The liver is divided into microscopic units called lobules. The lobules are the functional units of the liver, responsible for all metabolic and excretory functions performed by the liver. Each lobule is roughly a six-sided structure with a centrally located vein (called the central vein) and portal triads at each of the corners. Each portal triad contains a hepatic artery, a portal vein, and a bile duct surrounded by connective tissue. The liver contains two major cell types: hepatocytes and Kupffer cells. The hepatocytes, making up approximately 80% of the volume of the organ, are large cells that radiate outward in plates from the central vein to the periphery of the lobule. These cells perform the major functions associated with the liver and are responsible for the regenerative properties of the liver. Kupffer cells are macrophages that line the sinusoids of the liver and act as active phagocytes capable of engulfing bacteria, debris, toxins, and other substances flowing through the sinusoids (Fig. 25.4).

FIGURE 25.4 Microscopic anatomy of the liver.

BIOCHEMICAL FUNCTIONS The liver performs four major functions: excretion/secretion, metabolism, detoxification, and storage. The liver is so important that if the liver becomes nonfunctional, death will occur within 24 hours due to hypoglycemia. Although the liver is responsible for a number of functions, this chapter focuses on the four major functions mentioned previously.

Excretory and Secretory One of the most important functions of the liver is the processing and excretion of endogenous and exogenous substances into the bile or urine, such as the major heme waste product, bilirubin. The liver is the only organ that has the capacity to rid the body of heme waste products. Bile is made up of bile acids or salts, bile pigments, cholesterol, and other substances extracted from the blood. The body produces approximately 3 L of bile per day and excretes 1 L of what is produced.

Bilirubin is the principal pigment in bile, and it is derived from the breakdown of red blood cells. Approximately 126 days after their emergence from the reticuloendothelial tissue, red blood cells are phagocytized and hemoglobin is released. Hemoglobin is degraded to heme, globin, and iron. The iron is bound by transferrin and is returned to iron stores in the liver or bone marrow for reuse. The globin is degraded to its constituent amino acids, which are reused by the body. The heme portion of hemoglobin is converted to bilirubin in 2 to 3 hours. Bilirubin is bound by albumin and transported to the liver. This form of bilirubin is referred to as unconjugated or indirect bilirubin. Unconjugated bilirubin is insoluble in water and cannot be removed from the body until it has been conjugated by the liver. Once at the liver cell, unconjugated bilirubin flows into the sinusoidal spaces and is released from albumin so it can be picked up by a carrier protein called ligandin. Ligandin, which is located in the hepatocyte, is responsible for transporting unconjugated bilirubin to the endoplasmic reticulum, where it may be rapidly conjugated. The conjugation (esterification) of bilirubin occurs in the presence of the enzyme uridine diphosphate glucuronosyltransferase (UDPGT), which transfers a glucuronic acid molecule to each of the two propionic acid side chains of bilirubin to form bilirubin diglucuronide, also known as conjugated bilirubin. Conjugated bilirubin is water soluble and is able to be secreted from the hepatocyte into the bile canaliculi. Once in the hepatic duct, it combines with secretions from the gallbladder through the cystic duct and is expelled through the common bile duct to the intestines. Intestinal bacteria (especially the bacteria in the lower portion of the intestinal tract) work on conjugated bilirubin to produce mesobilirubin, which is reduced to form mesobilirubinogen and then urobilinogen (a colorless product). Most of the urobilinogen formed (roughly 80%) is oxidized to an orange-colored product called urobilin (stercobilin) and is excreted in the feces. The urobilin or stercobilin is what gives stool its brown color. Two things can happen to the remaining 20% of the urobilinogen formed. The majority is reabsorbed into the enterohepatic circulation to be recycled through the liver and re-excreted. The very small quantity left enters the systemic circulation and is subsequently filtered by the kidney and excreted in the urine (Fig. 25.5).1

FIGURE 25.5 Metabolism of bilirubin. (Reprinted by permission of Waveland Press, Inc. from Anderson SC, Cockayne S. Clinical Chemistry/Concepts and Applications/2003. Long Grove, IL: Waveland Press, Inc.; 2007. All rights reserved.) Approximately 200 to 300 mg of bilirubin is produced per day, and it takes a normally functioning liver to process the bilirubin and eliminate it from the body. This, as stated earlier, requires that bilirubin be conjugated. Almost all the bilirubin formed is eliminated in the feces, and a small amount of the colorless product, urobilinogen, is excreted in the urine. The healthy adult has very low levels of total bilirubin (0.2 to 1.0 mg/dL) in the serum, and of this amount, the majority is in the unconjugated form.2
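The fate of urobilinogen described above is essentially a mass balance; the sketch below makes the proportions explicit. The 100 mg/day input and the 10% urinary share of the recirculated fraction are hypothetical illustration values, not physiologic reference figures; only the ~80%/20% split comes from the text.

```python
def urobilinogen_fate(urobilinogen_mg_per_day):
    """Apportion daily urobilinogen per the fractions given in the text:
    ~80% is oxidized to urobilin and excreted in feces; of the remaining
    ~20%, most is recycled via the enterohepatic circulation and only a
    small share (assumed 10% of that remainder here) reaches the urine."""
    fecal = 0.80 * urobilinogen_mg_per_day
    remainder = urobilinogen_mg_per_day - fecal
    urinary = 0.10 * remainder          # assumption, for illustration only
    recycled = remainder - urinary
    return {"fecal urobilin": fecal, "recycled": recycled, "urinary": urinary}

print(urobilinogen_fate(100))
# {'fecal urobilin': 80.0, 'recycled': 18.0, 'urinary': 2.0}
```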

Metabolism The liver has extensive metabolic capacity; it is responsible for metabolizing many biological compounds, including carbohydrates, lipids, and proteins. The metabolism of carbohydrates is one of the most important functions of the liver. When carbohydrates are ingested and absorbed, the liver can do three things: (1) use the glucose for its own cellular energy requirements, (2) circulate the glucose for use at the peripheral tissues, or (3) store glucose as glycogen (the principal storage form of glucose) within the liver itself or within other tissues. The liver is the major player in maintaining stable glucose concentrations due to its ability to store glucose as glycogen (glycogenesis) and degrade glycogen (glycogenolysis) depending on the body's needs. Under conditions of stress or in a fasting state, when there is an increased requirement for glucose, the liver will break down stored glycogen (glycogenolysis), and when the supply of glycogen becomes depleted, the liver will create glucose from nonsugar carbon substrates like pyruvate, lactate, and amino acids (gluconeogenesis).

Lipids are metabolized in the liver under normal circumstances when nutrition is adequate and the demand for glucose is being met. The liver metabolizes both lipids and lipoproteins and is responsible for gathering free fatty acids from the diet, as well as those produced by the liver itself, and breaking them down to produce acetyl-CoA. Acetyl-CoA can then enter several pathways to form triglycerides, phospholipids, or cholesterol. Despite popular belief, the greatest source of cholesterol in the body is what is produced by the liver, not dietary sources. In fact, approximately 70% of the daily production of cholesterol (roughly 1.5 to 2.0 g) is produced by the liver.3 A more thorough discussion of lipid metabolism may be found in Chapter 15.

Almost all proteins are synthesized by the liver, except for the immunoglobulins and adult hemoglobin. The liver plays an essential role in the development of hemoglobin in infants. One of the most important proteins synthesized by the liver is albumin, which carries with it a wide range of important functions. The liver is also responsible for synthesizing the positive and negative acute-phase reactants and coagulation proteins, and it also serves to store a pool of amino acids through protein degradation. The most critical aspects of protein metabolism are the transamination and deamination of amino acids. Transamination (via a transaminase) results in the exchange of an amino group on one acid with a ketone group on another acid. After transamination, deamination degrades the amino acids to produce ammonium ions, which are consumed in the synthesis of urea; the urea is then excreted by the kidneys. Although it would seem logical that any damage to the liver would result in a loss of its synthetic and metabolic functions, that is not the case. The liver must be extensively impaired before it loses its ability to perform these essential functions.

CASE STUDY 25.1 The following laboratory test results were obtained in a patient with severe jaundice, right upper quadrant abdominal pain, fever, and chills (Case Study Table 25.1.1). CASE STUDY TABLE 25.1.1 Laboratory Results

Questions 1. What is the most likely cause of jaundice in the patient?

Detoxification and Drug Metabolism The liver serves as a gatekeeper between substances absorbed by the gastrointestinal tract and those released into systemic circulation. Every substance that is absorbed in the gastrointestinal tract must first pass through the liver; this is referred to as first pass. This is an important function of the liver because it allows important substances to reach the systemic circulation while serving as a barrier to prevent toxic or harmful substances from reaching it. The body has two mechanisms for detoxification of foreign materials (drugs and poisons) and metabolic products (bilirubin and ammonia): it may bind the material reversibly to inactivate the compound, or it may chemically modify the compound so it can be excreted in its chemically modified form. One of the most important of these functions is the liver's drug-metabolizing system. This system is responsible for the detoxification of many drugs through oxidation, reduction, hydrolysis, hydroxylation, carboxylation, and demethylation. Many of these reactions take place in the liver microsomes via the cytochrome P-450 isoenzymes.

LIVER FUNCTION ALTERATIONS DURING DISEASE

Jaundice The word jaundice comes from the French word jaune, which means “yellow,” and it is one of the oldest known pathologic conditions reported, having been described by Hippocratic physicians.4 Jaundice, or icterus, is used to describe the yellow discoloration of the skin, eyes, and mucous membranes most often resulting from the retention of bilirubin; however, it may also occur due to the retention of other substances. Although the upper limit of normal for total bilirubin is 1.0 to 1.5 mg/dL, jaundice is usually not noticeable to the naked eye (known as overt jaundice) until bilirubin levels reach 3.0 to 5.0 mg/dL. Although the terms jaundice and icterus are used interchangeably, the term icterus is most commonly used in the clinical laboratory to refer to a serum or plasma sample with a yellow discoloration due to an elevated bilirubin level. Jaundice is most commonly classified based on the site of the disorder: prehepatic jaundice, hepatic jaundice, and posthepatic jaundice. This classification is important because knowing the classification of jaundice will aid health care providers in formulating an appropriate treatment or management plan. Prehepatic and posthepatic jaundice, as the names imply, are caused by abnormalities outside the liver, either before, as in “prehepatic,” or after, as in “posthepatic.” In these conditions, liver function is normal or it may be functioning at a maximum to compensate for abnormalities occurring elsewhere. This is not the case with hepatic jaundice, where the jaundice is due to a problem with the liver itself—an intrinsic liver defect or disease.

Prehepatic jaundice occurs when the problem causing the jaundice occurs prior to liver metabolism. It is most commonly caused by an increased amount of bilirubin being presented to the liver, such as that seen in acute and chronic hemolytic anemias. Hemolytic anemia causes an increased amount of red blood cell destruction and the subsequent release of increased amounts of bilirubin presented to the liver for processing. The liver responds by functioning at maximum capacity; therefore, people with prehepatic jaundice rarely have bilirubin levels that exceed 5.0 mg/dL because the liver is capable of handling the overload. This type of jaundice may also be referred to as unconjugated hyperbilirubinemia because the fraction of bilirubin increased in people with prehepatic jaundice is the unconjugated fraction. This fraction of bilirubin (unconjugated bilirubin) is not water soluble, is bound to albumin, is not filtered by the kidneys, and is not seen in the urine.

Hepatic jaundice occurs when the primary problem causing the jaundice resides in the liver (intrinsic liver defect or disease). This intrinsic liver defect or disease can be due to disorders of bilirubin metabolism and transport defects (Crigler-Najjar syndrome, Dubin-Johnson syndrome, Gilbert's disease, and physiologic jaundice of the newborn) or due to diseases resulting in hepatocellular injury or destruction. Gilbert's disease, Crigler-Najjar syndrome, and physiologic jaundice of the newborn are hepatic causes of jaundice that result in elevations in unconjugated bilirubin. Conditions such as Dubin-Johnson and Rotor's syndrome are hepatic causes of jaundice that result in elevations in conjugated bilirubin.

Gilbert's syndrome, first described in the early twentieth century, is a benign autosomal recessive hereditary disorder that affects approximately 5% of the US population.5 Gilbert's syndrome results from a genetic mutation in the UGT1A1 gene that produces the enzyme uridine diphosphate glucuronosyltransferase, one of the enzymes important for bilirubin metabolism. The UGT1A1 gene is located on chromosome 2, and other mutations of this same gene produce Crigler-Najjar syndrome, a more severe and dangerous form of hyperbilirubinemia.6 Of the many causes of jaundice, Gilbert's syndrome is the most common cause, and interestingly, it carries no morbidity or mortality in the majority of those affected and generally no clinical consequences. It is characterized by intermittent unconjugated hyperbilirubinemia due to a defective conjugation system, in the absence of underlying liver disease or hemolysis. The hyperbilirubinemia usually manifests during adolescence or early adulthood. Total serum bilirubin usually fluctuates between 1.5 and 3.0 mg/dL, and it rarely exceeds 4.5 mg/dL. The molecular basis of Gilbert's syndrome (in whites) is related to the UDPGT superfamily, which is responsible for encoding enzymes that catalyze the conjugation of bilirubin. The UGT1A1 (the hepatic 1A1 isoform of UDPGT) contributes substantially to the process of conjugating bilirubin. The UGT1A1 promoter contains the sequence (TA)6TAA. The insertion of an extra TA in the sequence, as seen in Gilbert's syndrome, reduces the expression of the UGT1A1 gene to 20% to 30% of normal values. That is, the liver's conjugation system in Gilbert's syndrome is working at approximately 30% of normal.7,8

Crigler-Najjar syndrome was first described by Crigler and Najjar in 1952 as a syndrome of chronic nonhemolytic unconjugated hyperbilirubinemia.9 Crigler-Najjar syndrome, like Gilbert's syndrome, is an inherited disorder of bilirubin metabolism resulting from a molecular defect within the gene involved with bilirubin conjugation. Crigler-Najjar syndrome may be divided into two types: type I, where there is a complete absence of enzymatic bilirubin conjugation, and type II, where there is a mutation causing a severe deficiency of the enzyme responsible for bilirubin conjugation. Unlike Gilbert's syndrome, Crigler-Najjar syndrome is rare and is a more serious disorder that may result in death.10

While Gilbert's disease and Crigler-Najjar syndrome are characterized as primarily unconjugated hyperbilirubinemias, Dubin-Johnson syndrome and Rotor's syndrome are characterized as conjugated hyperbilirubinemias. Dubin-Johnson syndrome is a rare autosomal recessive inherited disorder caused by a deficiency of the canalicular multidrug resistance/multispecific organic anionic transporter protein (MRP2/cMOAT). In other words, the liver's ability to take up and conjugate bilirubin is intact; however, the removal of conjugated bilirubin from the liver cell and its excretion into the bile are defective. This results in accumulation of conjugated and, to some extent, unconjugated bilirubin in the blood, leading to hyperbilirubinemia and bilirubinuria. Dubin-Johnson is a condition that is obstructive in nature, so much of the conjugated bilirubin circulates bound to albumin. This type of bilirubin (conjugated bilirubin bound to albumin) is referred to as delta bilirubin. An increase in delta bilirubin poses a problem in laboratory evaluation because the delta bilirubin fraction reacts as conjugated bilirubin in the laboratory method used to measure conjugated or direct bilirubin. A distinguishing feature of Dubin-Johnson syndrome is the appearance of dark-stained granules (thought to be pigmented lysosomes) on a liver biopsy sample. Usually, the total bilirubin concentration remains between 2 and 5 mg/dL, with more than 50% due to the conjugated fraction. This syndrome is relatively mild in nature with an excellent prognosis. People with Dubin-Johnson syndrome have a normal life expectancy, so no treatment is necessary.11,12

Rotor's syndrome is clinically similar to Dubin-Johnson syndrome, but the defect causing Rotor's syndrome is not known.13 It is hypothesized to be due to a reduction in the concentration or activity of intracellular binding proteins such as ligandin. Unlike in Dubin-Johnson syndrome, a liver biopsy does not show dark pigmented granules. Rotor's syndrome is seen less commonly than Dubin-Johnson syndrome; it is a relatively benign condition and carries an excellent prognosis, and therefore, treatment is not warranted. However, an accurate diagnosis is required to aid in distinguishing it from more serious liver diseases that require treatment.

Physiologic jaundice of the newborn is a result of a deficiency in the enzyme UDPGT, one of the last liver functions to be activated in prenatal life because bilirubin processing is handled by the mother of the fetus. Premature infants may be born without UDPGT, the enzyme responsible for bilirubin conjugation. This deficiency results in the rapid buildup of unconjugated bilirubin, which can be life threatening. When unconjugated bilirubin builds up in the neonate, it cannot be processed and it is deposited in the nuclei of the brain, causing nerve cell degeneration (kernicterus). Kernicterus often results in cell damage and death in the newborn, and this condition will continue until UDPGT is produced. Infants with this type of jaundice are usually treated with phototherapy to destroy the bilirubin as it passes through the capillaries of the skin. Since it was first described in 1958, phototherapy has been effectively used as a relatively inexpensive and noninvasive method of treating neonatal hyperbilirubinemia through photo oxidation. Conventional phototherapy lowers serum bilirubin levels by using halogen or fluorescent lights to transform bilirubin into water-soluble isomers that can be eliminated without conjugation in the liver. During phototherapy, the baby is undressed so that as much of the skin as possible is exposed to the light, his/her eyes are covered to protect the nerve layer at the back of the eye (retina) from the bright light, and the bilirubin levels are measured at least once a day. Alternatively, fiberoptic phototherapy is also available via a blanket, called a bili-blanket, that consists of a pad of woven fibers used to transport light from a light source to the baby's back. The light generated through the bili-blanket breaks down the bilirubin through photo oxidation. The dose of phototherapy is a key factor in how quickly it works, and the dose is determined by the wavelength of the light, the intensity of the light (irradiance), the distance between the light and the baby, and the body surface area exposed to the light. Commercially available phototherapy systems include those that deliver light via fluorescent bulbs, halogen quartz lamps, light-emitting diodes, and fiberoptic mattresses. In extreme cases of hyperbilirubinemia, an exchange transfusion may be used as the second-line treatment when phototherapy fails. An exchange transfusion involves removing aliquots of blood and replacing them with donor blood in order to remove abnormal blood components and circulating toxins while maintaining adequate circulating blood volume. Because hyperbilirubinemia is so serious in newborns, bilirubin levels are carefully and frequently monitored so that dangerously high levels of unconjugated bilirubin (approximately 20 mg/dL) can be detected and treated appropriately.14

Posthepatic jaundice results from biliary obstructive disease, usually from physical obstructions (gallstones or tumors) that prevent the flow of conjugated bilirubin into the bile canaliculi. Since the liver cell itself is functioning, bilirubin is effectively conjugated; however, it is unable to be properly excreted from the liver. Since bile is not being brought to the intestines, stool loses its source of normal pigmentation and becomes clay-colored. The laboratory findings for bilirubin and its metabolites in the above-mentioned types of jaundice are summarized in Table 25.1. Mechanisms of hyperbilirubinemia may be found in Figure 25.6.

TABLE 25.1 Changes in Concentration of Bilirubin in Those with Jaundice

Adapted from Table 27.3 of Kaplan LA, Pesce AJ, Kazmierczak S. Clinical Chemistry: Theory, Analysis, Correlation. 4th ed. St. Louis, MO: Mosby; 2003.

FIGURE 25.6 Mechanisms of hyperbilirubinemia. (A) Normal bilirubin metabolism, (B) hemolytic jaundice, (C) Gilbert's disease, (D) physiologic jaundice, (E) Dubin-Johnson syndrome, and (F) intrahepatic or extrahepatic obstruction. (Adapted from Kaplan LA, Pesce AJ, Kazmierczak S. Clinical Chemistry: Theory, Analysis, Correlation. 4th ed. St. Louis, MO: Mosby; 2003:449.)
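The fraction patterns just described (and summarized in Table 25.1 and Fig. 25.6) can be restated as a rough screening rule. In the sketch below, the greater-than-50% conjugated fraction comes from the Dubin-Johnson discussion above, while the 20% cutoff for a predominantly unconjugated pattern and the 1.0 mg/dL upper reference limit are illustrative assumptions, not fixed diagnostic criteria.

```python
def bilirubin_pattern(total_mg_dl, conjugated_mg_dl, upper_limit=1.0):
    """Crude screen based on which bilirubin fraction predominates.
    Prehepatic disease raises mostly unconjugated bilirubin; posthepatic
    (obstructive) disease raises mostly conjugated bilirubin; hepatic
    causes may raise either fraction (see text and Table 25.1)."""
    if total_mg_dl <= upper_limit:
        return "no hyperbilirubinemia"
    conjugated_fraction = conjugated_mg_dl / total_mg_dl
    if conjugated_fraction > 0.5:    # >50%, as quoted for Dubin-Johnson
        return "predominantly conjugated (consider posthepatic or hepatic causes)"
    if conjugated_fraction < 0.2:    # assumed cutoff, for illustration
        return "predominantly unconjugated (consider prehepatic or hepatic causes)"
    return "mixed pattern"

print(bilirubin_pattern(total_mg_dl=6.0, conjugated_mg_dl=0.4))
# predominantly unconjugated (consider prehepatic or hepatic causes)
```

Interpretation in practice always rests on the clinical picture and the full set of liver function tests, not on a single cutoff.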

Cirrhosis Cirrhosis is a clinical condition in which scar tissue replaces normal, healthy liver tissue. As the scar tissue replaces the normal liver tissue, it blocks the flow of blood through the organ and prevents the liver from functioning properly. Cirrhosis rarely causes signs and symptoms in its early stages, but as liver function deteriorates, the signs and symptoms appear, including fatigue, nausea, unintended weight loss, jaundice, bleeding from the gastrointestinal tract, intense itching, and swelling in the legs and abdomen. Although some patients with cirrhosis may have prolonged survival, they generally have a poor prognosis. Cirrhosis was the twelfth leading cause of death by disease in 2013, killing just over 36,400 people.15 In the United States, the most common cause of cirrhosis is chronic alcoholism. Other causes of cirrhosis include chronic hepatitis B (HBV), C (HCV), and D virus (HDV) infection, autoimmune hepatitis, inherited disorders (e.g., α1-antitrypsin deficiency, Wilson's disease, hemochromatosis, and galactosemia), nonalcoholic steatohepatitis, blocked bile ducts, drugs, toxins, and infections. Liver damage from cirrhosis cannot easily be reversed, but treatment can stop or delay further progression of the disorder. Treatment depends on the cause of cirrhosis and any complications a person is experiencing. For example, cirrhosis caused by alcohol abuse is treated by abstaining from alcohol. Treatment for hepatitis-related cirrhosis involves medications used to treat the different types of hepatitis, such as interferon for viral hepatitis and corticosteroids for autoimmune hepatitis.

Tumors Cancers of the liver are classified as primary or metastatic. Primary liver cancer is cancer that begins in the liver cells, while metastatic cancer occurs when tumors from other parts of the body spread (metastasize) to the liver. Metastatic liver cancer is much more common than primary liver cancer; 90% to 95% of all hepatic malignancies are classified as metastatic. Cancers that commonly spread to the liver include colon, lung, and breast cancer. Tumors of the liver may also be classified as benign or malignant. The common benign tumors of the liver include hepatocellular adenoma (a condition occurring almost exclusively in females of childbearing age) and hemangiomas (masses of blood vessels with no known etiology). Malignant tumors of the liver include hepatocellular carcinoma (HCC) (also known as hepatocarcinoma or hepatoma) and bile duct carcinoma. Of those, HCC is the most common malignant tumor of the liver. Hepatoblastoma is an uncommon hepatic malignancy of children.

HCC has become increasingly important in the United States. Approximately 85% of the new cases of this liver cancer occur in developing countries, with the highest incidence of HCC reported in regions where HBV is endemic, such as Southeast Asia and sub-Saharan Africa. Approximately 500,000 people are diagnosed with HCC annually worldwide; in the United States, 28,000 new cases and 20,000 deaths were reported in 2012.16,17 It is estimated that in the United States, chronic HBV and HCV infections account for approximately 30% and 40% of HCC cases, respectively.18 Approximately 80% of cases worldwide are attributable to HBV and HCV; however, the mechanism by which the infection leads to HCC is not well established. While surgical resection of HCC is sometimes possible, orthotopic liver transplantation is currently also available for people with HCC and underlying cirrhosis who meet the Milan criteria (a single tumor ≤5 cm in size, or ≤3 tumors each ≤3 cm in size, and no macrovascular invasion). The estimated 4-year survival rate is 85% and the recurrence-free survival rate is 92%.19 Whether primary or metastatic, any malignant tumor in the liver is a serious finding and carries a poor prognosis, with survival times measured in months.
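Because the Milan criteria quoted above are a simple size-and-number rule, they can be restated as a short check; this sketch only re-expresses the criteria as given in the text (tumor sizes in centimeters) and is not a clinical decision tool.

```python
def meets_milan_criteria(tumor_sizes_cm, macrovascular_invasion):
    """Milan criteria as stated in the text: a single tumor <= 5 cm, or up to
    three tumors each <= 3 cm, with no macrovascular invasion."""
    if macrovascular_invasion:
        return False
    if len(tumor_sizes_cm) == 1:
        return tumor_sizes_cm[0] <= 5.0
    return len(tumor_sizes_cm) <= 3 and all(s <= 3.0 for s in tumor_sizes_cm)

print(meets_milan_criteria([4.2], macrovascular_invasion=False))       # True
print(meets_milan_criteria([2.8, 3.1], macrovascular_invasion=False))  # False (one tumor > 3 cm)
```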

Reye's Syndrome Reye's syndrome is a term used to describe a group of disorders caused by infectious, metabolic, toxic, or drug-induced disease found almost exclusively in children, although adult cases of Reye's syndrome have been reported.20 Although the precise cause of Reye's syndrome is unknown, it is often preceded by a viral syndrome such as varicella, gastroenteritis, or an upper respiratory tract infection such as influenza.21, 22, 23 Although not described as the precise cause of Reye's syndrome, studies have demonstrated a strong epidemiologic association between the ingestion of aspirin during a viral syndrome and the subsequent development of Reye's syndrome.24,25 As a result, the Centers for Disease Control and Prevention (CDC) cautioned physicians and parents to avoid salicylate use in children with a viral syndrome, and the U.S. Surgeon General mandated that a warning label be added to all aspirin-containing medications, beginning in 1986.26,27 Reye's syndrome is an acute illness characterized by noninflammatory encephalopathy and fatty degeneration of the liver, with a clinical presentation of profuse vomiting accompanied with varying degrees of neurologic impairment such as fluctuating personality changes and deterioration in consciousness. The encephalopathy is characterized by a progression from mild confusion (stage 1) through progressive loss of neurologic function to loss of brain stem reflexes (stage 5). The degeneration of the liver is characterized by a mild hyperbilirubinemia and threefold increases in ammonia and the aminotransferases (aspartate aminotransferase [AST] and alanine aminotransferase [ALT]). Without treatment, rapid clinical deterioration leading to death may occur.28,29

Drug- and Alcohol-Related Disorders Drug-induced liver disease is a major problem in the United States, accounting for one-third to one-half of all reported cases of acute liver failure. The liver is a primary target organ for adverse drug reactions because it plays a central role in drug metabolism. Many drugs are known to cause liver damage, ranging from very mild transient forms to fulminant liver failure. Drugs can cause liver injury by a variety of mechanisms, but the most common mechanism of toxicity is via an immune-mediated injury to the hepatocytes.30 In this type of mechanism, the drug induces an adverse immune response directed against the liver itself and results in hepatic and/or cholestatic disease.31 Of all the drugs associated with hepatic toxicity, the most important is ethanol. In very small amounts, ethanol causes very mild, transient, and unnoticed injury to the liver; however, with heavier and prolonged consumption, it can lead to alcoholic cirrhosis. While the exact amount of alcohol needed to cause cirrhosis is unknown, a small minority of people with alcoholism develop this condition.32 Approximately 90% of the alcohol absorbed from the stomach and small intestines is transported to the liver for metabolism. Within the liver, the elimination of alcohol requires the enzymes alcohol dehydrogenase and acetaldehyde dehydrogenase to convert alcohol to acetaldehyde and subsequently to acetate. The acetate can then be oxidized to water and carbon dioxide, or it may enter the citric acid cycle. Long-term excessive consumption of alcohol can result in a spectrum of liver abnormalities that may range from alcoholic fatty liver with inflammation (steatohepatitis) to scar tissue formation, as in hepatic fibrosis, to the destruction of normal liver structure seen in hepatic cirrhosis. Alcohol-induced liver injury may be categorized into three stages: alcoholic fatty liver, alcoholic hepatitis, and alcoholic cirrhosis. The risk for the development of cirrhosis increases proportionally with the consumption of more than 30 g (the equivalent of 3 to 4 drinks) of alcohol per day, with the highest degree of risk seen with the consumption of greater than 120 g (the equivalent of 12 to 16 drinks) per day.33 Alcoholic fatty liver represents the mildest category where very few changes in liver function are measurable. This stage is characterized by slight elevations in AST, ALT, and γ-glutamyltransferase (GGT), and on biopsy, fatty infiltrates are noted in the vacuoles of the liver. This stage tends to affect young to middleaged people with a history of moderate alcohol consumption. A complete recovery within 1 month is seen when the drug is removed. Alcoholic hepatitis

presents with common signs and symptoms including fever, ascites, proximal muscle loss, and more extensive laboratory evidence of liver damage, such as moderately elevated AST, ALT, GGT, and alkaline phosphatase (ALP) and elevations in total bilirubin greater than 5 mg/dL. The elevations in AST are more than twice the upper reference limit but rarely exceed 300 IU/L. The elevations in ALT are comparatively lower than AST, resulting in an AST/ALT ratio (De Ritis ratio) greater than 2. Serum proteins, especially albumin, are decreased, and the international normalized ratio (INR, a ratio of the patient's coagulation time compared with a normal coagulation time) is elevated. Prognosis is dependent on the type and severity of damage to the liver; when serum creatinine levels begin to increase, it is a threatening sign that may precede the onset of hepatorenal syndrome and death.34 A variety of scoring systems have been used to assess the severity of alcoholic hepatitis and to guide treatment, including Maddrey's discriminant function,35 the Glasgow score,36 and the Model for End-Stage Liver Disease (MELD) score.37 All three scoring systems use bilirubin, INR, creatinine, age, white cell counts, blood urea nitrogen, and albumin levels to stage and guide treatment.

The last and most severe stage is alcoholic cirrhosis. The prognosis associated with alcoholic cirrhosis depends on the nature and severity of associated conditions, such as gastrointestinal bleeding or ascites; however, the 5-year survival rate is 60% in those who abstain from alcohol and 30% in those who continue to drink. This condition appears to be more common in males than in females, and the symptoms tend to be nonspecific and include weight loss, weakness, hepatomegaly, splenomegaly, jaundice, ascites, fever, malnutrition, and edema. Laboratory abnormalities include increased liver function tests (AST, ALT, GGT, ALP, and total bilirubin), decreased albumin, and a prolonged prothrombin time. A liver biopsy is the only method by which a definitive diagnosis may be made.38

Other drugs, including tranquilizers, some antibiotics, antineoplastic agents, lipid-lowering medications, and anti-inflammatory drugs, may cause liver injury ranging from mild damage to massive hepatic failure and cirrhosis. One of the most common drugs associated with serious hepatic injury is acetaminophen. When acetaminophen is taken in massive doses, it is virtually certain to produce fatal hepatic necrosis unless rapid treatment is initiated.
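The two calculations mentioned above lend themselves to simple arithmetic. The following Python sketch illustrates the AST/ALT (De Ritis) ratio and Maddrey's discriminant function; the Maddrey formula used here, 4.6 × (patient PT − control PT) + total bilirubin in mg/dL with a cutoff of about 32 for severe disease, is the commonly cited form and is supplied as an assumption rather than quoted from this chapter, and the patient values are hypothetical.

```python
# Illustrative sketch only; the cutoffs are commonly cited values,
# not laboratory-specific decision limits.

def de_ritis_ratio(ast_u_per_l: float, alt_u_per_l: float) -> float:
    """AST/ALT ratio; values greater than 2 are typical of alcoholic hepatitis."""
    return ast_u_per_l / alt_u_per_l

def maddrey_df(pt_patient_sec: float, pt_control_sec: float,
               total_bilirubin_mg_dl: float) -> float:
    """Maddrey's discriminant function (commonly cited form):
    4.6 x (patient PT - control PT) + total bilirubin (mg/dL)."""
    return 4.6 * (pt_patient_sec - pt_control_sec) + total_bilirubin_mg_dl

# Hypothetical patient values
ratio = de_ritis_ratio(ast_u_per_l=180, alt_u_per_l=75)        # ~2.4
df = maddrey_df(pt_patient_sec=22.0, pt_control_sec=12.0,
                total_bilirubin_mg_dl=8.5)                      # 4.6*10 + 8.5 = 54.5
print(f"De Ritis ratio: {ratio:.1f}")
print(f"Maddrey DF: {df:.1f} ({'severe' if df >= 32 else 'not severe'} by the usual >=32 cutoff)")
```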

CASE STUDY 25.2

The following laboratory test results were found in a patient with mild weight loss and nausea and vomiting, who later developed jaundice and an enlarged liver (Case Study Table 25.2.1).

CASE STUDY TABLE 25.2.1 Laboratory Results

Questions

1. What disease process is most likely in this patient?

ASSESSMENT OF LIVER FUNCTION/LIVER FUNCTION TESTS

Bilirubin

Analysis of Bilirubin: A Brief Review

The reaction of bilirubin with a diazotized sulfanilic acid solution to form a colored product was first described by Ehrlich in 1883 using urine samples.

Since then, this type of reaction (bilirubin with a diazotized sulfanilic acid solution) has been referred to as the classic diazo reaction, a reaction on which all commonly used methods today are based. In 1913, van den Bergh found that the diazo reaction may be applied to serum samples but only in the presence of an accelerator (solubilizer). However, this methodology had errors associated with it. It was not until 1937 that Malloy and Evelyn developed the first clinically useful methodology for the quantitation of bilirubin in serum samples using the classic diazo reaction with a 50% methanol solution as an accelerator. In 1938, Jendrassik and Grof described a method using the diazo reaction with caffeine–benzoate–acetate as an accelerator. Today, all commonly used methods for measuring bilirubin and its fractions are modifications of the technique described by Malloy and Evelyn. Total bilirubin and conjugated bilirubin (direct bilirubin) are measured and unconjugated bilirubin (indirect bilirubin) is determined by subtracting conjugated bilirubin from total bilirubin (see Fig. 25.7).

FIGURE 25.7 Methods to measure different fractions of bilirubin.

Bilirubin has also been quantified using bilirubinometry in the neonatal population. This methodology is useful only in the neonatal population because carotenoid compounds in adult serum cause strong positive interference in adults. Bilirubinometry involves the measurement of reflected light from the skin using two wavelengths that provide a numerical index based on spectral reflectance. Newer-generation bilirubinometers use microspectrophotometers that determine the optical densities of bilirubin, hemoglobin, and melanin in the subcutaneous layers of the infant's skin. Mathematical isolation of hemoglobin and melanin allows measurement of the optical density created by bilirubin.39

With the methods described earlier, two of the three fractions of bilirubin can be identified: conjugated (direct) and unconjugated (indirect) bilirubin. Unconjugated (indirect) bilirubin is a nonpolar and water-insoluble substance that is found in plasma bound to albumin. Because of these characteristics, unconjugated bilirubin will only react with the diazotized

sulfanilic acid solution (diazo reagent) in the presence of an accelerator (solubilizer). Conjugated (direct) bilirubin is a polar and water-soluble compound that is found in plasma in the free state (not bound to any protein). This type of bilirubin will react with the diazotized sulfanilic acid solution directly (without an accelerator). Thus, conjugated and unconjugated bilirubin fractions have historically been differentiated by their solubility: conjugated bilirubin reacts in the absence of an accelerator, whereas unconjugated bilirubin requires an accelerator. While for many years bilirubin results were reported as direct and indirect, this terminology is now outdated. Direct and indirect bilirubin results should be reported as conjugated and unconjugated, respectively.40

The third fraction of bilirubin is referred to as "delta" bilirubin. Delta bilirubin is conjugated bilirubin that is covalently bound to albumin. This fraction of bilirubin is seen only when there is significant hepatic obstruction. Because the molecule is attached to albumin, it is too large to be filtered by the glomerulus and excreted in the urine. This fraction of bilirubin, when present, will react in most laboratory methods as conjugated bilirubin. Thus, total bilirubin is made up of three fractions: conjugated, unconjugated, and delta bilirubin.
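As a small illustration of the fraction arithmetic described above, the following Python sketch (with hypothetical results) derives the unconjugated (indirect) value by subtraction; note that any delta bilirubin present reacts as conjugated (direct) bilirubin in most methods.

```python
def unconjugated_bilirubin(total_mg_dl: float, direct_mg_dl: float) -> float:
    """Unconjugated (indirect) bilirubin = total - conjugated (direct).
    Any delta bilirubin present reacts as 'direct' in most methods, so it is
    included in the value subtracted here."""
    return total_mg_dl - direct_mg_dl

# Hypothetical adult result
total = 3.8    # mg/dL
direct = 2.9   # mg/dL (conjugated plus any delta fraction)
print(f"Indirect bilirubin: {unconjugated_bilirubin(total, direct):.1f} mg/dL")  # 0.9
```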

Specimen Collection and Storage

Total bilirubin methods using a diazotized sulfanilic acid solution may be performed on either serum or plasma. Serum, however, is preferred for the Malloy-Evelyn procedure because the addition of the alcohol in the analysis can precipitate proteins and interfere with the method. A fasting sample is preferred, as the presence of lipemia will increase measured bilirubin concentrations. Hemolyzed samples should be avoided, as they may decrease the reaction of bilirubin with the diazo reagent. Bilirubin is very sensitive to and is destroyed by light; therefore, specimens should be protected from light. If left unprotected from light, bilirubin values may decrease by 30% to 50% per hour. If serum or plasma is separated from the cells and stored in the dark, it is stable for 2 days at room temperature, 1 week at 4°C, and indefinitely at −20°C.41

METHODS

There is no preferred reference method or standardization of bilirubin analysis; however, the American Association for Clinical Chemistry and the National Bureau of Standards have published a candidate reference method for total bilirubin, a modified Jendrassik-Grof procedure using caffeine–benzoate as a solubilizer.42 Because they both have acceptable precision and are adapted to many automated instruments, the Jendrassik-Grof and Malloy-Evelyn procedures are the most frequently used methods to measure bilirubin. The Jendrassik-Grof method is slightly more complex, but it has the following advantages over the Malloy-Evelyn method:

- Not affected by pH changes
- Insensitive to a 50-fold variation in protein concentration of the sample
- Maintains optical sensitivity even at low bilirubin concentrations
- Has minimal turbidity and a relatively constant serum blank
- Is not affected by hemoglobin up to 750 mg/dL

Because this chapter does not allow for a detailed description of all previously mentioned bilirubin test methodologies, only the most widely used principles for measuring bilirubin in the adult and pediatric populations are covered.43, 44, 45

Malloy-Evelyn Procedure

Bilirubin pigments in serum or plasma are reacted with a diazo reagent. The diazotized sulfanilic acid reacts at the central methylene carbon of bilirubin to split the molecule, forming two molecules of azobilirubin. This method is typically performed at pH 1.2, where the azobilirubin produced is red-purple in color with a maximal absorption of 560 nm. The most commonly used accelerator to solubilize unconjugated bilirubin is methanol, although other chemicals have been used.46

Jendrassik-Grof Method for Total and Conjugated Bilirubin Determination

Principle

Bilirubin pigments in serum or plasma are reacted with a diazo reagent (sulfanilic acid in hydrochloric acid and sodium nitrite), resulting in the production of the purple product azobilirubin. The product azobilirubin may be measured spectrophotometrically. The individual fractions of bilirubin are determined by taking two aliquots of sample and reacting one aliquot with the diazo reagent only and the other aliquot with the diazo reagent and an

accelerator (caffeine–benzoate). The addition of caffeine–benzoate will solubilize the water-insoluble fraction of bilirubin and will yield a total bilirubin value (all fractions). The reaction without the accelerator will yield conjugated bilirubin only. After a short period of time, the reaction of the aliquots with the diazo reagent is terminated by the addition of ascorbic acid. The ascorbic acid destroys the excess diazo reagent. The solution is then alkalinized using an alkaline tartrate solution, which shifts the absorbance spectrum of the azobilirubin to a more intense blue color that is less subject to interfering substances in the sample. The final blue product is measured at 600 nm, with the intensity of color produced directly proportional to bilirubin concentration. Indirect (unconjugated) bilirubin may be calculated by subtracting the conjugated bilirubin concentration from the total bilirubin concentration.
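Because the intensity of the final azobilirubin color is directly proportional to bilirubin concentration, results are typically calculated against a standard carried through the same reaction. The sketch below shows that proportionality calculation in Python; the blank correction and the readings used are illustrative assumptions, not values from the procedure itself.

```python
def bilirubin_from_absorbance(a_sample: float, a_blank: float,
                              a_standard: float, standard_conc_mg_dl: float) -> float:
    """One-point calibration: concentration is proportional to the
    blank-corrected absorbance of the azobilirubin color at 600 nm."""
    return (a_sample - a_blank) / (a_standard - a_blank) * standard_conc_mg_dl

# Illustrative (hypothetical) readings
total_bili = bilirubin_from_absorbance(a_sample=0.42, a_blank=0.02,
                                        a_standard=0.52, standard_conc_mg_dl=5.0)
print(f"Total bilirubin = {total_bili:.1f} mg/dL")   # (0.40/0.50)*5.0 = 4.0
```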

Comments and Sources of Error

Instruments should be standardized frequently to maintain reliable bilirubin results, and careful preparation of bilirubin standards is critical, as these are subject to deterioration from exposure to light. Hemolysis and lipemia should be avoided, as they will alter bilirubin concentrations. Serious loss of bilirubin occurs after exposure to fluorescent light and indirect and direct sunlight; therefore, it is imperative that exposure of samples and standards to light be kept to a minimum. Specimens and standards should be refrigerated in the dark until testing can be performed.

Reference Range

See Table 25.2.

TABLE 25.2 Reference Ranges for Bilirubin in Adults and Infants

Urobilinogen in Urine and Feces

Urobilinogen is a colorless end product of bilirubin metabolism that is oxidized by intestinal bacteria to the brown pigment urobilin. In the normal individual, part of the urobilinogen is excreted in feces, and the remainder is reabsorbed into the portal blood and returned to the liver. A small portion that is not taken up by the hepatocytes is excreted by the kidney as urobilinogen. Increased levels of urinary urobilinogen are found in hemolytic disease and in defective liver cell function, such as that seen in hepatitis. Absence of urobilinogen from the urine and stool is most often seen with complete biliary obstruction. Fecal urobilinogen is also decreased in biliary obstruction, as well as in HCC.47

Most quantitative methods for urobilinogen are based on a reaction first described by Ehrlich in 1901: the reaction of urobilinogen with p-dimethylaminobenzaldehyde (Ehrlich's reagent) to form a red color. Many modifications of this procedure have been made over the years to improve specificity. However, because the modifications did not completely recover urobilinogen from the urine, most laboratories use the less laborious, more rapid, semiquantitative method described next.

Determination of Urine Urobilinogen (Semiquantitative)

Principle

Urobilinogen reacts with p-dimethylaminobenzaldehyde (Ehrlich's reagent) to form a red color, which is then measured spectrophotometrically. Ascorbic acid is added as a reducing agent to maintain urobilinogen in the reduced state. The use of saturated sodium acetate stops the reaction and minimizes the combination of other chromogens with the Ehrlich's reagent.48

Specimen

A fresh 2-hour urine specimen is collected. This specimen should be kept cool and protected from light.

Comments and Sources of Error

1. The results of this test are reported in Ehrlich units rather than in milligrams of urobilinogen because substances other than urobilinogen account for some of the final color development.

2. Compounds, other than urobilinogen, that may be present in the urine and react with Ehrlich's reagent include porphobilinogen, sulfonamides, procaine, and 5-hydroxyindoleacetic acid. Bilirubin will form a green color and, therefore, must be removed, as previously described.

3. Fresh urine is necessary, and the test must be performed without delay to prevent oxidation of urobilinogen to urobilin. Similarly, the spectrophotometric readings should be made within 5 minutes after color production because the urobilinogen–aldehyde color slowly decreases in intensity.

Reference Range

Urine urobilinogen, 0.1 to 1.0 Ehrlich units every 2 hours or 0.5 to 4.0 Ehrlich units per day (0.86 to 8 μmol/d); 1 Ehrlich unit is equivalent to approximately 1 mg of urobilinogen.
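As a simple illustration of how such a result might be reviewed, the following Python sketch checks a 2-hour value against the reference range above and converts it to an approximate mass using the roughly 1 mg per Ehrlich unit relationship; the result shown and the interpretive comments are hypothetical and follow the associations described earlier in this section.

```python
URINE_2H_RANGE_EU = (0.1, 1.0)   # Ehrlich units per 2-hour collection

def interpret_urine_urobilinogen(result_eu_per_2h: float) -> str:
    """Compare a 2-hour result with the reference range and estimate mass."""
    low, high = URINE_2H_RANGE_EU
    approx_mg = result_eu_per_2h * 1.0   # 1 Ehrlich unit is roughly 1 mg urobilinogen
    if result_eu_per_2h > high:
        flag = "increased (consider hemolytic disease or hepatocellular dysfunction)"
    elif result_eu_per_2h < low:
        flag = "decreased or absent (consider biliary obstruction)"
    else:
        flag = "within the reference range"
    return f"{result_eu_per_2h:.1f} EU/2 h (~{approx_mg:.1f} mg): {flag}"

print(interpret_urine_urobilinogen(1.6))   # hypothetical elevated result
```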

Fecal Urobilinogen

Visual inspection of the feces is usually sufficient to detect decreased urobilinogen. However, the semiquantitative determination of fecal urobilinogen is available and involves the same principle described earlier for the urine. It is carried out in an aqueous extract of fresh feces, and any urobilin present is reduced to urobilinogen by treatment with alkaline ferrous hydroxide before Ehrlich's reagent is added. A range of 75 to 275 Ehrlich units per 100 g of fresh feces or 75 to 400 Ehrlich units per 24-hour specimen is considered a normal reference range.48

Serum Bile Acids

Serum bile acid analysis is rarely performed because the methods required are very complex. These involve extraction with organic solvents, partition chromatography, gas chromatography–mass spectrometry, spectrophotometry, ultraviolet light absorption, fluorescence, radioimmunoassay, and enzyme immunoassay (EIA) methods. Although serum bile acid levels are elevated in liver disease, the total concentration is extremely variable and adds no diagnostic value to other tests of liver function. The variability of the type of bile acids

present in serum, together with their existence in different conjugated forms, suggests that more relevant information of liver dysfunction may be gained by examining patterns of individual bile acids and their state of conjugation. For example, it has been suggested that the ratio of the trihydroxy to dihydroxy bile acids in serum will differentiate patients with obstructive jaundice from those with hepatocellular injury and that the diagnosis of primary biliary cirrhosis and extrahepatic cholestasis can be made on the basis of the ratio of the cholic to chenodeoxycholic acids. However, the high cost of these tests, the time required to do them, and the current controversy concerning their clinical usefulness render this approach unsatisfactory for routine use.49,50

Enzymes

Liver enzymes play an important role in the assessment of liver function because injury to the liver resulting in cytolysis or necrosis will cause the release of enzymes into circulation. Enzymes also play an important role in differentiating hepatocellular (functional) from obstructive (mechanical) liver disease, which is an important clinical distinction because failure to identify an obstruction will result in liver failure if the obstruction is not rapidly treated. Although many enzymes have been identified as useful in the assessment of liver function, the most clinically useful include the aminotransferases (ALT and AST), the phosphatases (ALP and 5′-nucleotidase), GGT, and lactate dehydrogenase (LD). The methods used to measure these enzymes, the normal reference ranges, and other general aspects of enzymology are discussed in Chapter 13. Discussion in this chapter focuses on the characteristic changes in serum enzyme levels seen in various hepatic disorders. It is important to note that the diagnosis of disease depends on a combination of patient history, physical examination, laboratory testing, and sometimes radiologic studies and biopsy; therefore, abnormalities in liver enzymes alone are not diagnostic in and of themselves.51,52

Aminotransferases

The two most common aminotransferases measured in the clinical laboratory are AST (formerly referred to as serum glutamic oxaloacetic transaminase [SGOT]) and ALT (formerly referred to as serum glutamic pyruvic transaminase [SGPT]). The aminotransferases are responsible for catalyzing the conversion of aspartate and alanine to oxaloacetate and pyruvate, respectively. In the absence of acute necrosis or ischemia of other organs, these enzymes are most useful in the

detection of hepatocellular (functional) damage to the liver. ALT is found mainly in the liver (lesser amounts in skeletal muscle and kidney), whereas AST is widely distributed in equal amounts in the heart, skeletal muscle, and liver, making ALT a more “liver-specific” marker than AST. Regardless, the serum activity of both transaminases rises rapidly in almost all diseases of the liver and may remain elevated for up to 2 to 6 weeks. The highest levels of AST and ALT are found in acute conditions such as viral hepatitis, drug- and toxin-induced liver necrosis, and hepatic ischemia. The increase in ALT activity is usually greater than that for AST. Only moderate increases are found in less severe conditions. AST and ALT are found to be normal or only mildly elevated in cases of obstructive liver damage. Because AST and ALT are present in other tissues besides the liver, elevations in these enzymes may be a result of other organ dysfunction or failure such as acute myocardial infarction, renal infarction, progressive muscular dystrophy, and those conditions that result in secondary liver disease such as infectious mononucleosis, diabetic ketoacidosis, and hyperthyroidism. It is often helpful to conduct serial determinations of aminotransferases when following the course of a patient with acute or chronic hepatitis, and caution should be used in interpreting abnormal levels because serum transaminases may actually decrease in some patients with severe acute hepatitis, owing to the exhaustive release of hepatocellular enzymes.51,52

Phosphatases

Alkaline Phosphatase

The ALP family of enzymes are zinc metalloenzymes that are widely distributed in all tissues; however, the highest activity is seen in the liver, bone, intestine, kidney, and placenta. The clinical utility of ALP lies in its ability to differentiate hepatobiliary disease from osteogenic bone disease. In the liver, the enzyme is localized to the microvilli of the bile canaliculi, and therefore, it serves as a marker of extrahepatic biliary obstruction, such as a stone in the common bile duct, or of intrahepatic cholestasis, such as drug cholestasis or primary biliary cirrhosis. ALP is found in very high concentrations in cases of extrahepatic obstruction, with only slight to moderate increases seen in those with hepatocellular disorders such as hepatitis and cirrhosis. Because bone is also a source of ALP, it may be elevated in bone-related disorders such as Paget's disease, bony metastases, diseases associated with an increase in osteoblastic activity, and rapid bone growth during puberty. ALP is also found elevated in

pregnancy due to its release from the placenta, and it may remain elevated up to several weeks postdelivery. As a result, interpretation of ALP concentrations is difficult because enzyme activity of ALP can increase in the absence of liver damage.51,52

5′-Nucleotidase

5′-Nucleotidase (5NT) is a phosphatase that is responsible for catalyzing the hydrolysis of nucleoside-5′-phosphate esters. Although 5NT is found in a wide variety of cells, serum levels become significantly elevated in hepatobiliary disease. There is no bone source of 5NT, so it is useful in differentiating ALP elevations due to the liver from other conditions where ALP may be seen in increased concentrations (bone diseases, pregnancy, and childhood growth). Levels of both 5NT and ALP are elevated in liver disease, whereas in primary bone disease, the ALP level is elevated but the 5NT level is usually normal or only slightly elevated. This enzyme is much more sensitive to metastatic liver disease than is ALP because, unlike ALP, its level is not significantly elevated in other conditions, such as in pregnancy or during childhood. In addition, some increase in enzyme activity may be noted after abdominal surgery.51, 52, 53, 54

γ-Glutamyltransferase

GGT is a membrane-localized enzyme found in high concentrations in the kidney, liver, pancreas, intestine, and prostate but not in bone. Similar to the clinical utility of 5NT (see earlier), GGT plays a role in differentiating the cause of elevated levels of ALP, as the highest levels of GGT are seen in biliary obstruction. GGT is a hepatic microsomal enzyme; therefore, ingestion of alcohol or certain drugs (barbiturates, tricyclic antidepressants, and anticonvulsants) elevates GGT. It is a sensitive test for cholestasis caused by chronic alcohol or drug ingestion. Measurement of this enzyme is also useful, when jaundice is absent, for the confirmation of hepatic neoplasms.51, 52, 53, 54, 55

Lactate Dehydrogenase

LD is an enzyme with a very wide distribution throughout the body. It is released into circulation when cells of the body are damaged or destroyed, serving as a general, nonspecific marker of cellular injury. Moderate elevations of total serum

LD levels are common in acute viral hepatitis and in cirrhosis, whereas biliary tract disease may produce only slight elevations. High serum levels may be found in metastatic carcinoma of the liver. As a result of its wide distribution, LD measurements provide no additional clinical information above that which is provided by the previously mentioned enzymes. However, fractionation of LD into its five tissue-specific isoenzymes may give useful information about the site of origin of the LD elevation.

Tests Measuring Hepatic Synthetic Ability

A healthy functioning liver is required for the synthesis of serum proteins (except the immunoglobulins). The measurement of serum proteins, therefore, can be used to assess the synthetic ability of the liver. Although these tests are not sensitive to minimal liver damage, they may be useful in quantitating the severity of hepatic dysfunction. A decreased serum albumin may be a result of decreased liver protein synthesis; the albumin level correlates well with the severity of functional impairment and is found more often in chronic rather than in acute liver disease. The serum α1-globulins also tend to decrease with chronic liver disease; a low or absent α1-globulin suggests α1-antitrypsin deficiency as the cause of the chronic liver disease. Serum γ-globulin levels are transiently increased in acute liver disease and remain elevated in chronic liver disease. The highest elevations are found in chronic active hepatitis and postnecrotic cirrhosis. In particular, immunoglobulin G (IgG) and IgM levels are more consistently elevated in chronic active hepatitis; IgM, in primary biliary cirrhosis; and IgA, in alcoholic cirrhosis.

Prothrombin time is commonly increased in liver disease because the liver is unable to manufacture adequate amounts of clotting factors or because the disruption of bile flow results in inadequate absorption of vitamin K from the intestine. However, the prothrombin time is not routinely used to aid in the diagnosis of liver disease. Rather, serial measurements of prothrombin times may be useful in following the progression of disease and in the assessment of the risk of bleeding. A marked prolongation of the prothrombin time indicates severe diffuse liver disease and a poor prognosis.
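The INR mentioned earlier is a normalized form of the prothrombin time ratio. A minimal sketch of the conventional calculation follows; the ISI exponent (the assay-specific international sensitivity index) is part of the standard formula and is assumed here rather than taken from this chapter, and the times shown are hypothetical.

```python
def inr(pt_patient_sec: float, pt_mean_normal_sec: float, isi: float = 1.0) -> float:
    """International normalized ratio: (patient PT / mean normal PT) ** ISI.
    ISI is the reagent- and instrument-specific international sensitivity index."""
    return (pt_patient_sec / pt_mean_normal_sec) ** isi

# Hypothetical patient with a prolonged PT due to impaired hepatic synthesis
print(f"INR = {inr(pt_patient_sec=21.0, pt_mean_normal_sec=12.0, isi=1.1):.1f}")
```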

Tests Measuring Nitrogen Metabolism

The liver plays a major role in removing ammonia from the bloodstream and converting it to urea so that it can be removed by the kidneys. A plasma

ammonia level, therefore, is a reflection of the liver's ability to perform this conversion. In liver failure, ammonia and other toxins increase in the bloodstream and may ultimately cause hepatic coma. In this condition, the patient becomes increasingly disoriented and gradually lapses into unconsciousness. The cause of hepatic coma is not fully known, although ammonia is presumed to play a major role. However, the correlation between blood ammonia levels and the severity of the hepatic coma is poor. Therefore, ammonia levels are most useful when multiple measurements are made over time. The most common laboratory determination of ammonia concentrations is based on the following enzymatic reaction, catalyzed by glutamate dehydrogenase (GLDH):

NH₄⁺ + 2-oxoglutarate + NADPH → glutamate + NADP⁺ + H₂O (Eq. 25-1)

The resulting decrease in absorbance at 340 nm is measured and is proportional to ammonia concentration. The sample of choice is plasma collected in ethylenediaminetetraacetic acid (EDTA), lithium heparin, or potassium oxalate, and the samples should be immediately placed on ice to prevent metabolism of other nitrogenous compounds to ammonia in the sample, leading to false elevations in ammonia. If analysis cannot be performed immediately, the plasma should be removed and placed on ice or frozen. Frozen (−70°C) samples are stable for several days. Hemolyzed samples should be rejected for analysis as red blood cells have a concentration of ammonia two to three times higher than that of plasma.56 Lipemic samples and those with high bilirubin concentrations may be unsuitable for analysis in some systems. The glutaminase activity of GGT is a major contributor to the endogenous production of ammonia; therefore, concentrations may be artifactually increased in samples with raised GGT activity.57
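Since the assay follows the consumption of NADPH, the ammonia concentration can be estimated from the fall in absorbance at 340 nm using the Beer-Lambert relationship. The following Python sketch is illustrative only; the millimolar absorptivity of 6.22 for NADPH at 340 nm and the reaction volumes are standard textbook assumptions, not values specified in this chapter.

```python
# Beer-Lambert estimate of ammonia from the decrease in A340 as NADPH is
# consumed (one mole of NADPH oxidized per mole of ammonia).
MILLIMOLAR_ABSORPTIVITY_NADPH_340 = 6.22   # L * mmol^-1 * cm^-1 (assumed constant)

def ammonia_umol_per_l(delta_a340: float, total_vol_ml: float,
                       sample_vol_ml: float, path_cm: float = 1.0) -> float:
    """Estimate plasma ammonia (umol/L) from the blank-corrected fall in A340."""
    conc_mmol_per_l_in_cuvette = delta_a340 / (MILLIMOLAR_ABSORPTIVITY_NADPH_340 * path_cm)
    dilution = total_vol_ml / sample_vol_ml
    return conc_mmol_per_l_in_cuvette * dilution * 1000.0   # mmol/L -> umol/L

# Hypothetical run: 0.100 mL plasma in a 1.10 mL reaction, delta A340 = 0.030
print(f"Ammonia = {ammonia_umol_per_l(0.030, 1.10, 0.100):.0f} umol/L")
```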

Hepatitis

Hepatitis implies injury to the liver characterized by the presence of inflammation in the liver tissue. Causes of liver inflammation include infectious agents, such as viral, bacterial, and parasitic infections, as well as noninfectious insults, such as radiation, drugs, chemicals, toxins, and autoimmune diseases. Viral infections account for the majority of hepatitis cases observed in the clinical setting. Major hepatitis subtypes include HAV, HBV, HCV, HDV, and HEV. Infections with these viruses can lead to the onset of acute disease with

symptoms, including jaundice, dark urine, fatigue, nausea, vomiting, and abdominal pain. Some subtypes, such as HBV and HCV, can lead to prolonged elevation of serum transaminase levels (longer than 6 months), a condition termed chronic hepatitis. Routes of transmission vary from one viral subtype to another. HAV and HEV infections are typically caused by ingestion of contaminated food or water. HBV, HCV, and HDV infections usually occur as a result of parenteral contact with infected body fluids (e.g., from blood transfusions or invasive medical procedures using contaminated equipment) and sexual contact. Refer to Table 25.3 for a list of the hepatitis viruses.

TABLE 25.3 The Hepatitis Viruses

Hepatitis A

HAV, also known as infectious hepatitis or short incubation hepatitis, is the most common form of viral hepatitis worldwide. It is caused by a nonenveloped RNA virus of the picornavirus family. Tens of millions of HAV infections occur annually, with the most common reported source of infection in the household occurring via contaminated or improperly handled food.58 Because HAV is excreted in bile and shed in the feces, which can contain up to 10⁹ infectious virions per gram, the fecal–oral route is the primary means of HAV transmission.59,60 Patients with HAV infection present with symptoms of fever, malaise, anorexia, nausea, abdominal discomfort, dark urine, and jaundice. Symptoms are generally self-limited and resolve within 3 weeks. However, in rare instances, patients develop fulminant liver failure. Chronic infection with HAV is not found, and there is no evidence of a carrier state or long-term sequelae in humans.58

Clinical markers for the diagnosis and the progression of HAV infection are measured through the presence of serologic antibodies. IgM antibodies to HAV (IgM anti-HAV) are detectable at or prior to the onset of clinical illness and decline within 3 to 6 months, when they become undetectable by commercially available diagnostic tests.61 IgG antibodies to HAV (IgG anti-HAV) appear soon

after IgM, persist for years after infection, and confer lifelong immunity.62 IgM anti-HAV has been used as the primary marker of acute infection.63 The presence of elevated titers of IgG anti-HAV in the absence of IgM indicates a past infection. Another reliable method to detect acute infection in patients is assaying for the presence of viral antigen, which is shed in the feces. However, the antigen is no longer present soon after liver enzymes have reached their peak levels. Another method of detecting HAV infection is amplification of viral RNA by reverse transcription polymerase chain reaction (RT-PCR). Nucleic acid detection techniques are more sensitive than immunoassays for viral antigen to detect HAV in samples of different origins (e.g., clinical specimens, environmental samples, or food). Because of the high proportion of asymptomatic HAV infections, nucleic acid amplification techniques are useful to determine the extent to which unidentified infection occurs.64 The availability of vaccines to provide long-term immunity against HAV infection has the potential to significantly reduce the incidence of disease and possibly eliminate the transmission of this virus worldwide.65, 66, 67, 68 In 2006, following the approval of the HAV vaccine for children in the United States, the U.S. Food and Drug Administration (FDA)/CDC recommended that all children receive the HAV vaccine as early as age 12 to 23 months. The use of this vaccine has significantly reduced the incidence of HAV in the United States and has therefore changed the epidemiology of this infection. In addition to children, the HAV vaccine is recommended for the following people: those traveling to countries where hepatitis A is common, those who are family and caregivers of adoptees from countries where hepatitis A is common, those who have sexual encounters with other men, those who are users of recreational drugs (injected or not), those with chronic or long-term liver disease (including hepatitis B or hepatitis C), and those with clotting factor disorders.69

Hepatitis B

Known as serum hepatitis or long-incubation hepatitis, HBV can cause both acute and chronic hepatitis and is the most ubiquitous of the hepatitis viruses. Two billion individuals are infected globally, and between 350 and 400 million persons are carriers of the virus. In the United States, 12 million people have been infected, with 2 million of those individuals estimated to be chronic carriers of the virus.58 The highest incidence of acute HBV was among adults aged 25 to 45 years.70 HBV is comparatively stable in the environment and remains viable for longer than 7 days on environmental surfaces at room temperature.71 It is

detected in virtually all body fluids, including blood, feces, urine, saliva, semen, tears, and breast milk; the three major routes of transmission are parenteral, perinatal, and sexual. Persons at high risk for infection in the United States include persons who engage in the sharing of body fluids, such as high-risk sexual behaviors (e.g., prostitution and male homosexuality), and the sharing of drug injection needles. Children born to mothers who are hepatitis B surface antigen (HBsAg) positive at the time of delivery, immigrants from endemic areas, and sexual partners and household contacts of patients who have HBV are high-risk groups for HBV infection. Although transmission of HBV by blood transfusion occurs, effective screening tests now make this transmission route rare. Health care workers, including laboratory personnel, may be at increased risk for developing HBV, depending on their degree of exposure to blood and body fluids.72

Serologic Markers of HBV Infection

HBV is a 42-nm DNA virus classified in the Hepadnaviridae family. The liver is the primary site of HBV replication. Following an HBV infection, the core antigen is synthesized in the nuclei of hepatocytes and then passed into the cytoplasm of the liver cell, where it is surrounded by the protein coat. An antigen present in the core of the virus (HBcAg) and a surface antigen present on the surface protein (HBsAg) have been identified by serologic studies. Another antigen, called the e antigen (HBeAg), has also been identified.73

Hepatitis B Surface Antigen

Previously known as the Australia antigen and hepatitis-associated antigen, HBsAg is the antigen for which routine testing is performed on all donated units of blood. HBsAg is a useful serologic marker in patients before the onset of clinical symptoms because it is present during the prodrome of acute HBV. HBsAg itself is not infectious; however, its presence in the serum may indicate the presence of the hepatitis virus. Therefore, persons who chronically carry HBsAg in their serum must be considered potentially infectious because the presence of the intact virus cannot be excluded. HBsAg is the only serologic marker detected during the first 3 to 5 weeks after infection in newly infected patients. The average time from exposure to detection of HBsAg is 30 days (range 6 to 60 days).74, 75, 76 Highly sensitive single-sample nucleic acid tests can detect HBV DNA in the serum of an infected person 10 to 20 days before detection of

HBsAg.76 HBsAg positivity has been reported for up to 18 days after HBV vaccination and is clinically insignificant.77,78 Patients who achieve complete viral clearance develop the antibody to the HBsAg, following the disappearance of the HBsAg (Fig. 25.8). The presence of anti-HBs antibody in patients is frequently observed in the general population, suggestive of past infection. Patients who have developed the antibody to the HBsAg are not susceptible to future reinfection with HBV.72

FIGURE 25.8 Serology of hepatitis B infection with recovery.

Hepatitis B Core Antigen

HBcAg has not been demonstrated to be present in the plasma of patients with hepatitis or of blood donors. This antigen is present only in the nuclei of hepatocytes during an acute infection with HBV. The antibody to the core antigen, anti-HBc, usually develops earlier in the course of infection than the antibody to the surface antigen (Fig. 25.8). A test for the IgM antibody to HBcAg was developed as a serologic marker for clinical use. The presence of this IgM antibody is specific for acute HBV infection. In patients who have chronic HBV infection, the IgM anti-HBc antibody titer can persist during chronic viral replication at low levels that typically are not detectable by assays used in the United States. However, persons with exacerbation of chronic infection can test positive for IgM anti-HBc.78 Another marker for acute infection is a viral DNA-dependent DNA polymerase that is closely associated with the presence of the core antigen.

This viral enzyme is required for viral replication and is detectable in serum early in the course of viral hepatitis, during the phase of active viral replication.79

CASE STUDY 25.3

The following laboratory results were obtained from a 19-year-old college student who consulted the Student Health Service because of fatigue and lack of appetite. She adds that she recently noted that her sclera appears somewhat yellowish and that her urine has become dark (Case Study Table 25.3.1).

CASE STUDY TABLE 25.3.1 Laboratory Results

Questions

1. What is the most likely diagnosis?
2. What additional factors in the patient's history should be sought?
3. What is the prognosis?

Hepatitis B e Antigen

The e antigen, an antigen closely associated with the core of the viral particle, is detected in the serum of persons with acute or chronic HBV infection. The presence of the e antigen appears to correlate well with both the number of infectious virus particles and the degree of infectivity of HBsAg-positive sera. The presence of HBeAg in HBsAg carriers is an unfavorable prognostic sign and predicts a severe course and chronic liver disease. Conversely, the presence of anti-HBe antibody in carriers indicates a low infectivity of the serum (Fig. 25.9). The e antigen is detected in serum only when surface antigen is present (Fig. 25.10).

FIGURE 25.9 No antibody is formed against HBsAg. The persistence of HBeAg implies high infectivity and a generally poor prognosis. This patient would likely develop cirrhosis unless seroconversion occurs or treatment is given.

FIGURE 25.10 Serology of chronic hepatitis with formation of antibody to HBeAg. This is a favorable sign and suggests that the chronic hepatitis may resolve. Complete recovery would be heralded by the disappearance of HBsAg and formation of its corresponding antibody.

The serologic markers of HBV infection typically used to differentiate among acute, resolving, and chronic infections are HBsAg, anti-HBc, and anti-HBs (Table 25.4). Persons who recover from natural infection typically will be positive for both anti-HBs and anti-HBc, whereas persons who respond to HBV vaccine have only anti-HBs. Persons who become chronically infected fail to develop antibody to the HBsAg, resulting in the persistent presence of HBsAg as well as the presence of anti-HBc in patient serum, typically for life.80, 81, 82, 83 HBeAg and anti-HBe screenings typically are used for the management of patients with chronic infection. Serologic assays are available commercially for all markers except HBcAg because no free HBcAg circulates in blood.70

TABLE 25.4 Typical Interpretation of Serologic Test Results for Hepatitis B Virus Infection

a Hepatitis B surface antigen.
b Antibody to hepatitis B core antigen.
c Immunoglobulin M.
d Antibody to HBsAg.
e Negative test result.
f Positive test result.
g To ensure that an HBsAg-positive test result is not a false positive, samples with reactive HBsAg results should be tested with a licensed neutralizing confirmatory test if recommended in the manufacturer's package insert.
h Persons positive only for anti-HBc are unlikely to be infectious except under unusual circumstances in which they are the source for direct percutaneous exposure of susceptible recipients to large quantities of virus (e.g., blood transfusion or organ transplant).
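The typical patterns summarized in Table 25.4 and the surrounding text can be expressed as a simple decision rule. The Python sketch below encodes only those typical interpretations (recovery, vaccination, chronic infection, and so on); real-world interpretation also depends on IgM anti-HBc, HBeAg, timing, and confirmatory testing, so this is an illustration rather than a reporting algorithm.

```python
def interpret_hbv_serology(hbsag: bool, anti_hbc: bool, anti_hbs: bool) -> str:
    """Typical-pattern interpretation of the three screening markers only."""
    if hbsag and anti_hbc and not anti_hbs:
        return "Infection (acute vs. chronic: check IgM anti-HBc and duration)"
    if not hbsag and anti_hbc and anti_hbs:
        return "Resolved natural infection (immune)"
    if not hbsag and not anti_hbc and anti_hbs:
        return "Immune due to vaccination"
    if not hbsag and anti_hbc and not anti_hbs:
        return "Isolated anti-HBc: resolved, low-level chronic, or false positive"
    if not any((hbsag, anti_hbc, anti_hbs)):
        return "Susceptible (no serologic evidence of infection or immunity)"
    return "Unusual pattern: confirmatory testing recommended"

print(interpret_hbv_serology(hbsag=True, anti_hbc=True, anti_hbs=False))
```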

Nucleic acid hybridization or PCR techniques are used to detect HBV DNA in the blood and provide another method to measure disease progression. These techniques provide a more sensitive measurement of infectivity and disease progression than serology. They may be used to monitor the effectiveness of antiviral therapy in patients with chronic HBV infection, but they supplement rather than replace current HBV serologic assays.84, 85, 86

Chronic Infection with HBV

Approximately 90% of patients infected with HBV recover within 6 months. Recovery is accompanied by the development of the antibody to the HBsAg. However, about 10% of patients progress to a chronic hepatitis infection. The likelihood of developing chronic HBV infection is higher in individuals infected

perinatally (90%) and during childhood (20% to 30%), when the immune system is thought to be less developed and unable to achieve efficient viral clearance, than in adult immunocompetent subjects (1 mm) in the ST segment of the ECG is strongly suggestive of acute ischemia.8 Less specific ST-segment changes or T-wave

abnormalities can also be helpful in risk-stratifying patients.8 A normal ECG, however, does not rule out the presence of ACS. Chest radiography can also assist in the evaluation of a patient with chest pain by identifying a noncardiac source of the pain (e.g., pneumonia, pneumothorax, and aortic dissection) or sequelae of underlying cardiac causes (e.g., pulmonary edema caused by cardiac dysfunction).8 The use of biomarkers is critical in the evaluation of acute chest pain to determine whether angina (chest pain) represents an MI (heart attack). Universal definitions of MI were published in 2008 outlining the consensus reached by the joint European Society of Cardiology/ACC/AHA/World Health Federation Task Force (see Table 26.2). These guidelines emphasized the use of the term MI only in the setting of myocardial ischemia and not as a result of any other cause9 and designated several types of MI (Table 26.3). The 2014 ACC/AHA guidelines emphasize cardiac troponins as the most sensitive and specific biomarkers for non–ST-segment elevation ACS.8

TABLE 26.2 Universal Definition of MI Released by the Joint European Society of Cardiology/American College of Cardiology/American Heart Association/World Health Federation Task Force (2007)9

ECG, electrocardiogram; PCI, percutaneous coronary intervention (also known as “coronary angioplasty” or “angioplasty”); CABG, coronary artery bypass graft (also known as “bypass surgery”); URL, upper reference limit.

TABLE 26.3 Classification of MI According to the Universal Definition of Myocardial Infarction Released by the Joint European Society of Cardiology/American College of Cardiology/American Heart

Association/World Health Federation Task Force (2007)9

MI, myocardial infarction; LBBB, left bundle branch block; PCI, percutaneous coronary intervention (also known as “coronary angioplasty” or “angioplasty”); CABG, coronary artery bypass graft (also known as “bypass surgery”).

CASE STUDY 26.1

A 15-month-old girl with a heart murmur since birth was evaluated for repeated pulmonary infections, failure to grow, cyanosis, and mild clubbing of fingers and toes. She had been placed on digitalis therapy by the referring physician. The radiograph showed a moderately enlarged heart and an enlarged pulmonary artery. Pertinent laboratory data were obtained.

A cardiac catheterization was performed, and a large ventricular septal defect was found.

Questions

1. How does this congenital defect affect the body's circulation?
2. Why are the red cell measurements increased in this patient?
3. What treatment will be suggested for this patient?
4. What is this patient's prognosis?

THE PATHOPHYSIOLOGY OF ATHEROSCLEROSIS, THE DISEASE PROCESS UNDERLYING MI

Atherosclerosis is a chronic disease process that occurs over a number of years and contributes to approximately 50% of all deaths in modern Western societies.10 Evidence of atherosclerosis can often be found in the human aorta before the age of 10,11 but atherosclerosis becomes pathologic with the development of atherosclerotic plaques (atheromas), which predispose the vasculature to thrombosis, leading to organ ischemia and infarction (Fig. 26.1). The pathophysiology of atherosclerosis is gradual and complicated, involving a progressive accumulation of lipids, smooth muscle cells, macrophages, and connective tissue within the intima of large- and medium-sized arteries, ultimately causing luminal narrowing and decreased perfusion (Fig. 26.1). Although the exact etiology of atherosclerosis remains unclear, the reaction to injury hypothesis is strongly favored by current evidence, proposing that atherosclerosis is due to a chronic inflammatory response to an accumulation of subtle vascular wall injuries.12

FIGURE 26.1 Comparison of normal and atherosclerotic arteries. Narrowing of the arterial lumen due to atherosclerotic plaque leads to abnormal blood flow, contributing to progression of atherosclerosis. (Adapted from NHLBI Atherosclerosis homepage, http://www.nhlbi.nih.gov/health/healthtopics/topics/atherosclerosis/)

Pathohistological evidence from human and experimental studies demonstrates that endothelial and inflammatory cells interact with chemical and inflammatory mediators to promote the development of atherosclerotic plaques. This process begins with vascular injury, which is initiated when endothelial cells are damaged or rendered dysfunctional by vascular abnormalities, such as turbulent blood flow, hyperlipidemia, and hyperhomocysteinemia. A damaged vascular endothelium has an increased permeability to circulating lipids, so high cholesterol (hyperlipidemia) in the setting of a damaged endothelium favors the accumulation of lipoproteins, predominantly low-density lipoprotein (LDL) and very-low-density lipoprotein (VLDL), within the arterial intima (Fig. 26.1).13 In addition, apoB-containing LDL has high affinity for arterial wall proteoglycans.14 Retention of lipids in arterial walls due to hypercholesterolemia and/or elevated levels of apoB-containing lipoproteins is thus a crucial step in the pathogenesis of atherosclerotic lesions. The central role of cholesterol accumulation (i.e., LDL and VLDL) in the progression of atherosclerosis is the reason treating high cholesterol is such a priority in preventing and attenuating

heart disease. Once lesion initiation has begun, LDL deposited within the intima is oxidized by endothelial cells, lipoxygenase, and free radicals generated by the auto-oxidation of homocysteine.11 The formation of oxidized LDL is central to lesion progression. Oxidized LDL is toxic to endothelial cells, causing additional intimal damage and subsequent retention of cholesterol-rich lipoproteins, and it elicits an inflammatory response that releases proinflammatory cytokines and recruits inflammatory cells to the early lesion. Among the earliest recruited leukocytes are neutrophils and monocytes, which play critical roles in early atherogenesis by maintaining a proinflammatory state around the initial lesion (Fig. 26.2).10

FIGURE 26.2 LDL deposition and oxidation within the vessel wall leads to monocyte recruitment and differentiation into activated macrophages that phagocytose oxidized LDL (oxLDL) to become foam cells. Foam cells release HDL and proinflammatory mediators, and their rupture contributes to lesion progression. (Adapted from Glass CK, Witztum JL. Atherosclerosis: the road ahead. Cell. 2001;104(4):503–516, Ref.10.)

Maturation of monocytes into activated macrophages is a key step in lesion progression. Macrophage scavenger receptors recognize oxidized LDL, but not native LDL, and activated macrophages rapidly phagocytose cholesterol-rich lipoproteins that have been oxidized within the vessel wall (Fig. 26.2).11 Excessive uptake of oxidized LDL transforms macrophages into bloated, cholesterol-filled cells called foam cells.15 Filled with cytoplasmic lipid droplets, foam cells exhibit a variety of functions that both promote lesion progression, including the production of proinflammatory signals, and counter lesion progression, such as the secretion of HDL (Fig. 26.2). Foam cells also display numerous metabolic abnormalities, including activation of inflammasomes and the NF-kB pathway, and it is generally agreed that foam cells play a harmful role in lesion progression.15 In addition, rupture of foam cells and release of their contents cause further damage to the vascular endothelium, stimulating more inflammation. Intimal deposition and subsequent oxidation of LDL and its effects on monocyte differentiation are thus of key importance in the early development of an atherosclerotic lesion. As the cycle of endothelial cell damage progresses, additional cell types are recruited to the plaque, in particular T and B lymphocytes and macrophages (Fig. 26.3). These cells are activated by a stream of cytokines, such as interleukin (IL)-1 and tumor necrosis factor-α, that are released by endothelial cells within the plaque.16 Interactions between T cells and foam cells promote a chronic inflammatory state and help recruit smooth muscle cells into the intima (Fig. 26.3).10 Additionally, growth factors, such as platelet-derived growth factor, fibroblast growth factor, and tissue growth factor-α, are released by lymphocytes and endothelial cells, further stimulating smooth muscle cell migration and activation.16

FIGURE 26.3 T cells and foam cells maintain a proinflammatory state that induces the migration of smooth muscle cells into the intima, where they secrete collagen, proteoglycans, and fibrin that form a fibrous cap around the atheroma. (Adapted from Glass CK, Witztum JL. Atherosclerosis: the road ahead. Cell. 2001;104(4):503–516, Ref.10.)

Once smooth muscle cells migrate into the core of the atheroma, they proliferate and deposit extracellular matrix components that give stability and strength to the plaque.10 Cytokines released from T cells, such as interferon-γ, exert a number of both pro- and antiatherogenic effects on macrophages and smooth muscle cells, but the net effect of this cytokine release is proinflammatory and proatherogenic. Smooth muscle cells are induced to secrete collagen, elastin, and proteoglycans that form a fibrous backbone and outer shell that fix the plaque firmly in the vessel wall (Fig. 26.3). As the atheroma grows, its core becomes increasingly isolated from surrounding blood supply, leading to hypoxia that stimulates release of proangiogenic cytokines. This causes aberrant neovascularization around the periphery of the plaque, which predisposes to hemorrhage and deposition of erythrocyte components, such as hemosiderin and lipids, within the plaque core.17 Microvasculature hemorrhage and subsequent

deposition of additional debris provokes further inflammation, leukocyte recruitment, and remodeling of the plaque. The steps described above—beginning with lipid and cellular infiltration following vascular injury and progressing to chronic inflammation and fibrosis —continue in a feed-forward cycle that ultimately culminates in complete vessel occlusion, thrombosis, plaque rupture, or some combination of the three. Each of these possible outcomes predisposes to organ ischemia. In the context of the heart, coronary artery ischemia and resultant hypoxia pose an immediate risk of cellular injury due to the high metabolic activity and oxygen demand of the myocardium. The ultimate outcome of such injury is ischemic heart disease, varying in severity from angina to MI, the symptoms of which are dependent on the extent of coronary artery atherosclerosis. The most common locations for symptomatic cardiac atheroma formation are the proximal left anterior descending, the proximal left main coronary, and the entire right coronary arteries. Coronary atherosclerosis typically becomes symptomatic only after atherosclerotic plaques obstruct approximately 75% (or >50% in the left main) of the cross-sectional area of the vessel.10

MARKERS OF CARDIAC DAMAGE

Initial Markers of Cardiac Damage

The first biochemical markers of cardiac damage were discovered in 1954 when Karmen et al.18 hypothesized that “destruction of cardiac muscle, reported rich in transaminase activity, might result in a release of this enzyme into the blood stream and might thus increase the serum transaminase activity.” Today, serum biomarkers have become the centerpiece of evaluation and management of patients presenting with chest pain. The underlying principle behind serum biomarkers of cardiac damage relies on the fact that cell death releases intracellular proteins from the myocardium into the circulation. Detection of cardiac proteins in plasma provides insight into the occurrence, extent, and timing of MI, all of which are critical for proper medical management.

The first cardiac markers to be used extensively in clinical practice were glutamic oxaloacetic transaminase, lactic dehydrogenase, and malic dehydrogenase. The first, glutamic oxaloacetic transaminase, now known as aspartate transaminase (AST), became widely used in the diagnosis of MI shortly after its discovery as a serum marker by Ladue et al. in 1954.19 But

because of the high false-negative rate of AST, the labor-intensive nature of the AST assay, and the short window of AST elevation, AST was soon replaced by lactate dehydrogenase (LD) as the marker of choice.20 Compared with AST, LD was found to be a more sensitive marker of MI that remains elevated for significantly longer post-MI, up to 2 weeks. But LD is involved in NADH-dependent reactions in the glycolytic pathway, which takes place in nearly all cells in the body, and its specificity for cardiac muscle is thus quite low. Early researchers observed that serum LD levels were elevated in other conditions such as cancer and anemia.20 These pitfalls were overcome to some extent through assays that differentiate between LD of cardiac and noncardiac origin.21 Specifically, five different isoforms of LD (LD1 to LD5) are found in human plasma,22 which correspond to the organ from which the enzyme originates. LD1 is most abundant in the myocardium, and LD5 is expressed mainly in the skeletal muscle and liver. Therefore, the diagnosis of MI was made by comparing plasma LD1 and LD2 levels. Because LD1 is specific to the myocardium, plasma normally contains greater levels of LD2 than LD1; however, damage to the myocardium and subsequent release of LD1 can cause plasma LD1 levels to surpass LD2.23 Early studies demonstrated that the plasma LD1:LD2 ratio exceeds approximately 0.75 by 24 to 48 hours past the onset of symptoms of MI and remains elevated for up to 2 weeks.23 The LD1:LD2 ratio was thus of great utility as an early biomarker in the management of patients presenting several days after possible MI.

Creatine kinase (CK) was the next marker to gain favor. Like LD, CK is found in nearly all cells in the body, but unlike LD, CK catalyzes a reaction important for high-energy phosphate production (the conversion of creatine to creatine phosphate), which is greatly upregulated in muscle cells and brain. High levels of CK are thus found in all muscle cells, in particular striated muscle. Damage to the muscle or brain results in increases in plasma CK that can be readily detected with high sensitivity shortly after muscle injury. After the discovery in 1960 of elevated levels of plasma CK post-MI,24 CK became an important marker of cardiac damage. In a typical patient with acute MI, serum CK levels were found to exceed the normal range within 6 to 8 hours, to reach a peak of two- to tenfold normal by 24 hours, and then decline to the normal range after 3 to 4 days (Fig. 26.4).26 A CK plasma concentration greater than two times normal was shown to correlate with MI,25,27 and an elevation in serum CK, together with elevated AST and LD, was used as the primary means of enzymatic detection of MI for many years.
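A minimal sketch of the historical LD isoenzyme comparison described above, using the approximately 0.75 ratio mentioned in the text as the decision point; the isoenzyme activities shown are hypothetical.

```python
def ld1_ld2_ratio(ld1_u_per_l: float, ld2_u_per_l: float) -> float:
    """LD1:LD2 ratio; plasma normally contains more LD2 than LD1, so a ratio
    exceeding about 0.75 within 24-48 h of symptom onset was used historically
    to support a diagnosis of MI."""
    return ld1_u_per_l / ld2_u_per_l

ratio = ld1_ld2_ratio(ld1_u_per_l=120, ld2_u_per_l=110)   # ~1.09, above the ~0.75 cutoff
print(f"LD1:LD2 = {ratio:.2f}")
```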

FIGURE 26.4 Temporal elevations of important markers of myocardial damage post-MI. ULRR, upper limit of reference range. (Adapted from French JK, White HD. Clinical implications of the new definition of myocardial infarction. Heart. 2004;90(1):99–106, Ref.25.)

Unfortunately, the ubiquitous expression of CK in all striated muscle was a problem for the specificity of CK as a marker for MI, in spite of its high sensitivity. CK elevations were readily detected in conditions such as stroke, pulmonary disease, and chronic alcoholism and after strenuous exercise.28 This obstacle was initially overcome through the discovery that CK exists in three cytoplasmic isoenzymes. The cytoplasmic isoenzymes are dimers composed of combinations of M and B subunits (where M is for muscle and B is for brain). Dimers of two M subunits are called CK-MM, dimers of two B subunits CK-BB, and the hybrid of one M and one B subunit creatine kinase MB (CK-MB). Elevated plasma levels of CK-MM or CK-BB can be found after injury to the muscle or brain, respectively. After it was found that 15% to 30% of CK in the myocardium is MB, compared with 1% to 3% in normal striated muscle, detection of elevated CK-MB was shown to be highly specific to myocardial damage.29 Additionally, elevations in serum CK-MB could be detected at 4 to 6 hours after the onset of MI symptoms, significantly less than the 24 to 48 hours necessary to detect peak LD plasma levels (Fig. 26.4). Like its rapid rise post-MI, CK-MB levels quickly drop to baseline levels by 2 to 4 days post-MI, compared with 10 to 14 days for LD.30 Because of its higher specificity and its rapid elevation post-MI, CK-MB was long considered the most reliable serum marker of MI and is still widely used today.31,32
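The complementary time courses of the classic markers (summarized in Fig. 26.4) can be expressed as rough detection windows. The Python sketch below uses the windows stated in the text for CK, CK-MB, and LD; the exact hour boundaries chosen are simplifications for illustration.

```python
# Approximate post-MI detection windows (hours), per the time courses described above.
DETECTION_WINDOWS_H = {
    "CK":     (7, 4 * 24),     # rises ~6-8 h, returns to normal by ~3-4 days
    "CK-MB":  (5, 4 * 24),     # rises ~4-6 h, back to baseline by ~2-4 days
    "LD":     (36, 14 * 24),   # detectable ~24-48 h, elevated for up to ~2 weeks
}

def markers_expected_elevated(hours_since_onset: float) -> list[str]:
    """Which classic markers would still be expected above baseline."""
    return [m for m, (start, end) in DETECTION_WINDOWS_H.items()
            if start <= hours_since_onset <= end]

print(markers_expected_elevated(12))      # ['CK', 'CK-MB']
print(markers_expected_elevated(72))      # ['CK', 'CK-MB', 'LD']
print(markers_expected_elevated(7 * 24))  # ['LD']
```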

Use of these three markers—AST, LD, and CK—was the cornerstone of post-MI management for several decades. Each had its individual applications: CK for the early presentation, LD for the late presentation, and AST for the intermediate.33 But their lack of specificity to the myocardium and to myocardial injury fueled the search for better markers. Extensive research in the 1970s led to the discovery that muscle cells express structural and regulatory proteins, such as troponin and tropomyosin, in a pattern that is tissue specific.32 The subsequent development of rapid and sensitive techniques for detection of the tissue-specific forms of troponin quickly made troponin a promising marker of cardiac injury.34
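Because each of these classic markers is informative only within the time window described above (CK-MB from roughly 4 to 6 hours until 2 to 4 days, CK from about 6 to 8 hours until 3 to 4 days, and the LD1:LD2 ratio from 24 to 48 hours for up to 2 weeks), the choice among them was essentially a lookup against time since symptom onset. A minimal sketch of that idea, using the approximate windows quoted in the text and invented names, is shown below.

```python
# Illustrative only: approximate detection windows (in hours after symptom onset) for
# the classic enzymatic markers, taken from the ranges quoted in the text.
CLASSIC_MARKER_WINDOWS_H = {
    "CK":      (6, 4 * 24),    # abnormal from ~6-8 h, back to normal by ~3-4 days
    "CK-MB":   (4, 4 * 24),    # detectable from ~4-6 h, baseline by ~2-4 days
    "LD1:LD2": (24, 14 * 24),  # flipped ratio from ~24-48 h, elevated up to ~2 weeks
}

def informative_markers(hours_since_onset):
    """List the classic markers whose detection window includes the given time."""
    return [name for name, (start, end) in CLASSIC_MARKER_WINDOWS_H.items()
            if start <= hours_since_onset <= end]

print(informative_markers(5))    # ['CK-MB']                    early presentation
print(informative_markers(72))   # ['CK', 'CK-MB', 'LD1:LD2']   intermediate
print(informative_markers(240))  # ['LD1:LD2']                  late presentation
```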

Cardiac Troponins

Troponin is a complex of three proteins that regulate the calcium-dependent interactions of myosin heads with actin filaments during striated muscle contraction. Troponin T (TnT) binds the troponin complex to tropomyosin, troponin I (TnI) inhibits the binding of actin and myosin, and troponin C (TnC) binds to calcium to reverse the inhibitory activity of TnI.35 The troponin complex is responsible for transmitting the calcium signal that triggers muscle contraction (Fig. 26.5). Several properties of troponin made it attractive as a marker of myocardial damage.

FIGURE 26.5 Regulatory proteins in the contractile apparatus of striated muscle. Striated muscle is composed of bundles of myofibril-containing fibers, made up of repeating units of cross-striations called sarcomeres (A). The sarcomere is composed of thick and thin myofilaments, which tether the Z disk to the M line (B). Thick filaments are composed primarily of large molecules of myosin, which have a rodlike tail region and a protruding head region (C). Thin filaments are composed of actin, which plays a structural role, and troponin and tropomyosin, which line actin filaments and regulate calcium-dependent interactions with myosin on thick filaments to stimulate contraction.

In contrast to other cardiac markers, troponins were found to have tissue-specific isoforms that could be used to detect damage to the heart. Although the same isoform of TnC is expressed in slow-twitch and cardiac muscle, unique isoforms of TnI and TnT are expressed in fast-twitch (type 2), slow-twitch (type 1), and cardiac muscle.35 Each TnI, TnT, and TnC isoform is encoded by a

separate gene and expressed in a muscle-type–specific manner.36 This allows for highly specific detection of troponin of cardiac origin (e.g., cardiac TnI [cTnI]). Further increasing the sensitivity of troponin detection was the demonstration that cardiac troponins are very tightly complexed to the contractile apparatus, such that circulating levels of cardiac troponins are normally extremely low.35 Early studies demonstrated that normal levels of circulating cTnI are below 10 ng/mL, but patients with acute MI can have serum cTnI levels well over 100 ng/mL,37 demonstrating the specificity of cardiac troponins as indicators of myocardial cell damage. The development of sensitive detection methods was the next step toward integrating cardiac troponins into the diagnostic workup of MI. Because the skeletal and cardiac isoforms of TnT and TnI contain differing amino acid sequences, monoclonal antibodies can be raised against cardiac-specific epitopes. This allowed for the development of sensitive immunoassays for cardiac troponins that have a far greater specificity than other markers. For example, detection of cTnI was specific for cardiac injury even in acute muscle disease, chronic muscle disease, chronic renal failure, and following intense exercise, such as running a marathon.38 Additionally, measurement of plasma troponin levels allows for the detection of myocardial cell injury in syndromes, such as acute ischemic syndrome or myocarditis, that is undetectable using other markers.39 Finally, troponins offer additional advantages due to the timing of their release. CK-MB levels elevate rapidly post-MI (4 to 6 hours) but return to baseline after 2 to 4 days; CK-MB can thus only be used within a short window of time after a suspected MI. Conversely, LD levels remain elevated for up to 1 week but are not detectable until 24 to 48 hours post-MI (Fig. 26.4). Cardiac troponins, on the other hand, are detectable in the plasma at 3 to 12 hours after myocardial injury, peaking at 12 to 24 hours and remaining elevated for more than 1 week: 8 to 21 days for TnT and 7 to 14 days for TnI (Fig. 26.4).39 Cardiac troponins thus offer the widest window for detection post-MI and with the highest sensitivity and specificity. Taken together, the characteristics of specific, sensitive, and rapid detection propelled cardiac troponins to the center of MI diagnosis, where they still remain. Cardiac troponin (I or T) is currently the preferred biomarker for myocardial necrosis, recognized to have nearly absolute myocardial tissue specificity as well as high clinical sensitivity, detecting even minor cardiac damage (as utilized in the universal definition of MI).9 An elevated plasma

cardiac troponin level is defined as a measurement greater than the 99th percentile of a normal reference value range, specified as the upper limit of the reference range (Fig. 26.4). Current guidelines recommend drawing blood for the measurement of troponin as soon as possible after onset of symptoms of a possible MI (universal definition of MI).9 If troponin assays are not available, the next best alternative is CK-MB, and measurement of total CK is no longer recommended for the diagnosis of MI for the reasons outlined above (universal definition of MI).9
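The 99th-percentile decision limit described here is, arithmetically, just a percentile of a healthy reference distribution. The toy sketch below shows the calculation; in practice the upper reference limit is established from large, assay-specific reference studies, and the cohort values and patient result here are invented.

```python
# Toy example of the decision limit described in the text: a troponin result above the
# 99th percentile of a healthy reference population is considered elevated. The cohort
# values (ng/L) and the patient result are invented.
import numpy as np

reference_results = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 5.2, 3.1, 2.2, 6.8, 3.7])
upper_reference_limit = np.percentile(reference_results, 99)  # 99th percentile URL

def troponin_elevated(result, url=upper_reference_limit):
    """Flag a cardiac troponin result that exceeds the 99th-percentile URL."""
    return result > url

print(round(upper_reference_limit, 2))  # URL of the toy cohort
print(troponin_elevated(25.0))          # True for a clearly elevated result
```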

CK-MB and Troponin I/Troponin T Considerations in Kidney Disease Patients
Prognosis in chronic kidney disease. Increases in cTnI/cTnT are associated with worse short-term cardiac outcomes in patients with chronic kidney disease diagnosed with acute MI.40, 41, 42 Stably increased serum troponin levels predict worse long-term cardiovascular outcomes and worse survival in asymptomatic CKD patients in the absence of acute MI.43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54
Dialysis patients. In a recent meta-analysis of 11 studies, elevated cTnT was associated with an increase in all-cause mortality (95% confidence interval [CI] 2.4 to 43).42 Few of these studies used the high-sensitivity assay; cTnT measured with a high-sensitivity assay has also been reported to be an independent predictor of all-cause mortality in dialysis patients.55,56
Effect of hemodialysis. In severe kidney failure, dialysis is commonly required. Hemodialysis filters the blood to remove accumulated toxins and can alter the concentrations of cardiac markers, although studies have yielded conflicting results. In one report, dialysis minimally changed CK-MB and did not change cTnI concentrations.57 In another, cTnI and cTnT levels decreased 27% to 37% following dialysis with a high-flux membrane (but not a low-flux membrane).58 In a third, cTnT rose after dialysis in patients with CVD, suggesting that the rise in cTnT resulted from cardiac injury.59 While the cutoffs for the diagnosis of AMI should be the same for dialysis patients and nondialysis chronic kidney disease patients, the considerations above are important given the growing number of patients surviving with chronic kidney disease.

Other Markers of Cardiac Damage

In spite of the nearly universal use of troponin and CK-MB in diagnosing cardiac damage, it is worthwhile mentioning additional markers of cardiac damage that have been used in the past and/or proposed experimentally. These include myoglobin, heart-type fatty acid–binding protein (H-FABP), ischemia-modified albumin (IMA), myeloperoxidase (MPO), C-reactive protein (CRP), myeloid-related protein (MRP)-8/14, and pregnancy-associated plasma protein A (PAPP-A). While their clinical utility has not been universally established, they are frequently discussed as potential alternatives, so they are described briefly here.

CASE STUDY 26.2 A 59-year-old woman came to the emergency department of a local hospital complaining of pain and a feeling of heaviness in her abdomen for several days. She reported no weakness, chest pain, or left arm pain. She has no chronic health problems except for seasonal allergies and slightly elevated total cholesterol.

Questions
1. Do the symptoms and personal history of this patient suggest acute MI?
2. Based on the preceding laboratory data, would this diagnosis be acute MI?
3. Why or why not?

Myoglobin

Myoglobin is an iron- and oxygen-binding protein found exclusively in muscle and normally absent from the circulation. Myoglobin is the pigment that gives muscle its characteristic red color; the exact color depends on the oxidation state of the iron atom in myoglobin and the amount of oxygen attached to it. Myoglobin is a small protein that is released quickly when muscle is damaged (see Fig. 26.4 for its serum release pattern).60 It has a very short half-life of 9 minutes.61 Because myoglobin is released so quickly, it has been proposed as an adjunct to troponin or CK-MB in the early diagnosis of MI.62 There are limitations to using myoglobin as a marker of cardiac damage: it is not specific for the heart, being elevated with any cause of skeletal muscle damage, and it is cleared in an irregular pattern.63 Recent studies have also shown that high-sensitivity troponin assays can detect elevated troponin levels prior to elevations in myoglobin,64, 65, 66 making myoglobin's use less attractive.

Heart-Type Fatty Acid–Binding Protein

H-FABP is a small protein that behaves much like myoglobin in its release kinetics. It is found in all muscles, but in contrast to myoglobin, H-FABP is relatively more abundant in the heart.67 The utility of H-FABP has been described for the early diagnosis of acute MI; its sensitivity for the diagnosis of an acute MI has been reported to be higher than that of cardiac TnT (cTnT) or myoglobin.68 However, cTnT has a higher specificity than H-FABP (94% vs. 52%), indicating that elevated H-FABP is also found in patients without disease.68 Recent studies have confirmed the utility of using both H-FABP and cTnI for the early diagnosis of MI/ACS.69 Using both markers may also provide a valuable rule-out test for patients presenting between 3 and 6 hours after chest pain,69 though H-FABP is not used in the routine evaluation of chest pain.

Ischemia-Modified Albumin

In contrast to the markers described above, the IMA test does not detect myocardial tissue damage. Instead, it measures changes that occur in albumin in the presence of ischemia, giving it the potential advantage of detecting ischemia before damage to the heart has occurred. The free radical formation that occurs during tissue ischemia is believed to modify albumin within minutes, and the modification lasts for about 6 hours.70, 71, 72, 73 These modifications alter albumin's ability to bind transition metals, such as cobalt, at its N-terminal domain.72 The amount of IMA is measured by spectrophotometric determination of albumin's binding to cobalt.70 While IMA is not specific for cardiac damage, it does have a clinical sensitivity for ACS of 50% to 90%.74,75 Additional studies will be necessary to determine the utility of IMA in the early diagnosis of ACS; its use would complement, not replace, troponin and CK-MB detection.

Novel Markers of Plaque Instability: MPO, CRP, MRP-8/14, and PAPP-A

MPO, CRP, MRP-8/14, and PAPP-A have been studied in international multicenter studies for their utility in the early diagnosis and risk stratification of acute MI in emergency department patients with acute chest pain.76 The concentrations of all four markers were significantly higher in patients with acute MI than in patients with other diagnoses,76 but all four were inferior to troponins in detecting acute MI.76 MPO, MRP-8/14, and CRP were able to predict all-cause mortality and provided prognostic information independent of clinical risk factors and high-sensitivity cTnT; PAPP-A did not.76 Interestingly, these plaque instability markers did not correlate with each other or with myocardial cell death as determined by troponin.76 The exact mechanisms leading to plaque instability, fissure, and subsequent rupture are not fully understood, but all four markers are expressed in atherosclerotic plaques and contribute to vessel inflammation.77, 78, 79, 80

CASE STUDY 26.3 An 83-year-old man with known severe coronary artery disease, diffuse small vessel disease, and significant stenosis distal to a vein graft from previous CABG surgery was admitted when his physician referred him to the hospital after a routine office visit. His symptoms included 3+ pedal edema, jugular vein distention, and heart sound abnormalities. Significant laboratory data obtained on admission were as follows:

Questions
1. Do the symptoms of this patient suggest acute MI?
2. Based on the preceding laboratory data, would this diagnosis be acute MI? Why or why not?
3. Based on the preceding laboratory data, are there other organ system abnormalities present?
4. What are the indicators of these organ system abnormalities?
5. Is there a specific laboratory test that might indicate congestive heart failure in this patient?

CARDIAC INJURY OCCURS IN MANY DISEASE PROCESSES, BEYOND MI

The damage that occurs in the context of MI is due to cardiac muscle cell (myocyte) death. A spectrum of cell death occurs in myocytes, ranging from apoptosis to necrosis to a mixture of the two. Apoptotic cell death is characterized by decreasing cell size, membrane blebbing, nuclear aggregation of chromosomal DNA, and purposeful degradation of DNA. The resulting apoptotic bodies are recognized by macrophages, which remove the debris in an effort to minimize the immune response.81 Apoptosis requires energy, so as ischemia progresses and energy supply fails to meet demand, which occurs rapidly in the heart, the apoptotic process stalls and necrosis occurs. Thus, a rapid clinical response to myocardial ischemia is essential to limiting the extent of necrosis and is a central tenet of treating MI. The faster cardiac interventions are performed to restore perfusion (along with oxygen) to the heart, the greater the reduction in infarct size. The strategies for reperfusion include percutaneous interventions using intravascular balloons and stents to clear coronary artery blockages and prevent their recurrence, coronary artery bypass grafting (CABG or "bypass surgery"), and the use of chemical thrombolytics, such as tissue plasminogen activator (tPA) and its engineered variant tenecteplase. Despite our focus here on the use of cardiac biomarkers in MI, a number of other causes of myocardial injury result in increased biomarker release. Like MI, these disease processes have myocyte apoptosis and necrosis as part of their pathophysiology, which explains why these biomarkers may be elevated in these patients. Cell death, in the form of apoptosis and/or necrosis, is a key feature of cardiomyopathies due to genetic and/or stress-related causes and contributes to the worsening function that occurs with the disease82, 83, 84, 85 (Fig. 26.6). Cell death also occurs in decompensated heart failure and volume overload.86 In the cardiac hypertrophy "pre–heart failure state," angiotensin II, catecholamine production, and cytokine release play a role in inducing cardiomyocyte apoptosis87, 88, 89 (Fig. 26.6), which can lead to low-level release of cardiac biomarkers. Cardiomyocyte death is also induced in myocarditis, in which viruses and bacteria trigger an autoimmune response that recognizes cardiac-specific proteins as foreign and induces cardiac cells to die.90 Similarly, bacterial toxins in sepsis can cause apoptosis and myocardial depression. Certain drugs are also capable of causing apoptosis and myocardial dysfunction,

including doxorubicin and cyclophosphamide91, 92, 93 (Fig. 26.6), as well as alcohol, cocaine, and methamphetamines.

FIGURE 26.6 Common cardiac insults that result in cardiac injury and elevated cardiac biomarkers. (Adapted from McLean A, Huang S. Biomarkers of cardiac injury. In: Vaidya V, Bonventre J, eds. Biomarkers: In Medicine, Drug Discovery, and Environmental Health. New York, NY: John Wiley & Sons, Inc.; 2010:119–155, Ref.85.)

THE LABORATORY WORKUP OF PATIENTS SUSPECTED OF HEART FAILURE AND THE USE OF CARDIAC BIOMARKERS IN HEART FAILURE

Heart failure is a pathological state in which the heart fails to adequately supply the metabolic needs of the body, typically due to a decrease in pumping function. The clinical manifestations of heart failure result largely from the retention of fluid, which is one of the body's maladaptive responses to decreased cardiac output. The most common symptoms of heart failure are shortness of breath, fatigue, and lower extremity edema.

The ACC/AHA/Heart Failure Society of America and the European Society of Cardiology have recommended specific laboratory and clinical tests for patients with suspected heart failure (see Table 26.4).97, 98, 99 Many of these tests may be familiar as routine laboratory tests; others are more specific to the atherosclerotic process and compensatory cardiac function mechanisms.

TABLE 26.4 American College of Cardiology/American Heart Association/European Society of Cardiology Recommended Laboratory and Clinical Tests for Patients with Suspected Heart Failure

BUN, blood urea nitrogen; AST, aspartate aminotransferase; ALT, alanine aminotransferase; LD, lactate dehydrogenase; BNP, B-type natriuretic peptide; NT-proBNP, N-terminal pro-B–type natriuretic peptide; ECG, electrocardiogram; CV, cardiovascular; LDL, low-density lipoprotein; HDL, high-density lipoprotein; TSH, thyroid-stimulating hormone.

The ACC/AHA recommendations also suggest determining a cTnI or cTnT if the clinical presentation is suggestive of an ACS; elevated troponin levels may also indicate the severity of the heart failure (discussed in the next section).97, 98, 99 A chest x-ray, 2D echocardiographic analysis, and Doppler flow studies to identify ventricular and/or valvular abnormalities are also recommended in evaluating patients with suspected heart failure.100,101 As CHD is the most common cause of heart failure, some evaluation for ischemic burden is

indicated. To this end, coronary angiography or exercise stress testing may be used depending on the clinical scenario. Pulmonary function testing is generally not helpful in the diagnosis of heart failure as cardiogenic pulmonary edema and intrinsic lung disease can result in similar patterns of abnormalities.99

THE USE OF NATRIURETIC PEPTIDES AND TROPONINS IN THE DIAGNOSIS AND RISK STRATIFICATION OF HEART FAILURE

Though shortness of breath is the most common presentation of heart failure, it is a relatively nonspecific symptom. Measurement of circulating B-type natriuretic peptide (BNP), or of N-terminal pro-B–type natriuretic peptide (NT-proBNP), the inactive fragment derived from its precursor, can be particularly helpful in distinguishing cardiac from noncardiac causes of dyspnea and is widely used in emergency departments and other clinical settings.97, 98, 99, 102, 103, 104, 105 Natriuretic peptides are secreted from the heart in response to increased pressure and volume load. They play an important role in reducing intravascular volume by promoting natriuresis, diuresis, and vasodilation and by inhibiting sympathetic nervous system signaling.106,107 ProBNP is released by myocardial cells in response to increased volume, increased pressure, and cardiac hypertrophy; this precursor is cleaved by the protease furin into the active BNP and the inactive NT-proBNP. Both NT-proBNP and BNP are elevated in patients with ventricular dysfunction and strongly predict morbidity and mortality in patients with heart failure.108, 109, 110, 111 Multiple clinical studies have assessed the utility of both NT-proBNP and BNP for ruling out heart failure as a cause of dyspnea (shortness of breath) in the acute clinical setting.111 Meta-analyses of these studies have found that the pooled estimates of sensitivity and specificity are equivalent for NT-proBNP and BNP; however, the optimum cutoff value for each peptide remains difficult to determine across all populations.111 Vasodilation and natriuresis are beneficial in the setting of heart failure, and thus BNP was pursued as a target for drug development. Scios (later Johnson & Johnson) developed recombinant BNP as nesiritide, the first in a new class of therapies designed to treat heart failure, acting as a neurohormonal suppressor

just as endogenous BNP does.112 The importance of noting that recombinant BNP is used therapeutically is that it is detected by laboratory tests for BNP, but not by tests for NT-proBNP. Therefore, there are clinical situations in which NT-proBNP may be the more appropriate test to use to follow heart failure patients.
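A minimal sketch of that assay-selection point, with invented names, is shown below: while a patient is receiving nesiritide, NT-proBNP is the measurement that is not confounded by the infused drug.

```python
# Illustrative sketch of the assay-selection point described above: nesiritide
# (recombinant BNP) is detected by BNP assays but not by NT-proBNP assays, so NT-proBNP
# is the less confounded test while the drug is being infused. Names are invented.

def preferred_natriuretic_peptide_test(receiving_nesiritide):
    """Suggest which natriuretic peptide to follow in a heart failure patient."""
    if receiving_nesiritide:
        return "NT-proBNP"     # a BNP assay would also measure the infused drug
    return "BNP or NT-proBNP"  # either peptide reflects ventricular dysfunction

print(preferred_natriuretic_peptide_test(True))   # NT-proBNP
print(preferred_natriuretic_peptide_test(False))  # BNP or NT-proBNP
```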

Cardiac Troponins

While used primarily for the diagnosis of myocardial injury caused by ischemia, elevations of cTnT and cTnI levels were recognized in heart failure patients more than a decade ago.113, 114, 115 The precise reason for elevated troponins in the non-ACS setting of heart failure is not clear. Ongoing cell death, including apoptosis and necrosis, may be one reason, occurring as a result of increased myocardial wall stress116 that produces subendocardial ischemia through increased myocardial oxygen demand. Diminished cardiac perfusion and oxygen delivery to the heart itself and impaired renal clearance of troponins may also contribute.116 While the detection of troponins in heart failure is not diagnostic, it has been reported to add prognostic value. In acute heart failure patients without ACS, elevations in cTnI occur more often than cTnT elevations, although increases in either were related to increased mortality.116 Other studies have described a 2.6-fold increased risk of in-hospital mortality for heart failure patients with elevated troponin levels at the time of admission.117 Elevated cardiac troponin has also been associated with lower systolic blood pressure and lower left ventricular ejection fraction at the time of admission,117 both of which are markers of worse outcome. In outpatients with more severe chronic heart failure (New York Heart Association Class III or IV), the presence of elevated cardiac troponin was also associated with lower ejection fraction and a deteriorating clinical course.115 Troponin levels were one of the strongest predictors of mortality, particularly when used in conjunction with BNP levels.118 In general, concomitant elevations in multiple markers (cardiac troponin and high-sensitivity CRP [hsCRP], along with NT-proBNP) are associated with escalating risks of adverse events.119 Few studies, however, have investigated how troponin levels may be helpful in the initial diagnosis of heart failure, so their current role in heart failure patients is limited to risk stratification.

MARKERS OF CHD RISK

C-Reactive Protein

Inflammation plays an important role in the development and progression of atherosclerosis and CHD. CRP is an acute-phase marker of inflammation that is currently used clinically in the evaluation of CVD risk. CRP is a pentameric protein consisting of five identical subunits that bind to specific ligands, such as LDL cholesterol, in a calcium-dependent manner. CRP is normally present in human plasma at levels less than 10 mg/L, but its rapid synthesis in the liver after stimulation by a variety of inflammatory cytokines may increase plasma levels by 1,000-fold, making it a sensitive biomarker of systemic inflammation.120 Because atherosclerosis and CHD derive largely from an inflammatory etiology, CRP has long been investigated as a biomarker for CHD, but it has gained widest acceptance more recently as a marker of CHD risk.

CASE STUDY 26.4 A 68-year-old man presented to the emergency department with sudden onset of chest pain, left arm pain, dyspnea, and weakness while away from home on a business trip. His prior medical history is not available, but he admits to being a 2-pack per day smoker for longer than 20 years. Cardiac markers were performed at admission and 8 hours postadmission with the following results:

Questions
1. Do these results indicate a specific diagnosis?
2. If so, what is the diagnosis?
3. What myoglobin, CK-MB, and TnT results would be expected if assayed at 4 PM on September 27?
4. Can any assumptions be made about the patient's lifestyle/habits/health that would increase his risk for this condition?
5. Are there any assays that might indicate his risk for further events of this type?

CRP was first described in 1930, when physicians studying the serum from patients with pneumonia observed high seroreactivity with pneumococcal bacterial extracts.121 When the extracts were separated into discrete fractions, only one fraction—arbitrarily designated fraction C—was found to react heavily with serum from acutely ill patients. This fraction contained "nonprotein material" that appeared "to be a carbohydrate common to the Pneumococcus species," which was later called C polysaccharide.121 Only serum from acutely ill patients reacted with the pneumococcal C polysaccharide, and this reactivity disappeared after resolution of their illness. In the early 1940s, a protein requiring calcium ions was found to be present exclusively in the serum of acutely ill patients. This protein, called "reactive protein," was responsible for the reactivity with C polysaccharide.122,123 Although first observed in patients afflicted with pneumococcal pneumonia, serum CRP was soon shown to be present in many other pathological conditions, and elevated CRP became associated with systemic inflammation.122 As a biomarker, CRP originally gained widespread use as an index of acute rheumatic fever.124 Its association with heart disease was first demonstrated in 1947, when the serum of patients with congestive heart failure was found to contain detectable CRP.125 In the 1950s, the discovery of elevated CRP after MI supported the hypothesis that MI is associated with systemic inflammation,126 as did the subsequent finding of elevated CRP in patients with CHD.127 Despite these findings, interest in CRP as a marker of CVD did not develop until the 1980s, when it was shown that CRP levels correlated remarkably well not only with serum CK-MB post-MI but also with the symptoms of cardiac disease, such as chest pain,128 unstable angina, and chronic atherothrombotic disease.129 All of these findings helped support the long-held hypothesis that vascular injury and inflammation play important roles in CVD, but CRP failed to provide significant benefit over the other clinically used markers of CVD or MI.

In the 1990s, extensive analysis of epidemiological data revealed that CRP could be applied clinically in a prognostic manner, rather than as a serum biomarker post-MI. This prognostic value was evaluated through several prospective cohort studies that compared baseline CRP levels in healthy individuals with CRP levels after cardiac events.130 The most influential data were extracted from the Physicians' Health Study (PHS), which found that baseline CRP levels were significantly higher in individuals who eventually experienced MI than in those who did not.131 These results showed that baseline plasma levels of CRP in apparently healthy individuals could help predict the risk of first MI, thus demonstrating a novel and substantial application of CRP as a prognostic marker. CRP data from the PHS also demonstrated that the use of anti-inflammatory medication (aspirin) reduced the risk of vascular events, which supported the hypothesis that chronic inflammation contributes to atherosclerosis and CHD.130 In the late 1990s, the Cholesterol and Recurrent Events (CARE) trial showed that lipid-lowering drugs, such as statins, reduce CRP levels in a largely LDL-dependent manner, suggesting that CRP evaluation may help determine the efficacy of pharmacologic interventions used to treat CVD.132 These studies demonstrated that CRP had immense clinical value as a potential novel biomarker for both cardiovascular risk and cardiovascular therapy management. Importantly, the CRP levels in these and later studies were baseline values that were orders of magnitude less than the CRP levels typically present during acute inflammation. Whereas CRP levels present during acute inflammation are readily detectable using common clinical laboratory methods, the levels of CRP reported in these studies were far below the threshold of detection of most standard clinical assays. Because these studies demonstrated the substantial prognostic value of monitoring CRP at baseline levels, it became necessary to develop more sensitive methods of CRP measurement, specifically methods that could measure CRP with extremely high sensitivity (hs), in the range of 0.15 to 10 mg/L.120 In the early 2000s, such methods for hsCRP measurement were developed and validated,120 and the first set of clinical guidelines for the use of hsCRP as a marker of cardiovascular risk prediction was published by the AHA and the Centers for Disease Control and Prevention in early 2003.133 These guidelines recommended that hsCRP be the inflammatory marker of choice in the evaluation of coronary heart disease risk, and they stated that hsCRP concentrations of less than 1, 1 to 3, and greater than 3 mg/L correspond clinically to low, moderate, and high relative risk of CVD, respectively.133
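As a simple illustration of the AHA/CDC bands quoted above, the sketch below maps an hsCRP concentration onto the stated relative-risk categories; the function name and the handling of values exactly at the boundaries are assumptions made for demonstration.

```python
# Illustrative mapping of the hsCRP bands quoted in the text (<1, 1-3, >3 mg/L) onto
# low, moderate, and high relative CVD risk. Boundary handling is an assumption.

def hscrp_relative_risk(hscrp_mg_per_l):
    """Map an hsCRP concentration (mg/L) to a relative CVD risk category."""
    if hscrp_mg_per_l < 1.0:
        return "low"
    elif hscrp_mg_per_l <= 3.0:
        return "moderate"
    return "high"

print(hscrp_relative_risk(0.6))  # low
print(hscrp_relative_risk(2.2))  # moderate
print(hscrp_relative_risk(4.5))  # high
```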

Homocysteine Homocysteine is a sulfur-containing amino acid formed in plasma from the metabolic demethylation of methionine, which is derived from dietary protein.134 Plasma homocysteine circulates in four forms: (1) free thiol (homocysteine, Hcys; ~1%), (2) disulfide (homocystine; 5% to 10%), (3) mixed disulfide (Hcys-Cys; 5% to 10%), and (4) protein-bound thiol groups (80% to 90%).135 Total plasma homocysteine refers to the combined pool of all forms of homocysteine. Normal total plasma homocysteine ranges from 5 to 15 μmol/L, moderate is 16 to 30 μmol/L, intermediate is 31 to 100 μmol/L, and severe hyperhomocystinemia is greater than 100 μmol/L. Homocysteine was first isolated in 1932 but connections between homocysteine and vascular disease were not made until 1964, when a high incidence of vascular anomalies and arterial thromboses was observed in patients with homocystinuria.136 Five years later, a physician studying vascular abnormalities in homocystinuria found an association between homocysteine and atherosclerosis, concluding that “elevated concentration of homocysteine, homocystine, or a derivative of homocysteine is the common factor leading to arterial damage.”137 Later studies demonstrated that premature vascular disease is extremely common in patients with homocystinuria, such that advanced atherosclerosis is frequently found in children with homocystinuria and approximately 50% of patients experience thromboembolic events in their lifetime.138 These early studies demonstrated a clear link between extremely high levels of plasma homocysteine (>100 μmol/L [>13.5 mg/L]) and CVD, but mildly elevated homocysteine was later shown to pose a risk of CVD as well. A 1976 study found that “a reduced ability to metabolize homocysteine” may contribute to premature coronary artery disease,139 and numerous cross-sectional, case– control, and prospective cohort studies further evaluated this relationship. Most, but not all, of these epidemiologic studies indicated that hyperhomocystinemia increases the risk of CVD, but the results varied significantly between studies and study type. Whereas cross-sectional and case–control studies consistently found that hyperhomocystinemia increases the risk of CVD, most prospective studies demonstrated little or no increased risk.140 One meta-analysis, for example, found that case–control studies estimate approximately an 80% risk of developing CVD due to hyperhomocystinemia, whereas prospective cohort studies estimate only 20% risk.141 Such variation in clinical and epidemiological

data raised questions about what role, if any, homocysteine actually plays in the development of CVD. Many of these questions have been addressed through a wide body of basic research investigating the mechanisms through which homocysteine may contribute to CVD. Potential mechanisms that have been proposed include homocysteine-induced damage to vascular endothelium,142 accelerated thrombin formation,143 promotion of lipid peroxidation,144 vascular smooth muscle proliferation,145 and attraction of monocytes to the vascular endothelium.146 Animal models have shown that mild hyperhomocystinemia contributes to atherosclerotic lesion development and to early lipid accumulation in vascular endothelium.147 Studies in rats have shown that hyperhomocystinemia stimulates the expression of vascular adhesion molecules and chemoattractants, such as monocyte chemoattractant protein 1 (MCP-1), vascular cell adhesion molecule 1, and E-selectin, which increase the binding of monocytes to the endothelium.148 Treatment of cultured human endothelial and smooth muscle cells with homocysteine also induces the expression of MCP-1, as well as expression of IL-8, a T lymphocyte and neutrophil chemoattractant.149 Homocysteine-induced expression of these chemokines promotes a proinflammatory state that may contribute to the general vascular inflammation that drives atherosclerosis.150 Together, evidence from in vitro and animal models supports the hypothesis that hyperhomocystinemia promotes atherosclerotic lesion development, thereby increasing the risk of CVD. Collectively, there is a growing body of evidence implicating hyperhomocystinemia as an independent risk factor for CVD. Clinically, it has been estimated that up to 40% of patients diagnosed with premature coronary artery disease, peripheral vascular disease, or recurrent venous thrombosis exhibit some degree of hyperhomocystinemia.150 Several recent meta-analyses found that for every 5 μmol/L (0.7 mg/L) increase in serum homocysteine concentration, the risk of ischemic heart disease increased 20% to 30%151,152 and that decreasing plasma homocysteine by 3 μmol/L (0.4 mg/L) (through folate supplementation) can reduce the risk of ischemic heart disease by 16%, deep vein thrombosis (DVT) by 25%, and stroke by 24%.152 The clinical, epidemiological, and biochemical data support a role for homocysteine in the development of atherosclerosis and CVD, but further research will be necessary to delineate the exact mechanisms through which it exerts this effect.
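The total plasma homocysteine bands given at the start of this section (normal 5 to 15 μmol/L, moderate 16 to 30 μmol/L, intermediate 31 to 100 μmol/L, and severe above 100 μmol/L) lend themselves to a simple classification, sketched below; the boundary handling and names are illustrative assumptions only.

```python
# Illustrative classification using the total plasma homocysteine bands quoted in the
# text (umol/L). Handling of values at the exact boundaries is an assumption.

def classify_homocysteine(total_hcy_umol_per_l):
    """Classify total plasma homocysteine (umol/L) into the categories given in the text."""
    if total_hcy_umol_per_l <= 15:
        return "normal"
    elif total_hcy_umol_per_l <= 30:
        return "moderate hyperhomocystinemia"
    elif total_hcy_umol_per_l <= 100:
        return "intermediate hyperhomocystinemia"
    return "severe hyperhomocystinemia"

print(classify_homocysteine(12))  # normal
print(classify_homocysteine(45))  # intermediate hyperhomocystinemia
```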

MARKERS OF PULMONARY EMBOLISM

An embolus is a circulating mass of solid, liquid, or gas, and pulmonary embolism (PE) is an acute and serious condition in which an embolus becomes lodged within the pulmonary arteries, impairing blood flow through the pulmonary vasculature and increasing right ventricular pressure. The extent of pulmonary vascular occlusion and the resulting symptoms are a function of the size and location of the embolus. Although most pulmonary emboli involve a second-, third-, or fourth-order pulmonary vessel and yield mild or no clinical symptoms, extremely large emboli can lodge at the bifurcation of the main pulmonary artery to form saddle emboli that can rapidly block pulmonary circulation.153 Saddle emboli and other emboli that occlude over 60% of the pulmonary circulation greatly increase the risk of right heart failure, cardiovascular collapse, and sudden death. The coincidence of DVT and PE is quite high; approximately half of venous thromboemboli will develop into pulmonary emboli,154 and approximately 95% of pulmonary emboli originate from the deep veins of the legs155 (Fig. 26.7).

FIGURE 26.7 Thromboemboli originating from the deep veins of the legs travel through venous circulation through the heart and into the pulmonary vasculature, where they lodge in vessels of decreasing diameter and occlude blood flow. (Adapted from Douma RA, Kamphuisen PW, Büller HR. Acute pulmonary embolism. Part 1: epidemiology and diagnosis. Nat Rev Cardiol.

2010;7(10):585–596, Ref.156.)

The incidence of PE increases almost exponentially with age, ranging from approximately 5 per 100,000 in childhood to nearly 600 per 100,000 in persons over 75 years old156 (Fig. 26.8). Women of reproductive age are at greater risk for PE because of the associations between venous thromboembolism and pregnancy and the use of oral contraceptives. If left untreated, PE-related mortality can exceed 25%, but adequate treatment in the form of anticoagulation decreases this risk to approximately 5%.158 Initial therapy after diagnosis of PE involves low molecular weight heparin, unfractionated heparin, or fondaparinux (trade name Arixtra), and long-term treatment includes oral vitamin K antagonists.159

FIGURE 26.8 The incidence of pulmonary embolism (PE) as a function of age in the US population. (Adapted from Anderson FA, Wheeler HB, Goldberg RJ, et al. A population-based perspective of the hospital incidence and case-fatality rates of deep vein thrombosis and pulmonary embolism. The Worcester DVT Study. Arch Intern Med. 1991;151(5):933–938, Ref.157.)

Diagnosis of PE is inherently challenging because of the similarity of its

symptoms to other more common conditions, such as ACS, and because signs and symptoms are frequently absent.153 The classical presentation of a patient with PE includes chest pain, dyspnea, tachycardia, tachypnea, and coughing.156 Unilateral leg swelling and redness may indicate DVT and increase the likelihood of PE. Syncope due to circulatory collapse is present in approximately 15% of patients with a large PE, and crackles or decreased breath sounds are common.156 Vital signs may reveal tachycardia or mild hypoxia. Evidence of increased venous pressure, such as neck vein distension, or of increased right ventricular pressure, such as a loud P2 (pulmonic valve closure sound), increases the diagnostic suspicion for PE.156 Distinction between PE and ACS is particularly difficult because of their similar presentations, in particular chest pain, dyspnea, and ECG abnormalities.

Use of D-Dimer Detection in PE

The first step in the diagnostic workup of patients with suspected PE is determining the pretest clinical probability of PE using one of several decision rules.156 The most widely used set of decision rules is the Wells score, which considers seven clinical variables obtained solely from the medical history and physical examination, as well as the physician's judgment on the likelihood of PE versus other diagnoses.160 When the pretest probability of PE is low or intermediate, it is reasonable to order a D-dimer blood test.156 D-Dimer is a product of plasmin-mediated fibrin degradation that consists of two D-domains from adjacent fibrin monomers cross-linked by activated factor XIII. Because D-dimer is derived from cross-linked fibrin, not fibrinogen, the presence of D-dimer in the bloodstream is indicative of current or recent coagulation and subsequent fibrinolysis; D-dimer thus serves as an indirect marker of coagulation and fibrinolysis.156 The choice of D-dimer assay is important because the sensitivity of the various tests varies greatly. Enzyme-linked fluorescent assay, enzyme-linked immunosorbent assay (ELISA), and latex quantitative assay are highly sensitive quantitative assays for circulating D-dimer levels and are the tests of choice in the workup of patients with suspected PE.161 D-Dimer levels are abnormal in approximately 90% of patients with PE,162

and numerous studies have shown that a normal D-dimer result can safely rule out PE in patients with low or intermediate clinical probabilities.156 However, D-dimer levels are normal in only 40% to 68% of patients without PE. Abnormal levels are often seen in patients with malignancy, recent surgery, renal dysfunction, or increased age.162 The low specificity of D-dimer testing results in a poor positive predictive value and limited utility in patients with a high clinical probability of PE,161 for whom CT and ventilation–perfusion scanning are reasonable initial diagnostic tests. However, the high sensitivity of the test translates to a valuable negative predictive value, meaning that D-dimer testing is most useful for excluding PE rather than diagnosing it. One study found that the sensitivity of the D-dimer (by high-sensitivity ELISA) for acute PE was 96.4% and the negative predictive value was 99.6%;153 thus, further evaluation for PE is not indicated for most patients with normal D-dimer levels.153
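A minimal sketch of the triage logic described in the preceding paragraphs, with invented category labels and return strings, is shown below: pretest probability is estimated first (for example, with the Wells score), a D-dimer is ordered only when probability is low or intermediate, a normal result effectively excludes PE, and an abnormal result or a high pretest probability leads to imaging.

```python
# Simplified sketch of the suspected-PE workup described above. The probability labels,
# boolean D-dimer flag, and returned action strings are illustrative assumptions,
# not a validated algorithm.
from typing import Optional

def pe_workup_next_step(pretest_probability: str, d_dimer_abnormal: Optional[bool] = None) -> str:
    """Return the next step for a given pretest probability and optional D-dimer result."""
    if pretest_probability == "high":
        return "proceed to imaging (CT or ventilation-perfusion scan)"
    if d_dimer_abnormal is None:
        return "order a quantitative D-dimer"
    if d_dimer_abnormal:
        return "proceed to imaging (CT or ventilation-perfusion scan)"
    return "PE effectively excluded; no further PE workup indicated"

print(pe_workup_next_step("low"))                 # order a quantitative D-dimer
print(pe_workup_next_step("intermediate", True))  # proceed to imaging ...
print(pe_workup_next_step("low", False))          # PE effectively excluded ...
```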

Value of Assaying Troponin and BNP in Acute PE

A recent meta-analysis was performed to determine the prognostic value of elevated troponin levels in patients with acute PE.163 Based on publications from January 1998 to November 2006, 122 of 618 patients with elevated troponin levels died (19.7%), compared with 51 of 1,367 patients with normal troponin levels.161 An elevated troponin level was associated significantly with short-term mortality (odds ratio 5.24), with death resulting from PE (odds ratio 9.44), and with adverse outcome events (odds ratio 7.03).161 These results were consistent between TnI and TnT and in both prospective and retrospective studies. Therefore, an elevated troponin measurement in patients presenting with PE does appear to have utility in determining their short-term mortality risk. This information can guide the clinician's decision as to whether a more aggressive management approach, such as thrombolysis, is necessary. Similarly, BNP has been used as a predictor of adverse outcome in patients with PE. In a study of 110 consecutive patients with PE, the positive and negative predictive values of BNP levels were determined.164 The risk of death related to PE if BNP was >21.7 pmol/L was 17%; the negative predictive value for an uneventful outcome of a BNP <21.7 pmol/L was 99%.164 A larger meta-analysis of 12 studies, including 868 patients with acute PE, determined that elevated BNP levels were significantly associated with an increase in short-term all-cause mortality (odds ratio 6.57) and with death resulting from PE (odds ratio 6.10) or serious adverse events (odds ratio 7.47), with a positive predictive value of 14% and a negative predictive value of 95%.165 Together, these studies suggest a role for elevated BNP in helping to identify patients with acute PE at high risk for an adverse outcome, in order to drive a more aggressive management

approach, such as thrombolysis. The high negative predictive value of a normal BNP is also particularly useful for identifying patients likely to have an uneventful clinical course.
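The sensitivities, predictive values, and odds ratios quoted in this section all derive from the same 2 × 2 table of test result versus outcome. The sketch below works through that arithmetic with invented counts (chosen so the sensitivity and negative predictive value echo the D-dimer figures above); it is not data from any of the cited studies.

```python
# Worked example of the 2x2-table arithmetic behind the sensitivities, predictive
# values, and odds ratios quoted in this section. The counts are invented.

def two_by_two_stats(tp, fp, fn, tn):
    """Compute common test-performance metrics from a 2x2 table of counts."""
    return {
        "sensitivity": tp / (tp + fn),         # positives among those with the outcome
        "specificity": tn / (tn + fp),         # negatives among those without it
        "ppv":         tp / (tp + fp),         # positive predictive value
        "npv":         tn / (tn + fn),         # negative predictive value
        "odds_ratio":  (tp * tn) / (fp * fn),  # (tp/fn) divided by (fp/tn)
    }

stats = two_by_two_stats(tp=54, fp=40, fn=2, tn=500)
for name, value in stats.items():
    print(f"{name}: {value:.3f}")
# sensitivity 0.964 and npv 0.996 here mirror the form of the D-dimer figures quoted above
```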

SUMMARY

CHD is an extremely common condition that causes substantial morbidity and death worldwide. CHD is due in large part to atherosclerosis, which is best defined as a proinflammatory process in which cells, lipids, and connective tissue cause intimal thickening within large- and medium-sized arteries. This process impairs normal blood flow and may progress to occlude the entire vessel lumen. This narrowing of the coronary arteries leads to cardiac ischemia, which may manifest as activity-induced chest pain (stable angina) or as ACS. Severe occlusion of coronary vessels causes complete ischemia and subsequent necrosis of the surrounding tissue, which is known as MI. The extent of MI and the subsequent morbidity can vary tremendously, ranging from undetected infarction with little consequence to sudden death. Because of the physiological and pathological responses that take place within the myocardium immediately after infarction, rapid diagnosis is extremely important. Methods of rapidly diagnosing MI with high sensitivity and specificity are thus extremely important for the management, and potentially the survival, of a patient who presents with symptoms of MI.

CASE STUDY 26.5 A 48-year-old woman was seen by her primary physician for a routine physical examination. Her father and his brother died before the age of 55 with acute MI and another uncle had CABG surgery at age 52. Because of this family history, she requested for any testing that might indicate a predisposition or increased risk factors for early cardiac disease. She does not smoke, does not have hypertension, is approximately 20 lb overweight, and exercises moderately. The following test results were obtained.

Questions
1. Do any of the results obtained indicate a high risk for development of cardiac disease? If so, which results?
2. Does this patient have risk factors for early cardiac disease that can be modified by diet or lifestyle modifications? If so, what changes can be made?
3. Is there any specific treatment that can be instituted to reduce this patient's risk?
4. How should this patient be monitored?

Plasma biomarkers have become the centerpiece of the evaluation and diagnosis of such patients (Table 26.5). AST, LD, and CK-MB were widely used biomarkers in the diagnosis of MI but have largely been replaced by high-sensitivity troponin assays. cTnI and cTnT assays have extremely high specificity for cardiac tissue, and detection methods are sensitive enough to pick up even very minor cardiac tissue damage. Current guidelines therefore recommend measurement of cardiac troponins in the circulation as soon as possible after the onset of symptoms of MI. Other experimental markers, such as myoglobin, H-FABP, and IMA, lack the specificity of troponin testing but may be useful when assayed together with troponins.

TABLE 26.5 Comparison of Past and Present Biomarkers for Cardiac Damage and Function with a Summary of Their Current Clinical Utility

LD, lactate dehydrogenase; CK-MB, creatine kinase MB; IMA, ischemia-modified albumin; H-FABP, heart-type fatty acid–binding protein; TnI, troponin I; TnT, troponin T; hsTn, high-sensitivity troponin; BNP, B-type natriuretic peptide; NT-proBNP, N-terminal pro-B–type natriuretic peptide.

Plasma biomarkers of cardiac disease risk are also an important factor in the management of patients at risk for CHD, in particular CRP, a marker of systemic inflammation, and homocysteine, both of which correlate with an elevated risk of CHD and MI. Similarly, the management and diagnosis of heart failure are facilitated by the analysis of circulating biomarkers, most importantly cardiac troponins and BNP.

For additional student resources, please visit thePoint at http://thepoint.lww.com

QUESTIONS

1. A serum TnT concentration is of most value to the patient with an MI when
a. The CK-MB has already peaked and returned to normal concentrations
b. The onset of symptoms is within 3 to 6 hours of the sample being drawn
c. The myoglobin concentration is extremely elevated
d. The TnI concentration has returned to normal concentrations

2. A normal myoglobin concentration 8 hours after the onset of symptoms of a suspected MI will
a. Essentially rule out an acute MI
b. Provide a definitive diagnosis of acute MI
c. Be interpreted with careful consideration of the TnT concentration
d. Give the same information as a total CK-MB

3. Which of the following analytes has the highest specificity for cardiac injury?
a. TnI
b. CK-MB mass assays
c. Total CK-MB
d. AST

4. Which of the following newer markers of inflammation circulates in serum bound to LDL and HDL?
a. Lipoprotein-associated phospholipase A2
b. CK-MB
c. cTnI
d. hsCRP

5. A person with a confirmed blood pressure of 125/87 would be classified as
a. Prehypertension
b. Normal
c. Stage 1 hypertension

d. Stage 2 hypertension

6. Rheumatic heart disease is a result of infection with which of the following organisms?
a. Group A streptococci
b. Staphylococcus aureus
c. Pseudomonas aeruginosa
d. Chlamydia pneumoniae

7. Which of the following defects is the most common type of congenital CVD encountered?
a. Ventricular septal defects (VSD)
b. Tetralogy of Fallot
c. Coarctation of the aorta
d. Transposition of the great arteries

8. Which of the following cardiac markers is the most useful indicator of congestive heart failure?
a. BNP
b. TnI
c. CK-MB
d. Glycogen phosphorylase isoenzyme BB

9. Which of the following is the preferred biomarker for the assessment of myocardial necrosis?
a. CK
b. AST
c. CK-MB
d. TnI

10. Which of the following is NOT a feature of an ideal cardiac marker?
a. Ability to predict future occurrence of cardiac disease
b. Absolute specificity
c. High sensitivity
d. Close estimation of the magnitude of cardiac damage

REFERENCES

1. Lindsell C, Anantharaman V, Diercks D, et al. The Internet Tracking Registry of Acute Coronary Syndromes (i*trACS): a multicenter registry of patients with suspicion of acute coronary syndromes reported using the standardized reporting guidelines for emergency department chest pain studies. Ann Emerg Med. 2006;48(6):666–677. 2. Pope JH, Aufderheide TP, Ruthazer R, et al. Missed diagnoses of acute cardiac ischemia in the emergency department. N Engl J Med. 2000;342(16):1163–1170. 3. Morrow DA. Clinical application of sensitive troponin assays. N Engl J Med. 2009;361(9):913–915. 4. Runge M, Ohman M, Stouffer G. The history and physical exam. In: Runge M, Stouffer G, Patterson C, eds. Netter's Cardiology. 2nd ed. Philadelphia, PA: Saunders Elsevier; 2010. 5. Mosca L, Manson JE, Sutherland SE, et al. Cardiovascular disease in women: a statement for healthcare professionals from the American Heart Association. Writing Group. Circulation. 1997;96(7):2468–2482. 6. Douglas PS, Ginsburg GS. The evaluation of chest pain in women. N Engl J Med. 1996;334(20):1311–1315. 7. Kudenchuk PJ, Maynard C, Martin JS, et al. Comparison of presentation, treatment, and outcome of acute myocardial infarction in men versus women (the Myocardial Infarction Triage and Intervention Registry). Am J Cardiol. 1996;78(1):9–14. 8. Amsterdam EA, Wenger NK, Brindis RG, et al. 2014 AHA/ACC guideline for the management of patients with non-ST-elevation acute coronary syndromes: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2014;130(25):e344–e426. 9. Thygesen K, Alpert J, White H, et al. Universal definition of myocardial infarction. Circulation. 2007;116(22):2634–2653. 10.Glass CK, Witztum JL. Atherosclerosis. the road ahead. Cell. 2001;104(4):503–516. 11. Lusis AJ. Atherosclerosis. Nature. 2000;407(6801):233–241. 12.Ross R. The pathogenesis of atherosclerosis—an update. N Engl J Med. 1986;314(8):488–500. 13.Nakashima Y, Wight TN, Sueishi K. Early atherosclerosis in humans: role of diffuse intimal thickening and extracellular matrix proteoglycans. Cardiovasc Res. 2008;79(1):14–23. 14.Skålén K, Gustafsson M, Rydberg EK, et al. Subendothelial retention of atherogenic lipoproteins in early atherosclerosis. Nature. 2002;417(6890):750–754. 15.Moore KJ, Tabas I. Macrophages in the pathogenesis of atherosclerosis. Cell. 2011;145(3):341–355. 16.Libby P. Changing concepts of atherogenesis. J Intern Med. 2000;247(3):349–358. 17.Virmani R, Kolodgie FD, Burke AP, et al. Atherosclerotic plaque progression and vulnerability to rupture: angiogenesis as a source of intraplaque hemorrhage. Arterioscler Thromb Vasc Biol. 2005;25(10):2054–2061. 18.Karmen A, Wroblewski F, Ladue JS. Transaminase activity in human blood. J Clin Invest. 1955;34(1):126–131. 19.Ladue JS, Wroblewski F, Karmen A. Serum glutamic oxaloacetic transaminase activity in human acute transmural myocardial infarction. Science. 1954;120(3117):497–499. 20.King J, Waind AP. Lactic dehydrogenase activity in acute myocardial infarction. Br Med J. 1960;2(5209):1361–1363. 21.Warburton FG, Bernstein A, Wright AC. Serum creatine phosphokinase estimations in myocardial infarction. Br Heart J. 1965;27(5):740–747. 22.Wroblewski F, Gregory KF. Lactic dehydrogenase isozymes and their distribution in normal tissues and plasma and in disease states. Ann N Y Acad Sci. 1961;94:912–932. 23.Vasudevan G, Mercer DW, Varat MA. Lactic dehydrogenase isoenzyme determination in the diagnosis of acute myocardial infarction. 
Circulation. 1978;57(6):1055–1057. 24.Dreyfus JC, Schapira G, Resnais J, et al. [Serum creatine kinase in the diagnosis of myocardial infarct]. Rev Fr Etud Clin Biol. 1960;5:386–387. 25.French JK, White HD. Clinical implications of the new definition of myocardial infarction. Heart. 2004;90(1):99–106. 26.Sobel BE, Shell WE. Serum enzyme determinations in the diagnosis and assessment of myocardial

infarction. Circulation. 1972;45(2):471–482. 27.Dillon MC, Calbreath DF, Dixon AM, et al. Diagnostic problem in acute myocardial infarction: CKMB in the absence of abnormally elevated total creatine kinase levels. Arch Intern Med. 1982;142(1):33–38. 28.Wagner GS, Roe CR, Limbird LE, et al. The importance of identification of the myocardial-specific isoenzyme of creatine phosphokinase (MB form) in the diagnosis of acute myocardial infarction. Circulation. 1973;47(2):263–269. 29.Malasky BR, Alpert JS. Diagnosis of myocardial injury by biochemical markers: problems and promises. Cardiol Rev. 2002;10(5):306–317. 30.Robinson DJ, Christenson RH. Creatine kinase and its CK-MB isoenzyme: the conventional marker for the diagnosis of acute myocardial infarction. J Emerg Med. 1999;17(1):95–104. 31.Scott BB, Simmons AV, Newton KE, et al. Interpretation of serum creatine kinase in suspected myocardial infarction. Br Med J. 1974;4(5946):691–693. 32.Goldberg DM, Winfield DA. Diagnostic accuracy of serum enzyme assays for myocardial infarction in a general hospital population. Br Heart J. 1972;34(6):597–604. 33.Goldberg DM. Clinical enzymology: an autobiographical history. Clin Chim Acta. 2005;357(2):93–112. 34.Cummins P, Perry SV. Troponin I from human skeletal and cardiac muscles. Biochem J. 1978;171(1):251–259. 35.Adams JE, Abendschein DR, Jaffe AS. Biochemical markers of myocardial injury. Is MB creatine kinase the choice for the 1990s? Circulation. 1993;88(2):750–763. 36.MacGeoch C, Barton PJ, Vallins WJ, et al. The human cardiac troponin I locus: assignment to chromosome 19p13.2-19q13.2. Hum Genet. 1991;88(1):101–104. 37.Cummins B, Auckland ML, Cummins P. Cardiac-specific troponin-I radioimmunoassay in the diagnosis of acute myocardial infarction. Am Heart J. 1987;113(6):1333–1344. 38.Adams JE, Bodor GS, Dávila-Román VG, et al. Cardiac troponin I. A marker with high specificity for cardiac injury. Circulation. 1993;88(1):101–106. 39.Coudrey L. The troponins. Arch Intern Med. 1998;158(11):1173–1180. 40.Stacy SR, Suarez-Cuervo C, Berger Z, et al. Role of troponin in patients with chronic kidney disease and suspected acute coronary syndrome: a systematic review. Ann Intern Med. 2014;161(7):502–512. 41.Aviles RJ, Askari AT, Lindahl B, et al. Troponin T levels in patients with acute coronary syndromes, with or without renal dysfunction. N Engl J Med. 2002;346(26):2047–2052. 42.Michos ED, Wilson LM, Yeh HC, et al. Prognostic value of cardiac troponin in patients with chronic kidney disease without suspected acute coronary syndrome: a systematic review and meta-analysis. Ann Intern Med. 2014;161(7):491–501. 43.Dierkes J, Domröse U, Westphal S, et al. Cardiac troponin T predicts mortality in patients with endstage renal disease. Circulation. 2000;102(16):1964–1969. 44.Deegan PB, Lafferty ME, Blumsohn A, et al. Prognostic value of troponin T in hemodialysis patients is independent of comorbidity. Kidney Int. 2001;60(6):2399–2405. 45.deFilippi C, Wasserman S, Rosanio S, et al. Cardiac troponin T and C-reactive protein for predicting prognosis, coronary atherosclerosis, and cardiomyopathy in patients undergoing long-term hemodialysis. JAMA. 2003;290(3):353–359. 46.Wood GN, Keevil B, Gupta J, et al. Serum troponin T measurement in patients with chronic renal impairment predicts survival and vascular disease: a 2 year prospective study. Nephrol Dial Transplant. 2003;18(8):1610–1615. 47.Boulier A, Jaussent I, Terrier N, et al. 

27 Renal Function
KARA L. LYNCH and ALAN H. B. WU

Chapter Outline
Renal Anatomy
Renal Physiology
  Glomerular Filtration
  Tubular Function
  Elimination of Nonprotein Nitrogen Compounds
  Water, Electrolyte, and Acid–Base Homeostasis
  Endocrine Function
Analytic Procedures
  Creatinine Clearance
  Estimated GFR
  Cystatin C
  β2-Microglobulin
  Myoglobin
  Albuminuria
  Neutrophil Gelatinase–Associated Lipocalin
  NephroCheck
  Urinalysis
Pathophysiology
  Glomerular Diseases
  Tubular Diseases
  Urinary Tract Infection/Obstruction
  Renal Calculi
  Renal Failure
Questions
References

Chapter Objectives
Upon completion of this chapter, the clinical laboratorian should be able to do the following:
Diagram the anatomy of the nephron.
Describe the physiologic role of each part of the nephron: glomerulus, proximal tubule, loop of Henle, distal tubule, and collecting duct.
Describe the mechanisms by which the kidney maintains fluid and electrolyte balance in conjunction with hormones.
Discuss the significance and calculation of glomerular filtration rate and estimated glomerular filtration rate.
Relate the clinical significance of total urine proteins, albuminuria, myoglobin clearance, serum β2-microglobulin, and cystatin C.
List the tests in a urinalysis and microscopy profile and understand the clinical significance of each.
Describe diseases of the glomerulus and tubules and how laboratory tests are used in these disorders.
Distinguish between acute kidney injury and chronic kidney disease.
Discuss the therapy of chronic renal failure with regard to renal dialysis and transplantation.

Key Terms
Acute kidney injury
Albuminuria
Aldosterone
Antidiuretic hormone (ADH)
Chronic kidney disease
Countercurrent multiplier system
Creatinine clearance
Cystatin C
Diabetes mellitus
Erythropoietin
Estimated glomerular filtration rate (eGFR)
Glomerular filtration rate (GFR)
Glomerulonephritis
Glomerulus
Hemodialysis
Hemofiltration
Loop of Henle
β2-Microglobulin (β2-M)
Myoglobin
Nephrotic syndrome
Prostaglandin
Renin
Rhabdomyolysis
Tubular reabsorption
Tubular secretion
Tubule
Vitamin D

For additional student resources, please visit thePoint at http://thepoint.lww.com

The kidneys are vital organs that perform a variety of important functions (Table 27.1). The most prominent functions are removal of unwanted substances from plasma (both waste and surplus); homeostasis (maintenance of equilibrium) of the body's water, electrolyte, and acid–base status; and participation in hormonal regulation. In the clinical laboratory, kidney function tests are used in the assessment of renal disease, water balance, and acid–base disorders and in situations of trauma, head injury, surgery, and infectious disease. This chapter focuses on renal anatomy and physiology and the analytic procedures available to diagnose, monitor, and treat kidney dysfunction.

TABLE 27.1 Kidney Functions

RENAL ANATOMY
The kidneys are paired, bean-shaped organs located retroperitoneally on either side of the spinal column. Macroscopically, a fibrous capsule of connective tissue encloses each kidney. When dissected longitudinally, two regions can be clearly discerned—an outer region called the cortex and an inner region called the medulla (Fig. 27.1). The pelvis can also be seen. It is a basin-like cavity at the upper end of the ureter into which newly formed urine passes. The bilateral ureters are thick-walled canals, connecting the kidneys to the urinary bladder. Urine is temporarily stored in the bladder until voided from the body by way of the urethra. The highlighted section in Figure 27.1 shows the arrangement of

nephrons, the functional units of the kidney, which can be seen only microscopically. Each kidney contains approximately 1 million nephrons. Each nephron is a complex apparatus composed of five basic parts, shown diagrammatically in Figure 27.2.

FIGURE 27.1 Anatomy of the kidney.

FIGURE 27.2 Representation of a nephron and its blood supply.

The glomerulus—a capillary tuft surrounded by the expanded end of a renal tubule known as Bowman's capsule. Each glomerulus is supplied by an afferent arteriole carrying the blood in and an efferent arteriole carrying the blood out. The efferent arteriole branches into peritubular capillaries that supply the tubule.
The proximal convoluted tubule—located in the cortex.
The long loop of Henle—composed of the thin descending limb, which spans the medulla, and the ascending limb, which is located in both the medulla and the cortex, composed of a region that is thin and then thick.
The distal convoluted tubule—located in the cortex.
The collecting duct—formed by two or more distal convoluted tubules as they pass back down through the cortex and the medulla to collect the urine that drains from each nephron. Collecting ducts eventually merge and empty their contents into the renal pelvis.

The following section describes how each part of the nephron normally functions.

RENAL PHYSIOLOGY
There are three basic renal processes:
1. Glomerular filtration
2. Tubular reabsorption
3. Tubular secretion
Figure 27.3 illustrates how three different substances are variably processed by the nephron. Substance A is filtered and secreted, but not reabsorbed; substance B is filtered and a portion reabsorbed; and substance C is filtered and completely reabsorbed.1 The following is a description of how specific substances are regulated in this manner to maintain homeostasis.

FIGURE 27.3 Renal processes of filtration, reabsorption, and secretion.

Glomerular Filtration The glomerulus is the first part of the nephron and functions to filter incoming blood. Several factors facilitate filtration. One factor is the unusually high pressure in the glomerular capillaries, which is a result of their position between two arterioles. This sets up a steep pressure difference across the walls. Another factor is the semipermeable glomerular basement membrane, which has a molecular size cutoff value of approximately 66,000 Da, about the molecular size of albumin. This means that water, electrolytes, and small dissolved solutes,

such as glucose, amino acids, low-molecular-weight proteins, urea, and creatinine, pass freely through the basement membrane and enter the proximal convoluted tubule. Other blood constituents, such as albumin; many plasma proteins; cellular elements; and protein-bound substances, such as lipids and bilirubin, are too large to be filtered. In addition, because the basement membrane is negatively charged, negatively charged molecules, such as proteins, are repelled. Of the 1,200 to 1,500 mL of blood that the kidneys receive each minute (approximately one-quarter of the total cardiac output), the glomerulus filters out 125 to 130 mL of an essentially protein-free, cell-free fluid, called glomerular filtrate. The volume of filtrate formed per minute is the glomerular filtration rate (GFR), and its determination is essential in evaluating renal function, as discussed in the section on Analytic Procedures.

Tubular Function Proximal Convoluted Tubule The proximal tubule is the next part of the nephron to receive the now cell-free and essentially protein-free blood. This filtrate contains waste products, which are toxic to the body above a certain concentration, and substances that are valuable to the body. One function of the proximal tubule is to return the bulk of each valuable substance back to the blood circulation. Thus, 75% of the water, sodium, and chloride; 100% of the glucose (up to the renal threshold); almost all of the amino acids, vitamins, and proteins; and varying amounts of urea, uric acid, and ions, such as magnesium, calcium, potassium, and bicarbonate, are reabsorbed. Almost all (98% to 100%) of uric acid, a waste product, is actively reabsorbed, only to be secreted at the distal end of the proximal tubule. When the substances move from the tubular lumen to the peritubular capillary plasma, the process is called tubular reabsorption. With the exception of water and chloride ions, the process is active; that is, the tubular epithelial cells use energy to bind and transport the substances across the plasma membrane to the blood. The transport processes that are involved normally have sufficient reserve for efficient reabsorption, but they are saturable. When the concentration of the filtered substance exceeds the capacity of the transport system, the substance is then excreted in the urine. The plasma concentration above which the substance appears in urine is known as the renal threshold, and its determination is useful in assessing both tubular function and nonrenal disease states. A renal threshold does not exist for water because it is always transported passively through diffusion down a concentration gradient. Chloride

ions in this instance diffuse in the wake of sodium. A second function of the proximal tubule is to secrete products of kidney tubular cell metabolism, such as hydrogen ions, and drugs, such as penicillin. The term tubular secretion is used in two ways: (1) tubular secretion describes the movement of substances from peritubular capillary plasma to the tubular lumen, and (2) tubular secretion also describes when tubule cells secrete products of their own cellular metabolism into the filtrate in the tubular lumen. Transport across the membrane of the cell is again either active or passive.

Loop of Henle The osmolality in the medulla in this portion of the nephron increases steadily from the corticomedullary junction inward and facilitates the reabsorption of water, sodium, and chloride. The hyperosmolality that develops in the medulla is continuously maintained by the loop of Henle, a hairpin-like loop between the proximal tubule and the distal convoluted tubule. The opposing flows in the loop, the downward flow in the descending limb, and the upward flow in the ascending limb are termed a countercurrent flow. To understand how the hyperosmolality is maintained in the medulla, it is best to look first at what happens in the ascending limb. Sodium and chloride are actively and passively reabsorbed into the medullary interstitial fluid along the entire length of the ascending limb. Because the ascending limb is relatively impermeable to water, little water follows, and the medullary interstitial fluid becomes hyperosmotic compared with the fluid in the ascending limb. The fluid in the ascending limb becomes hypotonic or dilute as sodium and chloride ions are reabsorbed without the loss of water, so the ascending limb is often called the diluting segment. The descending limb, in contrast to the ascending limb, is highly permeable to water and does not reabsorb sodium and chloride. The high osmolality of the surrounding interstitial medulla fluid is the physical force that accelerates the reabsorption of water from the filtrate in the descending limb. Interstitial hyperosmolality is maintained because the ascending limb continues to pump sodium and chloride ions into it. This interaction of water leaving the descending loop and sodium and chloride leaving the ascending loop to maintain a high osmolality within the kidney medulla produces hypo-osmolal urine as it leaves the loop. This process is called the countercurrent multiplier system.2

Distal Convoluted Tubule The distal convoluted tubule is much shorter than the proximal tubule, with two

or three coils that connect to a collecting duct. The filtrate entering this section of the nephron is close to its final composition. About 95% of the sodium and chloride ions and 90% of water have already been reabsorbed from the original glomerular filtrate. The function of the distal tubule is to effect small adjustments to achieve electrolyte and acid–base homeostasis. These adjustments occur under the hormonal control of both antidiuretic hormone (ADH) and aldosterone. Figure 27.4 describes the action of these hormones.

FIGURE 27.4 Antidiuretic hormone (ADH) and aldosterone control of the renal reabsorption of water and Na+. (Reprinted with permission from Kaplan A, et al. The kidney and tests of renal function. In: Kaplan A, Jack R, Opheim KE, et al., eds. Clinical Chemistry: Interpretation and Techniques. 4th ed. Baltimore, MD: Williams & Wilkins; 1995:158, Figure 6.2.)

Antidiuretic Hormone
ADH is a peptide hormone secreted by the posterior pituitary, mainly in response to increased blood osmolality; ADH is also released when blood volume decreases by more than 5% to 10%. Large decreases of blood volume will stimulate ADH secretion even when plasma osmolality is decreased. ADH

stimulates water reabsorption. The walls of the distal collecting tubules are normally impermeable to water (like the ascending loop of Henle), but they become permeable to water in the presence of ADH. Water diffuses passively from the lumen of the tubules, resulting in more concentrated urine and decreased plasma osmolality.

Aldosterone
This hormone is produced by the adrenal cortex under the influence of the renin–angiotensin mechanism. Its secretion is triggered by decreased blood flow or blood pressure in the afferent renal arteriole and by decreased plasma sodium. Aldosterone stimulates sodium reabsorption in the distal tubules and potassium and hydrogen ion secretion. Hydrogen ion secretion is linked to bicarbonate regeneration and ammonia secretion, which also occur here. In addition to these ions, small amounts of chloride ions are reabsorbed.

Collecting Duct
The collecting ducts are the final site for either concentrating or diluting urine. The hormones ADH and aldosterone act on this segment of the nephron to control reabsorption of water and sodium. Chloride and urea are also reabsorbed here. Urea plays an important role in maintaining the hyperosmolality of the renal medulla. Because the collecting ducts in the medulla are highly permeable to urea, urea diffuses down its concentration gradient out of the tubule and into the medulla interstitium, increasing its osmolality.3

Elimination of Nonprotein Nitrogen Compounds Nonprotein nitrogen compounds (NPNs) are waste products formed in the body as a result of the degradative metabolism of nucleic acids, amino acids, and proteins. Excretion of these compounds is an important function of the kidneys. The three principal compounds are urea, creatinine, and uric acid.4,5 For a more detailed treatment of their biochemistry and disease correlations, see Chapter 12.

Urea Urea makes up the majority (more than 75%) of the NPN waste excreted daily as a result of the oxidative catabolism of protein. Urea synthesis occurs in the liver. Proteins are broken down into amino acids, which are then deaminated to form

ammonia. Ammonia is readily converted to urea, avoiding toxicity. The kidney is the only significant route of excretion for urea. It has a molecular weight of 60 Da and, therefore, is readily filtered by the glomerulus. In the collecting ducts, 40% to 60% of urea is reabsorbed. The reabsorbed urea contributes to the high osmolality in the medulla, which is one of the processes of urinary concentration mentioned earlier (see loop of Henle).

Creatinine
Muscle contains creatine phosphate, a high-energy compound for the rapid formation of adenosine triphosphate (ATP). This reaction is catalyzed by creatine kinase (CK) and is the first source of metabolic fuel used in muscle contraction. Every day, approximately 2% of total muscle creatine (and its phosphate) spontaneously dehydrates and cyclizes to form the waste product creatinine. Therefore, creatinine levels are a function of muscle mass and remain approximately the same in an individual from day to day unless muscle mass or renal function changes. Creatinine has a molecular weight of 113 Da and is, therefore, readily filtered by the glomerulus. Unlike urea, creatinine is not reabsorbed by the tubules. However, a small amount of creatinine is secreted by the kidney tubules at high serum concentrations.

Uric Acid
Uric acid is the primary waste product of purine metabolism. The purines adenine and guanine are constituents of nucleic acids and precursors of the nucleotides ATP and guanosine triphosphate, respectively. Uric acid has a molecular weight of 168 Da. Like creatinine, it is readily filtered by the glomerulus, but it then undergoes a complex cycle of reabsorption and secretion as it courses through the nephron. Only 6% to 12% of the original filtered uric acid is finally excreted. Uric acid exists in its ionized and more soluble form, usually sodium urate, at urinary pH > 5.75 (the first pKa of uric acid). At pH < 5.75, it is undissociated. This fact has clinical significance in the development of urolithiasis (formation of calculi) and gout.
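This pH dependence follows the Henderson-Hasselbalch relationship. The short Python sketch below is illustrative only: the 5.75 pKa is taken from the text, while the function name and the example pH values are arbitrary choices, not from the text.

def fraction_ionized(urine_ph, pka=5.75):
    # Henderson-Hasselbalch: fraction of uric acid present as urate (ionized) at a given urine pH
    return 1.0 / (1.0 + 10 ** (pka - urine_ph))

# At pH 5.0 only about 15% is ionized (favoring uric acid stone formation);
# at pH 6.5 roughly 85% is present as the more soluble urate.
print(fraction_ionized(5.0), fraction_ionized(6.5))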

Water, Electrolyte, and Acid–Base Homeostasis Water Balance The kidney's contribution to water balance in the body is through water loss or water conservation, which is regulated by the hormone ADH. ADH responds

primarily to changes in osmolality and intravascular volume. Increased plasma osmolality or decreased intravascular volume stimulates secretion of ADH from the posterior pituitary. ADH then increases the permeability of the distal convoluted tubules and collecting ducts to water, resulting in increased water reabsorption and excretion of more concentrated urine. In contrast, the major system regulating water intake is thirst, which appears to be triggered by the same stimuli that trigger ADH secretion. In states of dehydration, the renal tubules reabsorb water at their maximal rate, resulting in production of a small amount of maximally concentrated urine (high urine osmolality, 1,200 mOsm/L).6 In states of water excess, the tubules reabsorb water at only a minimal rate, resulting in excretion of a large volume of extremely dilute urine (low urine osmolality, down to 50 mOsm/L).7,8 The continuous fine-tuning possible between these two extreme states results in the precise control of fluid balance in the body (Fig. 27.5).

FIGURE 27.5 Antidiuretic hormone (ADH) control of thirst mechanism.

Electrolyte Balance The following is a brief overview of the notable ions involved in maintenance of electrolyte balance within the body. For a more comprehensive treatment of this subject, refer Chapter 16. Sodium Sodium is the primary extracellular cation in the human body and is excreted principally through the kidneys. Sodium balance in the body is controlled only

through excretion. The renin–angiotensin–aldosterone hormonal system is the major mechanism for the control of sodium balance.

Potassium
Potassium is the main intracellular cation in the body. The precise regulation of its concentration is of extreme importance to cellular metabolism and is controlled chiefly by renal means. Like sodium, it is freely filtered by the glomerulus and then actively reabsorbed throughout the entire nephron (except for the descending limb of the loop of Henle). Both the distal convoluted tubule and the collecting ducts can reabsorb and secrete potassium, and this secretion is controlled by aldosterone. Potassium ions can compete with hydrogen ions in their exchange with sodium (in the proximal convoluted tubule); this process is used by the body to conserve hydrogen ions and, thereby, compensate in states of metabolic alkalosis.

Chloride
Chloride is the principal extracellular anion and is involved in the maintenance of extracellular fluid balance. It is readily filtered by the glomerulus and is passively reabsorbed as a counterion when sodium is reabsorbed in the proximal convoluted tubule. In the ascending limb of the loop of Henle, chloride is actively reabsorbed by a distinct chloride "pump," which also reabsorbs sodium. This pump can be inhibited by loop diuretics, such as furosemide. As expected, the regulation of chloride is controlled by the same forces that regulate sodium.6,8

Phosphate, Calcium, and Magnesium
The phosphate ion occurs in higher concentrations in the intracellular than in the extracellular fluid environments. It exists as either a protein-bound or a non–protein-bound form; homeostatic balance is chiefly determined by proximal tubular reabsorption under the control of parathyroid hormone (PTH). Calcium, the most abundant cation in the body, is the most important inorganic messenger in the cell. It also exists in protein-bound and non–protein-bound states. Calcium in the non–protein-bound form is either ionized and physiologically active or nonionized and complexed to small, diffusible ions, such as phosphate and bicarbonate. The ionized form is freely filtered by the

glomerulus and reabsorbed in the tubules under the control of PTH. However, renal control of calcium concentration is not the major means of regulation. PTH- and calcitonin-controlled regulation of calcium absorption from the gut and bone stores is more important than renal secretion or reabsorption. Magnesium, a major intracellular cation, is important as an enzymatic cofactor. Like phosphate and calcium, it exists in both protein-bound and ionized states. The ionized fraction is easily filtered by the glomerulus and reabsorbed in the tubules under the influence of PTH. See Chapter 24 for more detailed information.

Acid–Base Balance Many nonvolatile acidic waste products are formed by normal body metabolism each day. Carbonic acid, lactic acid, ketoacids, and others must be continually transported in the plasma and excreted from the body, causing only minor alterations in physiologic pH. The renal system constitutes one of three means by which constant control of overall body pH is accomplished. The other two strategies involved in this regulation are the respiratory system and the acid–base buffering system.9 The kidneys manage their share of the responsibility for controlling body pH by dual means: conserving bicarbonate ions and removing metabolic acids. For a more in-depth examination of these processes, refer Chapter 17. Regeneration of Bicarbonate Ions In a complicated process, bicarbonate ions are first filtered out of the plasma by the glomerulus. In the lumen of the renal tubules, this bicarbonate combines with hydrogen ions to form carbonic acid, which subsequently degrades to carbon dioxide (CO2) and water. This CO2 then diffuses into the brush border of the proximal tubular cells, where it is reconverted by carbonic anhydrase to carbonic acid and then degrades back to hydrogen ions and regenerated bicarbonate ions. This regenerated bicarbonate is transported into the blood to replace what was depleted by metabolism; the accompanying hydrogen ions are secreted back into the tubular lumen, and from there, they enter the urine. Filtered bicarbonate is “reabsorbed” into the circulation, helping to return blood pH to its optimal level and effectively functioning as another buffering system. Excretion of Metabolic Acids

Hydrogen ions are manufactured in the renal tubules as part of the regeneration mechanism for bicarbonate. These hydrogen ions, as well as others that are dissociated from nonvolatile organic acids, are disposed of by several different reactions with buffer bases.

Reaction with Ammonia (NH3)
The glomerulus does not filter NH3. However, this substance is formed in the renal tubules when the amino acid glutamine is deaminated by glutaminase. This NH3 then reacts with secreted hydrogen ions to form ammonium ions (NH4+), which are unable to readily diffuse out of the tubular lumen and, therefore, are excreted into the urine. This mode of acid excretion is the primary means by which the kidneys compensate for states of metabolic acidosis.

Reaction with Monohydrogen Phosphate
Phosphate ions filtered by the glomerulus can exist in the tubular fluid as disodium hydrogen phosphate (Na2HPO4) (dibasic). This compound can react with hydrogen ions to yield dihydrogen phosphate (monobasic), which is then excreted. The released sodium then combines with bicarbonate to yield sodium bicarbonate and is reabsorbed. These mechanisms can excrete increasing amounts of metabolic acid until a minimum urine pH of approximately 4.4 is reached. After this, renal compensation is unable to adjust to any further decreases in blood pH, and metabolic acidosis ensues. Few free hydrogen ions are excreted directly in the urine.

Endocrine Function In addition to numerous excretory and regulatory functions, the kidney has endocrine functions as well. It is both a primary endocrine site, as the producer of its own hormones, and a secondary site, as the target locus for hormones manufactured by other endocrine organs. The kidneys synthesize renin, erythropoietin, 1,25-dihydroxy vitamin D3, and the prostaglandins.

Renin
Renin is the initial component of the renin–angiotensin–aldosterone system. Renin is produced by the juxtaglomerular cells of the renal cortex when

extracellular fluid volume or blood pressure decreases. It catalyzes the synthesis of angiotensin by cleavage of the circulating plasma precursor angiotensinogen. Angiotensin is converted to angiotensin II by angiotensin-converting enzyme. Angiotensin II is a powerful vasoconstrictor that increases blood pressure and stimulates release of aldosterone from the adrenal cortex. Aldosterone, in turn, promotes sodium reabsorption and water conservation.7,8 For a more detailed look at the complexities of this feedback loop, see Chapter 21.

Erythropoietin Erythropoietin is a single-chain polypeptide produced by cells close to the proximal tubules, and its production is regulated by blood oxygen levels. Hypoxia produces increased serum concentrations within 2 hours. Erythropoietin acts on the erythroid progenitor cells in the bone marrow, increasing the number of red blood cells (RBCs). In chronic renal insufficiency, erythropoietin production is significantly reduced. The routine administration of recombinant human erythropoietin is common in chronic renal failure patients. Before this therapy was available, anemia was a clinical reality in these patients.4,7 Erythropoietin concentrations in blood can be measured by immunoassays. Recombinant human erythropoietin has also been used in sports doping to stimulate erythrocyte production and increase the oxygen-carrying capacity in the blood of endurance athletes. Assays capable of detecting posttranslational modifications on erythropoietin have been produced and are capable of distinguishing exogenous from endogenous erythropoietin.

1,25-Dihydroxy Vitamin D3 The kidneys are the sites of formation of the active form of vitamin D, 1,25(OH)2 vitamin D3. This form of vitamin D is one of three major hormones that determine phosphate and calcium balance and bone calcification in the human body. Chronic renal insufficiency is, therefore, often associated with osteomalacia (inadequate bone calcification, the adult form of rickets), owing to the continual distortion of normal vitamin D metabolism.

Prostaglandins The prostaglandins are a group of potent cyclic fatty acids formed from essential (dietary) fatty acids, primarily arachidonic acid. They are formed in almost all tissue and their actions are diverse. The prostaglandins produced by the kidneys increase renal blood flow, sodium and water excretion, and renin release. They

act to oppose renal vasoconstriction due to angiotensin and norepinephrine.

ANALYTIC PROCEDURES
All laboratory methods used for the evaluation of renal function rely on the measurement of waste products in blood, usually urea and creatinine, which accumulate when the kidneys begin to fail. Renal failure must be advanced, with only about 20% to 30% of the nephrons still functioning, before the concentration of either substance begins to increase in the blood. The rate at which creatinine and urea are removed or cleared from the blood into the urine is termed clearance. Clearance is defined as that volume of plasma from which a measured amount of substance can be completely eliminated into the urine per unit of time, expressed in milliliters per minute.5 Calculation of creatinine clearance has become the standard laboratory method for determining the GFR. Urea clearance was one of the first clearance tests performed; however, it is no longer widely used since it does not accurately provide a full clearance assessment. Older tests used administration of inulin, sodium [125I] iothalamate, or p-aminohippurate to assess glomerular filtration or tubular secretion. These tests are difficult to administer and are no longer common.

Creatinine Clearance Creatinine is a nearly ideal substance for the measurement of clearance. It is an endogenous metabolic product synthesized at a constant rate for a given individual and cleared essentially only by glomerular filtration. It is not reabsorbed and is only slightly secreted by the proximal tubule. Serum creatinine levels are higher in males than in females due to the direct correlation with muscle mass. Analysis of creatinine is simple and inexpensive using colorimetric assays; however, different methods for assaying plasma creatinine, such as kinetic or enzymatic assays, have varying degrees of accuracy and imprecision (see Chapter 12). Creatinine clearance is derived by mathematically relating the serum creatinine concentration to the urine creatinine concentration excreted during a period of time, usually 24 hours. Specimen collection, therefore, must include both a 24-hour urine specimen and a serum creatinine value, ideally collected at the midpoint of the 24-hour urine collection. The urine container must be kept refrigerated throughout the duration of both the collection procedure and the subsequent storage period until laboratory analysis can be performed. The

concentration of creatinine in both serum and urine is measured by the applicable methods discussed in Chapter 12. The total volume of urine is carefully measured, and the creatinine clearance is calculated using the following formula:

C_Cr (mL/min) = [U_Cr × V_Ur / (P_Cr × 1,440)] × (1.73/A)   (Eq. 27-1)

where C_Cr is creatinine clearance, U_Cr is urine creatinine concentration, V_Ur is urine volume excreted in 24 hours (1,440 minutes), P_Cr is serum creatinine concentration, and 1.73/A is the normalization factor for body surface area (1.73 is the generally accepted average body surface in square meters, and A is the actual body surface area of the individual determined from height and weight). If the patient's body surface area varies greatly from the average (e.g., obese or pediatric patients), this correction for body mass must be included in the formula. The reference range for creatinine clearance is lower in females compared with males and normally decreases with age.
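As an illustrative sketch only (the function and variable names are not from the text, and the units are assumed to be mg/dL for creatinine, mL per 24 hours for urine volume, and square meters for body surface area), the calculation can be written in Python as:

def creatinine_clearance(u_cr_mg_dl, urine_ml_per_24h, p_cr_mg_dl, bsa_m2=1.73):
    # Eq. 27-1: clearance in mL/min, normalized to 1.73 m2 of body surface area
    v_ml_per_min = urine_ml_per_24h / 1440.0      # 24 h = 1,440 min
    clearance = (u_cr_mg_dl * v_ml_per_min) / p_cr_mg_dl
    return clearance * (1.73 / bsa_m2)

# Example: U_Cr 120 mg/dL, 1,500 mL of urine in 24 h, P_Cr 1.2 mg/dL, BSA 1.80 m2
print(round(creatinine_clearance(120, 1500, 1.2, 1.80), 1))  # about 100 mL/min/1.73 m2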

Estimated GFR The National Kidney Foundation recommends that estimated GFR (eGFR) be calculated each time a serum creatinine level is reported. (Additional information is available at the National Kidney Foundation Web site at http://www.kidney.org/.) The equation is used to predict GFR and is based on serum creatinine, age, body size, gender, and race, without the need of a urine creatinine. Because the calculation does not require a timed urine collection, it should be used more often than the traditional creatinine clearance and result in earlier detection of chronic kidney disease (CKD). There are a number of formulas that can be used to estimate GFR on the basis of serum creatinine levels.

Cockcroft-Gault Formula The Cockcroft-Gault formula is one of the first formulas used to estimate GFR. This formula predicts creatinine clearance, and the results are not corrected for body surface area. This equation assumes that women will have a 15% lower creatinine clearance than men at the same level of serum creatinine.

CrCl (mL/min) = [(140 − age) × weight (kg)] / [72 × serum creatinine (mg/dL)] × 0.85 (if female)   (Eq. 27-2)
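A minimal Python sketch of this estimate (illustrative only; the function name and the assumption of age in years, weight in kilograms, and serum creatinine in mg/dL are mine, not from the text):

def cockcroft_gault(age_yr, weight_kg, s_cr_mg_dl, female=False):
    # Estimated creatinine clearance in mL/min; not corrected for body surface area
    crcl = ((140 - age_yr) * weight_kg) / (72.0 * s_cr_mg_dl)
    return crcl * 0.85 if female else crcl

# Example: a 60-year-old, 70-kg woman with a serum creatinine of 1.0 mg/dL gives about 66 mL/min
print(round(cockcroft_gault(60, 70, 1.0, female=True), 1))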

Modification of Diet in Renal Disease Formula The modification of diet in renal disease (MDRD) formula was developed in the Modification of Diet in Renal Disease Study of chronic renal insufficiency. The study showed that the MDRD formula provided a more accurate assessment of GFR than the Cockcroft-Gault formula. The MDRD formula was validated in a large population that included European Americans and African Americans. It does not require patient weight and is corrected for body surface area. The MDRD formula is known to underestimate the GFR in healthy patients with GFRs over 60 mL/min and to overestimate GFR in underweight patients. The four-variable MDRD equation includes age, race, gender, and serum creatinine as variables.

eGFR (mL/min/1.73 m2) = 175 × (serum creatinine, mg/dL)^(−1.154) × (age)^(−0.203) × 0.742 (if female) × 1.212 (if African American)   (Eq. 27-3)
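A short Python sketch of the four-variable MDRD estimate (illustrative only; it assumes the IDMS-traceable constant of 175 and serum creatinine reported in mg/dL):

def mdrd_egfr(s_cr_mg_dl, age_yr, female=False, african_american=False):
    # Four-variable MDRD eGFR in mL/min/1.73 m2
    egfr = 175.0 * (s_cr_mg_dl ** -1.154) * (age_yr ** -0.203)
    if female:
        egfr *= 0.742
    if african_american:
        egfr *= 1.212
    return egfr

# Example: a 60-year-old white woman with a serum creatinine of 1.0 mg/dL gives roughly 57 mL/min/1.73 m2
print(round(mdrd_egfr(1.0, 60, female=True), 1))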

CKD-EPI Formula The CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) formula was published in 2009.10 It was developed in an effort to create a formula more accurate than the MDRD formula. Multiple studies have shown the CKD-EPI formula to perform better and with less bias than the MDRD formula, especially in patients with higher GFR. Many laboratories still use the MDRD formula; however, some have converted to the CKD-EPI formula.

eGFR (mL/min/1.73 m2) = 141 × min(SCr/k, 1)^a × max(SCr/k, 1)^(−1.209) × 0.993^age × 1.018 (if female) × 1.159 (if black)   (Eq. 27-4)

where SCr is serum creatinine (mg/dL), k is 0.7 for females and 0.9 for males, a is −0.329 for females and −0.411 for males, min indicates the minimum of SCr/k or 1, and max indicates the maximum of SCr/k or 1.
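The same calculation as a Python sketch (illustrative only; the function and parameter names are assumptions, not from the text):

def ckd_epi_egfr(s_cr_mg_dl, age_yr, female=False, black=False):
    # 2009 CKD-EPI creatinine equation; result in mL/min/1.73 m2
    k = 0.7 if female else 0.9
    a = -0.329 if female else -0.411
    egfr = (141.0
            * min(s_cr_mg_dl / k, 1.0) ** a
            * max(s_cr_mg_dl / k, 1.0) ** -1.209
            * 0.993 ** age_yr)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: a 60-year-old white woman with a serum creatinine of 1.0 mg/dL gives about 61 mL/min/1.73 m2
print(round(ckd_epi_egfr(1.0, 60, female=True), 1))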

Cystatin C
Cystatin C is a low-molecular-weight protein produced at a steady rate by most body tissues. It is freely filtered by the glomerulus, reabsorbed, and catabolized by the proximal tubule. Levels of cystatin C rise more quickly than creatinine levels in acute kidney injury (AKI). Plasma concentrations appear to be unaffected by diet, gender, race, age, and muscle mass. Studies have shown measurement of cystatin C to be at least as useful as serum creatinine and creatinine clearance in detecting early changes in kidney function. A rise in cystatin C is often detectable before there is a measurable decrease in the GFR or increase in creatinine. Cystatin C can be measured by immunoassay methods.11 Some findings suggest that an equation that uses both serum creatinine and cystatin C with age, sex, and race would be better than equations that use only one of these serum markers.12,13

Biologic Variation
When the same test is performed on various individuals, we find that the mean of each person's results is not the same, showing that individual homeostatic setting points often vary. Biologic variation is defined as the random fluctuation around a homeostatic setting point.14 This includes the fluctuation around the homeostatic setting point for a single individual, termed within-subject biologic variation, and differences between the homeostatic setting points of multiple individuals, termed between-subject biologic variation. In the case of creatinine, levels for an individual differ slightly over time, and the mean values of all individuals vary significantly from each other. Therefore, each individual's results span only a small portion of the population-based reference interval. This means that for creatinine, within-subject biologic variation is less than between-subject biologic variation. When this is true of a given analyte, the analyte is said to have marked individuality. Interestingly, the between-subject biologic variation is much smaller for cystatin C compared with creatinine, showing that population-based reference values are more useful for cystatin C compared with creatinine. However, the within-subject variation is greater for cystatin C compared with creatinine. As a result, creatinine is more helpful in monitoring renal function over time for a given individual, whereas cystatin C is potentially more useful for detecting minor renal impairment.
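To make the within- versus between-subject distinction concrete, the following Python sketch compares the two sources of variation. It is illustrative only: the creatinine values are invented, and the simple averaging of per-person standard deviations is a rough approximation rather than a formal biologic-variation analysis.

import statistics as stats

def variation_components(results_by_subject):
    # Average within-person SD versus the SD of the per-person means
    within_sd = stats.mean(stats.stdev(r) for r in results_by_subject)
    between_sd = stats.stdev(stats.mean(r) for r in results_by_subject)
    return within_sd, between_sd

# Serum creatinine (mg/dL) for three people, three results each (invented values)
creatinine = [[0.78, 0.80, 0.82], [0.98, 1.00, 1.03], [1.20, 1.22, 1.24]]
within, between = variation_components(creatinine)
print(within, between)  # within-SD is much smaller than between-SD, i.e., marked individuality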

β2-Microglobulin

β2-Microglobulin (β2-M) is a small, nonglycosylated peptide (molecular weight, 11,800 Da) found on the surface of most nucleated cells. The plasma membrane sheds β2-M at a constant rate, as a relatively intact molecule. β2-M is easily filtered by the glomerulus, and 99.9% is reabsorbed by the proximal tubules and catabolized. Elevated levels in serum indicate increased cellular turnover as seen in myeloproliferative and lymphoproliferative disorders, inflammation, and renal failure. Both blood and urine β2-M tests may be ordered to evaluate kidney damage and to distinguish between disorders that affect the glomeruli and the renal tubules. Measurement of serum β2-M is used clinically to assess renal tubular function in renal transplant patients, with elevated levels indicating organ rejection.

Myoglobin Myoglobin is a low-molecular-weight protein (16,900 Da) associated with acute skeletal and cardiac muscle injury. Myoglobin functions to bind and transport oxygen from the plasma membrane to the mitochondria in muscle cells. Blood levels of myoglobin can rise very quickly with severe muscle injury. In rhabdomyolysis, myoglobin release from skeletal muscle is sufficient to overload the proximal tubules and cause acute renal failure. Early diagnosis and aggressive treatment of elevated myoglobin may prevent or lessen the severity of renal failure. Serum and urine myoglobin can be measured easily and rapidly by immunoassays.

Albuminuria The term albuminuria, previously referred to as microalbuminuria, describes the presence of albumin in the urine. Urine albumin measurement is important in the management of patients with diabetes mellitus, who are at serious risk for developing nephropathy over their lifetime. In the early stages of nephropathy, there is renal hypertrophy, hyperfunction, and increased thickness of the glomerular and tubular basement membranes. In this early stage, there are no overt signs of renal dysfunction. In the next 7 to 10 years, there is progression to glomerulosclerosis, with increased glomerular capillary permeability. This permeability allows small (micro) amounts of albumin to pass into the urine. If detected in this early phase, rigid glucose control, along with treatment to prevent hypertension, can be instituted and progression to kidney failure prevented. Quantitative albumin-specific immunoassays, usually using nephelometry or immunoturbidimetry, are widely used. For a 24-hour urine

collection, 30 to 300 mg of albumin is diagnostic of albuminuria. A 24-hour urine collection is preferred, but a random urine sample, in which the ratio of albumin to creatinine is measured, is common. An albumin to creatinine ratio (ACR) of greater than 30 mg/g is diagnostic of albuminuria.
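A minimal Python sketch of the random-urine ACR calculation (illustrative only; it assumes urine albumin reported in mg/L and urine creatinine in mg/dL, and the example values are invented):

def albumin_creatinine_ratio(urine_albumin_mg_l, urine_creatinine_mg_dl):
    # ACR in mg of albumin per g of creatinine from a random (spot) urine
    creatinine_g_l = urine_creatinine_mg_dl / 100.0   # mg/dL -> g/L
    return urine_albumin_mg_l / creatinine_g_l

# Example: 45 mg/L albumin with 120 mg/dL creatinine gives an ACR of 37.5 mg/g (> 30 mg/g, albuminuria)
print(albumin_creatinine_ratio(45, 120))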

Neutrophil Gelatinase–Associated Lipocalin Neutrophil gelatinase–associated lipocalin (NGAL) is a 25-kDa protein expressed by neutrophils and epithelial cells including those of the proximal tubule. The gene encoding NGAL is upregulated in the presence of renal ischemia, tubule injury, and nephrotoxicity.15 It can be measured in plasma and urine and is elevated within 2 to 6 hours of AKI. NGAL has been shown to be a useful early predictor of AKI and has prognostic value for clinical endpoints, such as initiation of dialysis and mortality. However, urinary NGAL excretion may also arise from systemic stress in the absence of AKI, limiting its specificity.

NephroCheck NephroCheck is the first FDA-cleared test used to determine if critically ill patients are at risk for developing moderate to severe AKI in the 12 hours following administration of the test. NephroCheck quantifies concentrations of tissue inhibitor of metalloproteinase 2 (TIMP-2) and insulin-like growth factor binding protein 7 (IGFBP-7) in urine specimens and multiplies the results to generate a quantitative AKI risk index (AKIRisk Score). The two biomarkers invoke cell cycle arrest in response to tissue insults that cause AKI.16 Elevation of these two biomarkers is thought to signal the kidney's attempt to protect itself from harmful insults and signal the increased risk for imminent AKI. The biomarker concentrations have been shown to correspond to severity across all stages of AKI.
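A sketch of the multiplication step described above follows in Python; the divisor of 1,000 and the 0.3 decision threshold are commonly cited conventions for the AKIRisk Score and are assumptions here, not values given in this chapter.

def aki_risk_score(timp2_ng_per_mL, igfbp7_ng_per_mL):
    """Product of the two urinary biomarker concentrations; the divisor of 1,000
    (giving units of (ng/mL)**2 / 1,000) is an assumed reporting convention."""
    return (timp2_ng_per_mL * igfbp7_ng_per_mL) / 1000.0

score = aki_risk_score(timp2_ng_per_mL=10.0, igfbp7_ng_per_mL=50.0)  # hypothetical values
# A cutoff of 0.3 is often quoted for increased risk of moderate to severe AKI within 12 hours.
print(f"AKIRisk score = {score:.2f}", "(increased risk)" if score > 0.3 else "(lower risk)")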

Urinalysis Urinalysis (UA) permits a detailed, in-depth assessment of renal status with an easily obtained specimen. UA also serves as a quick indicator of an individual's glucose status and hepatic–biliary function. Routine UA includes assessment of physical characteristics, chemical analyses, and a microscopic examination of the sediment from a (random) urine specimen.

Specimen Collection

The importance of a properly collected and stored specimen for UA cannot be overemphasized. Initial morning specimens are preferred, particularly for protein analyses, because they are more concentrated from overnight retention in the bladder. The specimen should be obtained by a clean midstream catch or catheterization. The urine should be freshly collected into a clean, dry container with a tight-fitting cover. It must be analyzed within 1 hour of collection if held at room temperature or else refrigerated at 2°C to 8°C for not more than 8 hours before analysis. If not assayed within these time limits, several changes will occur. Bacterial multiplication will cause false-positive nitrite tests, and urease-producing organisms will degrade urea to ammonia and alkalinize the pH. Loss of CO2 by diffusion into the air adds to this pH elevation, which, in turn, causes cast degeneration and red cell lysis.

Physical Characteristics Visual Appearance Color intensity of urine correlates with concentration: the darker the color, the more concentrated is the specimen. The various colors observed in urine are a result of different excreted pigments. Yellow and amber are generally due to urochromes (derivatives of urobilin, the end product of bilirubin degradation), whereas a yellowish-brown to green color is a result of bile pigment oxidation. Red and brown after standing are due to porphyrins, whereas reddish-brown in fresh specimens comes from hemoglobin or red cells. Brownish-black after standing is seen in alkaptonuria (a result of excreted homogentisic acid) and in malignant melanoma (in which the precursor melanogen oxidizes in the air to melanin). Drugs and some foods, such as beets, may also alter urine color. Odor Odor ordinarily has little diagnostic significance. The characteristic pungent odor of fresh urine is due to volatile aromatic acids, in contrast to the typical ammonia odor of urine that has been allowed to stand. Urinary tract infections impart a noxious, fecal smell to urine, whereas the urine of diabetics often smells fruity as a result of ketones. Certain inborn errors of metabolism, such as maple syrup urine disease, are associated with characteristic urine odors. Turbidity

The cloudiness of a urine specimen depends on pH and dissolved solids composition. Turbidity generally may be due to gross bacteriuria, whereas a smoky appearance is seen in hematuria. Threadlike cloudiness is observed when the specimen is full of mucus. In alkaline urine, suspended precipitates of amorphous phosphates and carbonates may be responsible for turbidity, whereas in acidic urine, amorphous urates may be the cause.17 Volume The volume of urine excreted indicates the balance between fluid ingestion and water lost from the lungs, sweat, and intestine. Most adults produce from 750 to 2,000 mL every 24 hours, averaging about 1.5 L per person. Polyuria is observed in diabetes mellitus and insipidus (in insipidus, as a result of lack of ADH), as well as in chronic renal disease, acromegaly (overproduction of the growth hormone somatotropin), and myxedema (hypothyroid edema). Anuria (absence of urine formation) or oliguria (markedly reduced urine output) is seen in renal failure, urinary tract obstruction, and severe dehydration. pH An alkaline urine pH (greater than 7.0) is observed postprandially as a normal reaction to the acidity of gastric HCl dumped into the duodenum and then into the circulation or following ingestion of alkaline food or medications. Urinary tract infections and bacterial contamination also will alkalinize pH. Medications such as potassium citrate and sodium bicarbonate will raise urine pH. Alkaline urine is also found in Fanconi syndrome, a congenital generalized aminoaciduria resulting from defective proximal tubular function. Chemical Analyses Routine urine chemical analysis is rapid and easily performed with commercially available reagent strips or dipsticks. These strips are plastic coated with different reagent bands directed toward different analytes. When dipped into urine, a color

change signals a deviation from normality. Colors on the dipstick bands are matched against a color chart provided with the reagents. Automated and semiautomated instruments that detect by reflectance photometry provide an alternative to the color chart and offer better precision and standardization. Abnormal results are followed up by specific quantitative or confirmatory urine assays. The analytes routinely tested are glucose, protein, ketones, nitrite, leukocyte esterase, bilirubin/urobilinogen, and hemoglobin/blood. Glucose and Ketones These constituents are normally absent in urine. The clinical significance of these analytes and their testing methods are discussed in Chapter 14. Protein Reagent strips for UA are used as a general qualitative screen for proteinuria. They are primarily specific for albumin, but they may give false-positive results in specimens that are alkaline and highly buffered. Positive dipstick results should be confirmed by more specific chemical assays, as described in Chapter 11, or more commonly by microscopic evaluation to detect casts. Nitrite This assay semiquantitates nitrite formed in the urine by the reduction of dietary nitrate by the enzymes of gram-negative bacteria; the nitrite then reacts with the reagents on the strip pad. A negative result does not mean that no bacteriuria is present. A gram-positive pathogen, such as Staphylococcus, Enterococcus, or Streptococcus, may not produce nitrate-reducing enzymes; alternatively, the urine may not have been retained in the bladder long enough for sufficient nitrate reduction to occur and register on the reagent strip.16 Leukocyte Esterase White blood cells (WBCs), especially phagocytes, contain esterases. A positive dipstick for esterases indicates possible WBCs in urine. Bilirubin/Urobilinogen

Hemoglobin degradation ultimately results in the formation of the waste product bilirubin, which is then converted to urobilinogen in the gut through bacterial action. Although most of this urobilinogen is excreted as urobilin in the feces, some is excreted in urine as a colorless waste product. This amount is normally too small to be detected as a positive dipstick reaction. In conditions of prehepatic, hepatic, and posthepatic jaundice, however, urine dipstick tests for urobilinogen and bilirubin may be positive or negative, depending on the nature of the patient's jaundice. A more in-depth view of bilirubin metabolism and assay methods is given in Chapter 25. Reagent strip tests for bilirubin involve diazotization and formation of a color change. Dipstick methods for urobilinogen differ, but most rely on a modification of the Ehrlich reaction with p-dimethylaminobenzaldehyde.16 Hemoglobin/Blood Intact or lysed RBCs produce a positive dipstick result. The dipstick will be positive in cases of renal trauma/injury, infection, and obstruction resulting from calculi or neoplasms. Sediment Examination A centrifuged, decanted urine aliquot leaves behind a sediment of formed elements that is used for microscopic examination. Cells For cellular elements, evaluation is best accomplished by counting and then taking the average of at least 10 microscopic fields. Red Blood Cells Erythrocytes greater in number than 0 to 2/high-power field (HPF) are considered abnormal. Such hematuria may result simply from severe exercise or menstrual blood contamination. However, it may also be indicative of trauma, particularly vascular injury, renal/urinary calculi obstruction, pyelonephritis, or cystitis. Hematuria in conjunction with leukocytes is diagnostic of infection.

White Blood Cells Leukocytes greater in number than 0 to 1/HPF are considered abnormal. These cells are usually polymorphonuclear phagocytes, commonly known as segmented neutrophils. They are observed when there is acute glomerulonephritis, urinary tract infection, or inflammation of any type. In hypotonic urine (low osmotic concentration), WBCs can become enlarged, exhibiting a sparkling effect in their cytoplasmic granules. These cells possess a noticeable Brownian motion and are called glitter cells, but they have no pathologic significance. Epithelial Cells Several types of epithelial cells are frequently encountered in normal urine because they are continuously sloughed off the lining of the nephrons and urinary tract. Large, flat, squamous vaginal epithelia are often seen in urine specimens from female patients, and samples heavily contaminated with vaginal discharge may show clumps or sheets of these cells. Renal epithelial cells are round, uninucleated cells and, if present in numbers greater than 2/HPF, indicate clinically significant active tubular injury or degeneration. Transitional bladder epithelial cells (urothelial cells) may be flat, cuboidal, or columnar and also can be observed in urine on occasion. Large numbers will be seen only in cases of urinary catheterization, bladder inflammation, or neoplasm. Miscellaneous Elements Spermatozoa are often seen in the urine of both males and females. They are usually not reported because they are of no pathologic significance. In males, however, their presence may indicate prostate abnormalities. Yeast cells are also frequently found in urine specimens. Because they are extremely refractile and of a similar size to RBCs, they can easily be mistaken for RBCs under low magnification. Higher power examination for budding or mycelial forms differentiates these fungal elements from erythrocytes. Parasites found in urine are generally contaminants from fecal or vaginal material. In the fecal contaminant category, the most commonly encountered organism is Enterobius vermicularis (pinworm), usually from infestation in children. In the vaginal contaminant category, the most common is the intensely motile flagellate, Trichomonas vaginalis. A true urinary parasite, sometimes seen in patients from endemic areas of the world, is the ova of the

trematode Schistosoma haematobium. This condition will usually occur in conjunction with a significant hematuria.16 Bacteria Normal urine is sterile and contains no bacteria. Small numbers of organisms seen in a fresh urine specimen usually represent skin or air contamination. In fresh specimens, however, large numbers of organisms, or small numbers accompanied by WBCs and the symptoms of urinary tract infection, are highly diagnostic for true infection. Clinically significant bacteriuria is considered to be more than 20 organisms/HPF or, alternatively, a microbiologic colony count of 10^5 colony-forming units (CFU)/mL or greater. Most pathogens seen in urine are gram-negative coliforms (microscopic “rods”) such as Escherichia coli and Proteus spp. Asymptomatic bacteriuria, in which there are significant numbers of bacteria without appreciable clinical symptoms, occurs somewhat commonly in young girls, pregnant women, and patients with diabetes. This condition must be taken seriously because, if left untreated, it may result in pyelonephritis and, subsequently, permanent renal damage. Casts Casts are precipitated, cylindrical impressions of the nephrons. They are composed of Tamm-Horsfall mucoprotein (uromucoid) from the tubular epithelia in the ascending limb of the loop of Henle. Casts form whenever there is sufficient renal stasis, increased urine salt or protein concentration, and decreased urine pH. In patients with severe renal disease, truly accurate classification of casts may require use of “cytospin” centrifugation and Papanicolaou staining for adequate differentiation. Unlike cells, casts should be examined under low power and are most often located around the edges of the coverslip. Hyaline The matrix of these casts is clear and gelatinous, without embedded cellular or particulate matter. They may be difficult to visualize unless a high-intensity lamp is used. Their presence indicates glomerular leakage of protein. This leakage may be temporary (as a result of fever, upright posture, dehydration, or emotional stress) or may be permanent. Their occasional presence is not considered pathologic.

Granular These casts are descriptively classified as either coarse or finely granular. The type of embedded particulate matter is simply a matter of the amount of degeneration that the epithelial cell inclusions have undergone. Their occasional presence is not pathologic; however, large numbers may be found in chronic lead toxicity and pyelonephritis. Cellular Several different types of casts are included in this category. RBC or erythrocytic casts are always considered pathologic because they are diagnostic for glomerular inflammation that results in renal hematuria. They are seen in subacute bacterial endocarditis, kidney infarcts, collagen diseases, and acute glomerulonephritis. WBC or leukocytic casts are also always considered pathologic because they are diagnostic for inflammation of the nephrons. They are observed in pyelonephritis, nephrotic syndrome, and acute glomerulonephritis. In asymptomatic pyelonephritis, these casts may be the only clue to detection. Epithelial cell casts are sometimes formed by fusion of renal tubular epithelia after desquamation; occasional presence is normal. Many, however, are observed in severe desquamative processes and renal stases that occur in heavy metal poisoning, renal toxicity, eclampsia, nephrotic syndrome, and amyloidosis. Waxy casts are uniformly yellowish, refractile, and brittle appearing, with sharply defined, often broken edges. They are almost always pathologic because they indicate tubular inflammation or deterioration. They are formed by renal stasis in the collecting ducts and are, therefore, found in chronic renal diseases. Fatty casts are abnormal, coarse, granular casts with lipid inclusions that appear as refractile globules of different sizes. Broad (renal failure) casts may be up to two to six times wider than “regular” casts and may be cellular, waxy, or granular in composition. Like waxy casts, they are derived from the collecting ducts in severe renal stasis.

Crystals Acid Environment Crystals seen in urine with pH values of less than 7 include calcium oxalate crystals, which are normal, colorless octahedrons or “envelopes”; they may have an almost starlike appearance. Also seen are amorphous urates, normal yellow-red masses with a grain of sand appearance. Uric acid crystals found in this environment are normal yellow to red-brown crystals that appear in extremely irregular shapes, such as rosettes, prisms, or rhomboids. Cholesterol crystals in acid urine are clear, flat, rectangular plates with notched corners. They may be seen in nephrotic syndrome and in conditions producing chyluria and are always considered abnormal. Cystine crystals are also sometimes observed in acid urine; they are highly pathologic and appear as colorless, refractile, nearly flat hexagons, somewhat similar to uric acid. These are observed in homocystinuria (an aminoaciduria resulting in intellectual disability) and cystinuria (an inherited defect of cystine reabsorption resulting in renal calculi). Alkaline Environment Crystals seen in urine with pH values greater than 7 include amorphous phosphates, which are normal crystals that appear as fine, colorless masses, resembling sand. Also seen are calcium carbonate crystals, which are normal forms that appear as small, colorless dumbbells or spheres. Triple phosphate crystals are also observed in alkaline urines; they are colorless prisms of three to six sides, resembling “coffin lids.” Ammonium biurate crystals are normal forms occasionally found in this environment, appearing as spiny, yellow-brown spheres, or “thorn apples.” Other Sulfonamide crystals are abnormal precipitates shaped like yellow-brown sheaves, clusters, or needles, formed in patients undergoing antimicrobial therapy with sulfa drugs. These drugs are seldom used today. Tyrosine/leucine crystals are abnormal types shaped like clusters of smooth, yellow needles or spheres. These are sometimes seen in patients with severe liver disease.16

PATHOPHYSIOLOGY Glomerular Diseases Disorders or diseases that directly damage the renal glomeruli may, at least initially, exhibit normal tubular function. With time, however, disease progression involves the renal tubules as well. The following syndromes have

discrete symptoms that are recognizable by their patterns of clinical laboratory findings.

Acute Glomerulonephritis Pathologic lesions in acute glomerulonephritis primarily involve the glomerulus. Histologic examination shows large, inflamed glomeruli with a decreased capillary lumen. Abnormal laboratory findings usually include rapid onset of hematuria and proteinuria. Nephrotic Syndrome Nephrotic syndrome is characterized by massive proteinuria (usually albumin and generally greater than 3.5 g/d) and resultant hypoalbuminemia. The subsequent decreased plasma oncotic pressure causes a generalized edema as a result of the movement of body fluids out of vascular and into interstitial spaces. Other hallmarks of this syndrome are hyperlipidemia and lipiduria. Lipiduria takes the

form of oval fat bodies in the urine. These bodies are degenerated renal tubular cells containing reabsorbed lipoproteins. Primary causes are associated directly with glomerular disease states.

FIGURE 27.6 Pathophysiology of nephrotic syndrome.

Tubular Diseases Tubular defects occur to a certain extent in the progression of all renal diseases as the GFR falls. In some instances, however, this aspect of the overall dysfunction becomes predominant. The result is decreased excretion/reabsorption of certain substances or reduced urinary concentrating capability. Clinically, the most important defect is renal tubular acidosis (RTA), the primary tubular disorder affecting acid–base balance. This disease can be classified into two types, depending on the nature of the tubular defect. In distal RTA, the renal tubules are unable to maintain the vital pH gradient between the blood and tubular fluid. In proximal RTA, there is decreased bicarbonate reabsorption, resulting in hyperchloremic acidosis. In general, reduced reabsorption in the proximal tubule is manifested by findings of abnormally low serum values for phosphorus and uric acid and by glucose and amino acids in the urine. In addition, there may be some proteinuria (usually mild). PERITONEAL FLUID Excess fluid (more than 50 mL) in the peritoneal cavity indicates disease. The presence of excess peritoneal fluid is called ascites, and the fluid is called ascitic fluid. The process of obtaining samples of this fluid by needle aspiration is paracentesis (Fig. 29.9). Usually, the fluid is visualized by ultrasound to confirm its presence and volume before paracentesis is attempted.

FIGURE 29.9 Paracentesis of the abdominal cavity in midline. (From Snell RS. Clinical Anatomy. 7th ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2003.) The same mechanisms that cause serous effusions in other body cavities are operative for the peritoneal cavity. Specifically, a disturbance in the rate of dialysis across the membrane secondary to a remote pathology produces a transudate, whereas a primary pathology of the peritoneal membrane produces an exudate. The multiple factors that apply to this large space, such as renal and hepatic function, tend to cloud the distinction. The most common cause of transudative ascites is portal hypertension. Obstructions to hepatic blood flow, such as cirrhosis and congestive heart failure, as well as hypoalbuminemia from any cause, also demonstrate a high incidence. The exudative causes of ascites are predominantly metastatic ovarian, prostate, and colon cancer and infective peritonitis. The recommended method for differentiating transudates and exudates is the serum–ascites albumin gradient (SAAG). The SAAG is calculated by subtracting the fluid albumin level from the serum albumin level. A difference of 1.1 g/dL or more is used to indicate a transudative process, while a difference of less than 1.1 g/dL indicates an exudative process. A neutrophil count greater than 250 cells/μL indicates peritonitis. Measurement of the tumor markers CEA and CA125 is indicated in suspected cases of malignancy.
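A minimal Python sketch of the SAAG calculation just described is shown below; the function names and example albumin values are illustrative only.

def saag(serum_albumin_g_per_dL, ascitic_albumin_g_per_dL):
    """Serum-ascites albumin gradient: serum albumin minus ascitic fluid albumin."""
    return serum_albumin_g_per_dL - ascitic_albumin_g_per_dL

def classify_ascites(serum_albumin, fluid_albumin):
    gradient = saag(serum_albumin, fluid_albumin)
    # 1.1 g/dL or more indicates a transudative process (e.g., portal hypertension);
    # less than 1.1 g/dL indicates an exudative process.
    return "transudate" if gradient >= 1.1 else "exudate"

print(classify_ascites(serum_albumin=2.8, fluid_albumin=1.0))  # SAAG = 1.8 g/dL -> transudate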

CASE STUDY 29.3

A 47-year-old man was brought to the hospital by the police after falling to the pavement and being unable to rise. His admission laboratory values revealed the following:

An abdominal ultrasound was significant for the presence of ascites and a cirrhotic-appearing liver. The patient was admitted to the hospital for further evaluation and subsequent treatment of alcohol withdrawal and decompensated cirrhosis. He was put on a salt and protein-restricted diet and diuretics to manage ascites (accumulated fluid in the peritoneal cavity). He was treated with benzodiazepines and lactulose and placed on a diuretic regimen of furosemide 40 mg orally daily along with spironolactone 100 mg orally daily. Ascitic fluid should be sampled in all inpatients and outpatients with new-onset ascites. If uncomplicated cirrhotic ascites is suspected, the initial specimen of ascitic fluid should be sent for cell count and differential, albumin, and total protein concentration. With the results of the above tests, the SAAG can be calculated.

Alcoholic hepatitis is a syndrome of progressive inflammatory liver injury associated with long-term heavy intake of ethanol. Patients who are severely affected present with subacute onset of fever, hepatomegaly, leukocytosis, marked impairment of liver function (e.g., jaundice and coagulopathy), and manifestations of portal hypertension (e.g., ascites, hepatic encephalopathy, and variceal hemorrhage). However, milder forms of alcoholic hepatitis often may not cause any symptoms.

Questions 1. What lab tests are abnormal? 2. What will the SAAG value be, and how is it calculated? 3. How will diuretics reduce this excess fluid? 4. Why is his diet salt restricted?

AMNIOTIC FLUID The amniotic sac provides an enclosed environment for fetal development. This sac is bilayered as the result of a fusion of the amniotic (inner) and chorionic (outer) membranes at an early stage of fetal development. The fetus is suspended in amniotic fluid (AF) within the sac. The AF provides a cushioning medium for the fetus, regulates the temperature of the fetal environment, allows fetal movement, and serves as a matrix for influx and efflux of constituents such as glucose, sodium, and potassium. Depending on the interval of the gestational period, the fluid may be derived from different sources, but ultimately, the mother is the source of the AF. At the

initiation of pregnancy, some maternal secretions cross the amnion and contribute to the volume. Shortly after formation of the placenta and embryo and fusion of membranes, AF is largely derived by transudation across the fetal skin. In the last half of pregnancy, the skin becomes substantially less permeable, and fetal micturition, or urination, becomes the major volume source. The fate of the fluid also varies with the period of gestation. A bidirectional exchange is presumed to occur across the membranes and at the placenta. Similarly, during early pregnancy, the fetal skin is involved in exchange of AF. In the last half of pregnancy, the mechanism of fetal swallowing is the major fate of AF. There is a dynamic balance established between production and clearance; fetal urination and swallowing maintain this balance. The continual swallowing maintains intimate contact of the AF with the fetal gastrointestinal tract, buccal cavity, and bronchotracheal tree. This contact is evidenced by the sloughed cellular material from the fetus that provides us with an indication of various fetal developmental milestones and functional stages. A sample of fluid is obtained to analyze the cellular material and the biochemical constituents of the AF. This fluid is collected via transabdominal amniocentesis (amniotic sac puncture), which is performed under aseptic conditions. Before an attempt is made to obtain fluid, the positions of the placenta, fetus, and fluid pockets are visualized using ultrasonography. Aspiration of anything except fluid could lead to erroneous conclusions, as well as possible harm to the fetus (Fig. 29.10).

FIGURE 29.10 Amniocentesis. A sample is removed from the amniotic sac for fetal abnormality testing. Amniocentesis and subsequent AF analysis are performed to test for congenital diseases, neural tube defects (NTDs), hemolytic disease, and fetal pulmonary development. Emphasis is placed on those conditions diagnosed or monitored using the clinical biochemistry laboratory.

Hemolytic Disease of the Newborn The analysis of AF to screen for hemolytic disease of the newborn (erythroblastosis fetalis) was the first recognized laboratory procedure performed on AF. Hemolytic disease of the newborn is a syndrome of the fetus resulting from incompatibility between maternal and fetal blood, because of differences in either the ABO or Rh blood group systems. Maternal antibodies to fetal erythrocytes cause a hemolytic reaction that can vary in severity. The resultant hemoglobin breakdown products, predominantly bilirubin, appear in the AF and provide a measure of the severity of the incompatibility reaction. The most commonly used method is a direct spectrophotometric scan of undiluted AF and subsequent calculation of the relative bilirubin amount. Classically, absorbance due to bilirubin is reported instead of a concentration of bilirubin. The method consists of scanning AF from 550 to 350 nm against a water blank. The commonly used method of Liley10 requires plotting the absorbances at 5-nm intervals against wavelength on semilogarithmic paper. A baseline is constructed by creating a line between the absorbance readings at 550 and 350 nm. An absorbance peak seen at 450 nm is a result of bilirubin within the AF. The difference between the baseline absorbance and the peak absorbance at 450 nm indicates the amount of bilirubin present within the AF; the larger the difference, the greater the amount of bilirubin. This direct observation is referred to as the ΔA450 (Fig. 29.11).
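The calculation can be sketched in Python as follows; interpolating the baseline in log-absorbance space mirrors the straight line drawn on semilogarithmic paper, and the absorbance values shown are hypothetical, not taken from a Liley chart.

import math

def delta_a450(a350, a450, a550):
    """Liley-style dA450: measured absorbance at 450 nm minus the baseline value
    at 450 nm, where the baseline joins the 350- and 550-nm readings on a
    semilogarithmic (log-absorbance versus wavelength) plot."""
    frac = (450 - 350) / (550 - 350)  # fractional distance of 450 nm along the baseline
    log_baseline = math.log10(a350) + frac * (math.log10(a550) - math.log10(a350))
    return a450 - 10 ** log_baseline

print(f"dA450 = {delta_a450(a350=0.60, a450=0.35, a550=0.05):.3f}")  # hypothetical scan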

FIGURE 29.11 Change in A450 nm from AF bilirubin scan. To avoid interference in the spectrophotometric scan, specimens should be immediately centrifuged and the fluid separated from the sediment. This will prevent not only particulate interference but also the possibility of increased lysis of red blood cells in the specimen producing hemoglobin in the AF. As with all specimens for bilirubin analysis, AF specimens for bilirubin scans must be protected from light. Specimens are routinely collected in amber-colored tubes. Exposure to light results in the photo-oxidation of bilirubin to biliverdin that will not be detected at 450 nm, resulting in underestimation of the hemolytic disease severity. Examples of interferences, compared with a normal specimen, are given in Figure 29.12. Each laboratory should compile its own catalog of real examples for spectrophotometric analysis. The presence of hemoglobin is identified by its peak absorbance at 410 to 415 nm; the presence of urine is identified by the broad curve and confirmed by creatinine and urea analyses; and the presence of meconium is identified by the distinctly greenish color of the AF and flat absorbance curve.

FIGURE 29.12 Amniotic fluid absorbance scans. Care must be used in the interpretation of the spectra. A decision for treatment can be made based on the degree of hemolysis and gestational age. The rather limited treatment options are immediate delivery, intrauterine transfusion, and observation. The transfusion can be accomplished by means of the umbilical artery and titrated to the desired hematocrit. Several algorithms have been proposed to aid in decision making and are outlined in Figure 29.13.

FIGURE 29.13 Assessment of fetal prognosis. Liley method: 1A, above broken line, condition desperate, immediate delivery or transfusion; 1B, between broken and continuous lines, hemoglobin greater than 8 g/100 mL, delivery or transfusion (stippled area) urgent; 2A, between continuous and broken lines, hemoglobin 8 to 10 g/100 mL, delivery 36 to 37 weeks; 2B, between broken and continuous lines, hemoglobin 11 to 13.9 g/100 mL, delivery 37 to 39 weeks; 3, below continuous line, not anemic, delivery at term. Freda method: 4+, above upper horizontal line, fetal death imminent, immediate delivery or transfusion; 3+, between upper and middle horizontal lines, fetus in jeopardy, death within 3 weeks, delivery or transfusion as soon as possible; 2+, between middle and lower horizontal lines, fetal survival for at least 7 to 10 days, repeat test, possible indication for transfusion; 1+, below lower horizontal line, fetus in no immediate danger of death. (Adapted from Robertson JG. Evaluation of the reported methods of interpreting spectrophotometric tracings of amniotic fluid in rhesus isoimmunization. Am J Obstet Gynecol. 1966;95:120.)

Neural Tube Defects

Screening for NTDs is initially performed using maternal serum. The presence of elevated levels of α-fetoprotein (AFP) is primarily associated with NTDs such as spina bifida and anencephaly. Elevated maternal serum AFP is also found with abdominal hernias into the umbilical cord, cystic hygroma, and poor pregnancy outcome. Low maternal serum AFP is associated with an increased incidence of Down's syndrome and other aneuploidy states. AFP is a product of, first, the fetal yolk sac and, then, the fetal liver. It is released into the fetal circulation and presumably enters the AF by transudation. Entry into the maternal circulation could be by placenta crossover or from the AF. An open NTD (e.g., spina bifida) that causes an increase in amniotic fluid α-fetoprotein (AFAFP) is accompanied by an increase in maternal serum AFP. Under normal conditions, AFAFP would be cleared by fetal swallowing and metabolism. An increased presence overloads this mechanism, causing AFAFP elevation. Both serum and AF are routinely analyzed using immunologic methods. Results are reported as multiples of the median. Because of the variety of demographics involved in determining normal values, each laboratory should establish its own median values for gestational weeks (usually weeks 15 to 21). Results can then be calculated using the formula:

Multiples of the median (MoM) = (patient's AFP result)/(median AFP result for the same gestational week) (Eq. 29-3) The protocol for AFP testing to determine the presence of an NTD is generally the following: 1. Maternal serum AFP, usually in conjunction with hCG, unconjugated estriol, and inhibin A assays 2. Repeat the AFP, if elevated 3. Diagnostic ultrasound 4. Amniocentesis for confirmation Interpretation of maternal serum AFP testing is complex, being a function of age, race, weight, gestational age, and level of nutrition. Therefore, the maternal serum AFP can only be used as a screening tool and should not be considered diagnostic for any of the associated conditions listed above. Testing of AFAFP is the confirmatory procedure to detect NTDs. Concern over the difficulty in interpreting AFP tests generated the need for a second test to confirm NTDs and abdominal wall defects. The secondary method

used is the assay for a central nervous system (CNS)-specific acetylcholinesterase (AChE). The NTD allows direct, or at least less restricted, passage of AChE into the AF. Analysis for CNS-specific AChE in the AF then offers a degree of confirmation for AFAFP. The methods used for CNS AChE include enzymatic, immunologic, and electrophoretic techniques with inhibition. The latter includes the use of acetylthiocholine as substrate and BW284C51, a specific CNS inhibitor, to differentiate the serum pseudocholinesterase from the CNS-specific AChE.

Fetal Lung Maturity The primary reason for AF testing is the need to assess fetal pulmonary maturity. All the organ systems are in jeopardy from prematurity, but the state of the fetal lungs is a priority from the clinical perspective. Consequently, the laboratory is asked whether sufficient specific phospholipids are reflected in the AF to prevent atelectasis (alveolar collapse) if the fetus were to be delivered. This question is important when preterm delivery is contemplated because of other risk factors in pregnancy, such as preeclampsia and premature rupture of membranes. Risk factors to fetus or mother can be weighed against interventions, such as delay of delivery with steroid administration to the mother to enhance fetal surfactant production, or against at-risk postdelivery therapies, such as exogenous surfactant therapy, high-frequency ventilation, and extracorporeal membrane oxygenation. Alveolar collapse in the neonatal lung may occur during the changeover from placental oxygen to air as an oxygen source at birth if the proper quantity and type of phospholipid (surfactant) is not present. The ensuing condition, which may vary in the degree of severity, is called respiratory distress syndrome (RDS). It has also been referred to as hyaline membrane disease because of the hyaline membrane found in affected lungs. Lung maturation is a function of differentiation of alveolar epithelial cells (pneumocytes) into type I and type II cells beginning near the 24th week of pregnancy. The type I cells form the alveolar–capillary membrane for exchange of gases. The type II cells produce and store the surfactants needed for alveolar stability in the form of lamellar bodies. As the lungs mature, increases occur in phospholipid concentration—particularly the compounds phosphatidylglycerol (PG) and lecithin11 (Fig. 29.14). These two compounds, comprising 10% and 70% of total phospholipid concentration, respectively, are most important as surfactants. Their presence in

high enough levels acts in concert to allow contraction and re-expansion of the neonatal alveoli. Insufficient surfactant allows alveoli to collapse, requiring a great deal of energy to re-expand the alveoli upon inspiration. This not only creates an extreme energy demand on a newborn but probably also causes physical damage to the alveoli with each collapse. The damage may lead to “hyaline” deposition, or the newborn may not have the strength to continue inspiration at the energy cost. The result of either can be fatal.

FIGURE 29.14 The form used to report the lung profile. The four determinations are plotted on the ordinate, and the weeks of gestation are plotted on the abscissa (as well as the L/S ratio as an “internal standard”). When plotted, these fall with a high frequency into a given grid that then identifies the stage of development of the lung as shown in the upper part of the form. The designation “mature (caution)” refers to patients other than those with diabetes who can be delivered if necessary at this time; if the patient has diabetes, she can be delivered with safety when the values fall in the “mature” grid. (Reprinted from Kulovich MV, Hallman MB, Gluck L. The lung profile. I. Normal pregnancy. Am J Obstet Gynecol. 1979;135:57, with permission. Copyright 1977 by the Regents of the University of California.) Tests for assessing fetal lung maturity (FLM) include functional assays and quantitative assays. Functional assays provide a direct physical measure of AF in an attempt to assess surfactant ability to decrease surface tension. These include

the “bubble or shake” test and the foam stability index (FSI). Quantitative tests include the lecithin–sphingomyelin ratio (L/S ratio), PG, and lamellar body counts. The FSI,12 a variant of Clements' original “bubble test”13 that was performed at the bedside, appears acceptable as a rapid, inexpensive, and informative assay. This qualitative, technique-dependent test requires only common equipment. The assay is based on the ability of surfactant to generate a surface tension lower than that of a 0.47 mol fraction ethanol–water solution. If sufficient surfactant is present, a stable ring of foam bubbles remains at the air–liquid interface. As surfactant increases (FLM probability increases), a larger mole fraction of ethanol is required to overcome the surfactant-controlled surface tension. The highest mole fraction used while still maintaining a stable ring of bubbles at the air–liquid interface is reported as the FSI (Table 29.2). The test is dependent on technique and can also be skewed by contamination of any kind in the AF (e.g., blood or meconium contamination). Interpretation of the FSI bubble patterns is difficult and technique dependent. Results can vary among clinical laboratorians. Most laboratories have found that an FSI of 0.47 correlates well with an L/S ratio of 2.0. TABLE 29.2 FSI Determination

The quantitative tests were given emphasis primarily by the work of Gluck et al.14 The phospholipids of importance in these tests are PG, lecithin, and sphingomyelin (SP). Relative amounts of PG and lecithin increase dramatically with pulmonary maturity, whereas SP concentration is relatively constant and provides a baseline for the L/S ratio. Increases in PG and lecithin correspond to the larger amounts of surfactant being produced by the type II pneumocytes as fetal lungs mature. The classic technique for separation and evaluation of the lipids involves thin-layer chromatography (TLC) of an extract of the AF. The extraction

procedure removes most interfering substances and results in a concentrated lipid solution. Current practices use either one- or two-dimensional TLC for identification. Laboratory needs determine if a one-dimensional or a two-dimensional method is performed. An example of the phospholipid separation by one-dimensional TLC is shown in Figure 29.15.15

FIGURE 29.15 Thin-layer chromatogram of AF phospholipids. Standard phospholipids (ST), total extract (T), and acetone-precipitable compounds (A) in AF are shown. The phospholipid standards contained, per liter, 2 g each of lecithin and P1, 1 g each of PG and SP, and 0.3 g each of PS and PE; 10 mL of the standard was spotted. (Reprinted from Tsai MY, Marshall JG. Phosphatidylglycerol in 261 samples of amniotic fluid. Clin Chem. 1979;25(5):683, with permission. Copyright 1979 American Association for Clinical Chemistry.) The classic breakpoint for judgment of maturity has been an L/S ratio of 2.0. As a result of the time-consuming requirements of performing the L/S ratio, several additional tests have been developed to allow faster determination of FLM. The L/S ratio, however, remains the “gold standard” by which all methods are compared.

Phosphatidylglycerol As mentioned previously, an additional phospholipid essential for FLM is PG that increases in proportion to lecithin. In the case of diabetic mothers, however,

development of PG is delayed. Therefore, using an L/S ratio of 2.0 as an indicator of FLM cannot be relied upon to ensure that RDS will not occur unless PG is also included in interpretation of the L/S ratio. With the current trend toward less labor-intensive techniques, an immunologic assay using antibody specific for PG can be used to determine FLM. The AmnioStat-FLM (Irvine Scientific, Santa Ana, CA) immunologic test is designed to measure the adequate presence of PG in AF. Because lecithin production is not affected in diabetic mothers and the levels of PG and lecithin rise at the same rate in unaffected pregnancies, the AmnioStat-FLM can be used to determine whether adequate FLM is present. Good correlation has been shown compared with the L/S ratio. The antibody-specific immunologic assay offers the additional advantage, not present in other assays, of being unaffected by specimen debris such as meconium and blood.
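As a rough illustration of how the L/S ratio and PG result are combined in practice, the following Python sketch applies the 2.0 breakpoint and the diabetic-pregnancy caveat discussed above; it is an interpretive sketch only, not a validated reporting algorithm.

def fetal_lung_maturity(ls_ratio, pg_present, diabetic_mother):
    """Illustrative interpretation: L/S ratio breakpoint of 2.0, with
    phosphatidylglycerol (PG) required when the mother is diabetic because PG
    appearance is delayed in those pregnancies."""
    if ls_ratio < 2.0:
        return "immature: increased risk of RDS"
    if diabetic_mother and not pg_present:
        return "L/S ratio mature, but PG absent: maturity not assured"
    return "mature"

print(fetal_lung_maturity(ls_ratio=2.1, pg_present=False, diabetic_mother=True))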

Lamellar Body Counts The phospholipids produced and secreted by the type II alveolar cells are released in the form of lamellar bodies. As FLM increases, these lamellated packets of surfactant also exhibit an increased presence in the AF. The fact that lamellar bodies are approximately the same size as platelets provides a convenient method to determine their concentration using the platelet channel on automated hematology analyzers.16 Based on the model of analyzer used, the number of lamellar bodies needed to ensure FLM can vary. This variation occurs as a result of different instrumental methods used to detect the bodies. Acceptable counts must be correlated with the specific instrumentation.17 A standardized protocol has been developed in an effort to make the assay transferable between laboratories and has allowed a greater incorporation of this assay into practice.18

CASE STUDY 29.4 A 28-year-old woman is pregnant for the second time, having miscarried a year ago. At the time she miscarried, she was working as a volunteer in Haiti and did not seek medical care. Her doctor is concerned that the baby that she is carrying now may have hemolytic disease due to Rh incompatibility. She also has diabetes. She has an increase in optical density at 450 nm on her amniotic fluid screen with a Liley graph reading of 0.6 and is at approximately 34 weeks of gestation. Her L/S ratio was 2.1. The

physician is considering inducing labor. Further testing of the amniotic fluid shows an FLM II reading of 55 mg surfactant/g of albumin.

questions 1. What do the tests reveal? Why was the FLM II reading ordered in this case when the patient already had the L/S ratio performed? 2. Why is the physician considering inducing labor?

SWEAT The common eccrine sweat glands function in the regulation of body temperature. They are innervated by cholinergic nerve fibers and are a type of exocrine gland. Sweat has been analyzed for its multiple inorganic and organic contents but, with one notable exception, has not been proved as a clinically useful model. That exception is the analysis of sweat for chloride levels in the diagnosis of cystic fibrosis (CF). The sweat test is the single most widely accepted diagnostic tool for the clinical identification of this disease. Normally, the coiled lower part of the sweat gland secretes a “presweat” upon cholinergic stimulation. As the presweat traverses the ductal part of the gland going through the dermis, various constituents are resorbed. In CF, the electrolytes, most notably chloride and sodium ions, are improperly resorbed owing to a mutation in the cystic fibrosis transmembrane conductance regulator (CFTR) gene, which controls a cyclic AMP–regulated chloride channel. CF (mucoviscidosis) is an autosomal recessive inherited disease that affects the exocrine glands and causes electrolyte and mucous secretion abnormalities. This exocrinopathy is present only in the homozygous state. The frequency of the carrier (heterozygous) state is estimated at 1 of 20 in the United States. The disease predominantly affects those of non-Hispanic Caucasian descent. The observed rate of expression ranks CF as the most common lethal hereditary disease in the United States, with death usually occurring by the third decade. The primary cause of death is pneumonia, secondary to the heavy, abnormally viscous secretion in the lungs. These heavy secretions cause obstruction of the microairways, predisposing the CF patient to repeated episodes of pneumonia. Patients also experience pancreatic insufficiency due to abnormally viscous

secretions obstructing pancreatic ducts. This obstruction ostensibly causes pooling and autoactivation of the pancreatic enzymes. The enzymes then cause destruction of the exocrine pancreatic tissue. Diagnostic algorithms for CF continue to rely on abnormal sweat electrolytes, pancreatic or bronchial abnormalities, and family history. The use of blood immunoreactive trypsin, a pancreatic product, is now prevalent in newborn screening programs. The rapidly developing area of molecular genetics provides the definitive methodology. The gene defect causing CF has been localized on chromosome 7, and the most frequent mutations have been characterized. However, approximately 1,550 mutations of the CFTR gene have been cataloged with the majority of patients found to be homozygous or heterozygous for the most common mutation. The sweat glands remain structurally unaffected by CF. Analysis of sweat for both sodium and chloride is valid but, historically, chloride was and is the major element, leading to use of the sweat chloride test. Because of its importance, a standard method has been suggested by the Cystic Fibrosis Foundation. This method is based on the pilocarpine nitrate iontophoresis method of Gibson and Cooke.19 Pilocarpine is a cholinergic-like drug used to stimulate the sweat glands. The sweat is absorbed on a gauze pad during the procedure. After collecting sweat by iontophoresis, chloride analysis is performed. Many methods have been suggested, and all are dependent on laboratory requirements. Generally, the sweat is leached into a known volume of distilled water and analyzed for chloride (chloridometer). In general, values greater than 60 mmol/L are considered positive. Other tests including osmolarity, conductivity, and chloride electrodes or patches placed on the skin are available but are considered screening tests with abnormal results followed by the Gibson-Cooke reference method. A variety of instrumentation is available for these screening tests. Although a value of 60 mmol/L is generally recognized for the quantitative pilocarpine iontophoretic test, it is important to consider several factors in interpretation. Not only will there be analytic variation around the cutoff, an epidemiologic borderline area will also occur. Considering this, the range of 45 to 65 mmol/L for chloride would be more appropriate in determining the need for repetition. Some patients with CFTR mutations have been found to have values below 60 mmol/L. Other variables must be considered. Age generally increases the limit—so much so that it is increasingly difficult to classify adults. Obviously, the patient's state of hydration also affects sweat levels. Because the

complete procedure is technically demanding, expertise should be developed before the test is clinically available. A complete description of sweat collection and analysis, including procedural justifications, is available for review.20
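The decision limits discussed above can be summarized in a short Python sketch; note that the text itself treats 45 to 65 mmol/L as a borderline zone around the 60 mmol/L cutoff, so the handling of values in that zone is a local policy decision, and the version below is only one reasonable reading.

def interpret_sweat_chloride(chloride_mmol_per_L):
    """Interpretation of a quantitative (Gibson-Cooke) sweat chloride result."""
    if chloride_mmol_per_L > 60:
        return "positive: consistent with cystic fibrosis; correlate clinically and repeat"
    if chloride_mmol_per_L >= 45:
        return "borderline (45-65 mmol/L zone): repeat the quantitative sweat test"
    return "within reference limits"

print(interpret_sweat_chloride(52))  # borderline -> repeat testing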

SYNOVIAL FLUID Joints are classified as movable or immovable. The movable joint contains a cavity that is enclosed by a capsule; the inner lining of the capsule is the synovial membrane (Fig. 29.16). This cavity contains synovial fluid, which is formed by ultrafiltration of plasma across the synovial membrane. Along with ultrafiltration of the plasma, the membrane also secretes a mucoprotein rich in hyaluronic acid. This mucoprotein is what causes the synovial fluid to be viscous. Synovial fluid functions as a lubricant for the joints and as a transport medium for delivery of nutrients and removal of cell wastes. The volume of fluid found in a large joint, such as the knee, rarely exceeds 3 mL. Normal fluid is clear, colorless to pale yellow, viscous, and nonclotting. Variations are indicative of pathologic conditions and are summarized in Table 29.3. Collection of a synovial fluid sample is accomplished by arthrocentesis of the joint under aseptic conditions (Fig. 29.17). The sample should be collected in a heparin tube for culture, a heparin or liquid EDTA for microscopic analysis, and a fluoride tube for glucose analysis. The viscosity of the fluid often requires the use of hyaluronidase to break down the mucoprotein matrix and allow for appropriate manual or automated aspiration and delivery into reaction vessels.

FIGURE 29.16 Synovial joint and model. TABLE 29.3 Classification of Synovial Fluids

Source: Mundt L, Shanahan K. Graff's Textbook of Routine Urinalysis and Body Fluids. 2nd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2011:268.

FIGURE 29.17 Placement of needle in arthrocentesis of knee joint. Chemical analysis of synovial fluid includes the testing of several different

analytes. These analytes include total protein, glucose, uric acid, and LD. Total protein within the synovial fluid can be measured using the most common methods performed on serum samples. The normal range for synovial fluid protein is 1 to 3 g/dL. Increased synovial fluid protein levels are seen in ankylosing spondylitis, arthritis, arthropathies that accompany Crohn's disease, gout, psoriasis, Reiter's syndrome, and ulcerative colitis. Synovial fluid glucose levels are interpreted using serum glucose levels after a 6- to 8-hour fast. Normally, synovial fluid glucose levels are less than 10 mg/dL lower than serum levels. In infectious disorders of the joint, synovial fluid glucose shows large decreases and can be as much as 20 to 100 mg/dL less than serum levels. Other groups of joint disorders typically demonstrate a lesser decrease in synovial fluid glucose.19 The ratio of synovial fluid to plasma glucose (normally 0.9:1) remains the most useful mechanism to evaluate glucose levels within the synovial fluid. Decreased ratios are found in inflammatory (e.g., gout, rheumatoid arthritis [RA], and systemic lupus erythematosus) and septic (e.g., bacterial and viral arthritis) conditions. Standard methods for glucose analysis can be used for synovial fluid analysis. Synovial fluid uric acid levels should be determined in patients with a monoarthritic episode to confirm or rule out the presence of gouty arthritis since plasma uric acid levels do not correlate with synovial fluid levels. While identifying the presence of uric acid crystals via microscopic methods is diagnostic of gouty arthritis, the determination of synovial fluid uric acid levels can be used in laboratories where crystal identification is not a routine procedure. Synovial fluid uric acid levels normally range from 6 to 8 mg/dL. Although rarely performed, lactic acid measurements can be helpful in diagnosing septic arthritis. Normally, synovial fluid lactate is less than 25 mg/dL but can be as high as 1,000 mg/dL in septic arthritis. LD can be elevated in synovial fluid, while serum levels remain normal. Synovial fluid LD levels are usually increased in RA, infectious arthritis, and gout. Rheumatoid factor (RF) is an antibody to immunoglobulins. Most patients with RA have RF in their serum, whereas more than half of these patients will demonstrate RF in synovial fluid. Determining synovial fluid RF is important to diagnose cases where RF is only being produced by joint tissue. In these cases, synovial fluid RF may be positive while the serum RF is negative.20
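A brief Python sketch of the serum-versus-synovial-fluid glucose comparison described above follows; the numeric thresholds are those quoted in the text, while the wording of the returned interpretations is illustrative.

def synovial_glucose_evaluation(fluid_glucose_mg_dL, serum_glucose_mg_dL):
    """Compare synovial fluid glucose with a fasting serum glucose drawn at the same time."""
    difference = serum_glucose_mg_dL - fluid_glucose_mg_dL
    ratio = fluid_glucose_mg_dL / serum_glucose_mg_dL  # normally about 0.9:1
    if difference < 10:
        return f"ratio {ratio:.2f}: consistent with a noninflammatory process"
    if difference >= 20:
        return f"ratio {ratio:.2f}: large decrease; consider septic or inflammatory arthritis"
    return f"ratio {ratio:.2f}: equivocal decrease; correlate with cell count and culture"

print(synovial_glucose_evaluation(fluid_glucose_mg_dL=55, serum_glucose_mg_dL=95))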

CASE STUDY 29.5

A 57-year-old female has had joint pain for 4 years. Her joints are becoming progressively deformed with swelling. Her doctor orders a synovial fluid analysis cell count, differential, synovial fluid RF serology, protein, glucose, LD, serum glucose level, serum RF, and fluid crystal analysis. Her results are as follows: Fluid appearance: yellow/cloudy WBC count, differential: greater than 25,000 WBCs/μL and greater than 50% polymorphonuclear neutrophils (PMNs) with ragocytes seen

Questions 1. What is the most probable diagnosis for this patient? What laboratory findings support your conclusion? 2. Name another condition that could have similar symptoms but was ruled out based upon the presented laboratory findings? What results would have been different in the condition you listed?

For additional student resources, please visit thePoint at http://thepoint.lww.com

QUESTIONS

1. Clinical chemistry laboratory testing for the assessment of infectious meningitis may include all of the following tests on a CSF sample EXCEPT a. Glucose. b. Total protein. c. Lactate. d. Glutamine. e. All of the above would be used to confirm a suspected case of infectious meningitis. 2. Cerebrospinal fluid performs which of the following functions? a. Buoyant cushion for the brain b. Supplies nutrients to the central nervous system c. Removes wastes d. Intracerebral and extracerebral transport e. All of the above 3. Assessing the integrity of the blood–brain barrier can be accomplished using which of the following ratios? a. CSF IgG/CSF albumin b. CSF albumin/CSF IgG c. CSF albumin/serum albumin d. Serum albumin/CSF albumin e. CSF total protein/serum total protein 4. A red CSF indicates a. Cerebral hemorrhage b. Traumatic tap c. Bacterial meningitis d. Viral meningitis e. Cerebral hemorrhage or traumatic tap 5. Lamellar body counts indicate a. Surfactant phospholipid packets b. A ratio of lecithin to sphingomyelin c. Direct measure of phosphatidylglycerol levels

d. Amniotic fluid bilirubin levels e. Meconium count of the fetus 6. A transudate could be caused by all of the following EXCEPT a. Congestive heart failure b. Lymphoma c. Renal failure d. Hepatic cirrhosis e. Nephrotic syndrome 7. CF is characterized by a. Elevated sweat chloride levels b. Homozygous expression of an autosomal recessive trait c. Pancreatic insufficiency d. All of these e. None of these 8. The high viscosity characteristic of normal synovial fluid samples is caused by a. Hyaluronic acid b. Hyaluronidase c. Elevated white blood cell counts d. The presence of crystals e. All of the above 9. An exudative pleural effusion would exhibit which of the following laboratory results? a. F/P total protein ratio of 0.4 b. F/P LD ratio of 0.7 c. Fluid cholesterol of 35 mg/dL d. F/P cholesterol ratio of 0.1 e. F/P bilirubin ratio of 0.3 10. Pleural fluid is collected via a. Thoracentesis b. Paracentesis c. Pericardiocentesis

d. Spinal tap e. None of the above

REFERENCES
1. Brunzel NA. Fundamentals of Urine and Body Fluid Analysis. 2nd ed. Philadelphia, PA: Saunders; 2004:327–328.
2. McBride LJ. Textbook of Urinalysis and Body Fluids: A Clinical Approach. New York, NY: Lippincott; 1998:200–201.
3. Okta M, Okta K, et al. Clinical and analytical evaluation of an enzyme immunoassay for myelin basic protein in cerebrospinal fluid. Clin Chem. 2000;46:1336–1330.
4. Bailey EM, Domenico P, Cunha BA. Bacterial or viral meningitis. Postgrad Med. 1990;88:217.
5. Calbreath DF. Clinical Chemistry: A Fundamental Textbook. Philadelphia, PA: Saunders; 1992:370.
6. Ross DL, Neeley AE. Textbook of Urinalysis and Body Fluids. New York, NY: Appleton-Century-Crofts; 1983:275.
7. Calbreath DF. Clinical Chemistry: A Fundamental Textbook. Philadelphia, PA: Saunders; 1992:371.
8. Ross DL, Neeley AE. Textbook of Urinalysis and Body Fluids. New York, NY: Appleton-Century-Crofts; 1983:280.
9. Davie AP, Francis CM, Love MP, et al. Value of the electrocardiogram in identifying heart failure due to left ventricular systolic dysfunction. Br Med J. 1996;312:222–224.
10. Liley AW. Liquor amnii analysis in the management of the pregnancy complicated by rhesus sensitization. Am J Obstet Gynecol. 1961;82:1359.
11. Kulovich MV, Hallman MB, Gluck L. The lung profile. I. Normal pregnancy. Am J Obstet Gynecol. 1979;135:57.
12. Statland BE, Freer DE. Evaluation of two assays of functional surfactant in amniotic fluid: surface-tension lowering ability and the foam stability index test. Clin Chem. 1979;25:1770.
13. Clements JA, Plataker ACG, Tierney DF, et al. Assessment of the risk of the respiratory distress syndrome by a rapid test for surfactant in amniotic fluid. N Engl J Med. 1972;286:1077.
14. Gluck L, Kulovich MV, Borer RC Jr, et al. Diagnosis of the respiratory distress syndrome by amniocentesis. Am J Obstet Gynecol. 1971;109:440.
15. Tsai MY, Marshall JG. Phosphatidylglycerol in 261 samples of amniotic fluid from normal and diabetic pregnancies, as measured by one-dimensional thin layer chromatography. Clin Chem. 1979;25:682.
16. Khazardoost H, Yakazadeh S, Borna F, et al. Amniotic fluid lamellar body count and its sensitivity and specificity evaluating fetal lung maturity. J Obstet Gynecol. 2005;25:257–259.
17. Szallasi A, Gronowski AM, Eby CS. Lamellar body count in amniotic fluid: a comparison of four different hematology analyzers. Clin Chem. 2003;49:994–997.
18. Neerhof ME, Dohnal JC, Ashwood ER, et al. Lamellar body counts: a consensus on protocol. Obstet Gynecol. 2001;97:318–320.
19. Gibson LE, Cooke RE. A test for concentration of electrolytes in cystic fibrosis of the pancreas utilizing pilocarpine by iontophoresis. Pediatrics. 1959;23:545.
20. Clinical Laboratory Standards Institute. Sweat Testing: Sample Collection and Quantitative Analysis; Approved Guideline. Villanova, PA: Clinical Laboratory Standards Institute; reaffirmed 2005. (CLSI document C34-A2.)

PART four Specialty Areas of Clinical Chemistry

30 Therapeutic Drug Monitoring TAKARA L. BLAMIRES

Chapter Outline
Overview
Routes of Administration
Drug Absorption
Drug Distribution
Free Versus Bound Drugs
Drug Metabolism
Drug Elimination
Pharmacokinetics
Specimen Collection
Pharmacogenomics
Cardioactive Drugs
  Digoxin
  Quinidine
  Procainamide
  Disopyramide
Antibiotics
  Aminoglycosides
  Teicoplanin
  Vancomycin
Antiepileptic Drugs
  Phenobarbital and Primidone
  Phenytoin and Fosphenytoin
  Valproic Acid
  Carbamazepine
  Ethosuximide
  Felbamate
  Gabapentin
  Lamotrigine
  Levetiracetam
  Oxcarbazepine
  Tiagabine
  Topiramate
  Zonisamide
Psychoactive Drugs
  Lithium
  Tricyclic Antidepressants
  Clozapine
  Olanzapine
Immunosuppressive Drugs
  Cyclosporine
  Tacrolimus
  Sirolimus
  Mycophenolic Acid
Antineoplastics
  Methotrexate
Bronchodilators
  Theophylline

Questions
Suggested Readings
References

Chapter Objectives
Upon completion of this chapter, the clinical laboratorian should be able to do the following:
Discuss drug characteristics that make therapeutic drug monitoring essential.
Identify factors that influence the absorption of an orally administered drug.
List factors that influence the rate of drug elimination.
Define drug distribution and discuss factors that influence it.
Calculate volume of distribution, elimination constant, and drug half-life.
Relate the concentration of a circulating drug to pharmacokinetic parameters.
Discuss collection of appropriate specimens for therapeutic drug monitoring.
Identify the therapeutic category or use of each drug presented in this chapter.
Describe complications that result from major toxicity of drugs presented in this chapter.
Identify key features of each drug presented in this chapter that may influence its blood concentration.

Key Terms
Bioavailability
Distribution
Peak drug concentration
Pharmacogenomics
Pharmacokinetics
Standard dosage
Therapeutic drug monitoring
Therapeutic range
Trough drug concentration

For additional student resources, please visit http://thepoint.lww.com


OVERVIEW Therapeutic drug monitoring (TDM) involves the coordinated effort of several health care professionals to accurately monitor circulating drug concentrations in serum, plasma, and whole blood specimens. Laboratory personnel play an essential role in TDM. Appropriate timing of specimen collection, accurate measurement of drug concentrations, and timely reporting of results are necessary to achieve safe and effective patient drug therapy. The main purposes for TDM are to (1) ensure drug dosage is within a range that produces maximal therapeutic benefit known as the therapeutic range and (2) identify when the drug is outside the therapeutic range, which may lead to drug inefficacy or toxicity. For most drug therapies, safe and effective dosage regimens for the majority of the population have been established, and TDM is unnecessary. However, for some therapeutic drugs, there is a narrow window between therapeutic efficacy and toxicity. Therefore, careful monitoring and appropriate dosage adjustments are necessary to maintain therapeutic concentrations. The standard dosage for a drug is the dose providing therapeutic benefits as statistically derived from observations in a healthy population. Disease states may produce altered physiologic conditions in which the standard dose does not produce the predicted concentration and must be adjusted accordingly. Patient

age, gender, genetics, diet, prescription drugs, self-administered over-the-counter drugs, and even naturopathic agents can influence drug concentrations and efficacy. For these reasons, it is important to establish a dosage regimen to fit individual situations and needs. This is achieved with TDM. TDM may also be used to identify patients who are noncompliant or to reoptimize a dosing regimen based on drug–drug interactions or a change in the patient's physiologic state that has unpredictably affected circulating drug concentrations. The basis of TDM includes consideration of the route of administration, rate of absorption, distribution of drug within the body, and rate of elimination (Fig. 30.1). There are many factors that influence drug absorption, distribution, and elimination; therefore, the process of achieving therapeutic drug concentrations is by no means straightforward. This chapter introduces these concepts and their influence on circulating drug concentrations as well as briefly discussing some drugs commonly monitored by TDM testing in the clinical laboratory.

FIGURE 30.1 Overview of factors that influence the circulating concentration of an orally administered drug. GI, gastrointestinal.

ROUTES OF ADMINISTRATION To achieve maximum therapeutic benefit, a drug must be at an appropriate concentration at its site of action. For instance, a cardioactive drug might need to reach myocytes at a dosage concentration that is effectively maintained in the therapeutic range for days to weeks. Intravenous (IV) administration into the circulatory system offers the most direct and effective delivery of a drug to its site of action. The fraction of the administered dose that eventually reaches its site of action is defined as its bioavailability. In addition to IV, drugs can be

administered by several other routes. Drugs can be injected directly into muscle tissue through intramuscular (IM) injections or just under the skin with a subcutaneous (SC) injection. They can also be inhaled or absorbed through the skin (transcutaneous) through use of a transdermal patch. Rectal delivery through a suppository is commonly used in infants and in situations in which oral delivery is not possible. Oral administration is the most common route of delivery as it is the least invasive for patients. Each method of administration presents with different characteristics that can affect circulating drug concentrations. As oral administration is the most common, the following discussions of drug absorption, distribution, metabolism, and elimination will focus on this method of administration.

DRUG ABSORPTION For orally administered drugs, the efficiency of absorption from the gastrointestinal tract is dependent on several factors including (1) dissociation from its administered form, (2) solubility in gastrointestinal fluids, and (3) diffusion across gastrointestinal membranes. Tablets and capsules require dissolution before being absorbed, whereas liquid solutions tend to be more rapidly absorbed. Some drugs are subject to uptake by active transport mechanisms intended for dietary constituents; however, most are absorbed by passive diffusion from the gastrointestinal tract into the bloodstream. This process requires that the drug be in a hydrophobic, or nonionized, state. Because of gastric acidity, weak acids are efficiently absorbed in the stomach, but weak bases are preferentially absorbed in the intestine where the pH is more alkaline. For most drugs, absorption from the gastrointestinal tract occurs in a predictable manner in healthy people; however, changes in intestinal motility, pH, or inflammation, as well as the presence of food or other drugs, may dramatically alter absorption rates. For instance, a patient with inflammatory bowel disease may have a compromised gastrointestinal tract, which may affect normal absorption of some drugs. Absorption can also be affected by coadministration of drugs that affect gastrointestinal function such as antacids, kaolin, sucralfate, cholestyramine, and antiulcer medications. Morphine may also slow gastrointestinal motility, thereby influencing the rate of drug absorption. Additionally, drug absorption rates may change with age, pregnancy, or pathologic conditions. In these instances, predicting the final circulating concentration in blood from a standard oral dose can be difficult. However, with the use of TDM, effective oral dosage regimens can be determined.

DRUG DISTRIBUTION The free fraction of circulating drug is subject to diffusion out of the vasculature and into interstitial and intracellular spaces. The relative proportion between circulation and the tissues defines the drug's distribution. The ability to leave circulation is largely dependent on the lipid solubility of the drug. Drugs that are highly hydrophobic can easily cross cellular membranes and partition into lipid compartments, such as adipose tissue and nerve cells. Drugs that are polar and not ionized can also cross cell membranes, but do not sequester into lipid compartments. Ionized species diffuse out of the vasculature, but at a slow rate. The volume of distribution index is used to describe the distribution characteristics of a drug and is expressed mathematically as follows:

Vd = D/C (Eq. 30-1)

where Vd is the volume of distribution in liters, D is an IV injected dose of the drug in milligrams (mg) or grams (g), and C is the drug concentration in plasma (mg/L or g/L). Drugs that are hydrophobic can have large Vd values due to partitioning into hydrophobic compartments. Substances that are ionized or are primarily bound in plasma have small Vd values due to sequestration in the vasculature.
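As a quick illustration of Eq. 30-1, the minimal Python sketch below computes a volume of distribution from an IV dose and the resulting plasma concentration; the function name and the example numbers are illustrative values, not taken from the text.

```python
def volume_of_distribution(dose_mg, plasma_conc_mg_per_L):
    """Vd = D / C (Eq. 30-1): IV dose in mg, plasma concentration in mg/L,
    result in liters."""
    return dose_mg / plasma_conc_mg_per_L

# Hypothetical example: a 500 mg IV dose producing a plasma concentration
# of 10 mg/L implies Vd = 50 L, suggesting the drug distributes well
# beyond the vascular space.
print(volume_of_distribution(500, 10))  # 50.0
```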

FREE VERSUS BOUND DRUGS Most drugs in circulation are subject to binding with serum constituents. Although many potential bonds may be formed, most are drug–protein complexes. An important aspect regarding drug dynamics is that typically only the free or unbound fraction can interact with its site of action and result in a biologic response. For this reason, the free drug fraction is also termed active. At a standard dose, the total plasma drug concentration may be within the therapeutic range, but the patient experiences toxic adverse effects due to a high free fraction or does not realize a therapeutic benefit due to a low free fraction. This may occur secondary to changes in blood protein content as might occur during inflammation, malignancies, pregnancy, hepatic disease, nephrotic syndrome, malnutrition, and acid–base disturbances. Albumin represents the majority of protein constituents in plasma, and changes in its concentration can affect the free versus bound status of many drugs. Additionally, increases in plasma alpha-1-acid glycoprotein during acute phase reactions will lead to

increased binding of drugs such as propranolol, quinidine, chlorpromazine, cocaine, and benzodiazepines. The fraction of free drug may also be influenced by the concentration of substances that compete for binding sites, which may be other drugs or endogenous substances, such as urea, bilirubin, or hormones. Measurement of the free drug fraction should be considered for drugs that are highly protein bound or when clinical signs are inconsistent with the total drug measurement.

DRUG METABOLISM All substances absorbed from the intestine (except the rectum) enter the hepatic portal system. In this system, circulating blood from the gastrointestinal tract is routed through the liver before it enters into general circulation. Certain drugs are subject to significant hepatic uptake and metabolism during passage through the liver. This process is known as first-pass metabolism. Liver metabolism may not be the same in every patient as it can be influenced by an individual's genetics. These variations in drug metabolism related to genetics are examined in the discipline of pharmacogenomics. In addition to genetic variation, a patient with impaired liver function may have reduced capacity to metabolize drugs. This is a particularly important consideration if the efficacy of a drug depends on metabolic generation of a therapeutically active metabolite. This enzymatic process is referred to as biotransformation. Patients with liver disorders may require reduced dosages of the drug as the rate of metabolism and the subsequent elimination process may be slowed. Most drugs are xenobiotics, which are exogenous substances that are capable of entering biochemical pathways intended for endogenous substances. There are many potential biochemical pathways in which drugs can be acted on or biotransformed. The biochemical pathway responsible for a large portion of drug metabolism is the hepatic mixed-function oxidase (MFO) system. The basic function of this system involves taking hydrophobic substances and, through a series of enzymatic reactions, converting them into water-soluble products. These products can then either be transported into the bile or released into general circulation for elimination by renal filtration. There are many enzymes involved in the MFO system, and they are commonly divided into two functional groups, or phases. Phase I reactions produce reactive intermediates. Phase II reactions conjugate functional groups, such as glutathione, glycine, phosphate, and sulfate, to reactive sites on the intermediates, resulting in water-soluble products. The MFO system is nonspecific and allows

many different endogenous and exogenous substances to go through this series of reactions. Although there are many potential substrates for this pathway, the products formed from an individual substance are specific. For example, acetaminophen is metabolized in the MFO pathway, ultimately leading to the formation of a glutathione conjugate following phase II reactions. In the presence of too much acetaminophen, as in the case of an overdose, the MFO system may be overwhelmed and cannot effectively metabolize it to a safe, water-soluble end product for elimination by the kidneys. In this case, the conjugating group for a given drug can become depleted in phase II reactions and an accumulation of phase I products occurs. Excessive phase I products may result in toxic effects, and in the case of acetaminophen, irreversible damage to hepatocytes may occur. It is also noteworthy that the MFO system can be induced. This is seen as an increase in the synthesis and activity of the rate-limiting enzymes within this pathway. The most common inducers are xenobiotics that are MFO substrates. Thus, certain drugs may affect their own rate of elimination. Due to biologic variability in the degree of induction, TDM is commonly used to establish an appropriate dosage regimen for these drugs. Because many potential substrates enter the MFO system, competitive and noncompetitive drug–drug interactions can result in altered rates of elimination of the involved drugs. Interactions are not limited to drug–drug, but may also include drug–food (e.g., grapefruit) or drug–beverage (e.g., alcohol and caffeine). For example, metabolism of acetaminophen by the MFO system is altered in the presence of alcohol, rendering it more toxic. In most instances, the degree of alteration is unpredictable. Changes in hepatic status can also affect the concentration of circulating drugs eliminated by this pathway. Induction of the MFO system typically results in accelerated clearance and a corresponding shorter drug half-life. Hepatic disease states characterized by a loss of functional tissue may result in slower rates of clearance and a corresponding longer drug half-life. For example, cirrhosis results in irreversible damage and fibrosis of the liver, rendering hepatocytes nonfunctional. Consequently, xenobiotics may not be effectively metabolized by the MFO system, thereby reducing the rate of metabolism and elimination while increasing the opportunity for toxicity. In these situations, TDM aids in appropriate dosage adjustment. For some drugs, there is considerable variance in the rate of hepatic and nonhepatic drug metabolism within a normal population. This results in a highly

variable rate of clearance, even in the absence of disease. Establishing dosage regimens for these drugs is, in many instances, aided by the use of TDM. With the use of molecular genetics, it is now also possible to identify common genetic variants of some drug-metabolizing pathways, and identification of these individuals may assist in establishing an individualized dosage regimen.

DRUG ELIMINATION Drugs can be cleared from the body by various mechanisms. The plasma free fraction of a parent drug or its metabolites is subject to glomerular filtration, renal secretion, or both. For those drugs not secreted or subject to reabsorption, the elimination rate of free drug directly relates to the glomerular filtration rate. Decreases in glomerular filtration rate directly result in an increased drug half-life and elevated plasma concentration. Aminoglycoside antibiotics and cyclosporine, an immunosuppressant drug, are examples of drugs that are not secreted or reabsorbed by the renal tubules. Independent of the clearance mechanism, decreases in the plasma drug concentration most often occur as a first-order process indicating an exponential rate of loss. This implies that the rate of change of drug concentration over time varies continuously in relation to the concentration of the drug. First-order elimination follows the general equation:

ΔC/ΔT = −kC (Eq. 30-2)

This equation defines how the change in concentration per unit time (ΔC/ΔT) is directly related to the concentration of drug (C) and the constant (k). The k value is a proportionality factor that describes the percent change per unit time. It is commonly referred to as the elimination constant or the rate of elimination, and the change it describes is negative because the concentration is decreasing. The graphic solution to this equation is an exponential function that declines in the predicted curvilinear manner, asymptotically approaching zero (Fig. 30.2). The graph shown in Figure 30.2 illustrates a large change at high drug concentrations and a smaller change at low drug concentrations; however, the rate, or percent lost, remains the same. Plotting it in semilogarithmic dimensions (Fig. 30.3) can linearize this function.

FIGURE 30.2 First-order drug elimination. This graph demonstrates exponential rate of loss on a linear scale. Hash-marked lines are representative of half-life.

FIGURE 30.3 Semilogarithmic plot of exponential rate of drug elimination. The slope of this plot is equal to the rate of elimination (k). Drugs are eliminated through hepatic metabolism, renal filtration, or a combination of the two. For some drugs, elimination by these routes is highly variable and functional changes in either organ system may result in changes in the rate of elimination. In these situations, information regarding elimination rate and estimating the circulating concentration of a drug after a given time period are important factors in establishing an effective and safe dosage regimen. Equation 30-2 and Figures 30.2 and 30.3 are useful in determining the rate of elimination and concentration of a drug after a given time period. The following equation illustrates how the concentration of drug can be estimated based on initial concentration, the elimination constant, and time since the initial dose was administered:

CT = C0 × e^(−kT) (Eq. 30-3)

where C0 is the initial concentration of drug, CT is the concentration of drug after the time period (T), k is the elimination constant, and T is the time period evaluated. This is the most useful form of the elimination equation. By measuring the initial drug concentration and the concentration after time (T), the

elimination constant can be determined. Once k is known, the amount of drug that will be present after a certain time period can be determined.

Example
The concentration of gentamicin is 10 μg/mL at 12:00. At 16:00, the gentamicin concentration is 6 μg/mL. What is the elimination constant (k) for gentamicin in this patient? Using Eq. 30-3:

C0 = 10 μg/mL, CT = 6 μg/mL, T = 4 hours

Substituting these values into the elimination equation (Eq. 30-3) yields:

6 μg/mL = 10 μg/mL × e^(−k × 4 h)

Divide both sides by 10 μg/mL and note that the concentration units can be cancelled:

0.6 = e^(−k × 4 h)

To eliminate the exponent, take the natural logarithm of both sides:

ln(0.6) = −k × 4 h

Solve the natural log:

−0.51 = −k × 4 h

Multiply both sides by −1:

0.51 = k × 4 h

Divide both sides by 4 hours:

k = 0.13/h

The calculated value for k indicates the patient is eliminating gentamicin at a rate of 13% per hour.

In this same patient on the same day, what would the predicted blood concentration of gentamicin be at midnight (24:00)? For C0, either the 12:00 or 16:00 value can be used as long as the correct corresponding time duration is used. In this example, the 16:00 value of 6 μg/mL will be used.

C0 = 6 μg/mL, k = 0.13/h, T = 8 hours (16:00 to 24:00)

Substituting these values into the elimination equation (Eq. 30-3) yields:

CT = 6 μg/mL × e^(−0.13/h × 8 h)

Solve for the exponent and note that the time unit can be cancelled:

CT = 6 μg/mL × e^(−1.04) = 6 μg/mL × 0.35 = 2.1 μg/mL

The calculated CT value indicates the patient would have a serum gentamicin concentration of 2.1 μg/mL at midnight (24:00). Although the elimination constant (k) is a useful value, it is not common nomenclature in the clinical setting. Instead, the term half-life (T½) is used. One half-life is the time needed for the serum concentration to decrease by one-half. It can be determined graphically (Fig. 30.2) or by conversion of the elimination constant (k) to half-life (T½) using the formula given in Eq. 30-4. Of these two methods, the calculation provides the easiest and most accurate way to determine the drug half-life. Referring to the example above, the half-life of gentamicin in this patient would be calculated as follows:

T½ = 0.693/k (Eq. 30-4)

T½ = 0.693 ÷ 0.13/h = 5.33 hours

The gentamicin half-life value for this patient indicates that after 5.33 hours, the concentration of drug in their blood would be one-half the initial concentration.
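For readers who want to check these numbers, the short Python sketch below reproduces the gentamicin example; the function names are illustrative and not part of the text.

```python
import math

def elimination_constant(c0, ct, hours):
    """Estimate the first-order elimination constant k (per hour) from two
    timed concentrations, by rearranging Eq. 30-3: k = -ln(CT/C0)/T."""
    return -math.log(ct / c0) / hours

def half_life(k):
    """Convert the elimination constant to a half-life (Eq. 30-4)."""
    return 0.693 / k

def predict_concentration(c0, k, hours):
    """Predict the concentration after a time period using Eq. 30-3."""
    return c0 * math.exp(-k * hours)

# Gentamicin example from the text: 10 ug/mL at 12:00 and 6 ug/mL at 16:00.
k = elimination_constant(10, 6, 4)           # ~0.128/h (~13% per hour)
t_half = half_life(k)                        # ~5.4 h (5.33 h if k is rounded to 0.13)
c_midnight = predict_concentration(6, k, 8)  # ~2.2 ug/mL (~2.1 with the rounded k)

print(f"k = {k:.2f}/h, half-life = {t_half:.1f} h, C at 24:00 = {c_midnight:.1f} ug/mL")
```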

PHARMACOKINETICS Pharmacokinetics is the activity of a drug in the body as influenced by absorption, distribution, metabolism, and excretion. Evaluation of pharmacokinetics assists in establishing or modifying a dosage regimen as it takes into consideration in vivo factors that affect the concentration of the drug and its rate of change. Figure 30.3 is an idealized plot of elimination after an IV bolus assuming there is no previous distribution of the drug. A drug that distributes outside of the vascular space would produce an elimination graph such as in Figure 30.4. The rapid rate of change seen immediately after the initial IV bolus is a result of distribution and elimination. The rate of elimination (k) can be determined only after distribution is complete. Figure 30.5 is a plot of serum drug concentration as it would appear after oral administration of a drug. As absorbed drug enters the circulation, it is subject to simultaneous distribution and elimination. Serum concentrations rise when the rate of absorption exceeds the rate of distribution and elimination and declines as elimination and distribution exceed the rate of absorption. The rate of elimination can only be determined after absorption and distribution are complete.

FIGURE 30.4 Semilogarithmic elimination plot of a drug subject to distribution. Initial rate of elimination is influenced by distribution (dashed line) and terminal elimination rate (dotted line). After distribution is complete (1.5 hours), elimination is first order.

FIGURE 30.5 Plasma concentration of a drug after oral administration. After oral administration at time 0, serum concentration increases (solid line) after a brief lag period. Plasma concentrations peak when rate of elimination and distribution exceed rate of absorption. First-order elimination (dotted line) occurs when absorption and distribution are complete. Most drugs are not administered as a single bolus, but are delivered on a scheduled basis (e.g., once every 8 hours). With this type of administration, the blood drug concentration oscillates between a maximum and a minimum referred to as the peak drug concentration and the trough drug concentration, respectively. The goal of a multiple-dosage regimen is to achieve trough and peak concentrations in the therapeutic range and to ensure that the peak is not in the toxic range. Evaluation of this oscillating function cannot be done immediately after initiation of a scheduled dosage regimen. Approximately five to seven doses are required before a steady-state oscillation is achieved as demonstrated in Figure 30.6.

FIGURE 30.6 Steady-state kinetics in a multiple-dosage regimen. The character τ indicates the dosage interval. Equal doses at this interval reach steady state after six or seven dosage intervals. The character c̄ indicates the mean drug concentration. After the first oral dose, absorption and distribution occur, followed only by elimination. Before the concentration of drug drops significantly, the second dose is given. The peak of the second dose is additive to what remained from the first dose. Because elimination is first order, the higher concentration results in a larger amount eliminated. The third through seventh scheduled doses all have the same effect, increasing serum concentration and the amount eliminated. By the end of the seventh dose, the amount of drug administered in a single dose is equal to the amount eliminated during the dosage period. At this point, steady state is established and peak and trough concentrations can be evaluated.
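The approach to steady state can also be illustrated numerically. The Python sketch below uses hypothetical numbers (the per-dose concentration increment, elimination constant, and dosing interval are assumptions, not values from the text) to show peak and trough concentrations leveling off after roughly five to seven doses, as in Figure 30.6.

```python
import math

def simulate_peaks_and_troughs(dose_increment, k, tau, n_doses):
    """Simulate peak and trough concentrations for equal doses given every
    tau hours, assuming first-order elimination (Eq. 30-3) and that each
    dose instantaneously raises the concentration by a fixed increment."""
    conc = 0.0
    history = []
    for dose_number in range(1, n_doses + 1):
        conc += dose_increment              # peak immediately after the dose
        peak = conc
        conc *= math.exp(-k * tau)          # first-order decay until the next dose
        history.append((dose_number, round(peak, 2), round(conc, 2)))
    return history

# Hypothetical regimen: each dose adds 4 ug/mL, k = 0.1/h, one dose every 8 hours.
for dose_number, peak, trough in simulate_peaks_and_troughs(4.0, 0.1, 8, 8):
    print(f"dose {dose_number}: peak {peak} ug/mL, trough {trough} ug/mL")
# The peaks and troughs climb over the first few doses and level off after
# roughly five to seven doses, consistent with Figure 30.6.
```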

SPECIMEN COLLECTION Accurate timing of specimen collection is the single most important factor in TDM. In general, trough concentrations are drawn right before the next dose, and peak concentrations are drawn 1 hour after an orally administered dose. This rule of thumb must always be used within the clinical context of the situation. Drugs that are absorbed at a slower rate may require several hours before peak drug concentrations can be evaluated. In all situations, determination of serum drug concentrations should be performed only after steady state has been

achieved. Serum or plasma is the specimen of choice for the determination of circulating concentrations of most drugs. Care must be taken that the appropriate container is used when collecting these specimens as some drugs have a tendency to be absorbed into the gel of certain separator collection tubes. It is necessary to follow manufacturer recommendations when this preanalytical effect is possible as failure to do so may result in falsely low values. Heparinized plasma is generally suitable for most drug analyses. Calcium-binding anticoagulants add a variety of anions and cations that may interfere with analysis or cause a drug to distribute differently between cells and plasma. As a result, specimen tubes that contain ethylenediaminetetraacetic acid (EDTA), citrate, or oxalate are generally considered unacceptable specimen types for TDM.

PHARMACOGENOMICS The effectiveness of a drug within a population can be divided into two categories: patients who are responders and patients who are nonresponders. Responders are patients benefiting from the therapeutic and desired effects of the drug, while nonresponders do not demonstrate a beneficial effect from the initiation of a given drug regimen. The therapeutic effectiveness of drugs in responders and nonresponders has recently been attributed to the interindividual variation in genetic polymorphisms of the patients' drug metabolism pathways. As previously mentioned, pharmacogenomics is the science of studying these variations and developing drug therapies to compensate for the genetic differences impacting therapy regimens. One of the most prominent gene families that affect drug metabolism is the cytochrome P-450 (CYP450) family, which is a family of enzymes within the MFO system previously described. The differences in rates of drug metabolism in a population are attributed to the variations in the enzymes as a result of genetic polymorphism. The three variations most often linked to differences in drug metabolism are CYP2D6, CYP2C9, and CYP3A4. This information can be used to personalize drug doses to the degree that is appropriate for the CYP450 profile of the patient. For example, if the patient's CYP450 profile indicates he or she has genes known to metabolize more slowly, he or she would be given lower doses of the drug to avoid toxic concentrations. Alternatively, if a patient's CYP450 profile indicates he or she has genes predisposing the patient to an increased rate of metabolism, he or she would need an increased dose to maintain therapeutic drug concentrations. Pharmacogenetic profiling can also be used to predict drug–drug interactions or as an indicator of whether the drug will provide any therapeutic benefit at all.1
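The dose-direction reasoning described above can be sketched as a simple lookup. This sketch is purely schematic (the phenotype labels and wording are illustrative, and real dose adjustments depend on the specific drug and clinical guidance):

```python
# Schematic mapping of CYP450 metabolizer status to the direction of dose
# adjustment described in the text; labels and wording are illustrative only.
METABOLIZER_GUIDANCE = {
    "slow metabolizer": "consider a lower dose to avoid toxic concentrations",
    "typical metabolizer": "standard dosage is usually appropriate",
    "rapid metabolizer": "consider a higher dose to maintain therapeutic concentrations",
}

def dosing_note(phenotype):
    """Return the schematic dosing note for a CYP450 metabolizer phenotype."""
    return METABOLIZER_GUIDANCE.get(phenotype, "phenotype not recognized")

print(dosing_note("slow metabolizer"))
```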

CARDIOACTIVE DRUGS Many cardiac conditions are treated with drugs, only a few of which require TDM. The cardiac glycosides and antiarrhythmics are two classes of cardioactive drugs for which TDM aids in achieving an appropriate dosage regimen.

Digoxin Digoxin (Lanoxin) is a cardiac glycoside used in the treatment of congestive heart failure. It functions by inhibiting membrane Na+, K+-ATPase. This causes a decrease in intracellular potassium, which results in increased intracellular calcium in cardiac myocytes. The increased intracellular calcium improves cardiac contractility. This effect is seen in the serum concentration range of 0.8 to 2 ng/mL (1 to 2.6 nmol/L).2 Higher serum concentrations (>3 ng/mL [3.8 nmol/L]) decrease the rate of ventricular depolarization. Although this level can be used to control ventricular tachycardia, it is not frequently used because of toxic adverse effects, including nausea, vomiting, and visual disturbances, that occur at plasma concentrations greater than 2 ng/mL. Adverse cardiac effects, such as premature ventricular contractions (PVCs) and atrioventricular node blockage, are also common. Absorption of orally administered digoxin is variable and influenced by dietary factors, gastrointestinal motility, and formulation of the drug. In circulation, about 25% is protein bound. The unbound or free form of digoxin is sequestered into muscle cells, and at equilibrium, the tissue concentration is 15 to 30 times greater than that of plasma. Elimination of digoxin occurs primarily by renal filtration of unbound digoxin. The remainder is metabolized into several products by the liver. The half-life of plasma digoxin is 38 hours in an average adult. The major contributing factor to the extended half-life is the slow release of tissue digoxin back into circulation. Because of variable gastrointestinal absorption, establishing a dosage regimen usually requires assessment of plasma concentrations after initial dosing to ensure that effective and nontoxic plasma concentrations are achieved. In addition, changes in glomerular filtration rate can have a dramatic effect on

plasma concentrations. Frequent dosage adjustments, in conjunction with measurement of plasma digoxin concentrations, should be performed in patients with renal disease. The therapeutic benefits and toxicities of digoxin can also be influenced by the concentration of electrolytes in the plasma. Low potassium and magnesium potentiate digoxin actions.2 In these conditions, adjustment of plasma concentrations below the therapeutic range may be necessary to avoid toxicity. Thyroid status may also influence the actions of digoxin. Hyperthyroid patients display a resistance to digoxin actions, and hypothyroid patients are more sensitive.3 The timing for evaluation of peak digoxin concentrations is crucial. In an average adult, plasma concentrations peak between 2 and 3 hours after an oral dose; however, uptake into the tissues is a relatively slow process. As a result, peak plasma concentrations do not correlate well with tissue concentrations. It has been established that the plasma concentration 8 to 10 hours after an orally administered dose correlates well with the tissue concentration, and specimens should be drawn within this window. Specimens collected before this time are misleading and are not considered a valid representation of the patient's drug status. Immunoassays are used to measure total digoxin concentration in serum.2 With most commercial assays, cross-reactivity with hepatic metabolites is minimal; however, newborns, pregnant women, and patients with uremia or late-stage liver disease produce an endogenous substance that cross-reacts with the antibodies used to measure serum digoxin. In patients with these digoxin-like immunoreactive substances, falsely elevated concentrations are common and should be considered along with the clinical context.

Quinidine Quinidine (Quinidex Extentabs, Cardioquin, Quinora) is a naturally occurring drug used to treat various cardiac arrhythmias. The two most common formulations are quinidine sulfate and quinidine gluconate. Quinidine is generally administered orally as gastrointestinal absorption is complete and rapid for quinidine sulfate with peak serum concentrations achieved 2 hours post dose. Quinidine gluconate is a slow-release formulation with peak serum concentrations 4 to 5 hours after an oral dose.2 Because of its slow rate of absorption, trough levels of quinidine gluconate are usually drawn 1 hour post dose. The most common adverse effects of quinidine toxicity are nausea, vomiting, and abdominal discomfort though more serious complications such as

thrombocytopenia and tinnitus can occur. Signs of cardiovascular toxicity, such as PVCs, may be seen when blood concentrations are twice the upper limit of the therapeutic range. In most instances, monitoring of quinidine therapy only involves measurement of trough specimens to ensure serum concentrations are within the therapeutic range. Assessment of peak specimens is performed only when symptoms of toxicity are present. Approximately 70% to 80% of absorbed quinidine is bound to plasma proteins.2 Quinidine has a half-life of 6 to 8 hours and is primarily eliminated by hepatic metabolism. Induction of hepatic metabolism, such as by barbiturates, increases the clearance rate of quinidine, whereas impairment of this system, as seen in late-stage liver disease, may extend the half-life of quinidine in circulation. Plasma quinidine concentrations can be determined by chromatographic methods or immunoassays and the therapeutic range for quinidine is 2 to 5 μg/mL. Manufactured quinidine may contain bioactive contaminants such as dihydroquinidine. Early immunoassays detected quinidine only, but most current immunoassays cross-react with these bioactive contaminants and serum measurements represent the total quinidine potential.

Procainamide Like quinidine, procainamide (Procanbid, Procan SR, Pronestyl) is used to treat cardiac arrhythmias. Oral administration is common as gastrointestinal absorption is rapid and complete with peak plasma concentrations at about 1 hour post dose.2 Approximately 20% of absorbed procainamide is bound to plasma proteins. Procainamide has a half-life of approximately 4 hours and is eliminated by a combination of both renal filtration and hepatic metabolism. N-acetylprocainamide, a hepatic metabolite of the parent drug, demonstrates antiarrhythmic activity similar to procainamide, and measurements of the total antiarrhythmic drug potential must take into consideration the parent drug and its metabolite. The therapeutic range for procainamide is 4 to 8 μg/mL. Alteration in either renal or hepatic function can lead to increased plasma concentrations of the parent drug and metabolites. An increased concentration can result in myocardial depression and arrhythmia. Both procainamide and its active metabolite can be measured by immunoassay to determine the total drug potential.

CASE STUDY 30.1

A patient with congestive heart failure has been successfully treated with digoxin for several years, but recently developed renal failure. Laboratory records indicate semiannual peak digoxin concentrations that have all been within the therapeutic range. A serum specimen was collected upon admission and results are shown in Case Study Table 30.1.1. Although the digoxin concentration is elevated, the physician indicates the patient is not exhibiting signs or symptoms of toxicity.

Questions 1. If the specimen was collected without consideration of the time of the patient's last digoxin dose, how might this affect the interpretation of the result? 2. Other than the time relative to dose administration, what additional factors should be taken into consideration when interpreting the digoxin result?

3. What additional laboratory test would aid in the interpretation of this case?

Disopyramide Disopyramide (Norpace) is another antiarrhythmic agent used in the treatment of cardiac abnormalities. Disopyramide may be administered as a quinidine substitute when the adverse effects of quinidine are unacceptable. It is most commonly administered as an oral preparation as gastrointestinal absorption is complete and rapid with plasma concentrations peaking at about 1 to 2 hours post dose.2 Disopyramide binds to several plasma proteins, but binding is highly variable between individuals and is concentration dependent. As a result, it is difficult to correlate total plasma drug concentration with therapeutic benefit and toxicity. In most patients, total blood concentrations in the range of 3 to 7.5 μg/mL (8.8 to 22.1 μmol/L) have been determined to be effective and nontoxic; however, interpretation of disopyramide results should take the clinical perspective into consideration. The adverse effects of disopyramide therapy are dose dependent.2 Anticholinergic effects, such as dry mouth and constipation, may be seen at concentrations greater than 4.5 μg/mL (13.3 μmol/L). Cardiac effects of drug toxicity, such as bradycardia and atrioventricular node blockage, are usually seen at concentrations greater than 10 μg/mL (29.5 μmol/L). Disopyramide has a half-life of approximately 7 hours and is primarily eliminated by renal filtration and, to a lesser extent, by hepatic metabolism. In conditions with a decreased glomerular filtration rate, the drug half-life is prolonged and expected concentrations are elevated. Plasma disopyramide concentrations are commonly determined by chromatographic methods or immunoassay.

CASE STUDY 30.2 A patient receiving procainamide therapy for cardiac arrhythmia is given an IV loading dose resulting in a serum concentration of 6 μg/mL. The therapeutic range for procainamide is 4 to 8 μg/mL, and its half-life is 4 hours. Four hours after the initial loading dose, another equivalent dose is given as an IV bolus resulting in a serum concentration of 7.5 μg/mL.

questions 1. Does the serum concentration after the second dose seem appropriate? If not, what would be the predicted serum concentration at this time? 2. What factors would influence the rate of elimination of this drug?

ANTIBIOTICS Aminoglycosides Aminoglycosides are a group of chemically related antibiotics used to treat gram-negative bacterial infections that are resistant to other less-toxic antibiotics. There are many individual antibiotics within this classification, but the most frequently encountered ones in a clinical setting are gentamicin, tobramycin, amikacin, and kanamycin. These antibiotics share a common mechanism of action, but vary in effectiveness against different strains of bacteria. The most serious effects of aminoglycoside toxicity are nephrotoxicity and ototoxicity. The ototoxic effects are irreversible and involve disruption of inner ear cochlear and vestibular membranes, which results in hearing and balance impairment. Cumulative effects may be seen with repeated high-level exposure. Nephrotoxicity is also a major concern for patients on this type of antibiotic therapy as aminoglycosides impair the function of the proximal renal tubules, which can result in electrolyte imbalance and proteinuria. These effects are usually reversible; however, extended high-level exposure may result in necrosis of renal tubular cells and subsequent renal failure.4 Toxic concentrations for this type of drug are usually considered as any concentration above the therapeutic range. Because aminoglycosides are poorly absorbed in the gastrointestinal tract, administration is generally limited to IV or IM injection; therefore, these drugs are not used in an outpatient setting. Peak concentrations are achieved 1 to 2 hours post dose, and approximately 10% of the drug is bound to plasma proteins in circulation. Aminoglycosides have a half-life of approximately 2 to 3 hours, depending on the specific antibiotic, and are eliminated by renal filtration. In patients with compromised renal function, appropriate dosage adjustments must be made based on plasma concentrations. Chromatography and immunoassay are

the primary methods used for aminoglycoside determinations.

Teicoplanin Teicoplanin is a bactericidal antibiotic effective against both aerobic and anaerobic gram-positive bacilli as well as gram-positive cocci that cannot be treated with less-toxic antibiotics. Of note, teicoplanin has proven effective in the treatment of methicillin-resistant Staphylococcus aureus (MRSA) infections. This antibiotic has very poor absorption when administered orally and should be given through IV or IM injection. Teicoplanin has a long half-life of 70 to 100 hours, and 90% to 95% is bound to plasma proteins in circulation. It is not metabolized in the liver, but is eliminated by renal filtration and secretion. TDM is not routinely performed for teicoplanin as a relationship between plasma concentrations and toxicity has not been established; however, monitoring may assist in dosage optimization. When trough specimens are measured, the therapeutic range for teicoplanin is 10 to 60 mg/L for the treatment of endocarditis and 20 to 60 mg/L for staphylococci infections. Adverse effects associated with teicoplanin toxicity include nausea, vomiting, fever, diarrhea, and mild hearing loss.

Vancomycin Vancomycin (Vancocin HCl) is a glycopeptide antibiotic effective against gram-positive cocci and bacilli infections. Because of poor gastrointestinal absorption, vancomycin should be administered by IV infusion allowing peak concentrations to be achieved 1 hour post dose. In circulation, approximately 55% is bound to plasma proteins. Unlike other drugs, a clear relationship between blood concentration and toxic adverse effects has not been firmly established. Indeed, many of the toxic effects occur in the therapeutic range (5 to 10 μg/mL [3.45 to 6.9 μmol/L]).4 Major toxicities of vancomycin include red man syndrome, nephrotoxicity, and ototoxicity. Red man syndrome is characterized by an erythemic flushing of the extremities. The nephrotoxic and ototoxic effects are similar to those of the aminoglycosides. It appears that nephrotoxic effects occur more frequently at trough concentrations greater than 10 μg/mL (6.9 μmol/L) and ototoxic effects occur more frequently when peak concentrations exceed 40 μg/mL (27.6 μmol/L). Because vancomycin has a long distribution phase, generally only trough concentrations are monitored to ensure the drug concentration is within the therapeutic range. Vancomycin has a half-life of 4 to 6 hours and is primarily eliminated by renal filtration and secretion. Vancomycin concentrations are routinely determined by immunoassay and chromatographic

methods.

ANTIEPILEPTIC DRUGS Epilepsy, convulsions, and seizures are prevalent neurologic disorders. Because antiepileptic drugs (AEDs) are used as prophylactics, therapeutic ranges are considered guidelines and should be interpreted in accordance with the presenting clinical context.5 Effective concentrations are determined as the concentration that provides therapeutic benefit with no or acceptable adverse effects. In recent years, a second generation of AEDs has been introduced in clinical practice as supplemental therapy to more traditional first-generation AEDs such as phenobarbital and phenytoin. Unlike first-generation AEDs, the optimal concentration ranges of these newer drugs have not been firmly established. For these drugs, TDM may be used to establish individual baseline concentrations at which the patient is responding well.5 It is important to take these individual therapeutic ranges into account as physiologic conditions change (age-related changes, pregnancy, kidney or liver disease, etc.) and in reassessing concentrations when other AEDs are added to an established regimen.5 Most AEDs are analyzed by immunoassay or chromatography and measure the free or bound drug in a serum or plasma sample. In a normal physiologic state, the total drug concentration may be sufficient for therapeutic monitoring purposes; however, a free drug measurement may be necessary when there is cause for alteration in patient plasma protein, such as in the later stages of pregnancy, in late-stage renal or hepatic disease, malnutrition, or when a known drug–drug interaction may occur.6 As with all TDM, specimen collection timing must be consistent, and the preferred specimen is generally a trough specimen collected at the end of the dosing interval.6

Phenobarbital and Primidone Phenobarbital (Luminal or Solfoton) is a slow-acting barbiturate that effectively controls several types of seizures. Absorption of oral phenobarbital is slow, but complete. For most patients, peak plasma concentrations are reached about 10 hours after an oral dose, and approximately 50% of circulating phenobarbital is bound to plasma proteins. The half-life of phenobarbital is 70 to 100 hours. It is eliminated primarily by hepatic metabolism; however, renal filtration is also significant. With compromised renal or hepatic function, the rate of elimination is decreased. Because of the slow rate of absorption and long half-life, blood concentrations of phenobarbital do not change dramatically within a dosing

interval; therefore, only trough concentrations are typically evaluated unless toxicity is suspected. Toxic adverse effects of phenobarbital include drowsiness, fatigue, depression, and reduced mental capacity. Phenobarbital clearance occurs by the hepatic MFO system. It is also important to note that phenobarbital is a potent inducer of the MFO system. After initiation of therapy, dose adjustment is usually required after the induction period is complete. For most individuals, this occurs within 10 to 15 days after the initial dose. Primidone is an inactive proform of phenobarbital. Primidone is rapidly absorbed in the gastrointestinal tract and converted to its active form, phenobarbital. Primidone is preferred over phenobarbital when steady-state kinetics need to be established quickly. Both primidone and phenobarbital are measured to assess the total AED potential in circulation with a therapeutic range of 10 to 40 mg/L.

CASE STUDY 30.3 A patient on successful oral phenytoin therapy for seizure disorders presents with severe diarrhea for the past 2 weeks. Subsequent to the prolonged diarrhea, the patient had a seizure. Evaluation of the serum phenytoin concentration at the time of the seizure revealed a low value. The dose was increased until the serum concentration was within the therapeutic range. The diarrhea resolved; however, several days after the dose adjustment, the patient had another seizure.

questions 1. What is the most probable cause of the initial low serum phenytoin concentration? 2. Would determination of free serum phenytoin aid in resolving the cause of the initial seizure? 3. What assays other than the determination of serum phenytoin would aid in this situation? 4. What is the most probable cause of the seizure after the diarrhea had been resolved?

Phenytoin and Fosphenytoin Phenytoin (Dilantin) is a common therapeutic agent used in the treatment of seizure disorders. It is also used as a short-term prophylactic agent in brain injury to prevent loss of functional tissue. Phenytoin is primarily administered as an oral preparation, though gastrointestinal absorption is variable and sometimes incomplete. Peak concentrations are achieved 3 to 12 hours post dose. Circulating phenytoin has a high but variable degree of protein binding (87% to 97%) and can be easily displaced by other highly protein-bound drugs.7 Like most drugs, the unbound, or free, fraction is the biologically active form of the drug. Reduced protein binding may occur with anemia, hypoalbuminemia, and as a result of coadministration of other drugs with similar binding properties. In these situations, symptoms of toxicity may be observed even though the total drug concentration is within the therapeutic range. The most significant adverse effect of phenytoin toxicity is initiation of seizures. Thus, seizures in a patient receiving phenytoin therapy may be a result of subtherapeutic or toxic concentrations. Additional adverse effects of phenytoin include hirsutism, gingival hyperplasia, vitamin D deficiency, and folate deficiency. Phenytoin has a half-life of 6 to 24 hours and is eliminated by hepatic metabolism. At therapeutic concentrations, the elimination pathway may become saturated (zero-order kinetics); therefore, relatively small changes in dosage or elimination can have dramatic effects on plasma drug concentrations. Phenytoin is also an inducer of the hepatic MFO pathway, which reduces the half-life of concurrently administered drugs that are eliminated by this pathway.5 For most patients, total blood concentrations of 10 to 20 μg/mL (40 to 80 μmol/L) are considered effective. In many situations, however, the therapeutic range must be individualized to suit the clinical situation. The therapeutic range for free serum phenytoin is 1 to 2 μg/mL (4 to 8 μmol/L). This has been well correlated with the pharmacologic actions of this drug. In patients with altered plasma protein binding, determination of the free fraction aids in dosage adjustment. Fosphenytoin is an injectable proform of phenytoin that is rapidly metabolized (~75 minutes) to form the parent drug. Most immunoassays for phenytoin do not detect fosphenytoin, so peak concentrations should only be measured after conversion to the active form is complete.

Valproic Acid Valproic acid, or valproate (Depakote), is used in the treatment of petit mal and absence seizures.7 It is administered as an oral preparation as gastrointestinal absorption is rapid and complete. Circulating valproic acid is highly protein bound (93%), and peak concentrations are reached 1 to 4 hours post dose. The percent of valproic acid bound to plasma proteins decreases in renal failure, in late liver disease, and with coadministration of drugs that compete for its binding sites. Valproic acid is eliminated by hepatic metabolism, which may be induced by coadministration of other AEDs, but is inhibited by administration of felbamate, which is another AED.7 Without coadministration of these drugs, valproic acid has a half-life of 11 to 17 hours. The therapeutic range for valproic acid is relatively wide (50 to 120 μg/mL [347 to 832 μmol/L]), and determination of blood concentrations is primarily performed to ensure that toxic levels (>120 μg/mL) are not present. Nausea, lethargy, and weight gain are the most common adverse effects of valproic acid toxicity; however, pancreatitis, hyperammonemia, and hallucinations have been associated with significantly elevated concentrations (>200 μg/mL [>1387 μmol/L]). Hepatic dysfunction occasionally occurs in some patients even at therapeutic concentrations; therefore, hepatic markers should be checked frequently during the first 6 months of therapy. Many factors influence the portion of valproic acid bound to plasma proteins so measurement of the free fraction provides a more reliable index of therapeutic and toxic concentrations.

Carbamazepine Carbamazepine (Tegretol) is an effective treatment for various seizure disorders; however, because of its serious adverse effects, it is only used when patients do not respond well to other AEDs. Orally administered carbamazepine is absorbed with a high degree of variability, and approximately 70% to 80% of circulating carbamazepine is bound to plasma proteins. Peak concentrations are achieved 4 to 8 hours post dose, and the half-life for carbamazepine is 10 to 20 hours. It is eliminated primarily by hepatic metabolism so liver dysfunction can result in accumulation of carbamazepine. Carbamazepine is an inducer of hepatic metabolism so frequent monitoring of plasma concentrations must be performed after initiation of therapy until the induction period has come to completion. Carbamazepine toxicity is diverse and variable. Certain effects occur in a dose-dependent manner, while others do not. There are several idiosyncratic effects of carbamazepine, which affect a portion of the population at therapeutic

concentrations including rashes, leukopenia, nausea, vertigo, and febrile reactions. Of these, leukopenia is the most serious. Leukocyte counts are commonly assessed during the first 2 weeks of therapy to detect this possible toxic effect. Liver function tests should also be evaluated during this time period as mild, transient liver dysfunction is commonly seen during initiation of therapy. Persistent increases in liver markers or significant leukopenia commonly result in discontinuation of carbamazepine therapy. The therapeutic range for carbamazepine is 4 to 12 μg/mL (16.9 to 50.8 μmol/L), and plasma concentrations greater than 15 μg/mL (63.5 μmol/L) are associated with hematologic disorders and possible aplastic anemia.

Ethosuximide Ethosuximide (Zarontin) is used for controlling petit mal seizures. It is administered as an oral preparation with peak concentrations achieved within 2 to 4 hours. The therapeutic range for ethosuximide is 40 to 100 μg/mL (283 to 708 μmol/L). Toxicities associated with high plasma concentrations are rare, tolerable, and generally self-limiting. Common adverse effects of toxicity include nausea, vomiting, anorexia, dizziness, and lethargy. Less than 5% of circulating drug is bound to proteins in the plasma. Ethosuximide is metabolized in the liver; however, approximately 20% is excreted through renal filtration. The half-life for ethosuximide is 40 to 60 hours. TDM for ethosuximide is performed to ensure that blood concentrations are in the therapeutic range.

Felbamate Felbamate (Felbatol) is primarily indicated for use in severe epilepsies such as in children with the mixed seizure disorder, Lennox-Gastaut syndrome, and in adults with refractory epilepsy.5, 6, 7 Felbamate is most commonly administered orally as it is nearly completely absorbed by the gastrointestinal tract, and peak serum concentrations can be achieved within 1 to 4 hours.8 In circulation, approximately 30% of felbamate is bound to plasma proteins and has a half-life of 14 to 22 hours in adults. Felbamate is eliminated by renal and hepatic metabolism so impairment of hepatic or renal function can significantly increase the half-life of the drug in circulation. Hepatic metabolism is enhanced by enzyme inducers such as phenobarbital, primidone, phenytoin, and carbamazepine and results in a decreased half-life. TDM may be indicated due to a narrow therapeutic range, but should only be considered after steady state has been reached. In patients receiving therapeutic

doses, felbamate concentrations are typically in the range of 30 to 60 μg/mL (126 to 252 μmol/L).8 Documented adverse side effects of felbamate toxicity include fatal aplastic anemia and hepatic failure.5,7

Gabapentin Gabapentin (Neurontin) is administered orally with a maximum bioavailability of 60%, which is reduced when antacids are administered concurrently. This drug may be administered as a monotherapy or in conjunction with other AEDs for patients suffering from complex partial seizures with or without generalized seizures.5,7 Peak concentrations for gabapentin are achieved 2 to 3 hours following dosage. Gabapentin does not bind to plasma proteins and is not metabolized in the liver. It is eliminated unchanged by the kidneys and has a half-life of approximately 5 to 9 hours in patients with normal renal function.8 Children require a higher dose than adults to maintain a comparable half-life as they eliminate the drug faster than adults. Due to exclusive renal clearance of this drug, impaired kidney function increases the half-life of the circulating drug in a linear manner.8 Therapeutic concentrations are reported to be between 12 and 20 μg/mL (70.1 and 116.8 μmol/L) although a wide range of serum concentrations have been reported in association with seizure control. Multiple daily doses may be the preferred dosage regimen as excessively high blood concentrations may lead to adverse drug effects, while low-level trough concentrations may lead to breakthrough seizures.5 Adverse effects associated with gabapentin toxicity are generally mild and may include fatigue, ataxia, dizziness, and weight gain. Gabapentin is the most frequently used AED in patients with liver disease and in treating partial-onset seizures in patients with acute intermittent porphyria.5

Lamotrigine Lamotrigine (Lamictal) is orally administered and is rapidly and completely absorbed from the gastrointestinal tract reaching peak concentrations 3 hours after administration. It is used to treat patients with partial and generalized seizures.5 Once in circulation, approximately 55% of lamotrigine is protein bound and biologically inactive.8 Hepatic metabolism accounts for the majority of elimination and in patients undergoing monotherapy, its half-life is 15 to 30 hours.7,8 The rate of elimination for lamotrigine is highly dependent on patient age and physiologic condition. Younger infants tend to metabolize this drug slower than older infants, and children metabolize lamotrigine twice as quickly

as adults. Marked increases in clearance occur during pregnancy, peaking at 32 weeks of gestation.5 Lamotrigine clearance is increased by enzyme-inducing AEDs such as phenobarbital, primidone, phenytoin, and carbamazepine; however, valproic acid is an inhibitor of lamotrigine metabolism and may increase its half-life to 60 hours. Because of these drug–drug interactions, TDM is essential to maintain therapeutic drug concentrations. Individual therapeutic ranges may vary, but a concentration range of 2.5 to 15 μg/mL has been noted as efficacious and an increasing concentration seems to correlate well with increased risk of toxicity.8 A small percentage of patients taking lamotrigine develop a rash. Other adverse effects associated with toxicity include neurological effects, such as dizziness, and gastrointestinal disturbances.

Levetiracetam Levetiracetam (Keppra) is an orally administered AED that does not bind to plasma proteins so it is almost completely bioavailable and reaches peak concentration by 1 hour post dose.5,7,8 Use of levetiracetam is indicated in partial and generalized seizures. Sixty-five percent of levetiracetam is excreted unchanged by the kidneys, and it has a half-life of 6 to 8 hours, although the rate of elimination is increased in children and pregnant females and decreased in the elderly. The rate of clearance for this drug correlates well with glomerular filtration rate, which may be of use in monitoring patients with renal impairment. The need for TDM of levetiracetam is not as pronounced as in other AEDs due to its lack of pharmacokinetic variability, but may be useful in monitoring compliance and fluctuating concentrations during pregnancy.5 Therapeutic concentrations have been reported at 8 to 26 μg/mL. Adverse effects are minimal but include dizziness and weakness.

Oxcarbazepine Oxcarbazepine (Trileptal) is an orally administered prodrug that is almost immediately metabolized to licarbazepine.7,8 It is indicated for treatment of partial seizures and secondarily in generalized tonic–clonic seizures. In circulation, almost 40% is bound to plasma proteins, and peak concentrations are achieved at about 8 hours post dose. It is metabolized by the liver into two pharmacologically active, equipotent enantiomers via keto reduction followed by glucuronide conjugation of the active licarbazepine derivative.7,8 In adults, the half-life of this drug is 8 to 10 hours. In children, there is a higher clearance rate;

therefore, a need exists for a higher dosing regimen to obtain the optimal serum concentration per kilogram of body weight compared with adults. In the elderly population, the drug clearance is reduced by 30% so a lower dosing regimen is needed to maintain therapeutic concentrations.8 Clearance of the drug and its metabolite is reduced in patients with marked renal dysfunction, and appropriate dosage adjustments must be made to avoid toxicity. The metabolism of licarbazepine is sensitive to enzyme inducers such as phenytoin and phenobarbital, which may decrease the blood concentration by 20% to 40%.8 TDM may be indicated when therapeutic benefits are not being met, when drug– drug interactions are possible, and during pregnancy. Although not well defined, therapeutic effects of licarbazepine have been reported at serum concentrations of 12 to 35 μg/mL. Adverse effects of toxicity are similar to those of carbamazepine.

Tiagabine Tiagabine (Gabitril) is used in the treatment of partial seizures.7 Gastrointestinal absorption of tiagabine is rapid and nearly complete resulting in peak concentrations at 1 to 2 hours post dose. Approximately 96% of circulating tiagabine is protein bound and its half-life is variable, but in the range of 4 to 13 hours. Due to its significant protein binding, the ratio of free to bound drug is affected by other protein-binding drugs such as valproic acid, naproxen, and salicylates and by pregnancy.8 It is highly metabolized by the hepatic MFO pathway5,8 so hepatic dysfunction can prolong the half-life of the drug. TDM may be indicated due to intraindividual and interindividual variations. Therapeutic benefits of the drug have been observed at concentrations of 20 to 100 ng/mL.8 Adverse central nervous system (CNS) side effects have been noted including confusion, difficulty in speaking clearly (stuttering), mild sedation, and a tingling sensation in the body's extremities, or paresthesia, especially in the hands and fingers.5,9

Topiramate
Topiramate (Topamax) is almost completely bioavailable after oral administration, and only 15% is bound to plasma proteins. Peak concentrations are achieved within 1 to 4 hours.7,8 The half-life of topiramate is 20 to 30 hours, and the majority of this drug is eliminated by renal filtration though some is eliminated by hepatic metabolism. Topiramate is used in the treatment of partial and generalized seizures.5 The dose-to-serum concentration ratio in children is

less than that of adults per kilogram of body mass such that they require a higher dose to maintain plasma topiramate concentrations comparable to adults.8 Plasma concentrations are increased secondary to renal insufficiency, but may be decreased when used with other enzyme-inducing AEDs. The therapeutic range for topiramate is based on collection of trough specimens and is reported to be less than 25 mg/L (75 μmol/L). Adverse CNS side effects of topiramate include change of taste with particular foods (e.g., diet soda and beer) and a sensation of “pins and needles” in the extremities.5,9 TDM may be indicated when steady state is achieved to provide the clinician with an effective individual baseline concentration and may also be employed when therapeutic benefits are not realized or to monitor drug–drug interactions.8

Zonisamide
Zonisamide (Zonegran) is an anticonvulsant used in adjunctive therapy for partial and generalized seizures. This drug is administered orally, and gastrointestinal absorption is on the order of 65% or higher. Peak serum concentrations are reached 4 to 7 hours post dose, and approximately 60% of the drug is bound to plasma proteins and accumulates extensively in erythrocytes.8 The majority of zonisamide is metabolized by the liver via glucuronide conjugation, acetylation, and oxidation and is then eliminated by renal excretion. The half-life of zonisamide is 50 to 70 hours in patients receiving monotherapy and may be reduced to 25 to 35 hours when other enzyme-inducing AEDs are being administered concomitantly.8 Children require higher doses to achieve therapeutic plasma concentrations comparable to those of an adult.8 Clinicians treating patients with liver or kidney disease should exercise caution as plasma zonisamide concentrations may increase proportionally with the level and type of organ impairment. There is documented overlap in zonisamide blood concentrations between those experiencing therapeutic effectiveness and those experiencing toxic side effects.8 Symptoms of zonisamide toxicity include difficulty breathing, low blood pressure, slow heart rate, and possible loss of consciousness. TDM may be indicated to establish a baseline level after steady state has been achieved, to detect drug–drug interactions, and at therapeutic failure. Therapeutic doses have been reported in patients with blood concentrations of 10 to 38 μg/mL.

PSYCHOACTIVE DRUGS

Lithium
Lithium (Eskalith, Lithobid) is a mood-altering drug primarily used in the treatment of bipolar disorder, recurrent depression, and aggressive or self-mutilating behavior, though it may also be used as a preventative treatment for migraines and cluster headaches. Gastrointestinal absorption is complete and rapid, so this drug is administered orally, and peak plasma concentrations are reached 2 to 4 hours after the dose is administered. Lithium is a cationic metal that does not bind to proteins. Lithium has a half-life of 10 to 35 hours and is eliminated predominantly by renal filtration, but is subject to reabsorption in the renal tubules, and compromises in renal function usually result in lithium accumulation. Correlations between plasma lithium concentration and therapeutic response have not been well established; however, plasma concentrations in the range of 0.5 to 1.2 mmol/L are effective in a large portion of the patient population.10 The purpose of TDM for lithium is to avoid plasma concentrations associated with toxic effects. Concentrations in the range of 1.5 to 2 mmol/L may cause apathy, lethargy, speech difficulties, and muscle weakness.10 Concentrations greater than 2 mmol/L are associated with renal impairment, hypothyroidism, and CNS disturbances such as muscle rigidity, seizures, and possible coma. Determination of serum lithium concentrations is commonly done by ion-selective electrode, though flame emission photometry and atomic absorption are also viable methods.

Tricyclic Antidepressants Tricyclic antidepressants (TCAs) are a class of drug used to treat depression, insomnia, extreme apathy, and loss of libido. From a clinical laboratory perspective, imipramine, amitriptyline, and doxepin are the most relevant.10 Desipramine and nortriptyline are active metabolites of imipramine and amitriptyline, respectively. The TCAs are orally administered and demonstrate a varying degree of absorption. In many patients, they slow gastric emptying and intestinal motility, which significantly slows the rate of absorption. As a result, peak concentrations are reached in the range of 2 to 12 hours. Approximately 85% to 95% of TCAs are protein bound. For most TCAs, therapeutic effects are not seen for the first 2 to 4 weeks after initiation of therapy and the correlation between serum concentration and therapeutic effects of most TCAs is moderate to weak. TCAs are eliminated by hepatic metabolism, and many of their metabolic products have therapeutic actions. The rate of metabolism of these agents is variable and influenced by a wide variety of factors. As a result, the

half-life of TCAs varies considerably among patients (17 to 40 hours). The rate of elimination can also be influenced by coadministration of other drugs that are eliminated by hepatic metabolism. The toxicity of TCAs is dose dependent; serum concentrations twice the upper limit of the therapeutic range can lead to drowsiness, constipation, blurred vision, and memory loss. Higher levels may cause seizure, cardiac arrhythmia, and unconsciousness. Because of the high variability in half-life and absorption, plasma concentrations of the TCAs should not be evaluated until a steady state has been achieved. At this point, therapeutic efficacy is determined from clinical evaluation of the patient, and potential toxicity is determined by TDM. Many of the immunoassays for TCAs use polyclonal antibodies, which cross-react among the different TCAs and their metabolites. These immunoassays are used for TCA screening rather than TDM, and the results are reported out as “total tricyclics.”10 Some immunoassays employ an extraction step to separate the parent drugs from their metabolites prior to analysis, and interpretation of these results requires an in-depth understanding of the assay. Chromatographic methods provide simultaneous evaluation of both the parent drugs and their metabolites, which provides a basis for unambiguous interpretation of results.10

Clozapine Clozapine (Clozaril, FazaClo) is an antipsychotic used in the treatment of otherwise treatment-refractory schizophrenia. Absorption is rapid and almost complete, and approximately 97% of circulating drug is bound to plasma proteins. Peak concentrations are achieved within 2 hours of administration. Clozapine is metabolized in the liver and has a half-life of 8 to 16 hours. Research has shown that although there is not a well-established clinical serum concentration, beneficial effects of the drug have been demonstrated at 350 to 420 ng/mL. TDM may be indicated to check for compliance and in patients with altered pharmacokinetics. TDM may also be used to avoid toxicity, which can result in seizures.11

Olanzapine Olanzapine (Zyprexa) is a thienobenzodiazepine derivative that effectively treats schizophrenia, acute manic episodes, and the recurrence of bipolar disorders.11 It can be administered as a fast-acting IM injection, but is more commonly administered orally. The drug is absorbed well in the gastrointestinal tract; however, an estimated 40% is inactivated by first-pass metabolism. Peak

concentrations are reached 5 to 8 hours post dose, and approximately 93% is bound to plasma proteins in circulation. Olanzapine is metabolized in the liver and has a variable half-life of 21 to 54 hours. Women and nonsmokers tend to have lower clearance and thus higher blood concentrations of olanzapine compared with men and smokers.11 There is indication that plasma concentration correlates well with clinical outcomes and that TDM may help to optimize clinical response while balancing the occurrence of adverse effects in a therapeutic range of 20 to 50 ng/mL.11 Adverse effects of olanzapine toxicity include tachycardia, decreased consciousness, and possible coma.

IMMUNOSUPPRESSIVE DRUGS
Transplantation medicine is a rapidly emerging discipline within clinical medicine. The clinical laboratory plays many important roles that determine the success of any transplantation program. Among these responsibilities, monitoring of immunosuppressive drugs used to prevent rejection is of key concern. Most immunosuppressive drugs require establishment of individual dosage regimens to optimize therapeutic outcomes and minimize toxicity.

Cyclosporine Cyclosporine (Gengraf, Neoral, Sandimmune) is a cyclic polypeptide that has potent immunosuppressive activity. Its primary clinical use is suppression of host-versus-graft rejection of heterotopic transplanted organs. It is administered as an oral preparation with absorption in the range of 5% to 50% and peak concentrations within 1 to 6 hours. Because of this high variability, the relationship between oral dosage and blood concentration is poor; therefore, TDM is an important part of establishing a dosage regimen. More than 98% of circulating cyclosporine is protein bound, and cyclosporine appears to sequester in cells, including erythrocytes.12 Erythrocyte content is highly temperature dependent; therefore, evaluation of plasma cyclosporine concentrations requires rigorous control of specimen temperature. To avoid this preanalytic variable, whole blood specimens are used. Correlations have been established between cyclosporine whole blood concentration and therapeutic and toxic effects. Cyclosporine is eliminated by hepatic metabolism to inactive products and has a half-life of approximately 12 hours. The therapeutic range for cyclosporine is dependent on the organ transplanted and the time after transplantation. Whole blood concentrations in

the range of 350 to 400 ng/mL (291 to 333 nmol/L) have been associated with cyclosporine toxicity. The adverse effects of cyclosporine are primarily renal tubular and glomerular dysfunction, which may result in hypertension.13 Several immunoassays are available for the determination of whole blood cyclosporine concentration though many show cross-reactivity with inactive metabolites. Chromatographic methods are available and provide separation and quantitation of the parent drug from its metabolites.14

Tacrolimus
Tacrolimus (FK-506; Astagraf, Envarsus, Hecoria, Prograf) is an orally administered immunosuppressive drug that is 100 times more potent than cyclosporine.15 Early use of tacrolimus suggested a low degree of toxicity compared with cyclosporine at therapeutic concentrations; however, after extensive use in clinical practice, both drugs appear to have comparable degrees of nephrotoxicity. Tacrolimus has been associated with thrombus formation at concentrations above the therapeutic range. Many aspects of tacrolimus pharmacokinetics are similar to those of cyclosporine. Gastrointestinal uptake is highly variable with peak plasma concentrations achieved in 1 to 3 hours. More than 98% of circulating tacrolimus is bound to proteins in the plasma. Tacrolimus has a half-life of 10 to 12 hours; it is eliminated almost exclusively by hepatic metabolism, and its metabolic products are primarily secreted into bile for excretion. Increases in immunoreactive tacrolimus may be seen in cholestasis as a result of cross-reactivity with several of these products. Because of the high potency of tacrolimus, circulating therapeutic concentrations are low. This limits the methodologies capable of measuring whole blood concentrations. Currently, the most common method is high-performance liquid chromatography–tandem mass spectrometry; however, several immunoassays are also available. Similar to cyclosporine, therapeutic ranges for tacrolimus are dependent on the transplant type and time from transplantation. Whole blood concentrations correlate well with therapeutic and toxic effects, and whole blood is the preferred specimen for tacrolimus TDM. Some of the adverse effects associated with tacrolimus toxicity include anemia, leukopenia, thrombocytopenia, and hyperlipidemia.

Sirolimus Sirolimus (Rapamune) is an antifungal agent with immunosuppressive activity that is used to prevent graft rejection in patients receiving a kidney transplant.

Sirolimus is rapidly absorbed after once-daily oral administration, and peak blood concentrations are achieved 1 to 2 hours post dose. Plasma concentrations are affected extensively by intestinal and hepatic first-pass metabolism. To increase the therapeutic efficacy, sirolimus is commonly coadministered with cyclosporine or tacrolimus as the bioavailability for sirolimus is 15% when taken in conjunction with cyclosporine.14,16,17 Sirolimus has a long half-life of 62 hours and is predominantly metabolized in the liver. Plasma concentrations are affected by individual differences in absorption, distribution, metabolism, and excretion demonstrating the need for TDM. This drug is also extremely potent and requires TDM due to its inherent toxicity. Adverse effects associated with toxicity include thrombocytopenia, anemia, leukopenia, infections, and hyperlipidemia.16,17 Sirolimus binds more readily to lipoproteins than plasma proteins making whole blood the ideal specimen for analysis.17 Approximately 92% of circulating sirolimus is bound. Initial TDM is performed using a trough specimen drawn during steady state. Subsequent monitoring is performed by collecting trough specimens on a weekly basis for the first month followed by a biweekly sampling pattern in the second month. These specimens are analyzed and used to establish a safe and effective therapeutic range. A therapeutic range of 4 to 12 μg/L is used when sirolimus is administered in conjunction with cyclosporine, and a range of 12 to 20 μg/L is used if cyclosporine therapy is not used or discontinued.17 Sirolimus concentrations can be measured using chromatographic methods or by high-performance liquid chromatography– tandem mass spectrometry.14,16

Mycophenolic Acid Mycophenolate mofetil (Myfortic) is a prodrug that is rapidly converted in the liver to its active form, mycophenolic acid (MPA).13 MPA is a lymphocyte proliferation inhibitor that is used most commonly as supplemental therapy with cyclosporine and tacrolimus in renal transplant patients.8 As with the other antirejection drugs, low trough concentrations of MPA increase the risk of acute rejection, while high concentrations imply toxicity. MPA is administered orally and absorbed under neutral pH conditions in the intestine.18 Interindividual variation of gastrointestinal tract physiology influences the degree of absorption of MPA; however, peak concentrations are generally achieved 1 to 2 hours post dose. Once in circulation, MPA is 95% protein bound. The degree to which MPA is protein bound varies both intraindividually and interindividually and is dependent on circulating albumin concentrations, renal function, and the

concentration of other drugs that may competitively bind to plasma albumin.18 MPA is primarily eliminated by renal excretion (>90%) and has a half-life of approximately 17 hours. The therapeutic range for MPA is reported to be 1 to 3.5 μg/mL; toxicity may cause nausea, vomiting, diarrhea, and abdominal pain. Plasma concentrations of MPA and its metabolites can be assayed using chromatography or, more commonly, immunoassay, though immunoassays are generally considered less specific.18 As with most immunoassay methods, cross-reactivity between MPA and its active metabolite (AcMPAG) should be taken into account along with the clinical picture when evaluating a dosage regimen.

ANTINEOPLASTICS
Assessment of the therapeutic benefit and toxicity of most antineoplastic drugs is not aided by TDM because correlations between plasma concentration and therapeutic benefit are hard to establish. Many of these agents are rapidly metabolized or incorporated into cellular macromolecular structures within seconds to minutes of their administration. In addition, the therapeutic range for many of these drugs includes concentrations associated with toxic effects. Considering that most antineoplastic agents are administered intravenously as a single bolus, the actual delivered dose is more relevant than circulating concentrations.

Methotrexate
Methotrexate (Otrexup, Rasuvo) is one of the few antineoplastic drugs for which TDM may offer some benefit in determining a therapeutic regimen. High-dose methotrexate followed by leucovorin rescue has been shown to be an effective therapy for various neoplastic conditions.19 The basis for this therapy involves the relative rate of mitosis of normal versus neoplastic cells. In general, neoplastic cells divide more rapidly than normal cells. Methotrexate inhibits DNA synthesis in all cells. Neoplastic cells, as a result of their rapid rate of division, have a higher requirement for DNA and are susceptible to deprivation of this essential constituent before normal cells. The efficacy of methotrexate therapy is dependent on a controlled period of inhibition, one that is selectively detrimental to neoplastic cells. This is accomplished by administration of leucovorin, which reverses the action of methotrexate at a specific time after methotrexate infusion. This is referred to as leucovorin rescue. Failure to stop the action of methotrexate results in cytotoxic effects to most cells. Evaluation of serum methotrexate concentration, after the inhibitory time period has passed, is

used to determine how much leucovorin is needed to counteract many of the toxic effects of methotrexate.19 Methotrexate is administered orally with peak plasma concentrations 1 hour post dose. Approximately 50% of methotrexate is bound to plasma proteins in circulation. Methotrexate has a half-life of 5 to 9 hours and is predominantly excreted through the renal system. Trough specimens are preferred for determination of plasma methotrexate concentrations. The therapeutic range for trough specimens is less than 1 μmol/L.

BRONCHODILATORS
Theophylline
Theophylline (Theo-Dur, Theo-24, Uniphyl) is used in the treatment of respiratory disorders, such as asthma and stable chronic obstructive pulmonary disease, for patients who have difficulty using an inhaler or those with nocturnal symptoms. Absorption can be variable, with peak blood concentrations achieved 1 to 2 hours post dose when a rapid-release formulation is administered or within 4 to 8 hours for a modified-release preparation. Approximately 50% to 65% of circulating drug is bound to plasma proteins, primarily albumin. Theophylline has a half-life of 3 to 8 hours. It is predominantly metabolized in the liver; however, about 20% is eliminated through the renal system. Beneficial effects have been demonstrated at 10 to 20 mg/L (55 to 110 μmol/L). Though infrequent, concentrations above 20 mg/L may lead to serious adverse effects including insomnia, tachycardia, seizures, arrhythmias, and possible cardiorespiratory arrest. There is poor correlation between dosage and plasma concentrations; however, TDM may initially be useful in optimizing the dosage and may be used to confirm toxicity when suspected.

For additional student resources, please visit thePoint at http://thepoint.lww.com
questions
1. If drug X has a half-life (T1/2) of 2 days (48 hours) and the concentration at 12:00 today was 10 μg/mL, what would the expected concentration of drug X be at 12:00 tomorrow?
a. 7 μg/mL
b. 7.5 μg/mL
c. 5 μg/mL
d. 3.5 μg/mL
2. If a drug is administered orally, which of the following would affect the efficiency of its absorption in the gastrointestinal tract?
a. Dissociation of the drug from its administered form
b. The drug's solubility in gastrointestinal fluid
c. Diffusion of the drug across gastrointestinal membranes
d. All of the above
3. If a trough specimen is required for therapeutic drug monitoring, the most appropriate time to collect the specimen would be:
a. Eight hours after the last dose was given
b. Three days after the dose was administered
c. Immediately after the dose is administered
d. Immediately before the next dose is given
4. Which of the following statements concerning procainamide is TRUE?
a. Procainamide should be administered intravenously due to poor absorption.
b. Procainamide is an antibiotic used to treat gram-positive bacterial infections.
c. Procainamide is metabolized into an active metabolite with similar antiarrhythmic activity.
d. Procainamide is eliminated entirely through renal filtration.
5. Which of the following statements concerning lithium is TRUE?
a. Lithium is used to treat depression, self-mutilating behavior, and bipolar disorder.
b. Lithium toxicity has been associated with ototoxicity and nephrotoxicity.
c. Lithium is completely metabolized in the liver with no renal elimination.
d. Lithium is used in conjunction with cyclosporine to prevent transplant rejection.
6. Which of the following is the primary purpose for measuring serum concentrations of methotrexate?
a. To determine the optimum dosage for oral administration of methotrexate
b. To ensure that serum concentrations are within the therapeutic range
c. To confirm serum concentrations when toxicity is suspected
d. To determine the amount of leucovorin needed to halt methotrexate action
7. Primidone is an inactive preform of which of the following antiepileptic drugs?
a. Gabapentin
b. Clozapine
c. Phenobarbital
d. Ethosuximide
8. Bilirubin competes with some drugs for the same binding site on plasma proteins. What effect would an increased concentration of bilirubin in the blood (bilirubinemia) have on the potential activity of this drug?
a. The fraction of bound drug would increase thereby increasing potential activity.
b. The fraction of bound drug would decrease thereby decreasing potential activity.
c. The fraction of bound drug would decrease thereby increasing potential activity.
d. The fraction of bound drug would increase thereby decreasing potential activity.
9. Twenty milligrams (mg) of drug Y is injected intravenously. One hour after the injection, blood is collected and assayed for the concentration of drug Y. If the concentration of drug Y in this specimen was 0.4 mg/L, what is the volume of distribution for this drug?
a. 0.8 L
b. 8 L
c. 20 L
d. 50 L
10. A new orally administered drug has been introduced in your institution. It is unclear whether TDM is needed for this drug. What factors should be taken into consideration when addressing this question?
a. Proximity of the toxic range to the therapeutic range.
b. Consequences of a subtherapeutic concentration.
c. Predictability of serum concentrations after a standard oral dose.
d. All of the above should be taken into consideration.

suggested readings
1. Birkett DJ. Pharmacokinetics Made Easy. New York: McGraw-Hill; 2003.
2. Broussard L, Tuckler V. The value of TDM, toxicology. Adv Admin Clin Lab. 2004;13:32.
3. Graham K. Lab limelight: TDM: applications for pharmacogenetics. Adv Admin Clin Lab. 2007;16:81.
4. Hardman JG, Limbird LE, Gillman AG, eds. Goodman & Gilman's The Pharmacological Basis of Therapeutics. 10th ed. New York: Pergamon Press; 2002.

references
5. Abbot A. With your genes? Take one of these, three times a day. Nature. 2003;425:760–762.
6. Campbell TJ, Williams MK. Therapeutic drug monitoring: antiarrhythmic drugs. Br J Clin Pharmacol. 2001;52:21S–34S.
7. Jurgens G, Graudal NA, Kampmann JP. Therapeutic drug monitoring of antiarrhythmic drugs. Clin Pharmacokinet. 2003;42:647–663.
8. Winston L, Benowitz N. Once-daily dosing of aminoglycosides: how much monitoring is truly required? Am J Med. 2003;114:239–240.
9. Perucca E. An introduction to antiepileptic drugs. Epilepsia. 2005;46:31–37.
10. Eadie MJ. Therapeutic drug monitoring—antiepileptic drugs. Br J Clin Pharmacol. 2001;52:11S–19S.
11. Israni RK, Kasbekar N, Haynes K, et al. Use of antiepileptic drugs in patients with kidney disease. Semin Dial. 2006;19:408–416.
12. Johannessen SI, Tomson T. Pharmacokinetic variability of newer antiepileptic drugs. Clin Pharmacokinet. 2006;45:1061–1075.
13. Wikipedia Online. Tiagabine. http://en.wikipedia.org/wiki/Tiagabine. Accessed February 17, 2008.
14. Mitchell PB. Therapeutic drug monitoring of psychotropic medications. Br J Clin Pharmacol. 2001;52:45S–54S.
15. Mauri MC, Volonteri LS, Colasanti A, et al. Clinical pharmacokinetics of atypical antipsychotics. Clin Pharmacokinet. 2007;46:359–388.
16. Masuda S, Inui K. An update review on individualized dosage of calcineurin inhibitors in organ transplant recipients. Pharmacol Ther. 2006;112:184–198.
17. Taylor AL, Watson CJE, Bradley JA. Immunosuppressive agents in solid organ transplantation: mechanisms of action and therapeutic efficacy. Crit Rev Oncol Hematol. 2005;56:23–46.
18. Johnston A, Holt DW. Immunosuppressant drugs—the role of therapeutic drug monitoring. Br J Clin Pharmacol. 2001;52:61S–73S.
19. Baraldo M, Furlanut M. Chronopharmacokinetics of cyclosporine and tacrolimus. Clin Pharmacokinet. 2006;45:775–788.
20. Wong SHY. Therapeutic drug monitoring for immunosuppressants. Clin Chim Acta. 2001;313:241–253.
21. Stenton SB, Partovi N, Ensom MHH. Sirolimus: the evidence for clinical pharmacokinetic monitoring. Clin Pharmacokinet. 2005;44:769–786.
22. Elbarbry FA, Shoker AS. Therapeutic drug measurement of mycophenolic acid derivatives in transplant patients. Clin Biochem. 2007;40:752–764.
23. Lennard L. Therapeutic drug monitoring of cytotoxic drugs. Br J Pharmacol. 2001;52:75S–87S.

31 Toxicology
TAKARA L. BLAMIRES

Chapter Outline
Xenobiotics, Poisons, and Toxins
Routes of Exposure
Dose–Response Relationship
Acute and Chronic Toxicity
Analysis of Toxic Agents
Toxicology of Specific Agents
    Alcohols
    Carbon Monoxide
    Caustic Agents
    Cyanide
    Metals and Metalloids
    Pesticides
Toxicology of Therapeutic Drugs
    Salicylates
    Acetaminophen
Toxicology of Drugs of Abuse
    Amphetamines
    Anabolic Steroids
    Cannabinoids
    Cocaine
    Opiates
    Phencyclidine
    Sedatives–Hypnotics
Questions
References

Chapter Objectives
Upon completion of this chapter, the clinical laboratorian should be able to do the following:

Define the following terms: poison, toxicant, toxicology, toxin, xenobiotic, ED50, LD50, TD50.
Identify the primary routes for exposure and discuss factors that influence the absorption of an ingested toxin.
Compare and contrast acute and chronic toxicity.
List major toxicants.
Define the pathologic mechanisms of the major toxicants.
Identify common specimen types used in toxicology and discuss the benefits and drawbacks of each.
Discuss the challenges of properly collecting and handling specimens for toxicology testing.
Explain the differences between quantitative and qualitative tests in toxicology.
Identify common qualitative and quantitative test methods used to evaluate toxicity in the clinical laboratory.
Explain how the osmolal gap is calculated and used to evaluate the presence of osmotically active substances in blood and urine.
Discuss the role of the clinical laboratory in the evaluation of exposure to toxins and poisons.
Evaluate clinical laboratory data in suspected poisoning cases and provide recommendations for further testing.

For additional student resources, please visit thePoint at http://thepoint.lww.com

Key Terms
Bioaccumulation
Body burden
Dose–response relationship
Drugs of abuse
ED50
Individual dose–response relationship
LD50
Poisons
Quantal dose–response relationship
TD50
Therapeutic index
Toxicokinetics
Toxicology
Toxins
Xenobiotics

Toxicology is the study of the adverse effects of xenobiotics in humans. Xenobiotics are chemicals and drugs that are not normally found in or produced by the body. The scope of toxicology is very broad and includes three major disciplines: mechanistic, descriptive, and regulatory toxicology. Mechanistic

toxicology elucidates the cellular, molecular, and biochemical effects of xenobiotics within the context of a dose–response relationship between the xenobiotic and its adverse effect(s). Mechanistic studies provide a basis for rational therapy design and the development of laboratory tests to assess the degree of exposure in individuals. Descriptive toxicology uses the results from animal experiments to predict what level of exposure will cause harm in humans. This process is known as risk assessment. In regulatory toxicology, combined data from mechanistic and descriptive studies are used to establish standards that define the level of exposure that will not pose a risk to public health or safety. Typically, regulatory toxicologists work for, or in conjunction with, government agencies. The Food and Drug Administration (FDA) oversees human safety issues associated with therapeutic drugs, cosmetics, and food additives. The U.S. Environmental Protection Agency (EPA) has regulatory oversight with regard to pesticides, fungicides, rodenticides, and industry-related chemicals that may threaten safe drinking water and clean air. The Occupational Safety and Health Administration (OSHA) is responsible for ensuring safe and healthy work environments. The Consumer Product Safety Commission (CPSC) regulates household chemicals, while the Department of Transportation (DOT) oversees transportation of hazardous chemicals. There are also a number of specialties within toxicology, including forensic, clinical, and environmental toxicology. Forensic toxicology is primarily concerned with the medical and legal consequences of exposure to chemicals or drugs. A major focus of forensic toxicology is establishing and validating the analytic performance of test methods used to generate evidence in legal situations, including cause of death. Clinical toxicology focuses on the relationships between xenobiotics and disease states. This area emphasizes not only diagnostic testing but also therapeutic intervention. Environmental toxicology includes the evaluation of environmental chemical pollutants and their impact on human health. This is a growing area of concern as we learn more about the mechanisms of action of these chemicals and their adverse effects on human health. Another goal of environmental toxicology is to monitor occupational health issues and to increase public health biomonitoring efforts nationwide. Within the organizational structure of a typical clinical laboratory, toxicology is usually considered a specialty of clinical chemistry because the qualitative and quantitative methodologies used to measure xenobiotics overlap with this discipline. However, appropriate diagnosis and management of patients with acute poisoning or chronic exposure to xenobiotics requires an integrated

approach from all sections of the clinical laboratory.

XENOBIOTICS, POISONS, AND TOXINS
The terms xenobiotic, poison, and toxin are often used interchangeably; however, there are some important distinctions that should be made between them. Xenobiotics, as previously mentioned, are defined as exogenous agents that can have an adverse effect on a living organism. This term is more often used to describe environmental exposure to chemicals or drugs. Examples of environmental drug exposures include antibiotics and antidepressants; chemical exposures might include perfluorinated and brominated compounds. Similarly, poisons are also exogenous agents that have an adverse effect on a biological system; however, this term is more often used when describing substances from an animal, plant, mineral, or gas. Examples include venoms from poisonous snakes or spiders, poison hemlock, arsenic, lead, and carbon monoxide. Toxins, however, are endogenous substances biologically synthesized either in living cells or in microorganisms. Examples include botulinum toxin produced by the microorganism Clostridium botulinum, hemotoxins produced by venomous snakes, and mycotoxins produced by fungi. The terms toxicant and toxic refer to substances that are not produced within a living cell or microorganism and are more commonly used to describe environmental chemicals. From a clinical standpoint, almost 50% of poisoning cases are intentional suicide attempts, accidental exposure accounts for about 30%, and the remaining cases are a result of homicide or occupational exposure. Of these, suicide has the highest mortality rate. Accidental exposure occurs most frequently in children; however, accidental drug overdose of either therapeutic or illicit drugs is relatively common in adolescents and adults. Occupational exposure primarily occurs in industrial and agricultural settings, but is an expanding area of concern as we learn more about the role of various chemical agents and their contribution to disease.

ROUTES OF EXPOSURE
Toxins can enter the body via several routes, with ingestion, inhalation, and transdermal absorption being the most common. Of these, ingestion is most often observed in the clinical setting. For most toxins to exert a systemic effect, they must be absorbed into circulation. Absorption of toxins from the

gastrointestinal tract occurs via several mechanisms. Some toxins are taken up by processes intended for dietary nutrients; however, most are passively absorbed through diffusion. Diffusion requires that the substance be able to cross the cellular barriers of the gastrointestinal tract. Hydrophobic substances can diffuse across cell membranes and, therefore, can be absorbed anywhere along the gastrointestinal tract. Ionized substances, however, cannot passively diffuse across the membranes. Weak acids can become protonated in gastric acid, resulting in a nonionized species that can be absorbed in the stomach. In a similar manner, weak bases can be absorbed in the intestine, where the pH is largely neutral or slightly alkaline. Other factors that influence the absorption of toxins from the gastrointestinal tract include the rate of dissolution, gastrointestinal motility, resistance to degradation in the gastrointestinal tract, and interaction with other substances. Toxins that are not absorbed from the gastrointestinal tract do not produce systemic effects, but may produce local effects, such as diarrhea, bleeding, and malabsorption, which may in turn cause secondary systemic effects.
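To illustrate the pH-dependent absorption described above, the following minimal sketch applies the Henderson–Hasselbalch relationship to a weak acid; the pKa and pH values are assumptions chosen only for illustration and are not taken from the text.

```python
def nonionized_fraction_weak_acid(pH: float, pKa: float) -> float:
    """Fraction of a weak acid present in the nonionized (absorbable) form.

    From the Henderson-Hasselbalch equation, pH = pKa + log([A-]/[HA]),
    so [A-]/[HA] = 10**(pH - pKa) and the HA fraction = 1 / (1 + 10**(pH - pKa)).
    """
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# Hypothetical weak acid with pKa 3.5 (illustrative value only)
pKa = 3.5
for site, pH in [("stomach (pH 1.5)", 1.5), ("intestine (pH 7.0)", 7.0)]:
    frac = nonionized_fraction_weak_acid(pH, pKa)
    print(f"{site}: {frac:.1%} nonionized")
# The acid is almost entirely nonionized in gastric acid (favoring absorption
# in the stomach) and almost entirely ionized at intestinal pH, as described above.
```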

DOSE–RESPONSE RELATIONSHIP
The concept that all substances have the potential to cause harm, even water, is a central theme in toxicology. Paracelsus (1493 to 1541) pioneered the use of chemicals in medicine and coined the phrase "the dose makes the poison." Understanding this dose–response relationship is fundamental and essential to modern toxicology. To enable assessment of a substance's potential to cause pathologic effects, it is necessary to establish an index of the relative toxicities of substances. Several systems are available, but most relate the dose of a xenobiotic to the resulting harmful effects. One such system correlates a single acute oral dose range with the probability of a lethal outcome in an average 70-kg man (Table 31.1). This system is useful for comparing the relative toxicities of substances because the predicted response, death, is an unambiguous endpoint. However, most xenobiotics can express pathologic effects other than death at lower degrees of exposure; therefore, other indices have been developed.
TABLE 31.1 Toxicity Rating System

Adapted from Klaassen CD. Principles of toxicology. In: Klaassen CD, Amdur MO, Doull J, eds. Toxicology: the Basic Science of Poisons. 3rd ed. New York, NY: Macmillan; 1986:13.

A more in-depth characterization can be acquired by evaluating data from a cumulative frequency histogram of toxic responses over a range of doses. This experimental approach is typically used to evaluate responses over a wide range of concentrations. One response monitored is the toxic response or the response associated with an early pathologic effect at lower than lethal doses. This response has been determined to be an indicator of the toxic effects specific for that toxin. For a substance that exerts early toxic effects by damaging liver cells, the response monitored may be increases in serum alanine aminotransferase (ALT) or γ-glutamyltransferase (GGT) activity. The dose–response relationship implies that there will be an increase in the toxic response as the dose is increased. It should be noted that not all individuals display a toxic response at the same dose. The population variance can be seen in a cumulative frequency histogram of the percentage of people producing a toxic response over a range of concentrations (Fig. 31.1). The TD50 is the predicted dose that would produce a toxic response in 50% of the population. If the monitored response is death, the LD50 is the predicted dose that would result in death in 50% of the population. Similar experiments can be used to evaluate the doses of therapeutic drugs. The ED50 is the dose that would be predicted to be effective or have a therapeutic benefit in 50% of the population. The therapeutic index is the ratio of the TD50 (or LD50) to the ED50. Drugs with a large therapeutic index demonstrate fewer toxic adverse effects when the dose of the drug is in the therapeutic range.

FIGURE 31.1 Dose–response relationship. Comparison of responses of a therapeutic drug over a range of doses. The ED50 is the dose of drug at which 50% of treated individuals will experience benefit. The TD50 is the dose of drug at which 50% of individuals will experience toxic adverse effects. The LD50 is the dose of drug at which 50% of individuals will die. Dose–response relationships may apply to an individual or a population. The individual dose–response relationship relates to the individual's health status as well as changes in xenobiotic exposure levels. A quantal dose–response relationship describes the change in health effects of a defined population based on changes in the exposure to the xenobiotic.
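As a rough illustration of how the ED50, TD50, and therapeutic index relate, the sketch below estimates each from hypothetical cumulative quantal dose–response data using simple linear interpolation; both the data and the interpolation shortcut are assumptions for illustration, not the chapter's method.

```python
import numpy as np

def dose_at_50_percent(doses, percent_responding):
    """Interpolate the dose at which 50% of the population responds.

    percent_responding must be increasing (cumulative quantal response data).
    """
    return float(np.interp(50.0, percent_responding, doses))

# Hypothetical cumulative dose-response data (doses in mg/kg)
doses = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
percent_effective = np.array([2, 10, 35, 60, 85, 97, 100], dtype=float)  # therapeutic response
percent_toxic = np.array([0, 1, 3, 8, 25, 55, 90], dtype=float)          # toxic response

ed50 = dose_at_50_percent(doses, percent_effective)   # ~8 mg/kg
td50 = dose_at_50_percent(doses, percent_toxic)       # ~45 mg/kg
therapeutic_index = td50 / ed50                        # TI = TD50/ED50 (or LD50/ED50 if the endpoint is death)

print(f"ED50 ~ {ed50:.1f} mg/kg, TD50 ~ {td50:.1f} mg/kg, TI ~ {therapeutic_index:.1f}")
```

A larger ratio means the doses producing toxicity lie well above those producing benefit, consistent with the statement above that drugs with a large therapeutic index show fewer toxic adverse effects at therapeutic doses.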

Acute and Chronic Toxicity Acute toxicity and chronic toxicity are terms used to relate the duration and frequency of exposure to observed toxic effects. Acute toxicity is usually associated with a single, short-term exposure to a substance in which the dose is sufficient to cause immediate toxic effects. Chronic toxicity is generally associated with repeated and frequent exposure for extended periods of time (months to years) at doses that are insufficient to cause an immediate acute response. In many instances, chronic exposure is related to an accumulation of the toxicant or the toxic effects within the individual. Chronic toxicity may affect different systems than those associated with acute toxicity; therefore, dose– response relationships may differ for acute and chronic exposures for the same xenobiotic.

ANALYSIS OF TOXIC AGENTS

Toxicology testing may be performed to screen for a number of agents that might be present (e.g., drug screens, heavy metal panels) or as targeted testing. Targeted testing might be performed when an environmental risk of exposure is known (e.g., industrial workers, chemical plants), to support the investigation of an exposure (e.g., chemical spill, suicide attempt), to comply with occupational regulations or guidelines (e.g., OSHA), or to confirm clinical suspicions of poisoning (e.g., arsenic, cyanide). Because the signs and symptoms of toxicity are nonspecific and the duration and extent of exposure are frequently unknown, diagnosis of most toxic element exposures depends heavily on laboratory testing. In general, toxicology testing is performed on urine or blood specimens. In selecting the best specimen for a specific test, it is important to recognize that toxic agents exhibit unique absorption, distribution, metabolism, and elimination kinetics, or toxicokinetics. As such, the predicted toxicokinetics of the individual element(s) being tested must be coordinated with the selection of specimen type and timing of collection relative to the time of exposure. An exposure could be missed entirely if testing is performed on an inappropriate specimen. For example, exposure to methylmercury could be missed if testing is performed on urine, as methylmercury is primarily excreted in fecal material. An exposure to arsenic could be missed if testing is performed with blood collected a few days after the exposure due to the short half-life of arsenic in blood. Preanalytical variables such as elimination patterns, analyte stability, and specimen collection procedures must be considered. For urine testing, 24-hour collections are preferred in order to compensate for variable elimination patterns throughout the day. Reporting results per gram of creatinine is also common to account for variable excretion and renal function. Random urine collections may not provide the most accurate profile of exposure when compared to a 24-hour collection, but they are useful for screening and qualitative detection of exposure to several potentially toxic agents. Any elevated result that is inconsistent with clinical expectations should be confirmed by testing a second specimen collection or a second specimen type.
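As a simple illustration of the creatinine normalization mentioned above, the sketch below converts a random-urine result to micrograms per gram of creatinine; the analyte and creatinine values are hypothetical and not taken from the text.

```python
def per_gram_creatinine(analyte_ug_per_L: float, urine_creatinine_mg_per_dL: float) -> float:
    """Express a random-urine analyte result per gram of creatinine.

    Urine creatinine is converted from mg/dL to g/L (multiply by 0.01),
    then (ug/L) / (g/L) = ug analyte per g creatinine.
    """
    creatinine_g_per_L = urine_creatinine_mg_per_dL * 0.01
    return analyte_ug_per_L / creatinine_g_per_L

# Hypothetical example: urine arsenic 30 ug/L with urine creatinine 120 mg/dL
print(per_gram_creatinine(30.0, 120.0))  # 25.0 ug/g creatinine
```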

One challenge of specimen collection for toxicological studies is that several aspects of the collection, handling, and storage can introduce external contamination into the sample. Common sources of external contamination include patient clothing, skin, hair, collection environment (e.g., dust, aerosols, antiseptic wipes), and specimen handling variables (e.g., container, lid, preservatives). Concentrated acids are commonly used as urine preservatives; however, contaminants may also be introduced in either the acid itself or in the process used to add the acid to the urine (e.g., pipette tips). Specimen containers and lids should also be devoid of contaminating organic and inorganic agents that may interfere with analytical testing. For example, certified "trace element-free" blood collection tubes are available; these tubes commonly have a royal blue top and can be used for most trace elements testing, though a tan-top tube is manufactured specifically for lead determinations. Another consideration when handling biological specimens for metals testing is the use of acid-washed pipette tips, containers, and other supplies to prevent contamination. Laboratories may also need to exercise precautions to prevent loss of toxic agents due to in vitro volatilization and metabolism. For instance, mercury and arsenic are particularly vulnerable to loss and metabolism, respectively, during sample processing and storage. These scenarios represent only a fraction of the many specimen handling considerations necessary to reduce preanalytical error in toxicology testing. Analysis of toxic agents in a clinical setting is typically a two-step process. The first step is a screening test, which is a rapid, simple, qualitative procedure intended to detect the presence of specific substances or classes of toxicants. In general, these procedures have good analytic sensitivity but lack specificity. A negative result can rule out a drug or toxicant; however, a positive result should be considered a presumptive positive until confirmed by a second, more specific method, which is the second step of the test process, or a confirmatory test. Confirmatory tests are generally quantitative and report the concentration of the substance in the specimen, in contrast to qualitative screening tests that provide a result of positive (drug is present) or negative (drug is absent). A variety of analytical methods can be used for screening and confirmatory testing, though immunoassays are the most commonly used for drug screens. In some instances, these assays are specific for a single drug (e.g., tetrahydrocannabinol [THC]), but in most cases, the assay is designed to detect drugs within a general class (e.g., barbiturates and opiates). Thin-layer chromatography (TLC) is a relatively simple, inexpensive method for detecting various drugs and other organic compounds; gas chromatography (GC) is widely used and a well-established technique for qualitative and quantitative determination of many volatile substances. The reference method for quantitative identification of most organic compounds is gas chromatography coupled with a mass spectrometer (GC–MS) as the detector. Inorganic compounds, including speciation, may be quantitated using inductively coupled plasma-mass spectrometry (ICP-MS) or atomic absorption (AA) methods.
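The two-step screen-and-confirm logic described above can be summarized in a short sketch; the function name, cutoff value, and report wording are illustrative assumptions, not laboratory-specific rules from the text.

```python
from typing import Optional

def interpret_drug_screen(screen_positive: bool,
                          confirmed_concentration: Optional[float] = None,
                          confirmation_cutoff: float = 50.0) -> str:
    """Two-step reporting logic: qualitative screen, then quantitative confirmation.

    A negative screen rules the drug out; a positive screen is only a
    presumptive positive until confirmed by a second, more specific
    quantitative method (e.g., GC-MS). The cutoff and units are assay
    dependent and are shown here only for illustration.
    """
    if not screen_positive:
        return "Negative: not detected by the screening immunoassay"
    if confirmed_concentration is None:
        return "Presumptive positive: confirmation by a more specific method pending"
    if confirmed_concentration >= confirmation_cutoff:
        return f"Confirmed positive at {confirmed_concentration} (assay units)"
    return "Screen not confirmed: report as negative"

print(interpret_drug_screen(True, confirmed_concentration=150.0))
```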

TOXICOLOGY OF SPECIFIC AGENTS
Many chemical agents encountered on a regular basis have potential for toxicity. The focus of this section is to discuss some of the most commonly encountered nondrug toxins seen in a clinical setting, as well as those that present as medical emergencies with acute exposure.1

Alcohols
The toxic effects of alcohol are both general and specific. Exposure to alcohol, like exposure to most volatile organic solvents, initially causes disorientation, confusion, and euphoria, but can progress to unconsciousness, paralysis, and, with high-level exposure, even death. Most alcohols display these effects at about equivalent molar concentrations. This similarity suggests a common depressant effect on the central nervous system (CNS) that appears to be mediated by changes in membrane properties. In most cases, recovery from CNS effects is rapid and complete after cessation of exposure. Distinct from the general CNS effects are the specific toxicities of each type of alcohol, which are usually mediated by biotransformation of alcohols to toxic products. There are several pathways by which short-chain aliphatic alcohols can be metabolized. Of these, hepatic conversion to an aldehyde, by alcohol dehydrogenase (ADH), and further conversion to an acid, by hepatic aldehyde dehydrogenase (ALDH), is the most significant.

Alcohol —(ADH)→ Aldehyde —(ALDH)→ Acid (Eq. 31-1)

Ethanol Ethanol exposure is common, and excessive consumption, with its associated consequences, is a leading cause of economic, social, and medical problems throughout the world.2 The economic impact is estimated to exceed $100 billion per year in terms of lost wages and productivity. Many social and family problems are associated with excessive ethanol consumption, and the burden to the health care system is significant. Ethanol-related disorders are consistently one of the top ten causes of hospital admissions, and approximately 20% of all hospital admissions have some degree of alcohol-related problems. It is estimated that 80,000 Americans die each year, either directly or indirectly, as a result of abusive alcohol consumption. This correlates to about a fivefold increase in premature mortality. In addition, consumption of ethanol during

pregnancy may lead to fetal alcohol syndrome or fetal alcohol effects, both of which are associated with delayed motor and mental development in children. Correlations have been established between blood alcohol concentration and the clinical signs and symptoms of acute intoxication. A blood alcohol concentration of 80 mg/dL has been established as the statutory limit for operation of a motor vehicle in the United States, as this concentration is associated with a diminution of judgment and motor function. Determinations of blood ethanol concentration by the laboratory may be used in litigation of drunken driving cases and require appropriate chain-of-custody procedures for specimen collection and documentation of acceptable quality control performance, instrument maintenance procedures, and proficiency testing records. Approximately 50% of the 40,000 to 50,000 annual automobile-related fatalities in the United States involve alcohol as a factor. Besides the short-term effects of ethanol, most pathophysiologic consequences of ethanol abuse are associated with chronic consumption. In an average adult, this correlates to the consumption of about 50 g of ethanol per day for about 10 years. This pattern of consumption has been associated with compromised function of various organs, tissues, and cell types; however, the liver appears to be affected the most. The pathologic sequence starts with the accumulation of lipids in hepatocytes. With continued consumption, this may progress to alcoholic hepatitis. In about 20% of individuals with long-term, high-level alcohol intake, this develops into a toxic form of hepatitis. Of those who do not progress to toxic hepatitis, progression to liver cirrhosis is common. Cirrhosis of the liver can be characterized as fibrosis leading to functional loss of the hepatocytes. Progress through this sequence is associated with changes in many laboratory tests related to hepatic function, including liver enzymes. Several laboratory indicators have the required diagnostic sensitivity and specificity to identify excessive ethanol consumption, and most correlate well with the progression of ethanol-induced liver disease. Table 31.2 lists common laboratory indicators of prolonged ethanol consumption.
TABLE 31.2 Common Indicators of Ethanol Abuse

GGT, γ-glutamyltransferase; AST, aspartate aminotransferase; ALT, alanine aminotransferase; HDL, high-density lipoprotein; MCV, mean cell volume.
Several mechanisms have been proposed to mediate the pathologic effects of long-term ethanol consumption. Of these, adduct formation with acetaldehyde appears to play a key role. Hepatic metabolism of ethanol is a two-step enzymatic reaction with acetaldehyde as a reactive intermediate. Most ethanol is converted to acetate, or acetic acid, in this pathway; however, a significant portion of the acetaldehyde intermediate is released in the free state.

Ethanol —(ADH)→ Acetaldehyde —(ALDH)→ Acetic acid (Eq. 31-2)

Extracellular acetaldehyde is a transient species as a result of rapid adduct formation with amine groups of proteins. Many of the pathologic effects of ethanol have been correlated with the formation of these adducts, and formation of acetaldehyde adducts has also been shown to change the structure and function of various proteins.

Methanol Methanol is a common laboratory solvent that is also found in many household cleaners. It may be ingested accidentally as a component of many commercial products or as a contaminant of homemade liquors. Methanol is initially metabolized by hepatic ADH to the intermediate formaldehyde. Formaldehyde is then rapidly converted to formic acid by hepatic ALDH. The formation of formic acid causes severe metabolic acidosis, which can lead to tissue injury and possible death. Formic acid is also responsible for optic neuropathy that can lead to blindness.

Isopropanol Isopropanol, also known as rubbing alcohol, is also commercially available. It is

metabolized by hepatic ADH to acetone, which is its primary metabolic end product. Both isopropanol and acetone have CNS depressant effects similar to ethanol; however, acetone has a long half-life, and intoxication with isopropanol can therefore result in severe acute-phase ethanol-like symptoms that persist for an extended period.

Ethylene Glycol Ethylene glycol (1,2-ethanediol) is a common component of hydraulic fluid and antifreeze. Ingestion by children is relatively common because of its sweet taste. The immediate effects of ethylene glycol ingestion are similar to those of ethanol; however, metabolism by hepatic ADH and ALDH results in the formation of several toxic species including oxalic acid and glycolic acid, which result in severe metabolic acidosis. This is complicated by the rapid formation and deposition of calcium oxalate crystals in the renal tubules. With high levels of consumption, calcium oxalate crystal formation in the kidneys may result in renal tubular damage.

Determination of Alcohols
From a medicolegal perspective, determinations of blood ethanol concentrations must be accurate and precise. Serum, plasma, and whole blood are acceptable specimens, and correlations have been established between ethanol concentration in these specimens and impairment of psychomotor function. Because ethanol uniformly distributes in total body water, serum, which has greater water content than whole blood, has a higher concentration per unit volume. Because of this difference in distribution, most states have standardized the acceptable specimen types admissible as evidence, and some jurisdictions even mandate that a specific method (often GC) be used for legal ethanol determination. When acquiring a specimen for ethanol determination, several preanalytical issues must be considered to ensure the integrity of the sample. One of these requirements is that the venipuncture site should only be cleaned with an alcohol-free disinfectant. Also, because of the volatile nature of short-chain aliphatic alcohols, specimens must be capped at all times to avoid evaporation. Sealed specimens can be refrigerated or stored at room temperature for up to 14 days without loss of ethanol. Nonsterile specimens or those intended to be stored for longer periods of time should be preserved with sodium fluoride to avoid increases in ethanol content resulting from contamination due to bacterial fermentation.

CASE STUDY 31.1
A patient with a provisional diagnosis of depression was sent to the laboratory for a routine workup. The complete blood cell count was unremarkable except for an elevated erythrocyte mean cell volume. Results of the urinalysis were unremarkable. The serum chemistry testing revealed slightly increased aspartate aminotransferase (AST), total bilirubin, and high-density lipoproteins (HDL). All other chemistry results, including glucose, urea, creatinine, cholesterol, pH, bicarbonate/carbon dioxide (pCO2), alanine aminotransferase (ALT), sodium, and potassium, were within the reference intervals. The physician suspects ethanol abuse; however, the patient denies alcohol use. Subsequent testing revealed a serum γ-glutamyltransferase (GGT) three times the upper limit of the reference interval. No ethanol was detected in serum, and screening tests for infectious forms of hepatitis were negative.

questions
1. Are these results consistent with a patient who is consuming hazardous quantities of ethanol?
2. What additional testing would you recommend to rule ethanol abuse in or out?

Several analytic methods can be used to determine the concentration of ethanol in serum. Among these, osmometric, chromatographic, and enzymatic methods are the most commonly used. When osmolality is measured by freezing point depression, increases in serum osmolality correlate well with increases in serum ethanol concentration. The degree of increase in osmolality due to the presence of ethanol is expressed as the difference between the measured and the calculated osmolality, otherwise referred to as the osmolal gap. It has been established that serum osmolality increases by approximately 10 mOsm/kg for each 60 mg/dL increase in serum ethanol; therefore, the osmolal gap is useful for estimating the amount of ethanol present in the serum. The osmolal gap can be calculated as shown in Eq. 31-3. This relationship, however, is not specific to ethanol, and increases in the osmolal gap also occur with certain metabolic imbalances. Therefore, use of the osmolal gap for determination of serum or blood ethanol concentration lacks analytic specificity; however, it is a useful screening test.

GC is the established reference method for ethanol determinations and is quite useful because it can simultaneously quantitate other alcohols, such as methanol and isopropanol. Analysis begins with dilution of the serum or blood sample with a saturated solution of sodium chloride in a closed container. Volatiles within the liquid specimen partition into the airspace (head space) of the closed container. Sampling of the head space provides a clean specimen with little or no matrix effect. Quantitation can be performed by constructing a standard curve or by calculating the concentration based on relative changes to an internal standard (n-propanol), as shown in Figure 31.2.
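Returning to the osmolal gap (Eq. 31-3), a brief worked illustration may be helpful; the formula for calculated osmolality shown here is one commonly used version, and the numbers are hypothetical:

\[
\text{Osmolal gap} = \text{osmolality}_{\text{measured}} - \text{osmolality}_{\text{calculated}}
\]
\[
\text{osmolality}_{\text{calculated}} \approx 2[\mathrm{Na^+}] + \frac{\text{glucose (mg/dL)}}{18} + \frac{\text{BUN (mg/dL)}}{2.8}
\]

If the measured osmolality is 310 mOsm/kg and the calculated osmolality is 285 mOsm/kg, the gap is 25 mOsm/kg. Using the approximation of 10 mOsm/kg per 60 mg/dL of ethanol, this gap corresponds to roughly 25 × (60/10) = 150 mg/dL of ethanol, provided that no other unmeasured osmotically active substances (e.g., methanol or ethylene glycol) are present.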

FIGURE 31.2 Headspace gas chromatography of alcohol. The concentration of each alcohol can be determined by comparison to the response from the internal standard n-propanol.

Enzymatic methods for ethanol determination use a nonhuman form of ADH to oxidize ethanol in the specimen to acetaldehyde with simultaneous reduction of NAD+ to NADH (Eq. 31-4). The NADH produced can be monitored directly by absorbance at 340 nm or can be coupled to an indicator reaction. This form of ADH is relatively specific for ethanol (Table 31.1), and intoxication with methanol or isopropanol produces a negative or low result; therefore, a negative result by this method does not rule out ingestion of other alcohols. There is good agreement between ethanol results obtained by the enzymatic method and by GC. The enzymatic assays can be fully automated and do not require specialized instrumentation.
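As a hedged illustration of direct 340 nm monitoring (the specimen dilution factor below is hypothetical), the amount of NADH formed, and therefore the ethanol oxidized, follows from Beer's law:

\[
\Delta A_{340} = \varepsilon \, b \, \Delta C_{\mathrm{NADH}}, \qquad \varepsilon_{\mathrm{NADH},\,340\ \mathrm{nm}} \approx 6.22 \times 10^{3}\ \mathrm{L\ mol^{-1}\ cm^{-1}}
\]

For example, a ΔA340 of 0.311 in a 1 cm cuvette corresponds to 0.311/6,220 = 5 × 10⁻⁵ mol/L of NADH formed in the reaction mixture. Because one NADH is produced per molecule of ethanol oxidized, multiplying by an assumed 100-fold specimen dilution and by the molar mass of ethanol (46 g/mol) gives about 0.23 g/L, or roughly 23 mg/dL, of ethanol in the original specimen. In practice, automated systems are calibrated against standards, but the underlying stoichiometry is the same.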

Carbon Monoxide
Carbon monoxide is produced by incomplete combustion of carbon-containing substances. The primary environmental sources of carbon monoxide include gasoline engines, improperly ventilated furnaces, and wood or plastic fires. Carbon monoxide is a colorless, odorless, and tasteless gas that is rapidly absorbed into circulation from inspired air. When carbon monoxide binds to hemoglobin, the product is called carboxyhemoglobin (COHb). The affinity of carbon monoxide for hemoglobin is 200 to 225 times greater than that of oxygen.3,4 Air is approximately 20% oxygen by volume. If inspired air contained 0.1% carbon monoxide by volume, this would result in 50% carboxyhemoglobinemia at equilibrium. Because both carbon monoxide and oxygen compete for the same binding site, exposure to carbon monoxide results in a decrease in the concentration of oxyhemoglobin, and, for this reason, carbon monoxide is considered a very toxic substance. Carbon monoxide expresses its toxic effects by causing a leftward shift in the oxygen–hemoglobin dissociation curve, resulting in a decrease in the amount of oxygen delivered to the tissue.3 The net decrease in the amount of oxygen delivered to the tissue results in hypoxia. The major toxic effects of carbon monoxide exposure are seen in organs with high oxygen demand, such as the brain and heart. The concentration of COHb is expressed as a percentage relative to the capacity of the specimen to form COHb; the corresponding symptoms are detailed in Table 31.3. The only treatment for carbon monoxide poisoning is 100% oxygen therapy. In severe cases, hyperbaric oxygen may be used to promote distribution of oxygen to the tissues. The half-life of COHb is roughly 60 to 90 minutes in a patient with normal respiratory function breathing 100% oxygen.

TABLE 31.3 Symptoms of Carboxyhemoglobinemia

COHb, carboxyhemoglobin.

Several methods are available for the evaluation of carbon monoxide poisoning. One screening test for COHb relies on the fact that it has a cherry-red appearance. In this qualitative test, excessive carbon monoxide exposure is evaluated by adding 5 mL of 40% NaOH to 5 mL of an aqueous dilution of the whole blood specimen. Persistence of a pink color of the solution is consistent with a COHb level of 20% or greater. There are two primary quantitative assays for COHb: differential spectrophotometry and GC. Spectrophotometric methods work on the principle that different forms of hemoglobin present with different spectral absorbance curves. By measuring the absorbance at four to six different wavelengths, the concentration of the different species of hemoglobin, including COHb, can be determined by calculation. This is the most common method used for COHb and is the basis for several automated systems. GC methods, however, are considered the reference method for determination of COHb due to their high accuracy and precision. Carbon monoxide is released from hemoglobin after treatment with potassium ferricyanide. After analytic separation, carbon monoxide is detected by changes in thermal conductivity, and the COHb concentration can be determined.
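The competitive binding described above is often summarized by the Haldane relationship; the following is an approximate illustration using the affinity range cited earlier, not a formula taken from the original text:

\[
\frac{[\mathrm{COHb}]}{[\mathrm{O_2Hb}]} = M \times \frac{P_{\mathrm{CO}}}{P_{\mathrm{O_2}}}, \qquad M \approx 200\text{–}225
\]

With inspired air containing about 20% oxygen and 0.1% carbon monoxide, the ratio is approximately 225 × (0.1/20) ≈ 1.1; that is, COHb and oxyhemoglobin are present in nearly equal amounts at equilibrium, consistent with the roughly 50% carboxyhemoglobinemia described above.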

Caustic Agents
Caustic agents are found in many household products and occupational settings. Even though any exposure to a strong acid or alkaline substance is associated with injury, aspiration and ingestion present the greatest hazards. Aspiration is usually associated with pulmonary edema and shock, which can rapidly progress to death. Ingestion produces lesions in the esophagus and gastrointestinal tract, which may produce perforations. This results in hematemesis, abdominal pain, and possibly shock. The onset of metabolic acidosis or alkalosis occurs rapidly after ingestion of caustic agents; corrective therapy for ingestion is usually by dilution.

Cyanide
Cyanide is classified as a supertoxic substance that can exist as a gas, a solid, or in solution. Because of these various forms, cyanide exposure can occur by inhalation, ingestion, or transdermal absorption. Cyanide is used in many industrial processes and is a component of some insecticides and rodenticides. It is also produced as a pyrolysis product from the burning of some plastics, including urea foams that are used as insulation in homes. Thus, carbon monoxide and cyanide exposure may account for a significant portion of the toxicities associated with fires and smoke inhalation. Ingestion of cyanide is also a common means of suicide. Cyanide expresses toxicity by binding to heme iron. Binding to mitochondrial cytochrome oxidase causes an uncoupling of oxidative phosphorylation. This leads to rapid depletion of cellular adenosine triphosphate as a result of the inability of oxygen to accept electrons. Increases in cellular oxygen tension and venous pO2 occur as a result of the lack of oxygen utilization. At low levels of exposure, patients experience headaches, dizziness, and respiratory depression, which can rapidly progress to seizure, coma, and death at greater doses. Cyanide clearance is primarily mediated by rapid enzymatic conversion to thiocyanate, a nontoxic product rapidly cleared by renal filtration. Cyanide toxicity is associated with acute exposure at concentrations sufficient to exceed the rate of clearance by this enzymatic process. Evaluation of cyanide exposure requires a rapid turnaround time, and several test methods are available. Ion-specific electrode (ISE) methods and photometric analysis following two-well microdiffusion separation are the most common methods. Chronic low-level exposure to cyanide is generally evaluated by determining a urinary thiocyanate concentration.

Metals and Metalloids
Arsenic
Arsenic is a metalloid that may exist bound to or as a primary constituent of many different organic and inorganic compounds. It exists in both naturally occurring and manmade substances; therefore, exposure to arsenic occurs in various settings. Environmental exposure through air and water is prevalent in many industrialized areas, and occupational exposure occurs in agriculture and smelting industries. Arsenic is also a common homicide and suicide agent. Ingestion of less harmful organic forms of arsenic, such as arsenobetaine and arsenocholine, can occur with foods such as clams, oysters, scallops, mussels, crustaceans (crabs and lobsters), and some bottom-feeding finfish. Arsenic toxicity is largely dependent on the valence state, solubility, and rate of absorption and elimination. The three major groups of arsenic compounds are arsine gas (AsH3), inorganic forms (trivalent and pentavalent), and organic forms (arsenobetaine and arsenocholine). The rate of absorption largely depends on the form of arsenic. Inhalation of arsine gas demonstrates the most acute toxicity. Organic arsenic-containing compounds, such as those found in seafood, are rapidly absorbed by passive diffusion in the gastrointestinal tract. Other forms are absorbed at a slower rate. Clearance of arsenic is primarily by renal filtration of the free, ionized state. Arsenic is rapidly cleared from the blood, such that blood levels may be normal even when urine levels remain markedly elevated. The "fish arsenic" forms, arsenobetaine and arsenocholine, are cleared in urine within 48 hours; however, the initial half-life of inorganic arsenic is close to 10 hours. Approximately 70% of inorganic arsenic is excreted in urine, of which approximately 50% has been transformed to the organic form. However, these patterns may vary with the dose and clinical status of the patient. Chronic toxicity of arsenic can be due to low-level, persistent exposure that may lead to bioaccumulation or an increased body burden of this metal. Arsenic expresses toxic effects by high-affinity binding to the thiol groups in proteins; this binding can reduce the portion available for renal filtration and elimination. Because many proteins are capable of binding arsenic, the toxic symptoms of arsenic poisoning are nonspecific, and arsenic binding to proteins often results in a change in protein structure and function.5 Many cellular and organ systems are affected by arsenic toxicity; fever, anorexia, and gastrointestinal distress are often seen with chronic or acute arsenic ingestion at low levels. Peripheral and central damage to the nervous system, renal effects, hemopoietic effects, and vascular disease leading to death are associated with high levels of exposure. Analysis of arsenic is most commonly performed by atomic absorption spectrophotometry (AAS). Most forms of arsenic are only detectable in the blood for a few hours. More than 90% of an arsenic exposure is recovered in the urine within 6 days, making urine the specimen of choice for an exposure occurring in the previous week. Some toxins can bind to sulfhydryl groups in keratin found in hair and fingernails. For this reason, long-term exposure to some toxins may also be assessed in these tissues. Typically, toxic element deposition in hair and fingernails is demonstrated 2 weeks after an exposure, and in arsenic poisoning cases, distinct white lines, referred to as Mees' lines, can be observed in the fingernails.

Cadmium
Cadmium is a metal found in many industrial processes, with its main use being in electroplating and galvanizing, though it is also commonly encountered during the mining and processing of many other metals. Cadmium is used as a pigment in paints and plastics and is the cathodal material of nickel–cadmium batteries. Due to its widespread industrial applications, this element has become a significant environmental pollutant. In the environment, cadmium binds strongly to organic matter, where it is immobilized in soil and can be taken up by agricultural crops. Since tobacco leaves accumulate cadmium from the soil, regular use of tobacco-containing products is a common route of human exposure. Smoking is estimated to at least double the lifetime body burden of cadmium. For nonsmokers, human exposure to cadmium is largely through the consumption of shellfish, organ meats, lettuce, spinach, potatoes, grains, peanuts, soybeans, and sunflower seeds. Cadmium expresses its toxicity primarily by binding to proteins; however, it can also bind to other cellular constituents. Cadmium distributes throughout the body but has a tendency to accumulate in the kidney, where most of its toxic effects are expressed. An early finding of cadmium toxicity is renal tubular dysfunction, in which tubular proteinuria, glucosuria, and aminoaciduria are typically seen. In addition to renal dysfunction, concomitant parathyroid dysfunction and vitamin D deficiency may also occur. Itai-itai disease is characterized by severe osteomalacia and osteoporosis resulting from the long-term consumption of cadmium-contaminated rice. Elimination of cadmium is very slow, as the biological half-life of cadmium is 10 to 30 years. Evaluation of excessive cadmium exposure is most commonly accomplished by the determination of whole blood or urinary content using AAS.

Lead Lead is a by-product or component of many industrial processes, which has contributed to its widespread presence in the environment. It was a common constituent of household paints before 1972 and is still found in commercial paints and art supplies. Plumbing constructed of lead pipes or joined with leaded connectors has contributed to the lead concentration of water. Gasoline contained tetraethyl lead until 1978. The long-term utilization of leaded gasoline, lead-based paint, and lead-based construction materials has resulted in airborne lead, contaminated soil, and leaded dust. The lead content of foods is highly variable. In the United States, the average daily intake for an adult is between 75 and 120 μg. This level of intake is not associated with overt toxicity. Because lead is present in all biological systems and because no physiologic or biochemical function has been found, the key issue is identifying the threshold dose that causes toxic effects. Exposure to lead can occur by any route; however, ingestion of contaminated dietary constituents accounts for most exposures.5 Gastrointestinal absorption is influenced by various factors though the exact factors controlling the rate of absorption are unclear. However, susceptibility to lead toxicity appears to be primarily dependent on age. Adults absorb 5% to 15% of ingested lead, whereas children have a greater degree of absorption, and infants absorb nearly 30% to 40%. Absorbed lead binds with high affinity to many macromolecular structures and is widely distributed throughout the body. Lead distributes into two theoretical compartments: bone and soft tissue, with bone being the largest pool. Lead combines with the matrix of bone and can persist in this compartment for a long period due to its long half-life of almost 20 years. The half-life of lead in soft tissue is somewhat variable though the reported average half-life is 120 days.

Elimination of lead occurs primarily by renal filtration, but because only a small fraction of total body lead is present in circulation, the elimination rate is slow. Considering the relatively constant rate of exposure and the slow elimination rate, total body lead accumulates over a lifetime. As mentioned, the largest accumulation occurs in bone, but there is also significant accumulation in the kidneys, bone marrow, circulating erythrocytes, and peripheral and central nerves. Lead toxicity is multifaceted and occurs in a dose-dependent manner (Fig. 31.3). Abdominal or neurological symptoms manifest after acute exposure only. The neurologic effects of lead are of particular importance. Lead exposure causes encephalopathy characterized by cerebral edema and ischemia. Severe lead poisoning can result in stupor, convulsions, and coma. Lower levels of exposure may not present with these symptoms; however, low-level exposure may result in subclinical effects characterized by behavioral changes, hyperactivity, attention deficit disorder and a decrease in intelligence quotient (IQ) scores. Higher levels of exposure have also been associated with demyelinization of peripheral nerves resulting in a decrease in nerve conduction velocity.

FIGURE 31.3 Comparison of effects of lead on children and adults. (Reprinted from Royce SE, Needleman HL, eds. Case Studies in Environmental Medicine: Lead Toxicity. Washington, DC: U.S. Public Health Service, ATSDR; 1990.) Children appear particularly sensitive to these effects and are now evaluated for lead poisoning before entry into school. The normal threshold for blood lead levels (BLL) set by the Centers for Disease Control and Prevention is 10 μg/dL. Growth deficits are seen in children with BLL greater than 10 μg/dL, and anemia may occur at a BLL of 20 μg/dL. Children with a BLL less than 10 μg/dL have been reported to suffer from permanent IQ and hearing deficits. As a result, many states have lowered the upper limit to 5 μg/dL. A threshold to identify permanent effects of lead poisoning is not currently known. Lead is also a potent inhibitor of many enzymes, which leads to many of the

toxic effects of lead exposure. The most noteworthy effects are on vitamin D metabolism and the heme synthetic pathway. Decreased serum concentrations of both 25-hydroxy and 1,25-dihydroxy vitamin D are seen in excessive lead exposure, resulting in changes in bone and calcium metabolism. Anemia is the result of the inhibition of the heme synthetic pathway, which results in increases in the concentration of several intermediates in this pathway, including aminolevulinic acid and protoporphyrin. Increases in protoporphyrin result in high concentrations of zinc protoporphyrin in circulating erythrocytes. Zinc protoporphyrin is a highly fluorescent compound, and measurement of this fluorescence has been used to screen for lead toxicity in the clinical laboratory. Increased urinary aminolevulinic acid is a highly sensitive and specific indicator of lead toxicity that correlates well with blood levels. Another hematologic finding of lead poisoning is the presence of basophilic stippling in erythrocytes as a result of inhibition of erythrocytic pyrimidine nucleotidase. This enzyme is responsible for the degradation of residual ribosomal RNA remaining in the erythrocyte after extrusion of the nucleus. When identified, basophilic stippling is a key indicator of lead toxicity. Excessive lead exposure has also been associated with hypertension, carcinogenesis, birth defects, compromised immunity, and several renal effects. Early stages of toxicity are associated with renal tubular dysfunction, resulting in glycosuria, aminoaciduria, and hyperphosphaturia. Late stages are associated with tubular atrophy and glomerular fibrosis, which may result in a decreased glomerular filtration rate. Treatment of lead poisoning involves cessation of exposure and treatment with therapeutic chelators, such as ethylenediaminetetraacetic acid and dimercaptosuccinic acid. These substances are capable of removing lead from soft tissue and bone by forming low molecular weight, high-affinity complexes that can be cleared by renal filtration. The efficacy of this therapy is monitored by determining the urinary concentration of lead. Total body burden of lead is best evaluated by the quantitative determination of lead concentration in whole blood. The use of urine is also valid but correlates more closely with the level of recent exposure. Care must be taken during specimen collection to ensure that the specimen does not become contaminated from exogenous sources, and lead-free containers (tan-top K2EDTA tube) are recommended for this purpose. Several methods can be used to measure lead concentration. Chromogenic reactions and anodic stripping voltammetry methods have been used, but lack sufficient analytical sensitivity. Point-of-care units have employed these

methodologies to screen for lead exposure in children or adults in the workplace. X-ray fluorescence is used to measure environmental levels of lead in nonbiological samples such as soil and foods. For biological specimens, graphite furnace AAS has been used to confirm whole BLLs, but a new standard of measurement has been set with quantitative ICP-MS.

Mercury
Mercury is a metal that exists in three forms: elemental, which is a liquid at room temperature; as inorganic salts; and as a component of organic compounds. Inhalation and accidental ingestion of inorganic and organic forms in industrial settings is the most common reason for toxic levels. Consumption of contaminated foods is the major source of exposure in the general population. Each form of mercury has different toxic characteristics. Elemental mercury (Hg0) can be ingested without significant effects, and exposure by inhalation of elemental mercury is limited by its low vapor pressure. Cationic mercury (Hg2+) is moderately toxic, whereas organic mercury, such as methylmercury (CH3Hg+), is extremely toxic. Considering that the most common route of exposure to mercury is via ingestion, the primary factor determining toxicity is gastrointestinal absorption. Elemental mercury is not absorbed well because of its viscous nature. Inorganic mercury is only partially absorbed, but has significant local toxicity in the gastrointestinal tract. The portion that is absorbed distributes uniformly throughout the body. The organic forms of mercury are rapidly and efficiently absorbed by passive diffusion and partition into hydrophobic compartments. This results in high concentrations in the brain and peripheral nerves.5 In these lipophilic compartments, organic mercury is biotransformed to its divalent state, allowing it to bind to neuronal proteins. Elimination of systemic mercury occurs primarily via renal filtration of bound low molecular weight species or the free, ionized state. Considering that most mercury is bound to protein, the elimination rate is slow, and chronic exposure, therefore, exerts a cumulative effect. Mercury toxicity is a result of protein binding, which leads to a change of protein structure and function. The most significant result of this interaction is the inhibition of many enzymes. After ingestion of inorganic mercury, binding to intestinal proteins results in acute gastrointestinal disturbances. Ingestion of moderate amounts may result in severe bloody diarrhea because of ulceration and necrosis of the gastrointestinal tract. In severe cases, this may lead to shock and death. The absorbed portion of ingested inorganic mercury affects many organs. Clinical findings include tachycardia, tremors, thyroiditis, and, most significantly, a disruption of renal function. The renal effect is associated with glomerular proteinuria and loss of tubular function. Organic mercury may also have a renal effect at high levels of exposure; however, neurological symptoms are the primary toxic effects of this hydrophobic form. Low levels of exposure cause tremors, behavioral changes, mumbling speech, and loss of balance. Higher levels of exposure may result in hyporeflexia, hypotension, bradycardia, and renal dysfunction and can lead to death. Analysis of mercury is performed by AAS, using whole blood or an aliquot of a 24-hour urine specimen, or by anodic stripping voltammetry. Analysis of mercury by AAS requires special techniques as a result of the volatility of elemental mercury.

Pesticides
Pesticides are substances that have been intentionally added to the environment to kill or harm an undesirable life form. Pesticides can be classified into several categories, including insecticides, herbicides, fungicides, and rodenticides. These agents are generally applied to control vector-borne disease and urban pests and to improve agricultural productivity. Pesticides can be found in occupational settings and in the home; therefore, there are frequent opportunities for exposure. Contamination of food is the major route of exposure for the general population. Inhalation, transdermal absorption, and ingestion as a result of hand-to-mouth contact are common occupational and accidental routes of exposure. Ideally, the actions of pesticides would be target specific. Unfortunately, most are nonselective and result in toxic effects to many nontarget species, including humans. Pesticides come in many different forms with a wide range of potential toxic effects. The health effects of short-term, low-level exposure to most of these agents have yet to be well elucidated. Extended low-level exposure may result in chronic disease states. Of primary concern, though, is high-level exposure, which may result in acute disease states or death. The most common victims of acute poisoning are people who are applying pesticides without taking appropriate safety precautions to avoid exposure. Ingestion by children at home is also common, and ingestion of pesticides is also common in suicide attempts. There is a wide variation in the chemical configuration of pesticides, ranging from simple salts of heavy metals to complex high molecular weight organic compounds. Insecticides are the most prevalent of pesticides. Based on chemical configuration, the organophosphates, carbamates, and halogenated hydrocarbons

are the most common insecticides. Organophosphates are the most abundant and are responsible for about one-third of all pesticide poisonings. Organophosphates and carbamates function by inhibiting acetylcholinesterase, an enzyme present in both insects and mammals. In mammals, acetylcholine is a neurotransmitter found in both central and peripheral nerves. It is also responsible for the stimulation of muscle cells and several endocrine/exocrine glands. The actions of acetylcholine are terminated by the actions of membrane-bound, postsynaptic acetylcholinesterase. Inhibition of this enzyme results in the prolonged presence of acetylcholine on its receptor, which produces a wide range of systemic effects. Low levels of exposure are associated with salivation, lacrimation, and involuntary urination and defecation. Higher levels of exposure result in bradycardia, muscular twitching, cramps, apathy, slurred speech, and behavioral changes. Death due to respiratory failure may also occur. Absorbed organophosphates bind with high affinity to several proteins, including acetylcholinesterase. Protein binding prevents the direct analysis of organophosphates; thus, exposure is evaluated indirectly by the measurement of acetylcholinesterase inhibition. Inhibition of this enzyme has been found to be a sensitive and specific indicator of organophosphate exposure, but because acetylcholinesterase is a membrane-bound enzyme, serum activity is low. To increase the analytic sensitivity of this assay, erythrocytes that have high surface activity are commonly used. Evaluation of erythrocytic acetylcholinesterase activity for detection of organophosphate exposure, however, is not commonly performed in most laboratories because of low demand and the lack of an automated method. An alternative test that has become more widely available is the measurement of serum pseudocholinesterase (SChE) activity. SChE is inhibited by organophosphates in a similar manner to the erythrocytic enzyme. Unlike the erythrocytic enzyme, however, changes in the serum activity of SChE lack sensitivity and specificity for organophosphate exposure. SChE is found in the liver, pancreas, brain, and serum though its biological function is not well defined. Decreased levels of SChE can occur in acute infection, pulmonary embolism, hepatitis, and cirrhosis. There are also several variants of this enzyme that demonstrate diminished activity; therefore, decreases in SChE are not specific to organophosphate poisoning. The reference interval for SChE is between 4,000 and 12,000 U/L with intraindividual variation, the degree of variance within an average individual, of about 700 U/L. Symptoms associated with organophosphate toxicity occur at about a 40% reduction in activity. An

individual whose normal SChE is on the high side of the reference interval and who has been exposed to toxic levels of organophosphates may still have SChE activity within the reference interval. Because of these factors, determination of SChE activity lacks sensitivity in the diagnosis of organophosphate poisoning and should only be used as a screening test. Immediate antidotal therapy can be initiated in cases of suspected organophosphate poisoning with a decreased activity of SChE; however, continuation of therapy and documentation of such poisoning should be confirmed by testing of the erythrocytic enzyme.
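As a purely illustrative calculation of the limitation just described (the baseline value is hypothetical):

\[
11{,}000\ \mathrm{U/L} \times (1 - 0.40) = 6{,}600\ \mathrm{U/L}
\]

An individual with a baseline SChE activity of 11,000 U/L who sustains the roughly 40% reduction associated with symptomatic organophosphate exposure would still have an activity of 6,600 U/L, a value well within the 4,000 to 12,000 U/L reference interval.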

TOXICOLOGY OF THERAPEUTIC DRUGS

In some cases, toxicity is the result of accidental or intentional overdosage of pharmaceutical drugs. All drugs are capable of toxic effects when improperly administered. This discussion focuses on the therapeutic drugs most commonly tested for in the clinical laboratory.

Salicylates
Acetylsalicylic acid, or aspirin, is a commonly used analgesic, antipyretic, and anti-inflammatory drug. It functions by decreasing thromboxane and prostaglandin formation through the inhibition of cyclooxygenase. At recommended doses, there are several noteworthy adverse effects, including interference with platelet aggregation and gastrointestinal function. There is also an epidemiologic relationship between aspirin, childhood viral infections (e.g., varicella, influenza), and the onset of Reye's syndrome. Acute ingestion of high doses of aspirin is associated with various toxic effects through several different mechanisms.1 Because it is an acid, excessive salicylate ingestion is associated with nonrespiratory (metabolic) acidosis. Salicylate is also a direct stimulator of the respiratory center, and hyperventilation can result in respiratory alkalosis. In many instances, the net result of the combined effects is an immediate mixed acid–base disturbance. Salicylates also inhibit the Krebs cycle, resulting in excess conversion of pyruvate to lactic acid. In addition, at high levels of exposure, salicylates stimulate mobilization and use of free fatty acids, resulting in excess ketone body formation. These factors all contribute to nonrespiratory (metabolic) acidosis that may lead to death. Treatment for aspirin overdose involves neutralizing and eliminating the excess acid and maintaining the electrolyte balance.

Correlations have been established between serum concentrations of salicylates and toxic outcomes. Several methods are available for the quantitative determination of salicylate in serum. GC and liquid chromatography (LC) methods provide the highest analytical sensitivity and specificity, but are not widely used due to high equipment expense and required technical skill. Several immunoassay methods are available; however, the most commonly used method is a chromogenic assay known as the Trinder reaction. In this reaction, salicylate reacts with ferric nitrate to form a colored complex that is then measured spectrophotometrically.

Acetaminophen
Acetaminophen (Tylenol), either solely or in combination with other compounds, is a commonly used analgesic drug. In healthy subjects, therapeutic dosages have few adverse effects. Overdose of acetaminophen, however, is associated with severe hepatotoxicity (Fig. 31.4).

FIGURE 31.4 Rumack-Matthew nomogram. Prediction of acetaminophen-induced hepatic damage based on serum concentration. (Reprinted from Rumack BH, Matthew H. Acetaminophen poisoning and toxicity. Pediatrics. 1975;55:871, with permission.)

Absorbed acetaminophen is bound with high affinity to various proteins, resulting in a low free fraction. Thus, renal filtration of the parent drug is minimal, and most is eliminated by hepatic uptake, biotransformation, conjugation, and excretion. Acetaminophen can follow several different pathways through this process, each forming a different product. The pathway of major concern is the hepatic mixed-function oxidase (MFO) system. In this

system, acetaminophen is first transformed to reactive intermediates, which are then conjugated with reduced glutathione. In overdose situations, glutathione can become depleted, yet reactive intermediates continue to be produced. This results in an accumulation of reactive intermediates inside the cell. Because some intermediates are free radicals, this results in a toxic effect to the cell leading to necrosis of the liver, the organ in which these reactions are occurring. The time frame for the onset of hepatocyte damage is relatively long. In an average adult, serum indicators of hepatic damage do not become abnormal until 3 to 5 days after ingestion of a toxic dose. The initial symptoms of acetaminophen toxicity are vague and not predictive of hepatic necrosis.6 The serum concentration of acetaminophen that results in depletion of glutathione has been determined for an average adult. Unfortunately, acetaminophen is rapidly cleared from serum, and determinations are often made many hours after ingestion, making this information of little utility. To aid in this situation, nomograms (Fig. 31.4) are available that predict hepatotoxicity based on serum concentrations of acetaminophen at a known time after ingestion. It is also worth noting that chronic, heavy consumers of ethanol metabolize acetaminophen at a faster rate than average, resulting in a more rapid formation of reactive intermediates and increased possibility of depleting glutathione. Therefore, alcoholic patients are more susceptible to acetaminophen toxicity, and using the nomogram for interpretation in these patients is inappropriate. The reference method for the quantitation of acetaminophen in serum is high-performance liquid chromatography (HPLC). This method, however, is not widely used in clinical laboratories because of equipment expense and the level of experience required to operate the instruments. Immunoassays are currently the most common analytical methods used for serum acetaminophen determinations with competitive enzyme or fluorescence polarization systems being most frequently used.
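As a rough guide to how the nomogram in Figure 31.4 is read (an approximation only; the published nomogram, not this formula, should be used clinically), the commonly used treatment line begins at about 150 μg/mL at 4 hours after ingestion and falls by half every 4 hours:

\[
C_{\text{line}}(t) \approx 150 \times 2^{-(t-4)/4}\ \mu\mathrm{g/mL}, \qquad 4\ \mathrm{h} \le t \le 24\ \mathrm{h}
\]

A serum acetaminophen concentration plotted above this line at a known time after ingestion indicates a significant risk of hepatotoxicity; as noted above, this interpretation is not appropriate for chronic, heavy consumers of ethanol.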

TOXICOLOGY OF DRUGS OF ABUSE
Assessment of drug abuse is of medical interest for many reasons. In drug overdose, it is essential to identify the responsible agent to ensure appropriate and timely treatment. In a similar manner, identification of drug abuse in nonoverdose situations provides a rationale for the treatment of addiction. For these reasons, testing for drugs of abuse is commonly performed. This typically involves screening a single urine specimen for many substances by qualitative test procedures. In most instances, this procedure only detects recent drug use; therefore, with abstinence of relatively short duration, many abusing patients can avoid detection. In addition, a positive drug screen cannot discriminate between single casual use and chronic abuse. Identification of chronic abuse usually involves several positive test results in conjunction with clinical evaluation. In a similar manner, a positive drug screen does not determine the time frame or dose of the drug taken. Drug abuse or overdose can occur with prescription, over-the-counter, or illicit drugs. The focus of this discussion is on substances with addictive potential. The use of drugs for recreational or performance enhancement purposes is relatively common. The National Institute on Drug Abuse (NIDA) reports that approximately 30% of the adult population (older than high-school age) have used an illicit drug. Drug abuse testing has become commonplace in professional, industrial, and athletic settings. The potential punitive measures associated with this testing may involve or result in civil or criminal litigation. Therefore, the laboratory must ensure that all results are accurate and all procedures have been properly documented, as these results may be used in court as evidence. This requires the use of analytical methods that have been validated as accurate and precise. It also requires scrupulous documentation of specimen security. Protocols and procedures must be established to prevent and detect specimen adulteration. Measurement of urinary temperature, pH, specific gravity, and creatinine is commonly performed to ensure that specimens have not been diluted or treated with substances that may interfere with testing. Specimen collection should be monitored and a chain of custody established to guard against specimen exchange. Testing for drugs of abuse can be done by several methods. A two-tiered approach of screening and confirmation is usually used. Screening procedures should be simple, rapid, inexpensive, and capable of being automated. They are often referred to as spot tests. In general, screening procedures have good analytic sensitivity with marginal specificity, meaning that a negative result can rule out an analyte with a reasonable degree of certainty. These methods usually detect classes of drugs based on similarities in chemical configuration, which allows the detection of parent compounds and congeners that have similar effects. Considering that many designer drugs are modified forms of established drugs of abuse, these methods increase the scope of the screening process. A drawback to this type of analysis is that it may also detect chemically related substances that have no or low abuse potential; therefore, interpretation of positive test results requires integration of clinical context and further testing. Confirmation testing must use methods with high sensitivity and specificity;

many of these tests provide quantitative as well as qualitative information. Confirmatory testing requires the use of a method different from that used in the screening procedure. GC–MS is the reference method for confirmation of most analytes. There are several general analytic procedures commonly used for the analysis of drugs of abuse. Chromogenic reactions are occasionally used for screening procedures, but immunoassays are more widely used for screening and confirmatory testing. In general, immunoassays offer a high degree of sensitivity and are easily automated. A wide variety of chromatographic techniques are used for qualitative identification and quantitation of drugs of abuse. TLC is an inexpensive method for screening many drugs simultaneously and has the advantage that no instrumentation is required. GC and LC allow complex mixtures of drugs to be separated and quantitated, but these methods are generally labor intensive and not well suited to screening. Trends in drug abuse vary geographically and between different socioeconomic groups. To provide effective service, a clinical laboratory requires knowledge of the drugs or drug groups likely to be found within the patient population it serves. Fortunately, the process of selecting which drugs to test for has been aided by national studies that have identified the drugs of abuse most commonly seen in various populations (Table 31.4). This provides the basis for test selection in most situations. The following discussion focuses on select drugs with a high potential for abuse.

TABLE 31.4 Prevalence of Common Drugs of Abuse

MDMA, methylenedioxymethamphetamine. This table provides approximate frequencies of relevant drugs of abuse as surveyed from national data. The percentage values estimate the prevalence of use in individuals aged 18–25, the most common users, who by survey claim to have used drugs within the past 30 days. Adapted from Department of Health and Human Services, Substance Abuse and Mental Health Administration, and Office of Applied Studies. Results from the 2006 National Survey on Drug Use and Health: National Findings. Retrieved February 8, 2008, from http://www.drugabusestatistics.samhsa.gov/nsduh/2k6nsduh/2k6Results.pdf.
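The prevalence figures in Table 31.4 also help explain why positive screening results are treated as presumptive. As a purely hypothetical illustration (the sensitivity, specificity, and prevalence values below are assumed, not taken from any particular assay):

\[
\mathrm{PPV} = \frac{0.99 \times 0.05}{0.99 \times 0.05 + (1 - 0.98) \times 0.95} \approx 0.72
\]
\[
\mathrm{NPV} = \frac{0.98 \times 0.95}{0.98 \times 0.95 + (1 - 0.99) \times 0.05} \approx 0.999
\]

With a screening immunoassay of 99% sensitivity and 98% specificity applied to a population in which 5% of individuals have recently used the drug, more than one in four positive screens would be a false positive, whereas a negative screen rules out recent use with near certainty. This is why a negative screen is generally accepted at face value, but a positive screen requires confirmation by a second, independent method.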

Amphetamines
Amphetamine and methamphetamine are therapeutic drugs used for narcolepsy and attention deficit disorder. These drugs are stimulants with a high abuse potential, as they produce an initial sense of increased mental and physical capacity along with a perception of well-being. These initial effects are followed by restlessness, irritability, and possibly psychosis. Users often counter these late effects with repeated use; drug tolerance and psychological dependence develop with chronic use. Overdose, although rare in experienced users, results in hypertension, cardiac arrhythmias, convulsions, and possibly death. Various compounds chemically related to amphetamines are components of over-the-counter medications, including ephedrine, pseudoephedrine, and phenylpropanolamine. These amphetamine-like compounds are common in allergy and cold medications. Identification of amphetamine abuse involves analysis of urine for the parent drugs. Immunoassay systems are commonly used as the screening procedure. Because of variable cross-reactivity with over-the-counter medications that contain amphetamine-like compounds, a positive result by immunoassay is considered presumptive positive only. Confirmation of immunoassay-positive tests is most commonly made with LC or GC methods.

Methylenedioxymethamphetamine
Methylenedioxymethamphetamine (MDMA) is an illicit amphetamine derivative commonly referred to as "ecstasy."7,8 Although it was strongly associated with club culture in the 1990s, its use has continued to grow. There are as many as 200 "designer" analogues that have been developed to produce effects comparable to those of MDMA. MDMA and its analogues are primarily administered orally in tablets of 50 to 150 mg. Other, less-frequent routes of administration are inhalation, injection, or smoking. MDMA has a circulating half-life of approximately 8 to 9 hours. The majority of the drug is eliminated by hepatic metabolism, although 20% is eliminated unchanged in the urine.
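Assuming simple first-order elimination (an approximation for illustration; actual MDMA kinetics can deviate from first-order behavior at higher doses), the fraction of circulating drug remaining after time t is

\[
f = \left(\tfrac{1}{2}\right)^{t/t_{1/2}}
\]

so with a half-life of 8 to 9 hours, roughly \(0.5^{24/8.5} \approx 0.14\), or about 14%, of the drug remains in circulation 24 hours after use, which helps explain why urine, rather than serum, is the usual specimen for documenting use.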

CASE STUDY 31.2
An emergency department patient with a provisional diagnosis of overdose with an over-the-counter cold medicine undergoes a drug screen. Test results from immunoassay screening were negative for opiates, barbiturates, benzodiazepines, THC, and cocaine, but positive for amphetamines. The salicylate level was 15 times the upper limit of the therapeutic range. Results for acetaminophen and ethanol were negative.

questions
1. What would be the expected results of arterial blood gas analysis?
2. What would be the expected results of a routine urinalysis?
3. What are some of the possible reasons the amphetamine screen is positive?

The onset of effect is 30 to 60 minutes, and duration is about 3.5 hours. The desired effects include hallucination, euphoria, empathic and emotional responses, and increased visual and tactile sensitivity. Adverse effects include headaches, nausea, vomiting, anxiety, agitation, impaired memory, violent behavior, tachycardia, hypertension, respiratory depression, seizures, hyperthermia, cardiac toxicity, liver toxicity, and renal failure. The presenting symptoms, along with patient behavior and history, must be taken into account because routine drug screening of a urine specimen by immunoassay will usually not test positive for MDMA. Further analysis and confirmation of MDMA are generally performed using GC–MS.

Anabolic Steroids
Anabolic steroids are a group of compounds that are chemically related to the male sex hormone testosterone. These artificial substances were developed in the 1930s as a therapy for male hypogonadism, though it was soon discovered that the use of these compounds in healthy subjects increases muscle mass. In many instances, this results in an improvement in athletic performance. Recent studies have reported that 6.5% of adolescent boys and 1.9% of adolescent girls have used steroids without a prescription. Most illicit steroids are obtained through the black market from underground laboratories and foreign sources. The quality and purity of these drugs are highly variable. In most instances, the acute toxic effects of these drugs are related to inconsistent formulation, which may result in high dosages and impurities. A variety of both physical and psychological effects have been associated with steroid abuse. Chronic use has been associated with toxic hepatitis as well as accelerated atherosclerosis and abnormal aggregation of platelets, both of which predispose individuals to stroke and myocardial infarction. In addition, steroid abuse causes an enlargement of the heart. In this condition, heart muscle cells develop faster than the associated vasculature, which may lead to ischemia of heart muscle cells. This predisposes the individual to cardiac arrhythmias and possible death. In males, chronic steroid use is associated with testicular atrophy, sterility, and impotence. In females, steroid abuse causes the development of masculine traits, breast reduction, and sterility. Evaluation of anabolic steroid use can be challenging. Until recently, the primary forms abused were animal-derived or synthetic forms. There are several well-established methods for the detection of the parent drug and its metabolite for the majority of these; however, the newer forms may be difficult to detect. To address this and related issues, the ratio of testosterone to epitestosterone is commonly used as a screening test; high ratios are associated with exogenous testosterone administration.9

Cannabinoids
Cannabinoids are a group of psychoactive compounds found in marijuana. Of these, THC is the most potent and abundant. Marijuana, or its processed product, hashish, can be smoked or ingested. A sense of well-being and euphoria are the subjective effects of exposure. Exposure is also associated with an impairment of short-term memory and intellectual function. Effects of chronic use have not been well established, though tolerance and a mild dependence may develop over time. THC overdose has not been associated with any specific adverse effects. THC is a lipophilic substance, which is rapidly removed from circulation by passive distribution into hydrophobic compartments, such as the brain and fat. This results in slow elimination as a result of redistribution back into circulation and subsequent hepatic metabolism. The half-life of THC in circulation is 1 day after a single use and 3 to 5 days for chronic, heavy consumers. Hepatic metabolism of THC produces several products that are primarily eliminated in urine. The major urinary metabolite is 11-nor-tetrahydrocannabinol-9-carboxylic acid (THC-COOH). This metabolite can be detected in urine for up to 5 days after a single use or up to 4 weeks following chronic, heavy use. Immunoassay tests for THC-COOH are used to screen for marijuana consumption, and GC–MS is used for confirmation. Both methods are sensitive and specific, and because of the low limit of detection of both methods, it is possible to find THC-COOH in urine as a result of passive inhalation. Urinary concentration standards have been established to discriminate between passive and direct inhalation.

Cocaine
Cocaine is an effective local anesthetic with few adverse effects at therapeutic concentrations. At higher circulating concentrations, it is a potent CNS stimulator that elicits a sense of excitement and euphoria. Cocaine is an alkaloid salt that can be administered directly (e.g., by insufflation or intravenous injection) or inhaled as a vapor when smoked in the free base form known as crack. The half-life of circulating cocaine is brief, approximately 30 minutes to 1 hour. Acute cocaine toxicity is associated with hypertension, arrhythmia, seizure,

and myocardial infarction. Because of its short half-life, maintaining the subjective effects over a single extended period requires repeated dosages of increasing quantity; therefore, correlations between serum concentration and the subjective or toxic effects cannot be established. Because the rate of change is more important than the serum concentration, a primary factor that determines the toxicity of cocaine is the dose and route of administration. Intravenous administration presents with the greatest hazard, closely followed by smoking. Cocaine's short half-life is a result of rapid hepatic hydrolysis to inactive metabolites, which is the major route of elimination for cocaine. Only a small portion of the parent drug can be found in urine after an administered dose and the primary product of hepatic metabolism is benzoylecgonine, which is largely eliminated in urine. The half-life of benzoylecgonine is 4 to 7 hours; however, it can be detected in urine for up to 3 days after a single use. The presence of this metabolite in urine is a sensitive and specific indicator of cocaine use. In chronic heavy abusers, it can be detected in urine for up to 20 days after the last dose. The primary screening test for detection of cocaine use is measurement of benzoylecgonine in urine by immunoassay. Confirmation testing is most commonly performed using GC–MS.

Opiates Opiates are a class of substances capable of analgesia, sedation, and anesthesia. All are derived from or chemically related to substances derived from the opium poppy. The naturally occurring substances include opium, morphine, and codeine. Heroin, hydromorphone (Dilaudid), and oxycodone (Percodan) are chemically modified forms of the naturally occurring opiates. Meperidine (Demerol), methadone (Dolophine), propoxyphene (Darvon), pentazocine (Talwin), and fentanyl (Sublimaze) are the common synthetic opiates. Opiates have a high abuse potential, and chronic use leads to tolerance with physical and psychological dependence. Acute overdose presents with respiratory acidosis due to depression of respiratory centers, myoglobinuria, and possibly an increase in serum indicators of cardiac damage (e.g., CKMB, troponin). High-level opiate overdose may lead to death caused by cardiopulmonary failure. Treatment of overdose includes the use of the antagonist naloxone. Laboratory testing for opiates usually involves initial screening by immunoassay. Most immunoassays are designed to detect the presence of morphine and codeine; however, cross-reactivity allows for detection of many of the opiates including naturally occurring, chemically modified, and synthetic

forms. GC–MS is the method of choice for confirmation testing.

Phencyclidine
Phencyclidine (PCP) is an illicit drug with stimulant, depressant, anesthetic, and hallucinogenic properties. Adverse effects, such as agitation, hostility, and paranoia, are commonly noted at doses that produce the desired subjective effects. Overdose is generally associated with stupor and coma. PCP can be ingested or inhaled by smoking PCP-laced tobacco or marijuana. It is a lipophilic drug that rapidly distributes into fat and brain tissue. Elimination is slow as a result of redistribution into circulation and hepatic metabolism. Approximately 10% to 15% of an administered dose is eliminated unchanged in urine, which allows for identification of the parent drug in urine. In chronic, heavy users, PCP can be detected up to 30 days after abstinence. Immunoassay is used as the screening procedure, with GC–MS as the confirmatory method.

Sedatives–Hypnotics
Many therapeutic drugs can be classified as sedatives–hypnotics or tranquilizers, and all members of this class are CNS depressants. They have a wide range of approved therapeutic roles, but they also have abuse potential, ranging from high to low. These drugs often become available for illegal use through diversion from approved sources. Barbiturates and benzodiazepines are the most common types of sedative–hypnotics abused. Although barbiturates have a higher abuse potential, benzodiazepines are more commonly found in abuse and overdose situations. This appears to be a result of availability. There are many individual drugs within the barbiturate and benzodiazepine classification. Secobarbital, pentobarbital, and phenobarbital are the more commonly abused barbiturates. Diazepam (Valium), chlordiazepoxide (Librium), and lorazepam (Ativan) are the most commonly abused benzodiazepines. Overdose with sedatives–hypnotics initially presents with lethargy and slurred speech, which can rapidly progress to coma. Respiratory depression is the most serious toxic effect of most of these agents, though hypotension can occur with barbiturates as well. The toxicity of many of these agents is potentiated by ethanol use. Immunoassay is the most common screening procedure for both barbiturates and benzodiazepines. Broad cross-reactivity within members of each group allows for the detection of many individual drugs. GC or LC methods can be used for confirmatory testing.

For additional student resources, please visit thePoint at http://thepoint.lww.com

questions
1. Compound A is reported to have an oral LD50 of 5 mg/kg body weight. Compound B is reported to have an LD50 of 50 mg/kg body weight. Of the following statements regarding the relative toxicity of these two compounds, which is TRUE?
a. Ingestion of low amounts of compound A would be predicted to cause more deaths than an equal dose of compound B.
b. Ingestion of compound B would be expected to produce nontoxic effects at a dose greater than 100 mg/kg body weight.
c. Neither compound A nor compound B is toxic at any level of oral exposure.
d. Compound A is more rapidly absorbed from the gastrointestinal tract than compound B.
e. Compound B would be predicted to be more toxic than compound A if the exposure route were transdermal.
2. Which of the following statements best describes the TD50 of a compound?
a. The dosage of a substance that would be predicted to cause a toxic effect in 50% of the population
b. The dosage of a substance that is lethal to 50% of the population
c. The dosage of a substance that would produce therapeutic benefit in 50% of the population
d. The percentage of individuals who would experience a toxic response at 50% of the lethal dose
e. The percentage of the population who would experience a toxic response after an oral dosage of 50 mg
3. Of the following analytic methods, which is most commonly used as the confirmatory method for identification of drugs of abuse?
a. GC with mass spectrometry
b. Scanning differential calorimetry
c. Ion-specific electrode
d. Immunoassay
e. Nephelometry
4. A weakly acidic toxin (pKa = 4.0) that is ingested will
a. Be passively absorbed in the stomach (pH = 3.0)
b. Not be absorbed because it is ionized
c. Not be absorbed unless a specific transporter is present
d. Be passively absorbed in the colon (pH = 7.5)
e. Be absorbed only if a weak base is ingested at the same time
5. What is the primary product of methanol metabolism by the ADH and ALDH system?
a. Formic acid
b. Acetone
c. Acetaldehyde
d. Oxalic acid
e. Formaldehyde
6. Which of the following statements concerning cyanide toxicity is TRUE?
a. Inhalation of smoke from burning plastic is a common cause of cyanide exposure, and cyanide expresses its toxicity by inhibition of oxidative phosphorylation.
b. Inhalation of smoke from burning plastic is a common cause of cyanide exposure.
c. Cyanide is a relatively nontoxic compound that requires chronic exposure to produce a toxic effect.
d. Cyanide expresses its toxicity by inhibition of oxidative phosphorylation.
e. All of these are true.
7. Which of the following laboratory results would be consistent with acute high-level oral exposure to an inorganic form of mercury (Hg2+)?
a. All of these
b. High concentrations of mercury in whole blood and urine
c. Proteinuria
d. Positive occult blood in stool
e. None of these
8. A child presents with microcytic, hypochromic anemia. The physician suspects iron deficiency anemia. Further laboratory testing reveals a normal total serum iron and iron-binding capacity; however, the zinc protoporphyrin level was very high. A urinary screen for porphyrins was positive. Erythrocytic basophilic stippling was noted on the peripheral smear. Which of the following laboratory tests would be best applied to this case?
a. Whole blood lead
b. Urinary thiocyanate
c. COHb
d. Urinary anabolic steroids
e. Urinary benzoylecgonine
9. A patient with suspected organophosphate poisoning presents with a low SChE level. However, the confirmatory test, erythrocyte acetylcholinesterase, presents with a normal result. Excluding analytic error, which of the following may explain these conflicting results?
a. The patient has late-stage hepatic cirrhosis or the patient has a variant of SChE that displays low activity.
b. The patient has late-stage hepatic cirrhosis.
c. The patient was exposed to low levels of organophosphates.
d. The patient has a variant of SChE that displays low activity.
e. All of these are correct.
10. A patient enters the emergency department in a coma. The physician suspects a drug overdose. Immunoassay screening tests for opiates, barbiturates, benzodiazepines, THC, amphetamines, and PCP were all negative. No ethanol was detected in serum. Can the physician rule out drug overdose as the cause of this coma with these results?
a. No
b. Yes
c. Maybe


32 Circulating Tumor Markers: Basic Concepts and Clinical Applications
CHRISTOPHER R. MCCUDDEN and MONTE S. WILLIS

Chapter Outline
Types of Tumor Markers
Applications of Tumor Marker Detection
Screening and Susceptibility Testing
Prognosis
Monitoring Effectiveness of Therapy and Disease Recurrence
Laboratory Considerations for Tumor Marker Measurement
Immunoassays
High-Performance Liquid Chromatography
Immunohistochemistry and Immunofluorescence
Enzyme Assays
Frequently Ordered Tumor Markers
α-Fetoprotein
Cancer Antigen 125
Carcinoembryonic Antigen
Human Chorionic Gonadotropin
Prostate-Specific Antigen
Future Directions
Questions
Suggested Reading
References

Chapter Objectives
Upon completion of this chapter, the clinical laboratorian should be able to do the following:
Discuss the incidence of cancer.
Explain the role of tumor markers in cancer management.
Identify the characteristics or properties of an ideal tumor marker.
State the major clinical value of tumor markers.
Name the major tumor types and their associated markers.
Describe the major properties, methods of analysis, and clinical use of α-fetoprotein, cancer antigen 125, carcinoembryonic antigen, β-human chorionic gonadotropin, and prostate-specific antigen.
Explain the use of enzymes and hormones as tumor markers.

Key Terms
Cancer
Neoplasm
Oncofetal antigens
Oncogene
Staging
Tumor marker

For additional student resources, please visit http://thepoint.lww.com


Cancer remains the second leading cause of mortality in developed countries, accounting for 23% of deaths in the United States and approximately 3 million deaths/year globally. Approximately 42% of males and 38% of females will develop invasive cancer in their lifetime; males have a lifetime risk of dying from cancer of 23%, whereas females have a 19% risk.1 Cancer is a broad term used to describe more than 200 different malignancies that affect more than 50 tissue types. Despite considerable efforts to reduce the incidence of malignancies, it is estimated that there were 1.6 million new cases of cancer in the United States in 2015 (Table 32.1; see additional current global cancer statistics at http://globocan.iarc.fr/factsheet.asp). TABLE 32.1 Estimated New Cases of Cancer and Deaths From Cancer in the United States

Biologically, cancer refers to the uncontrolled growth of cells that often forms a solid mass or tumor (neoplasm) and spreads to other areas of the body. A complex combination of inherited and acquired genetic mutations leads to tumor formation (tumorigenesis) and spreading (metastasis) (for comprehensive reviews, see refs 2, 3). During tumorigenesis, mutations activate growth factors (e.g., epidermal growth factor) and oncogenes (e.g., K-ras), in combination with inhibition of apoptosis, tumor suppressor, and cell cycle regulation genes (e.g., BRCA1, p53, and cyclins). As cancer progresses toward metastasis, additional genetic changes are required, such as loss of cell adhesion proteins (e.g., β-catenin and E-cadherin) and activation of angiogenesis genes (e.g., vascular endothelial growth factor) (Fig. 32.1). An understanding of these genetic mechanisms is the basis for many current and future cancer treatments.

FIGURE 32.1 Genetic changes associated with cancer. A combination of acquired and/or hereditary defects causes tumor formation and metastasis. These processes begin with unregulated proliferation and transformation, followed by invasion and loss of cellular adhesion. A rich vascular supply of oxygen and nutrients is necessary to facilitate growth of a tumor larger than 100 to 200 μm. APC, familial adenomatous polyposis coli, mutated in colorectal cancers; BRCA1, breast cancer susceptibility gene; E-cadherin, adhesion molecule; EGF, epithelial growth factor; MMP, matrix metalloproteinase; p53, cell cycle regulator, mutated in 50% of cancers; pRB, retinoblastoma protein, mutated in many cancers; Ras, small G protein, mutated in many cancers; TIMP, tissue inhibitor of metalloproteinase; VEGF, vascular endothelial growth factor, drug target for inhibition of angiogenesis. A combination of factors determines cancer severity and is used to classify its stage. Depending on the type of cancer, these factors include tumor size, histology, regional lymph node involvement, and presence of metastasis. For most solid tumors (e.g., breast, lung, and kidney), cancer is broadly classified (using Roman numerals I to IV) into four stages (Fig. 32.2). Stages correlate with disease severity, where higher stages are indicative of larger tumors and/or significant spreading and severe systemic disease. With disease progression, both proliferation and metastasis occur at the expense of normal organ processes, which is usually the ultimate cause of cancer-associated morbidity and mortality.

FIGURE 32.2 Generalized cancer staging and progression. Numerous factors are used in combination to define cancer stage; these include tumor size, extent of invasion, lymph node involvement, metastasis, and histologic assessments (basis for the TNM staging system). In this simplified diagram, stage is presented as a function of invasion and spreading regionally and to other tissues; the primary tumor is not shown.

TYPES OF TUMOR MARKERS Cancer can be detected and monitored using biologically relevant tumor markers. Tumor markers are produced either directly by the tumor or as an effect of the tumor on healthy tissue (host). Tumor markers encompass an array of diverse molecules such as serum proteins, oncofetal antigens, hormones, metabolites, receptors, and enzymes. A variety of enzymes are elevated nonspecifically in tumors (Table 32.2). These elevated enzymes are largely a result of the high metabolic demand of these proliferative cells. Accordingly, enzyme levels tend to correlate with tumor burden, making them clinically useful for monitoring the success of therapy. Serum proteins, such as β2-microglobulin and immunoglobulins, are also used to monitor cancer therapy (Table 32.3). β2-Microglobulin is found on the surface of

all nucleated cells and can, therefore, be used as a nonspecific marker of the high cell turnover common in tumors. In hematologic malignancies such as multiple myeloma, immunoglobulins provide a relatively specific measure of plasma cell production of monoclonal proteins. In endocrine malignancies, hormones and hormone metabolites are widely used as specific markers of secreting tumors (Table 32.4). Hormones can be valuable in diagnosing neuroblastomas, as well as pituitary and adrenal adenomas. One of the first classes of tumor markers discovered was the oncofetal antigens. Oncofetal antigens such as carcinoembryonic antigen (CEA) and α-fetoprotein (AFP) are expressed transiently during normal development and are then turned on again in the formation of tumors (see Table 32.5 for use of oncofetal antigens). Other tumor markers include monoclonal antibody-defined antigens identified from human tumor extracts and cell lines. These antibodies are directed toward specific carbohydrate or cancer antigens and are best used for monitoring treatment of tumors that express these antigens (Table 32.6). Finally, receptors are used to classify tumors for therapy (Table 32.7). These “nonserologic” markers are outside the scope of this chapter, but they are an important example of the diversity of tumor markers. Prototypic examples of such markers are the estrogen and progesterone receptors and growth factor receptors (HER-2), which are used to choose between endocrine and cytotoxic therapy; endocrine therapy, such as tamoxifen, typically is more effective in patients with ER- and PR-positive tumors. TABLE 32.2 Enzyme Tumor Markers

EA, enzyme assay; IA, immunoassay; IHC, immunohistochemistry; RIA, radioimmunoassay. TABLE 32.3 Serum Protein Tumor Markers

IA, immunoassay; IFE, immunofixation electrophoresis; SPE, serum protein electrophoresis. TABLE 32.4 Endocrine Tumor Markers

ACTH, adrenocorticotropic hormone; ADH, antidiuretic hormone; GH, growth hormone; HVA, homovanillic acid; 5-HIAA, 5-hydroxyindoleacetic acid; PTH, parathyroid hormone; PRL, prolactin; VMA, vanillylmandelic acid; ELISA, enzyme-linked immunosorbent assay; HPLC, high-performance liquid chromatography; IA, immunoassay; LC-MS/MS, liquid chromatography–tandem mass spectrometry; MTC, medullary thyroid carcinoma; RIA, radioimmunoassay; SIADH, syndrome of inappropriate antidiuretic hormone secretion. aScreening family members for MTC. bHVA and VMA are used in combination for diagnosis of neuroblastomas.

cMetanephrine, normetanephrine.

TABLE 32.5 Use of Serum α-Fetoprotein and Human Chorionic Gonadotropin for Testicular Cancer Classification

AFP, α-fetoprotein; hCG, human chorionic gonadotropin. TABLE 32.6 Carbohydrate and Cancer Antigen Tumor Markers

CA, cancer antigen. TABLE 32.7 Receptor Tumor Markers

ELISA, enzyme-linked immunosorbent assay; FISH, fluorescence in situ hybridization; IHC, immunohistochemistry. Tumor markers are an invaluable set of tools that health care providers can use for a variety of clinical purposes. Depending on the marker and the type of malignancy, tumor markers are in routine clinical use for screening, diagnosis, prognosis, therapy monitoring, and detection of recurrence (Fig. 32.3).

FIGURE 32.3 Tumor markers are used for screening, prognosis, treatment monitoring, and detecting recurrence of several types of cancer. Whereas few markers are used for screening, many are used to monitor therapy. Endocrine and hormone metabolite markers are often used to aid in diagnosis of secreting tumors. List is not comprehensive but provides examples of the most commonly used markers. Note: PSA screening remains controversial, see text.

APPLICATIONS OF TUMOR MARKER DETECTION The ideal tumor marker would be tumor specific, absent in healthy individuals, and readily detectable in body fluids. Unfortunately, none of the presently available tumor markers fits this ideal model. However, numerous tumor markers have been identified that have high enough specificity and sensitivity to be used on a targeted basis for aiding diagnosis, prognosis, detection of recurrence, and monitoring the response to treatment (Fig. 32.3). Clinically, tumor markers are used in combination with clinical signs, symptoms, and histology to facilitate clinical decision making.

Screening and Susceptibility Testing With the possible exception of prostate-specific antigen (PSA),* no tumor

marker identified to date can be used to effectively screen asymptomatic populations. This is because most of the clinically used tumor markers are found in normal cells and benign conditions in addition to cancer. Screening asymptomatic populations would therefore result in the detection of false positives (patients without disease but with a detectable tumor marker), leading to undue alarm and risk to patients (e.g., unnecessary imaging, biopsy, and surgery). Presently, only a few tumor markers are used to screen populations with a high incidence of disease (targeted screening). *Screening for prostate cancer remains controversial. Current strategies focus on informed decision-making support to enable patients to weigh the benefits of detecting disease early against the harms of overtreatment. Susceptibility to breast, ovarian, and colon cancer can be determined using molecular diagnostics to identify germ-line mutations in patients with a family history of these diseases. Screening for susceptibility to breast and ovarian cancers is done by identifying germ-line BRCA1 and BRCA2 mutations. Similarly, familial colon cancers can be identified by the presence of mutations in the adenomatous polyposis coli (APC) gene. Since greater than 99% of people with familial APC mutations develop colon cancer by the age of 40 years, prophylactic colectomy is routinely performed on APC+ patients. While gene testing can be done from blood samples, these tests are not considered circulating tumor markers and are therefore not discussed further.

Prognosis Tumor marker concentrations generally increase with tumor progression, reaching their highest levels when tumors metastasize. Therefore, serum tumor marker levels at diagnosis can reflect the aggressiveness of a tumor and help predict the outcome for patients. High concentrations of a serum tumor marker at diagnosis might indicate the presence of malignancy and possible metastasis, which is associated with a poorer prognosis. In other instances, the mere presence or absence of a particular marker may be valuable. Such is the case with some of the receptors used to guide chemotherapeutic treatment in breast cancer as described above, where endocrine therapy is indicated only in the presence of given markers.

Monitoring Effectiveness of Therapy and Disease Recurrence

One of the most useful and common applications of tumor markers is monitoring therapy efficacy and detecting disease recurrence. After surgical resection, radiation, or drug therapy of cancer (chemotherapy), tumor markers are routinely followed serially. In patients with elevated tumor markers at diagnosis, effective therapy results in a dramatic decrease or disappearance of the tumor marker. If the initial treatment was effective, the reappearance of a circulating tumor marker can then be used as a highly sensitive indicator of recurrence. Many tumor markers have a lead time of several months before disease would be detected by other modalities (e.g., imaging), allowing for earlier identification and treatment in cases of relapse.

LABORATORY CONSIDERATIONS FOR TUMOR MARKER MEASUREMENT The unique characteristics and concentrations of tumor markers require special laboratory considerations. Two major considerations are the variability of results between different manufacturers' assays, owing to the lack of harmonization and standardization, and the wide range of concentrations encountered clinically. Lack of standardization makes comparison of serial patient results using different assays treacherous. There are multiple reasons why these assays are not comparable, including differences in antibody specificity, analyte heterogeneity, assay design, lack of standard reference material, calibration, kinetics, and variation in reference ranges. To most accurately monitor tumor marker concentrations in a patient, it is important to use the same methodology (or kit). It is also important to perform diligent quality control during lot changes. This includes a careful comparison of QC material and patient samples because detection of tumor markers can vary widely between reagent lots; this is particularly a concern where polyclonal antibodies are used as reagents (e.g., serum free light chains). The other main consideration for tumor marker measurement is the wide range of concentrations encountered clinically. Tumor markers often vary in concentration by orders of magnitude, making accurate measurement challenging compared with routine chemistry analytes (e.g., concentration extremes for sodium are between 120 and 160 mmol/L, whereas human chorionic gonadotropin [hCG] may vary between 10 and 10,000,000 mIU/mL!). Handling these ranges requires careful attention to dilution protocols and the risk of antigen excess. These considerations are discussed in the context of specific methodology in the following sections.

Immunoassays Immunoassays are the most commonly used method to measure tumor markers. There are many advantages to this method, such as the ability to automate testing and relative ease of use. Many tumor markers are amenable to automation and relatively rapid analysis using large immunoassay or integrated chemistry test platforms. However, there are some unique factors to be considered when using immunoassays to measure tumor markers including assay linearity, antigen excess (hook effect), and the potential for heterophile antibodies.

COMMON CANCER TERMS Angiogenesis: Development of new blood vessels to supply oxygen and nutrients to cells Apoptosis: Programmed cell death Cell cycle: Phases of cell activity divided into G, S, and M (growth, DNA synthesis, and mitosis, respectively) Neoplasm: Synonymous with “tumor,” it refers to uncontrolled tissue growth; it may be cancerous (malignant) or noncancerous (benign). Derived from Greek meaning “new formation.” Oncogene: Encodes a protein that, when mutated, promotes uncontrolled cell growth Tumor suppressor gene: Encodes a protein involved in protecting cells from unregulated growth

Linearity The linear range is the span of analyte concentrations over which a linear relationship exists between the analyte and signal. Linearity is determined by analyzing (in replicates) specimens spanning the reportable range. Guidelines for this determination are outlined in the Clinical Laboratory Improvement Amendments (CLIA) guidelines for linearity.4 Samples exceeding the linear range, which is much more likely to occur in the detection of tumor markers, need to be systematically diluted to determine values within the reportable linear range. Dilutions must be done with careful consideration of the diluent and awareness of the risk of error if using manual calculations (it is common practice to have manual calculations reviewed by another individual). Excessively high

tumor marker concentrations can result in falsely low measurements, a phenomenon known as antigen excess or hook effect. Hook Effect When analyte concentrations excessively exceed the analytical range, there is potential for antigen excess or hook effect. When very high antigen concentrations are present, capture and/or label antibodies can be saturated, resulting in a lack of “sandwich” formation and thus in a significant decrease in signal. The name hook effect refers to the shape of the concentration–signal curve when the reagents are saturated with excess antigen (Fig. 32.4). In practical terms, the hook effect causes the actual tumor marker concentration to be grossly underestimated. If clinical suspicion is high for an elevated tumor marker, it can be identified by the laboratory with dilution and repeat testing. Samples displaying hook effect will yield higher (accurate) values on dilution (Fig. 32.4). This phenomenon typically affects only sandwich-type immunoassays.

FIGURE 32.4 Hook effect (antigen excess) can occur with tumor markers because they may be found at very high concentrations. When reagents are depleted by excess antigen (the tumor marker), falsely low results may occur (represented by the “neat” curve). Dilution of samples can be used to detect and account for hook effect (represented by the “diluted” line).
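To make the dilution check concrete, the following minimal sketch (not taken from the text; the assay, dilution factor, tolerance ratio, and example values are illustrative assumptions) back-calculates a dilution-corrected result and flags a possible hook effect when it greatly exceeds the neat result.

```python
def flag_hook_effect(neat_result, diluted_result, dilution_factor, ratio_threshold=2.0):
    """Screen for possible antigen excess (hook effect) in a sandwich immunoassay.

    neat_result     : concentration reported on the undiluted sample
    diluted_result  : concentration reported on the diluted sample (within the linear range)
    dilution_factor : e.g., 1000 for a 1:1,000 dilution
    ratio_threshold : illustrative cutoff; a dilution-corrected value this many times
                      higher than the neat value suggests the neat result was "hooked"
    """
    corrected = diluted_result * dilution_factor  # back-calculate to the neat sample
    suspect_hook = corrected > neat_result * ratio_threshold
    return corrected, suspect_hook

# Hypothetical hCG example: the neat tube reads 900 mIU/mL (falsely low), while a
# 1:1,000 dilution reads 8,500 mIU/mL, i.e., 8,500,000 mIU/mL after correction.
corrected, hook = flag_hook_effect(neat_result=900, diluted_result=8500, dilution_factor=1000)
print(f"dilution-corrected result: {corrected:,.0f} mIU/mL; possible hook effect: {hook}")
```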

Heterophile Antibodies Significant interference can be seen in immunoassays if an individual has circulating antibodies against human or animal immunoglobulin reagents. A subset of heterophilic antibodies are human antianimal antibodies or human antimouse antibodies (HAMAs). HAMAs are most commonly encountered in patients who have been given mouse monoclonal antibodies for therapeutic reasons or who have been exposed to mice,5 but they may be idiopathic. In patients, these antibodies can cause false-positive or, less commonly, false-negative results by cross-linking the capture/label antibody (see Chapter 8). To confirm that heterophilic antibodies are present, samples may be diluted and the linearity of the dilutions analyzed (similar to eliminating hook effect). Samples with heterophilic antibodies do not give linear results upon dilution (a simple numeric sketch of this check follows at the end of this section). The presence of antianimal immunoglobulins can also be detected directly with commercial reagents. Nonimmune animal serum is often added to immunoassays to minimize the effects of heterophilic antibodies, and there are commercial blocking reagents that can be used to remove HAMAs. Many monoclonal therapeutic agents are now designed to include only fragments of an antibody so that patients do not develop heterophilic antibodies to the full antibody. In the laboratory, heterophile antibodies can be detected by investigating results that are inconsistent with the history and clinical scenario. Common Analytical Concerns Applied to Tumor Marker Immunoassays Immunoassays for tumor markers can be affected by interference from icterus, lipemia, hemolysis, and antibody cross-reactivity in the same manner as other immunoassays. As with all automated tests, the potential for carryover with high levels of tumor marker analytes can also be a concern, leading to falsely elevated levels in patients if adequate washing steps are not included between patient samples.
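As a rough numeric illustration of that dilution-linearity check (the 80% to 120% acceptance window and the example values are assumptions for illustration, not published criteria), the sketch below computes percent recovery at each step of a dilution series; recoveries that drift away from about 100% suggest nonlinear behavior such as heterophilic antibody interference.

```python
def dilution_recovery(neat_result, diluted_results):
    """Return percent recovery versus the neat result for each dilution of a sample.

    diluted_results : dict mapping dilution factor -> concentration measured on that dilution
    """
    return {
        factor: 100.0 * (measured * factor) / neat_result
        for factor, measured in diluted_results.items()
    }

# Hypothetical sample in which an interfering antibody does not dilute out proportionally,
# so the dilution-corrected values climb instead of holding near the neat result.
recoveries = dilution_recovery(neat_result=220, diluted_results={2: 150, 4: 95, 8: 60})
for factor, recovery in recoveries.items():
    status = "acceptable" if 80 <= recovery <= 120 else "nonlinear - investigate interference"
    print(f"1:{factor} dilution: {recovery:.0f}% recovery -> {status}")
```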

High-Performance Liquid Chromatography High-performance liquid chromatography (HPLC) is commonly used to detect small molecules, such as endocrine metabolites. With respect to tumor markers, HPLC is used to detect catecholamine metabolites in plasma and urine. Generally, there is an extraction process by which the analytes of interest are separated from either plasma or urine. The extracts are then applied to a column, where the analytes are separated by their physical characteristics (charge, size, and polarity).

Catecholamines and catecholamine metabolites are used to help diagnose carcinoid tumors, pheochromocytoma, and neuroblastoma. Neuroblastoma is a common malignant tumor of childhood, accounting for approximately 7.5% of malignancies in children under 15 years of age. Neuroblastoma is diagnosed by the detection of high levels of plasma epinephrine, norepinephrine, and dopamine (catecholamines). Pheochromocytoma, a rare tumor associated with hypertension, is diagnosed by detecting elevated plasma metanephrines (along with urine vanillylmandelic acid and free catecholamines). Carcinoid tumors are serotonin-secreting tumors that arise from the small intestine, appendix, or rectum, leading to a host of symptoms (carcinoid syndrome), including pronounced flushing, bronchial constriction, cardiac valve lesions, and diarrhea. The diagnosis of carcinoid tumors involves the detection of 5-hydroxyindoleacetic acid, which is a serotonin metabolite. In all of these cases, HPLC is used to detect the hormones and metabolites secreted by these tumors for diagnosis, therapeutic monitoring, and detection of recurrence. HPLC is not subject to hook effect, lot-to-lot antibody variation, or heterophile antibodies but is more labor intensive and requires more experience and skill than automated immunoassays.

Immunohistochemistry and Immunofluorescence Although they are not found in circulation, solid tissue tumor markers are important for laboratorians to be familiar with. These are identified in tissue sections, typically from fine-needle aspirate or biopsy samples. Specific antibodies (and the proper control antibodies) are incubated with tissue sections to detect the presence (or absence) of antigens using colorimetric or fluorescent secondary antibodies. In many ways, this is similar to detection by immunoassay, but the added value is the ability to determine whether the antigen in question is in a particular cell type (such as a tumor) and in a specific subcellular location. A good example of the use of a tumor marker that is detected by immunohistochemistry is the identification of estrogen and progesterone receptors in breast cancer. When breast tumors are positive for estrogen and progesterone receptors, they tend to respond to hormonal therapy, while tumors lacking these receptors are treated with other chemotherapeutic modalities.6

Enzyme Assays The detection of elevated circulating enzymes generally cannot be used to identify a specific tumor or site of tumor. One key exception to this is the PSA, which is in fact an enzyme; PSA is a serine protease of the kallikrein family, found in both diseased and benign prostate glands (it is also found in low

concentrations in amniotic fluid, breast milk, and some other cancers). Before the widespread use of immunoassays and the discovery of oncofetal antigens, enzymes were widely used as tumor markers. When cells die (autolysis/necrosis) or undergo changes in membrane permeability, enzymes are released from intracellular pools into circulation, where they are readily detected. Examples of enzymes that have been used as tumor markers include alkaline phosphatase (bone, liver, leukemia, and sarcoma), lactate dehydrogenase (liver, lymphomas, leukemia, and others), and of course PSA (prostate). Enzyme activity assays (see also Chapter 13) are used to quantify all of these enzymes, with the exception of PSA, which is measured by immunoassay.

FREQUENTLY ORDERED TUMOR MARKERS

α-Fetoprotein AFP is an abundant serum protein normally synthesized by the fetal liver that is reexpressed in certain types of tumors. This reexpression during malignancy classifies AFP as a carcinoembryonic protein. AFP is often elevated in patients with hepatocellular carcinoma (HCC) and germ cell tumors.

Regulation and Physiology AFP is a 70-kD glycoprotein related to albumin that normally functions as a transport protein. Like albumin, it is involved in regulating fetal oncotic pressure. During development, AFP peaks at approximately one-tenth the concentration of albumin at 30 weeks of gestation. The upper normal limit for serum AFP is approximately 15 ng/mL (reference intervals are method dependent) in healthy adults. Infants initially have high serum AFP values that decline to adult levels at an age of 7 to 10 months.7

Clinical Application and Interpretation AFP is used for the diagnosis, staging, prognosis, and treatment monitoring of HCC. Also known as hepatoma, HCC is a tumor that originates in the liver, often due to chronic diseases such as hepatitis and cirrhosis. Patients with HCC frequently have elevated serum AFP. However, as with most tumor markers, AFP is not completely specific. For example, AFP can also be increased in benign conditions such as pregnancy and nonmalignant liver disease, as

well as other types of malignancies (e.g., testicular cancer—see below). Although it is not widely used for screening in Europe and North America, AFP has been used to detect HCC in populations with high disease prevalence, such as in China. When used for screening high-risk populations, AFP has a sensitivity ranging from 40% to 65% and a specificity of 80% to 95% (at cutoffs ranging from 20 to 30 ng/mL).8 Very high levels of AFP (>500 ng/mL) in high-risk individuals are considered diagnostic of HCC.9 Several expert groups, including the National Comprehensive Cancer Network, National Academy of Clinical Biochemistry, and the British Society of Gastroenterology, now recommend that AFP be used in conjunction with ultrasound imaging every 6 months in patients at high risk for developing HCC. This includes patients with hepatitis B virus–induced and/or hepatitis C virus–induced liver cirrhosis.10 High levels of AFP in HCC are associated with poor prognosis, as exemplified by individuals who do not respond to therapy or have residual disease following surgery. Correspondingly, a decrease in circulating AFP levels after treatment is associated with prolonged survival rates. It is therefore recommended that serial measurements of AFP be used to monitor treatment response and postsurgical status in patients with HCC. The other major use for AFP as a tumor marker is for classification and monitoring of therapy for testicular cancer. Testicular cancer includes several subtypes broadly classified into seminomatous and nonseminomatous tumors. Seminomatous tumors form directly from malignant germ cells, whereas nonseminomatous tumors differentiate into embryonal carcinoma, teratoma, choriocarcinoma, and yolk sac tumors (endodermal sinus tumors).11 AFP is used in combination with β-human chorionic gonadotropin (β-hCG) to classify nonseminomatous tumors (Table 32.5). Serum AFP is also useful for tumor staging; AFP is increased in 10% to 20% of stage I tumors, 50% to 80% of stage II tumors, and 90% to 100% of stage III nonseminomatous testicular cancers. As with HCC, AFP can be used serially to monitor therapy efficacy and disease progression, where increases are indicative of relapse or resistance.
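As a hypothetical illustration of why AFP screening is restricted to high-prevalence groups, the sketch below applies Bayes' theorem to mid-range performance figures from the ranges quoted above (sensitivity ~50%, specificity ~90%); the 5% and 0.05% prevalence values are assumptions chosen only to show the contrast, not figures from the text.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability of disease given a positive screening result (Bayes' theorem)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Same assay performance, two very different populations: a high-risk surveillance
# group (assumed 5% prevalence of HCC) versus a general population (assumed 0.05%).
for prevalence in (0.05, 0.0005):
    ppv = positive_predictive_value(sensitivity=0.50, specificity=0.90, prevalence=prevalence)
    print(f"assumed prevalence {prevalence:.2%}: PPV of an elevated AFP is about {ppv:.1%}")
```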

METHODOLOGY AFP is measured using any of a variety of commercially available automated immunoassays. These are typically sandwich immunoassays relying on monoclonal or polyclonal antibodies directed toward different regions of AFP. Serial monitoring of AFP should be done using the same laboratory and assay

method to ensure changes (or lack of change) are due to the tumor and not to assay variation. As with other glycoproteins, AFP displays some heterogeneity, where certain isoforms are preferentially produced by malignant cells; AFP isoforms differ in their glycosylation and sialylation. Antibodies against these isoforms produced by malignant cells may in the future be used to improve the specificity of AFP immunoassays.

Application and Pathophysiology The primary applications of AFP as a tumor marker are for HCC and nonseminomatous testicular cancer. AFP is typically used as a marker to monitor therapy, detect residual tumor, or detect relapse; AFP is also used as part of maternal serum screening for neural tube defects and chromosomal abnormalities.

Cancer Antigen 125 Cancer antigen 125 (CA-125) was first defined by a murine monoclonal antibody raised against a serous ovarian carcinoma cell line.10 CA-125 may be useful for detecting ovarian tumors at an early stage and for monitoring treatments without surgical restaging.

Regulation and Physiology CA-125 is expressed in the ovary, in other tissues of müllerian duct origin, and in human ovarian carcinoma cells. The CA-125 gene encodes a high molecular weight (200,000 to 1,000,000 Da) mucin protein containing a putative transmembrane region and a tyrosine phosphorylation site.12 Although it is not usually found in serum, CA-125 may be elevated in patients with endometriosis, during the first trimester of pregnancy, or during menstruation.

Clinical Application and Interpretation CA-125 is a serologic marker of ovarian cancer. Ovarian cancer accounts for approximately 3% of the newly diagnosed malignancies in women and is among the top five causes of cancer-related death (Table 32.1). Ovarian cancer includes a broad range of categories, including sex cord tumors, stromal tumors, germ cell tumors, and, most commonly, epithelial cell tumors. As with most other tumor markers, CA-125 should not be used to screen for ovarian cancer in asymptomatic individuals. However, CA-125 is elevated in a high percentage of

ovarian tumors and is recommended as an annual test for women with a family or prior history of ovarian cancer. CA-125 levels also correlate with ovarian cancer stage. CA-125 is elevated in 50% of patients with stage I disease, 90% of patients with stage II, and more than 90% of patients with stage III or IV. Other tumor markers for ovarian cancer are under development. Human epididymis protein 4 (HE4) is another Food and Drug Administration (FDA)–cleared test for ovarian cancer. HE4 offers improved specificity over CA-125, because CA-125 is also elevated in nonmalignant conditions, such as endometriosis. New candidate tumor markers continue to be identified and published, with additional markers on the way.

CASE STUDY 32.1 A 33-year-old man with a history of chronic liver disease presents with edema, abdominal pain, and recent weight loss. Laboratory examination reveals a low platelet count, hypoalbuminemia, and prolonged prothrombin time and partial thromboplastin time.

questions 1. Which tumor marker may aid in diagnosing this patient? 2. What additional laboratory tests would be useful in diagnosing this patient? 3. The patient is treated with surgery; how should tumor markers be used to determine the success of surgery?

Methodology CA-125 can be detected by immunoassays that use the OC125 and M11 antibodies. These monoclonal antibodies recognize distinct, nonoverlapping epitopes on the CA-125 antigen. CA-125 testing is available on many automated platforms. However, results from different platforms are not interchangeable due to differences between reagent detection methods.

Application and Pathophysiology

CA-125 is predominantly used to monitor therapy and to distinguish benign masses from ovarian cancer.13 For example, in postmenopausal women with a palpable abdominal mass, a high level (>95 U/mL) of CA-125 has a 90% positive predictive value for ovarian cancer. For therapy monitoring, CA-125 is useful both for predicting the success of surgery (debulking procedures) and for determining the efficacy of chemotherapy. Accordingly, patients with persistently elevated CA-125 following either treatment modality have a poor prognosis. Prognosis and CA-125 half-life are related; a CA-125 half-life of less than 20 days is associated with longer survival; the average half-life of CA-125 is 4.5 days.14,15 The upper normal limit for serum CA-125 is typically 35 U/mL.
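Because the rate of CA-125 decline carries prognostic information, an apparent half-life can be estimated from two serial post-treatment results, assuming simple first-order (exponential) clearance. The sketch below is a hypothetical calculation rather than a validated clinical algorithm; only the 20-day prognostic cutoff comes from the text.

```python
import math

def apparent_half_life(first_value, second_value, days_between):
    """Apparent half-life (days) assuming first-order decay between two serial results."""
    if second_value >= first_value:
        raise ValueError("The marker did not decline, so a half-life cannot be estimated.")
    return days_between * math.log(2) / math.log(first_value / second_value)

# Hypothetical patient: CA-125 falls from 480 to 120 U/mL over the 21 days after surgery.
half_life = apparent_half_life(first_value=480, second_value=120, days_between=21)
comparison = "shorter than" if half_life < 20 else "at or above"
print(f"apparent CA-125 half-life of {half_life:.1f} days, "
      f"{comparison} the 20-day cutoff associated with longer survival")
```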

Carcinoembryonic Antigen CEA was discovered in the 1960s and is a prototypical example of an oncofetal antigen; it is expressed during development and then reexpressed in tumors. CEA is the most widely used tumor marker for colorectal cancer and is also frequently elevated in lung, breast, and gastrointestinal tumors. CEA can be used to aid in the diagnosis, prognosis, and therapy monitoring of colorectal cancer. Although high levels of CEA (>10 ng/mL) are frequently associated with malignancy, high levels of CEA are not specific for colorectal cancer, and therefore, CEA is not used for screening.

Regulation and Physiology CEA is a large heterogeneous glycoprotein with a molecular weight of approximately 200 kDa. It is part of the immunoglobulin superfamily and is involved in apoptosis, immunity, and cell adhesion. Because of its role in cell adhesion, CEA has been postulated to be involved in metastasis. Akin to other serologic tumor markers, CEA may be elevated nonspecifically because of impaired clearance or through increased production. Increased CEA concentrations have been observed in heavy smokers and in some patients following radiation treatment and chemotherapy. CEA may also be elevated in patients with liver damage due to prolonged clearance. The upper normal range for serum CEA is 3 to 5 ng/mL depending on the assay.

Clinical Application and Interpretation The main clinical use of CEA is as a marker for colorectal cancer. In colon cancer, CEA is used for prognosis, in postsurgery surveillance, and to monitor response to chemotherapy. For prognosis, CEA can be used in combination with

histology and the TNM (see definition box) staging system to establish the need for adjuvant therapy (the addition of chemotherapy or other treatment after surgery). Adjuvant therapy is indicated in patients with stage II disease (i.e., the tumor has grown beyond the colon wall but has not spread to the lymph nodes) who have high levels of CEA.16

Methodology Although CEA assays historically used polyclonal antibodies, these have largely been replaced by the use of monoclonal anti-CEA antibodies. CEA is available on numerous commercial automated platforms. Due to the high heterogeneity of CEA, it is essential that the same assay method is applied to serial monitoring.

Application and Pathophysiology Before surgical resection, baseline CEA values are typically obtained so that the postoperative decline can confirm successful removal of the tumor burden. After surgery and during chemotherapy, it is recommended that CEA levels be serially monitored every 2 to 3 months to detect recurrence and determine therapy efficacy; the half-life of CEA is approximately 2 to 8 days depending on the assay and the individual. If treatment is successful, CEA levels should drop into the reference interval in 1 to 4 months. CEA is not recommended for screening asymptomatic individuals for colorectal cancer. While there are no specific guidelines recommending the use of CEA in other types of cancer, it may be of value for detecting recurrence of antigen-positive breast and gastrointestinal cancers and medullary thyroid carcinoma and to aid in the diagnosis of non–small cell lung cancer.
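Under the same first-order clearance assumption, a back-of-the-envelope estimate can be made of how long an elevated preoperative CEA should take to fall below the upper reference limit after complete resection. The baseline value and the 5 ng/mL limit below are assumptions; the 2- to 8-day half-life range comes from the text, and because real postoperative declines are often slower than this idealized model, observed normalization commonly takes the 1 to 4 months noted above.

```python
import math

def days_to_reach(baseline, target, half_life_days):
    """Days for a marker to decay from baseline to target, assuming first-order clearance."""
    if baseline <= target:
        return 0.0
    return half_life_days * math.log(baseline / target) / math.log(2)

# Hypothetical preoperative CEA of 80 ng/mL and an upper reference limit of 5 ng/mL,
# evaluated across the 2- to 8-day half-life range cited in the text.
for half_life in (2, 8):
    days = days_to_reach(baseline=80, target=5, half_life_days=half_life)
    print(f"half-life {half_life} days: roughly {days:.0f} days for CEA to fall below 5 ng/mL")
```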

TNM STAGING SYSTEM T—tumor size and involvement/invasion of nearby tissue; scale 0–4 N—regional lymph nodes involvement; scale 0–3 M—metastasis; extent of tumor spreading from one tissue to another; scale 0–1 Example grading of a tumor: T1 N0 M0 = small tumor, no nodal involvement, and no metastasis

Human Chorionic Gonadotropin hCG is a dimeric hormone normally secreted by trophoblasts of the implanting blastocyst and the placenta to maintain the corpus luteum through the first trimester of pregnancy. Some types of tumor invasion are actually similar to uterine implantation, except that implantation in pregnancy is regulated and limited. hCG is elevated in trophoblastic tumors, mainly choriocarcinoma, and in germ cell tumors of the ovary and testis.

Regulation and Physiology hCG is a 45-kD glycoprotein consisting of α- and β-subunits. A unique aspect of hCG is that it is degraded into multiple fragments. In serum, this results in the presence of the intact molecule, nicked hCG, the free β-subunit (β-hCG), and a hyperglycosylated intact form. Either intact hCG or the free β-subunit may be elevated in malignancies, and most assays detect multiple fragments of hCG.

CASE STUDY 32.2 A 65-year-old man presents to the emergency department after he had abnormally tarry-colored stool on multiple occasions. He has had gastrointestinal discomfort and has felt increasingly tired during the past 2 months. Physical examination reveals a guaiac-positive stool. A subsequent colonoscopy identified a circumferential mass in the sigmoid colon. A biopsy was performed, which identified the mass as an adenocarcinoma. A CEA level was obtained as part of the presurgery workup.

questions 1. Is the CEA test useful as a screening test for colon carcinoma? 2. What other conditions can result in elevated CEA levels? 3. How is CEA used to monitor patients after surgery for colon cancer?

Clinical Application and Interpretation hCG has several clinical applications as a tumor marker. It is a prognostic

indicator for ovarian cancer, a diagnostic marker for classification of testicular cancer, and the most useful marker for detection of gestational trophoblastic diseases (GTDs).2 GTDs include four distinct types of tumors (hydatidiform mole, persistent/invasive gestational trophoblastic neoplasia, choriocarcinoma, and placental site trophoblastic tumors) that are classified by clinical history, ultrasound, histology, and hCG levels. hCG is invariably elevated in women with GTDs17 and is often found at higher levels than are observed in normal pregnancy (i.e., >100,000 mIU/mL). It is a particularly helpful marker for monitoring GTD therapy, as levels of hCG correlate with tumor mass and prognosis; notably, hCG is not actually cleared by the FDA for use as a tumor marker despite its widespread utility.

Methodology hCG can be measured by using any of a variety of widely available automated immunoassays. Typical assays use monoclonal capture and tracer antibodies targeted toward epitopes in the β-subunit and intact hCG. Total β-hCG assays are the most useful assays because they detect both intact hormone and free β-hCG. Due to the variability in hCG assays,18 it is imperative that patients be monitored with the same technique. It is also important for laboratories to be aware of the relative cross-reactivity of their assay with different hCG isoforms; because hCG assays are designed to detect pregnancy, they are not all equivalent for application as tumor markers.

Application and Pathophysiology In testicular cancer, the free β-hCG subunit is elevated in 60% to 70% of patients with nonseminomas. hCG can be used in combination with AFP and biopsies to diagnose subtypes of testicular cancer (Table 32.5). Ectopic β-hCG is also occasionally elevated in ovarian cancer and some lung cancers. In practice, free β-hCG is sensitive and specific for aggressive neoplasms; the free β-hCG is not detectable in the serum of healthy subjects.

Prostate-Specific Antigen PSA is a 28-kD glycoprotein produced in the epithelial cells of the acini and ducts of the prostate gland. It is a serine protease of the kallikrein gene family. It functionally regulates seminal fluid viscosity and is instrumental in dissolving the cervical mucus cap, allowing sperm to enter.

Regulation and Physiology In healthy men, low circulating levels of PSA can be detected in the serum. There are two major forms of PSA found circulating in the blood: (1) free and (2) complexed. Most of the circulating PSA is complexed to α1-antichymotrypsin or α2-macroglobulin. Assays to detect total and free PSA have been developed. While the detection of total PSA has been used in screening for and in monitoring of prostate cancer, evidence also supports the usefulness of measuring free PSA as a fraction of total PSA. Patients with malignancy have a lower percentage of free PSA. As with other tumor markers, PSA is not entirely specific. Men with benign prostatic hyperplasia (BPH) and prostatitis can also have high PSA levels. Additional markers, such as prostate cancer gene 3 (PCA3), are starting to be used to address this lack of specificity, though at this time prostate cancer tumor markers remain controversial and are actively being researched to improve sensitivity and specificity.
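The free and complexed forms are commonly combined into a percent free PSA (free PSA divided by total PSA); the sketch below shows the arithmetic with a purely illustrative 25% decision threshold, since cutoffs vary by assay and guideline and are not specified in the text.

```python
def percent_free_psa(free_psa, total_psa):
    """Percent free PSA = free PSA / total PSA x 100 (both in ng/mL)."""
    return 100.0 * free_psa / total_psa

# Hypothetical patient: total PSA 6.0 ng/mL, free PSA 0.9 ng/mL -> 15% free PSA.
pct_free = percent_free_psa(free_psa=0.9, total_psa=6.0)
threshold = 25.0  # illustrative cutoff only; lower percent free PSA favors malignancy over BPH
interpretation = ("suggests increased risk of malignancy" if pct_free < threshold
                  else "favors benign disease such as BPH")
print(f"percent free PSA: {pct_free:.0f}% -> {interpretation}")
```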

CASE STUDY 32.3 A 25-year-old man with a history of testicular cancer is followed post surgery over the course of 10 months, with β-hCG and AFP monitored. The patient is treated with radiation at 2 months, followed by chemotherapy (taxol, ifosfamide, and cisplatin) from months 6 through 9 (see Case Study Fig. 32.3.1).

CASE STUDY FIGURE 32.3.1 Time course of hCG and AFP in patient with testicular cancer. The patient was treated with radiation at 2 months

and then again with chemotherapy (taxol, ifosfamide, and cisplatin) from months 6 through 9. Reference range for AFP is less than 10 μg/L and that for hCG is less than 5 U/L.

questions 1. What type of germ cell tumor might this patient have based on the serum AFP and β-hCG levels? 2. Explain the pattern of AFP and hCG observed in the graph. 3. Can a final diagnosis be made based only on the tumor marker findings? If not, why not?

Clinical Application and Interpretation Several large clinical trials have shown that although PSA screening can reduce mortality from prostate cancer, the reduction in overall risk is small.19, 20, 21 Moreover, harm from screening in the form of biopsies and treatment may outweigh the benefits. Thus, current recommendations focus on informed decision making, where the risks and benefits are outlined and individual patients decide whether or not to be screened.22, 23, 24 Conversations about screening should begin at age 50 (American Cancer Society and American Urological Association).25 Men at higher risk (first-degree relative with prostate cancer or African American) may consider screening between 40 and 45 years of age. For serial monitoring, a 2-year screening interval may be appropriate for men with PSA levels less than 2.0 ng/mL. Screening should always include a digital rectal examination (DRE). Screening utility decreases with age and is not appropriate for men with less than a 10-year life expectancy. In addition to the use of standard cutoff values of total PSA (
