Information

What range of dose should be used?


This is a dose-response experiment testing a new cancer drug. The darker line represents cancer cells. What range of dose should be used? I think it's 2-4 because this range affects cancer cells only. Is this correct?


In the most basic sense, you want to kill the most cancerous cells whilst minimizing regular somatic cell death. Almost all cancer medications affect regular cells too, though the better ones do so minimally whilst remaining effective. In reality, it's also nearly impossible to kill all of the cancerous cells. The goal is to bring them below detectable levels, which can allow the body to finish the job. Leaving significant numbers of cancerous cells alive won't do the patient any good - they'll just continue to proliferate and the patient will be back for more operations or treatments soon.

So, with the goal of minimizing benign cell cost and completely eradicating the cancerous line, on your crude chart that falls at about "4".
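To make that reasoning concrete, here is a minimal sketch in Python. The survival fractions are invented stand-ins for values read off a chart like the one described (the actual figure is not reproduced here), and the two thresholds are assumptions for illustration, not clinical rules.

```python
import numpy as np

# Hypothetical survival fractions read off a dose-response chart
# (assumed values, not the actual data from the question's figure).
doses = np.array([0, 1, 2, 3, 4, 5, 6])          # arbitrary dose units
cancer_survival = np.array([1.0, 0.8, 0.4, 0.1, 0.0, 0.0, 0.0])
normal_survival = np.array([1.0, 1.0, 0.95, 0.9, 0.85, 0.5, 0.2])

# Pick the lowest dose that drives cancer-cell survival below a
# detection-like threshold while keeping normal-cell survival acceptable.
cancer_threshold = 0.01   # "below detectable levels"
normal_threshold = 0.80   # acceptable healthy-cell survival (assumed)

candidates = [d for d, c, n in zip(doses, cancer_survival, normal_survival)
              if c <= cancer_threshold and n >= normal_threshold]
print("Candidate doses:", candidates)                       # -> [4]
print("Suggested dose:", min(candidates) if candidates else None)
```

With these made-up numbers the search lands on a dose of about 4, matching the reasoning above: the lowest dose that fully clears the cancerous line while sparing most normal cells.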


Community-acquired pneumonia:
Oral:
-Immediate-release: 500 mg orally as a single dose on day 1, followed by 250 mg orally once a day on days 2 to 5
-Extended-release: 2 g orally once as a single dose

Parenteral: 500 mg IV once a day as a single dose for at least 2 days, followed by 500 mg (immediate-release formulation) orally to complete a 7- to 10-day course of therapy

Comment: Extended-release formulations should be taken on an empty stomach.

Uses:
-Treatment of mild community-acquired pneumonia due to Chlamydophila pneumoniae, Haemophilus influenzae, Mycoplasma pneumoniae, or Streptococcus pneumoniae in patients appropriate for oral therapy
-Treatment of community-acquired pneumonia due to C pneumoniae, H influenzae, Legionella pneumophila, Moraxella catarrhalis, M pneumoniae, or S pneumoniae in patients who require initial IV therapy


Abilify Dosage

Medically reviewed by Drugs.com. Last updated on July 8, 2020.

Generic name: ARIPIPRAZOLE 2mg
Dosage form: tablet, oral solution, orally disintegrating tablet, injection

Schizophrenia

The recommended starting and target dose for ABILIFY is 10 or 15 mg/day administered on a once-a-day schedule without regard to meals. ABILIFY has been systematically evaluated and shown to be effective in a dose range of 10 to 30 mg/day when administered as the tablet formulation; however, doses higher than 10 or 15 mg/day were not more effective than 10 or 15 mg/day. Dosage increases should generally not be made before 2 weeks, the time needed to achieve steady-state [see Clinical Studies (14.1)].

Maintenance Treatment: Maintenance of efficacy in schizophrenia was demonstrated in a trial involving patients with schizophrenia who had been symptomatically stable on other antipsychotic medications for periods of 3 months or longer. These patients were discontinued from those medications and randomized to either ABILIFY 15 mg/day or placebo, and observed for relapse [see Clinical Studies (14.1)] . Patients should be periodically reassessed to determine the continued need for maintenance treatment.

Adolescents: The recommended target dose of ABILIFY is 10 mg/day. Aripiprazole was studied in adolescent patients 13 to 17 years of age with schizophrenia at daily doses of 10 and 30 mg. The starting daily dose of the tablet formulation in these patients was 2 mg, which was titrated to 5 mg after 2 days and to the target dose of 10 mg after 2 additional days. Subsequent dose increases should be administered in 5 mg increments. The 30 mg/day dose was not shown to be more efficacious than the 10 mg/day dose. ABILIFY can be administered without regard to meals [see Clinical Studies (14.1)]. Patients should be periodically reassessed to determine the need for maintenance treatment.

Switching from Other Antipsychotics

There are no systematically collected data to specifically address switching patients with schizophrenia from other antipsychotics to ABILIFY or concerning concomitant administration with other antipsychotics. While immediate discontinuation of the previous antipsychotic treatment may be acceptable for some patients with schizophrenia, more gradual discontinuation may be most appropriate for others. In all cases, the period of overlapping antipsychotic administration should be minimized.

Bipolar I Disorder

Acute Treatment of Manic and Mixed Episodes

Adults: The recommended starting dose in adults is 15 mg given once daily as monotherapy and 10 mg to 15 mg given once daily as adjunctive therapy with lithium or valproate. ABILIFY can be given without regard to meals. The recommended target dose of ABILIFY is 15 mg/day, as monotherapy or as adjunctive therapy with lithium or valproate. The dose may be increased to 30 mg/day based on clinical response. The safety of doses above 30 mg/day has not been evaluated in clinical trials.

Pediatrics: The recommended starting dose in pediatric patients (10 to 17 years) as monotherapy is 2 mg/day, with titration to 5 mg/day after 2 days, and a target dose of 10 mg/day after 2 additional days. Recommended dosing as adjunctive therapy to lithium or valproate is the same. Subsequent dose increases, if needed, should be administered in 5 mg/day increments. ABILIFY can be given without regard to meals [see Clinical Studies (14.2)] .

Adjunctive Treatment of Major Depressive Disorder

The recommended starting dose for ABILIFY as adjunctive treatment for patients already taking an antidepressant is 2 to 5 mg/day. The recommended dosage range is 2 to 15 mg/day. Dosage adjustments of up to 5 mg/day should occur gradually, at intervals of no less than 1 week [see Clinical Studies (14.3)] . Patients should be periodically reassessed to determine the continued need for maintenance treatment.

Irritability Associated with Autistic Disorder

Pediatric Patients (6 to 17 years)

The recommended dosage range for the treatment of pediatric patients with irritability associated with autistic disorder is 5 to 15 mg/day.

Dosing should be initiated at 2 mg/day. The dose should be increased to 5 mg/day, with subsequent increases to 10 or 15 mg/day if needed. Dose adjustments of up to 5 mg/day should occur gradually, at intervals of no less than 1 week [see Clinical Studies (14.4)] . Patients should be periodically reassessed to determine the continued need for maintenance treatment.

Tourette's Disorder

Pediatric Patients (6 to 18 years)

The recommended dosage range for Tourette's Disorder is 5 to 20 mg/day.

For patients weighing less than 50 kg, dosing should be initiated at 2 mg/day with a target dose of 5 mg/day after 2 days. The dose can be increased to 10 mg/day in patients who do not achieve optimal control of tics. Dosage adjustments should occur gradually at intervals of no less than 1 week.

For patients weighing 50 kg or more, dosing should be initiated at 2 mg/day for 2 days, and then increased to 5 mg/day for 5 days, with a target dose of 10 mg/day on day 8. The dose can be increased up to 20 mg/day for patients who do not achieve optimal control of tics. Dosage adjustments should occur gradually in increments of 5 mg/day at intervals of no less than 1 week. [See Clinical Studies (14.5)].

Patients should be periodically reassessed to determine the continued need for maintenance treatment.

Agitation Associated with Schizophrenia or Bipolar Mania (Intramuscular Injection)

The recommended dose in these patients is 9.75 mg. The recommended dosage range is 5.25 to 15 mg. No additional benefit was demonstrated for 15 mg compared to 9.75 mg. A lower dose of 5.25 mg may be considered when clinical factors warrant. If agitation warranting a second dose persists following the initial dose, cumulative doses up to a total of 30 mg/day may be given. However, the efficacy of repeated doses of ABILIFY injection in agitated patients has not been systematically evaluated in controlled clinical trials. The safety of total daily doses greater than 30 mg or injections given more frequently than every 2 hours has not been adequately evaluated in clinical trials [see Clinical Studies (14.6)].

If ongoing ABILIFY therapy is clinically indicated, oral ABILIFY in a range of 10 to 30 mg/day should replace ABILIFY injection as soon as possible [see Dosage and Administration (2.1 and 2.2)] .

Administration of ABILIFY Injection

To administer ABILIFY Injection, draw up the required volume of solution into the syringe as shown in Table 1. Discard any unused portion.

Table 1: ABILIFY Injection Dosing Recommendations
Single Dose     Required Volume of Solution
5.25 mg         0.7 mL
9.75 mg         1.3 mL
15 mg           2 mL
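The three rows of Table 1 all work out to the same concentration, 7.5 mg/mL (for example, 9.75 mg / 1.3 mL). As a quick arithmetic check of the table under that assumption:

```python
# Volume lookup consistent with Table 1, assuming a 7.5 mg/mL solution
# (each table row divides out to that concentration).
CONCENTRATION_MG_PER_ML = 7.5

def injection_volume_ml(dose_mg: float) -> float:
    return round(dose_mg / CONCENTRATION_MG_PER_ML, 2)

for dose in (5.25, 9.75, 15):
    print(f"{dose} mg -> {injection_volume_ml(dose)} mL")
# 5.25 mg -> 0.7 mL, 9.75 mg -> 1.3 mL, 15 mg -> 2.0 mL
```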

ABILIFY Injection is intended for intramuscular use only. Do not administer intravenously or subcutaneously. Inject slowly, deep into the muscle mass.

Parenteral drug products should be inspected visually for particulate matter and discoloration prior to administration, whenever solution and container permit.

Dosage Adjustments for Cytochrome P450 Considerations

Dosage adjustments are recommended in patients who are known CYP2D6 poor metabolizers and in patients taking concomitant strong CYP3A4 inhibitors, strong CYP2D6 inhibitors, or strong CYP3A4 inducers (see Table 2). When the coadministered drug is withdrawn from the combination therapy, the ABILIFY dosage should then be adjusted back to its original level. When the coadministered CYP3A4 inducer is withdrawn, the ABILIFY dosage should be reduced to the original level over 1 to 2 weeks. In patients who may be receiving a combination of strong, moderate, and weak inhibitors of CYP3A4 and CYP2D6 (e.g., a strong CYP3A4 inhibitor and a moderate CYP2D6 inhibitor, or a moderate CYP3A4 inhibitor with a moderate CYP2D6 inhibitor), the dose may initially be reduced to one-quarter (25%) of the usual dose and then adjusted to achieve a favorable clinical response.

Table 2: Dose Adjustments for ABILIFY in Patients who are known CYP2D6 Poor Metabolizers and Patients Taking Concomitant CYP2D6 Inhibitors, 3A4 Inhibitors, and/or CYP3A4 Inducers
Factors | Dosage Adjustments for ABILIFY
Known CYP2D6 poor metabolizers | Administer half of usual dose
Known CYP2D6 poor metabolizers taking concomitant strong CYP3A4 inhibitors (e.g., itraconazole, clarithromycin) | Administer a quarter of usual dose
Strong CYP2D6 inhibitors (e.g., quinidine, fluoxetine, paroxetine) or strong CYP3A4 inhibitors (e.g., itraconazole, clarithromycin) | Administer half of usual dose
Strong CYP2D6 and CYP3A4 inhibitors | Administer a quarter of usual dose
Strong CYP3A4 inducers (e.g., carbamazepine, rifampin) | Double usual dose over 1 to 2 weeks

When adjunctive ABILIFY is administered to patients with major depressive disorder, ABILIFY should be administered without dosage adjustment as specified in Dosage and Administration (2.3) .
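As an illustration only, the multipliers in Table 2 can be expressed as a simple lookup. The key names and helper function below are hypothetical, not part of the label, and the sketch ignores the major depressive disorder exception noted in the paragraph above.

```python
# Illustrative lookup of the Table 2 dose-adjustment multipliers:
# half = 0.5, quarter = 0.25, double (strong CYP3A4 inducers,
# titrated over 1 to 2 weeks) = 2.0. Key names are made up.
TABLE_2_MULTIPLIERS = {
    "cyp2d6_poor_metabolizer": 0.5,
    "cyp2d6_poor_metabolizer_plus_strong_3a4_inhibitor": 0.25,
    "strong_2d6_or_3a4_inhibitor": 0.5,
    "strong_2d6_and_3a4_inhibitors": 0.25,
    "strong_3a4_inducer": 2.0,
}

def adjusted_dose(usual_dose_mg: float, factor: str) -> float:
    # Note: adjunctive use in major depressive disorder is exempt (see above).
    return usual_dose_mg * TABLE_2_MULTIPLIERS[factor]

print(adjusted_dose(15, "strong_2d6_and_3a4_inhibitors"))  # 3.75 (a quarter of 15 mg)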

Dosing of Oral Solution

The oral solution can be substituted for tablets on a mg-per-mg basis up to the 25 mg dose level. Patients receiving 30 mg tablets should receive 25 mg of the solution [see Clinical Pharmacology (12.3)] .

Dosing of Orally Disintegrating Tablets

The dosing for ABILIFY Orally Disintegrating Tablets is the same as for the oral tablets [see Dosage and Administration (2.1, 2.2, 2.3, and 2.4)] .


pH and Water

pH is a measure of how acidic/basic water is. The range goes from 0 to 14, with 7 being neutral. A pH of less than 7 indicates acidity, whereas a pH of greater than 7 indicates a base. The pH of water is a very important measurement of water quality.

pH and Water

No, you don't often hear your local news broadcaster say "Folks, today's pH value of Dryville Creek is 6.3!" But pH is quite an important measurement of water. Maybe you took the pH of water samples for a science project in a school chemistry class, and here at the U.S. Geological Survey we take a pH measurement whenever water is studied. Not only does the pH of a stream affect organisms living in the water, but a changing pH in a stream can also be an indicator of increasing pollution or some other environmental factor.

pH: Definition and measurement units

By the way, for a solution to have a pH, it has to be aqueous (contain water). Thus, you can't have a pH of vegetable oil or alcohol.

pH is a measure of how acidic/basic water is. The range goes from 0 to 14, with 7 being neutral. A pH of less than 7 indicates acidity, whereas a pH of greater than 7 indicates a base. pH is really a measure of the relative amount of free hydrogen and hydroxyl ions in the water. Water that has more free hydrogen ions is acidic, whereas water that has more free hydroxyl ions is basic. Since pH can be affected by chemicals in the water, pH is an important indicator of water that is changing chemically. pH is reported in "logarithmic units": each number represents a 10-fold change in the acidity/basicity of the water. Water with a pH of five is ten times more acidic than water having a pH of six.
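Because each pH unit is a factor of ten, the relative acidity of two samples can be computed directly. A small sketch, reproducing the pH 5 versus pH 6 comparison above and the mine-drainage example discussed later:

```python
# Each pH unit is a 10-fold change in hydrogen-ion concentration, so the
# relative acidity of two samples is 10 ** (pH_b - pH_a).
def times_more_acidic(ph_a: float, ph_b: float) -> float:
    """How many times more acidic sample A is than sample B."""
    return 10 ** (ph_b - ph_a)

print(times_more_acidic(5, 6))   # 10.0      (pH 5 vs pH 6, as in the text)
print(times_more_acidic(2, 7))   # 100000.0  (mine drainage vs neutral water)
```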

Importance of pH

The pH of water determines the solubility (amount that can be dissolved in the water) and biological availability (amount that can be utilized by aquatic life) of chemical constituents such as nutrients (phosphorus, nitrogen, and carbon) and heavy metals (lead, copper, cadmium, etc.). For example, in addition to affecting how much and what form of phosphorus is most abundant in the water, pH also determines whether aquatic life can use it. In the case of heavy metals, the degree to which they are soluble determines their toxicity. Metals tend to be more toxic at lower pH because they are more soluble. (Source: A Citizen's Guide to Understanding and Monitoring Lakes and Streams)

Diagram of pH

As this diagram shows, pH ranges from 0 to 14, with 7 being neutral. pHs less than 7 are acidic while pHs greater than 7 are alkaline (basic). Normal rainfall has a pH of about 5.6—slightly acidic due to carbon dioxide gas from the atmosphere. You can see that acid rain can be very acidic, and it can affect the environment in a negative way.

The pH scale ranges from 0 to 14, with 7 being neutral. pHs less than 7 are acidic while pHs greater than 7 are alkaline (basic).

Credit: robin_ph / stock.adobe.com

Measuring pH

The U.S. Geological Survey analyzes hundreds of thousands of water samples every year. Many measurements are made right at the field site, and many more are made on water samples back at the lab. pH is an important water measurement, which is often measured both at the sampling site and in the lab. There are large and small models of pH meters. Portable models are available to take out in the field and larger models, such as this one, are used in the lab.

To use the pH meter in the photograph below, the water sample is placed in the cup and the glass probe at the end of the retractable arm is placed in the water. Inside the thin glass bulb at the end of the probe there are two electrodes that measure voltage. One electrode is contained in a liquid that has a fixed acidity, or pH. The other electrode responds to the acidity of the water sample. A voltmeter in the probe measures the difference between the voltages of the two electrodes. The meter then translates the voltage difference into pH and displays it on the little screen on the main box.

A portable electronic pH meter.

Before taking a pH measurement, the meter must be "calibrated." The probe is immersed in a solution that has a known pH, such as pure water with a neutral pH of 7.0. The knobs on the box are used to adjust the displayed pH value to the known pH of the solution, thus calibrating the meter.
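For readers curious about that "translation" step, here is a rough sketch. It assumes the ideal glass-electrode response of about 59.16 mV per pH unit at 25 °C and a single calibration point; real meters determine the actual slope and offset from calibration buffers, so this is illustrative rather than a description of any particular instrument.

```python
# Rough sketch of how a meter converts electrode voltage to pH.
# An ideal glass electrode changes by about -59.16 mV per pH unit at
# 25 degrees C; calibration with buffers of known pH fixes the actual
# slope and offset. Values below are idealized assumptions.
IDEAL_SLOPE_MV_PER_PH = -59.16  # at 25 degrees C

def voltage_to_ph(measured_mv: float,
                  cal_mv: float = 0.0,   # voltage read in the calibration buffer
                  cal_ph: float = 7.0,   # known pH of that buffer
                  slope: float = IDEAL_SLOPE_MV_PER_PH) -> float:
    return cal_ph + (measured_mv - cal_mv) / slope

print(voltage_to_ph(59.16))    # about 6.0: +59 mV relative to the pH 7 buffer
print(voltage_to_ph(-118.32))  # about 9.0
```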

Taking pH at home or school

One of the most popular school science projects is to take the pH of water from different sources. Chances are your school (and certainly not you) does not have an electronic pH meter lying around, but you can still get an estimate of pH by using litmus paper. Litmus paper, which can be found at pet-supply stores (to check the pH of aquariums), is simply a strip of paper that turns a certain color when a sample of water is dropped onto it, giving a rough estimate of pH.

pH and water quality

Excessively high or low pH can be detrimental to the use of water. High pH causes a bitter taste, encrusts water pipes and water-using appliances with deposits, and depresses the effectiveness of chlorine disinfection, so additional chlorine is needed when pH is high. Low-pH water will corrode or dissolve metals and other substances.

Pollution can change a water's pH, which in turn can harm animals and plants living in the water. For instance, water coming out of an abandoned coal mine can have a pH of 2, which is very acidic and would definitely affect any fish crazy enough to try to live in it! Because the scale is logarithmic, this mine-drainage water would be 100,000 times more acidic than neutral water -- so stay out of abandoned mines.

Variation of pH across the United States

The pH of precipitation, and of water bodies, varies widely across the United States. Natural and human processes determine the pH of water. The National Atmospheric Deposition Program has developed maps showing pH patterns, such as the one below showing the spatial pattern of the pH of precipitation at field sites for 2002. You should be aware that this contour map was developed using the pH measurements at the specific sampling locations; the contours and isolines were created by interpolation between data points. You should not necessarily use the map to document the pH at other particular map locations, but rather use the map as a general indicator of pH throughout the country.


New drug class could treat range of cancers with faulty BRCA genes

Scientists have identified a new class of targeted cancer drugs that offer the potential to treat patients whose tumours have faulty copies of the BRCA cancer genes.

The drugs, known as POLQ inhibitors, specifically kill cancer cells with mutations in the BRCA genes while leaving healthy cells unharmed.

And crucially, they can kill cancer cells that have become resistant to PARP inhibitors - an existing treatment for patients with BRCA mutations.

Researchers are already planning to test the new drug class in upcoming clinical trials. If the trials are successful, POLQ inhibitors could enter the clinic as a new approach to treating a range of cancers with BRCA mutations, such as breast, ovarian, pancreatic and prostate cancer.

Scientists at The Institute of Cancer Research, London, and the pharmaceutical company Artios, explored the potential of using POLQ inhibitors in treating cancer cells with defects in the BRCA genes.

Their study, published today (Thursday) in Nature Communications, was funded by Artios, Cancer Research UK and Breast Cancer Now.

For some time now, scientists have known that genetically removing a protein known as POLQ killed cells with BRCA gene defects, although drugs that prevent POLQ from working had not been identified.

In this new work, the researchers identified prototype drugs that not only stop POLQ from working, but which also kill cancer cells with BRCA gene mutations.

Both BRCA genes and POLQ are involved in repairing DNA. Cancer cells can survive without one or other of them, but if both are blocked or their genes switched off, cancer cells can no longer repair their DNA and they die.

Researchers found that when cells were treated with POLQ inhibitors, cancer cells with BRCA gene mutations were stripped of their ability to repair their DNA and died, but normal cells did not. By killing cancer cells with BRCA gene mutations, while leaving normal cells unharmed, POLQ inhibitors could offer a treatment for cancer with relatively few side effects.

Researchers also found that POLQ inhibitors work very well when used together in combination with PARP inhibitors.

The addition of POLQ inhibitors meant that PARP inhibitors were effective when used at a lower dose. And in laboratory tests in rats and in organoids - three-dimensional mini-tumours grown in the lab - POLQ inhibitors were able to shrink BRCA-mutant cancers that had stopped responding to PARP inhibitors because of a defect in a set of genes known as the 'Shieldins'.

This suggests that POLQ inhibitors could offer an alternative treatment where PARP inhibitors are no longer working. Researchers believe that using a POLQ inhibitor in combination with a PARP inhibitor in patients with cancers that have faulty BRCA genes could prevent resistance from emerging in the first place.

Scientists at The Institute of Cancer Research (ICR), funded by Breast Cancer Now and Cancer Research UK, discovered how to genetically target PARP inhibitors against BRCA-mutant cancers and, with colleagues at The Royal Marsden NHS Foundation Trust, helped run clinical trials leading to the first PARP inhibitor being approved for use.

The next step will now be to test POLQ inhibitors in clinical trials led by Artios.

Study co-leader, Professor Chris Lord, Professor of Cancer Genomics at The Institute of Cancer Research, London, and Deputy Director of the Breast Cancer Now Toby Robins Research Centre at the ICR, said:

"All cells have to be able to repair damage to their DNA to stay healthy - otherwise mutations build up and eventually kill them. We have identified a new class of precision medicine that strips cancers of their ability to repair their DNA. This new type of treatment has the potential to be effective against cancers which already have weaknesses in their ability to repair their DNA, through defects in their BRCA genes. And excitingly, the new drugs also seem to work against cancer cells that have stopped responding to an existing treatment called PARP inhibitors - potentially opening up a new way of overcoming drug resistance. I'm very keen to see how they perform in clinical trials."

Professor Paul Workman, Chief Executive of The Institute of Cancer Research, London, said:

"It's exciting that the new POLQ inhibitors should provide a different approach to treating cancers with BRCA gene defects - and particularly that this class of drugs should retain their activity in cancers that have developed resistance to PARP inhibitors. Most exciting of all is the potential of combining POLQ and PARP inhibitor drugs to prevent the evolution of BRCA-mutant cancers into more aggressive, drug-resistant forms - a major challenge that we see in the clinic."

Study Co-Leader, Dr Graeme Smith, Chief Scientific Officer at Artios Pharma, Cambridge, said:

"These exciting preclinical results provide a clear rationale for future clinical studies with a POLQ inhibitor. At Artios, we are on track to initiate our POLQ clinical programme before the year end to explore POLQ inhibition in the sensitive cancer types that this study has uncovered. Our planned POLQ inhibitor clinical studies will leverage these results, exploring combination treatment with PARP inhibitors and different types of DNA damaging agents." Michelle Mitchell, chief executive at Cancer Research UK said:"More than 25 years ago we helped discover the BRCA gene, which spurred on our scientists to work with others to develop PARP inhibitors, which are now benefiting many patients. But we are always trying to find newer and better ways to outstep cancer, especially when it stops responding to current treatments. By revisiting weaknesses in the BRCA repair pathway, researchers have not only found a way to make PARP inhibitors more effective, but they may have also identified an entirely new class of targeted drugs for BRCA cancers, which could include pancreatic cancer which has limited treated options. We look forward to seeing if these promising results in the lab transfer into benefits for patients when tested in trials."

Dr Simon Vincent, Director of Research, Support and Influencing at Breast Cancer Now, said:

"Men and women with a change in one of their BRCA genes are at greater risk of being diagnosed with breast cancer, and around 5% of the 55,000 cases of breast cancer diagnosed in UK each year are caused by an inherited altered gene, which includes BRCA1 and BRCA2 genes.

"It's therefore hugely exciting thatPOLQ inhibitors could provide a targeted treatment option for people whose cancer is caused by altered BRCA genes. As a targeted treatment, we hope that POLQ inhibitors could be a kinder alternative, with less side effects than current treatment options.

"Drug resistance is a major hurdle that we must tackle to stop women dying from breast cancer, so it is also exciting that POLQ inhibitors offer a hope of overcoming resistance in some cases.

"We hope that future research will confirm that POLQ inhibitors can benefit people with breast cancer in these ways."

For more information please contact Molly Andrews in the ICR press office on 020 7153 5246 or [email protected]. For enquiries out of hours, please call 07595 963 613.

The Institute of Cancer Research, London, is one of the world's most influential cancer research organisations.

Scientists and clinicians at The Institute of Cancer Research (ICR) are working every day to make a real impact on cancer patients' lives. Through its unique partnership with The Royal Marsden NHS Foundation Trust and 'bench-to-bedside' approach, the ICR is able to create and deliver results in a way that other institutions cannot. Together the two organisations are rated in the top four centres for cancer research and treatment globally.

The ICR has an outstanding record of achievement dating back more than 100 years. It provided the first convincing evidence that DNA damage is the basic cause of cancer, laying the foundation for the now universally accepted idea that cancer is a genetic disease. Today it is a world leader at identifying cancer-related genes and discovering new targeted drugs for personalised cancer treatment.

The ICR is a charity and relies on support from partner organisations, funders and the general public. A college of the University of London, it is the UK's top-ranked academic institution for research quality, and provides postgraduate higher education of international distinction.

The ICR's mission is to make the discoveries that defeat cancer. The research was conducted at the Breast Cancer Now Toby Robins Research Centre at The Institute of Cancer Research, London.

For more information visit ICR.ac.uk

    - Cancer Research UK is the world's leading cancer charity dedicated to saving lives through research.

- Cancer Research UK's pioneering work into the prevention, diagnosis and treatment of cancer has helped save millions of lives.

- Cancer Research UK has been at the heart of the progress that has already seen survival in the UK double in the last 40 years.

- Today, 2 in 4 people survive their cancer for at least 10 years. Cancer Research UK's ambition is to accelerate progress so that by 2034, 3 in 4 people will survive their cancer for at least 10 years.

- Cancer Research UK supports research into all aspects of cancer through the work of over 4,000 scientists, doctors and nurses.

- Together with its partners and supporters, Cancer Research UK's vision is to bring forward the day when all cancers are cured.

For further information about Cancer Research UK's work or to find out how to support the charity, please call 0300 123 1022 or visit http://www.cancerresearchuk.org.

Follow us on Twitter and Facebook.

    - Breast Cancer Now is the UK's first comprehensive breast cancer charity, combining world-class research and life-changing care.

- Breast Cancer Now's ambition is that, by 2050, everyone who develops breast cancer will live and be supported to live well.

- Breast Cancer Now, the research and care charity, launched in October 2019, created by the merger of specialist support and information charity Breast Cancer Care and leading research charity Breast Cancer Now.

- Visit breastcancernow.org or follow us on Twitter or on Facebook.

- For support or information call Breast Cancer Now's free Helpline on 0808 800 6000.



Sources of Potassium

The best way to get your recommended daily amount of potassium is to eat it from natural food sources. You can find potassium in fruits, vegetables, animal products, legumes and nuts. Of course, there are several different types of potassium found in each of these. As you work to maintain a healthy daily dose of potassium, start filling your diet with foods rich in the mineral.

According to USDA FoodData Central, some of the foods with the highest levels of potassium include:

  • 2 tablespoons of tomato paste — 290 milligrams
  • One bunch of spinach — 1,900 milligrams
  • 1 cup of dried apricots — 1,510 milligrams
  • One large baked sweet potato with skin — 855 milligrams
  • 1 cup of cooked lentils — 731 milligrams
  • A half-cup of raisins — 600 milligrams
  • One seeded, peeled avocado — 690 milligrams
  • 1 cup of navy beans — 842 milligrams
  • 1 cup of mashed acorn squash — 644 milligrams
  • 1/2 cup of dried prunes — 635 milligrams
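A quick way to see how these figures add up is to tally a day's choices against a daily target. The sketch below uses a 4,700 mg target, a commonly cited adult Adequate Intake; that figure is an assumption for illustration rather than part of the list above, so substitute whatever recommendation applies to you.

```python
# Quick tally of a few of the USDA figures listed above against a daily target.
# The 4,700 mg target is an assumed, commonly cited adult Adequate Intake;
# replace it with the recommendation that applies to you.
foods_mg = {
    "2 tbsp tomato paste": 290,
    "1 cup dried apricots": 1510,
    "1 large baked sweet potato with skin": 855,
    "1 cup cooked lentils": 731,
}

total = sum(foods_mg.values())
target_mg = 4700
print(f"Total: {total} mg ({total / target_mg:.0%} of a {target_mg} mg target)")
```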

What if my second vaccine dose is early or delayed? Here’s what the CDC says

There’s a lot of confusion out there about second doses of the COVID-19 vaccine.

The COVID-19 vaccine rollout has been chaotic in California. Guidance on how to get the second dose has changed from “you have to make your own appointment” to “we’ll tell you when to come back” to “OK, you should hear from us, but if you don’t, just come back.” The Los Angeles County department of public health admitted on Twitter that it had been stressful.

People are also concerned about the timing of the second dose. Over at the L.A. Times tip line, we’ve been getting inquiries about second dose appointments that are not precisely timed after the first.

In its clinical trials, Pfizer administered the second dose 21 days after the first. For Moderna, it was 28 days. According to the U.S. Centers for Disease Control and Prevention, the second dose should be administered as close to the recommended date as possible. But total precision isn’t required.

If it’s a little earlier, that’s allowed: “Second doses administered within a grace period of four days earlier than the recommended date for the second dose are still considered valid,” the CDC says on its website.

A couple of weeks later is fine too. “If it is not feasible to adhere to the recommended interval and a delay in vaccination is unavoidable, the second dose of Pfizer-BioNTech and Moderna COVID-19 vaccines may be administered up to six weeks (42 days) after the first dose.”
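Putting the CDC timing rules above in one place, here is a small sketch; the function and its output are illustrative only and not a substitute for the CDC guidance itself.

```python
from datetime import date

# Sketch of the CDC timing rules described above: recommended interval of
# 21 days (Pfizer-BioNTech) or 28 days (Moderna), a 4-day early grace
# period, and up to 42 days after the first dose when a delay is unavoidable.
RECOMMENDED_INTERVAL = {"pfizer": 21, "moderna": 28}
GRACE_DAYS_EARLY = 4
MAX_DELAY_DAYS = 42

def second_dose_status(vaccine: str, first: date, second: date) -> str:
    interval = (second - first).days
    recommended = RECOMMENDED_INTERVAL[vaccine]
    if interval < recommended - GRACE_DAYS_EARLY:
        return "too early to count as valid"
    if interval <= MAX_DELAY_DAYS:
        return "within the CDC's acceptable window"
    return "later than the studied 42-day window"

print(second_dose_status("moderna", date(2021, 2, 1), date(2021, 3, 8)))  # 35 days
```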

California is reserving large amounts of COVID-19 vaccine supply for people needing their second dose, leaving fewer first doses available.

Dr. Diane Griffin is a virologist at the Johns Hopkins Bloomberg School of Public Health who studies immune responses to viral infections and vaccines. She said in an interview with The Times that while there aren’t studies of the efficacy of these vaccines when the second dose isn’t administered right on schedule, based on what we do know about other vaccines and immune responses, there’s no reason it won’t work just as well.

“I think that perfection is the enemy of the good,” she said.

In other words: Don’t stress about your second dose being a little early or late. The important thing is that you get it.




What range of dose should be used? - Biology

Hazard characterizations are typically developed by compiling information from a variety of data sources, using a plethora of test protocols. Each of these data sources contributes in varying degrees to an understanding of the pathogen-host-matrix interactions that influence the potential public health risks attributable to different disease agents. An appreciation of the strengths and limitations of the various data sources is critical to selecting appropriate data for use, and to establishing the uncertainty associated with dose-response models that are developed from different data sets and test protocols.

Active data collection is required, because reliance on passive data submission or data in published form does not usually provide enough information in sufficient detail to construct dose-response models. Relevant data come preferably from peer-reviewed journals. Given the current lack of data for hazard characterization, it is also advisable to evaluate the availability of unpublished, high-quality data sources. Risk assessors should communicate with experimenters, epidemiologists, food or water safety regulatory persons, and others who may have useful data that could contribute to the analysis. An example is the outbreak information collected by the Japanese Ministry of Health, which was used for dose-response modelling of Salmonella (FAO/WHO, 2002a). When such data are used, the criteria and results of evaluation must be carefully documented. If using material published on the Internet, care should be taken to establish the provenance, validity and reliability of the data, and the original source, if known.

Understanding the characteristics of data sources is important to the selection and interpretation of data. Risk assessors often use data for a purpose other than that for which it was originally intended. Risk assessors and modellers need to know the means by which the data they use are collected, and the purpose of their collection. The properties of the available data will depend on the perspective of the researchers generating the data (e.g. experimenter versus epidemiologist). Therefore, knowledge of the source and original purpose of the available data sets is important in the development of dose-response models. The following sections attempt to capture in brief the strengths and limitations of each of several classes of data sources.

4.1 Human studies

4.1.1 Outbreak investigations

When there is a common-source outbreak of foodborne or waterborne disease of sufficient magnitude, an epidemiological investigation is generally undertaken to identify the cause of the problem, to limit its further spread, and to provide recommendations on how the problem can be prevented in the future. An outbreak of confirmed etiology that affects a clearly defined group can provide particularly complete information about the range of illness that a pathogen can cause, particular behaviour or other host characteristics that may increase or decrease the risk, and - if there is clinical follow up - the risk of sequelae. When the outbreak is traced to a food or water source that can be quantitatively cultured under circumstances that allow the original dose to be estimated, the actual dose-response can be measured. Even when that is not possible, dose-effect relations can often be observed that show variation in clinical response with changes in relative dose; looking for such relations is part of the classic approach to an outbreak investigation. This may include looking for higher attack rates among persons who consumed more of the implicated vehicle, but may also include variation in symptom prevalence and complications. There are good public health reasons for gathering information on the amount of the implicated food or water consumed. An outbreak that is characterized by a low attack rate in a very large population may be an opportunity to define the host-response to very low doses of a pathogen, if the actual level of contamination in the food can be measured. In addition, data from outbreaks are the ultimate "anchor" for dose-response models and are an important way to validate risk assessments.
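As a toy illustration of the dose-effect tabulation described above (higher attack rates among people who consumed more of the implicated vehicle), using invented cohort counts:

```python
# Hypothetical cohort data from an outbreak investigation: attack rate by the
# amount of the implicated food consumed (a simple dose-effect tabulation).
# The numbers are invented for illustration only.
cohort = {
    # servings consumed: (ill, total exposed)
    0: (2, 60),
    1: (12, 80),
    2: (25, 70),
    3: (30, 50),
}

for servings, (ill, total) in cohort.items():
    print(f"{servings} servings: attack rate {ill / total:.0%} ({ill}/{total})")
```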

An outbreak investigation can capture the diversity of host response to a single pathogenic strain. This can include the definition of the full clinical spectrum of illness and infection, if a cohort of exposed individuals can be examined and tested for evidence of infection and illness, independent of whether they were ill enough to seek medical care or diagnose themselves. It also includes definition of subgroups at higher risk, and the behaviour, or other host factors, that may increase or decrease that risk, given a specific exposure. Collecting information on underlying illness or pre-existing treatments is routine in many outbreak investigations.

Obtaining highly specific details of the food source and its preparation in the outbreak setting is often possible, because of the focus on a single food or meal, and may suggest specific correlates of risk that cannot be determined in the routine evaluation of a single case. Often, the observations made in outbreaks suggest further specific applied research to determine the behaviour of the pathogen in that specific matrix, handled in a specific way. For example, after a large outbreak of shigellosis was traced to chopped parsley, it was determined that Shigella sonnei grows abundantly on parsley left at room temperature if the parsley is chopped, but does not multiply if the parsley is intact. Such observations are obviously important to someone modelling the significance of low-level contamination of parsley.

Where samples of the implicated food or water vehicle can be quantitatively assayed for the pathogen, in circumstances that allow estimation of the original dose, an outbreak investigation has been a useful way to determine the actual clinical response to a defined dose in the general population.

Follow-up investigations of a (large) cohort of cases identified in an outbreak may allow identification and quantification of the frequency of sequelae, and the association of sequelae with specific strains or subtypes of a pathogen.

If preparations have been made in advance, the outbreak may offer a setting for the evaluation of methods to diagnose infection, assess exposure or treat the infection.

The primary limitation is that the purpose and focus of outbreak investigations is to identify the source of the infection in order to prevent additional cases, rather than to collect a wide range of information. The case definitions and methods of the investigation are chosen for efficiency, and often do not include data that would be most useful in a hazard characterization, and may vary widely among different investigations. The primary goal of the investigation is to quickly identify the specific source(s) of infection, rather than to precisely quantify the magnitude of that risk. Key information that would allow data collected in an investigation to be useful for risk assessments is therefore often missing or incomplete. Estimates of dose or exposure in outbreaks may be inaccurate because:

It was not possible to obtain representative samples of the contaminated food or water.

If samples were obtained, they may have been held or handled in such a way after exposure occurred as to make meaningless the results of testing.

Laboratories involved in outbreak testing are mainly concerned with presence/absence, and may not be set up to conduct enumeration testing.

It is very difficult to detect and quantify viable organisms in the contaminated food or water (e.g. viable Cryptosporidium oocysts in water).

Estimates of water or food consumption by infected individuals, and of the variability therein, are poor.

There is inadequate knowledge concerning the health status of the exposed population, and the number of individuals who consumed food but did not become ill (a part of whom may have developed asymptomatic infection, whereas others were not infected at all).

The size of the total exposed population is uncertain.

In such instances, use of outbreak data to develop dose-response models generally requires assumptions concerning the missing information. Fairly elaborate exposure models may be necessary to reconstruct exposure under the conditions of the outbreak. If microbiological risk assessors and epidemiologists work together to develop more comprehensive outbreak investigation protocols, this should promote the collection of more pertinent information. This might also help to identify detailed information that was obtained during the outbreak investigation but was not reported.

Even when all needed information is available, the use of such data may bias the hazard characterization if there are differences in the characteristics of pathogen strains associated with outbreaks versus sporadic cases. The potential for such bias may be evaluated by more detailed microbiological studies on the distribution of growth, survival and virulence characteristics in outbreak and endemic strains.

Estimates of attack rate may be an overestimate when they are based on signs and symptoms rather than laboratory-confirmed cases. Alternatively, in a case-control study conducted to identify a specific food or water exposure in a general population, the attack rate may be difficult to estimate, and may be underestimated, depending on the thoroughness of case finding.

The reported findings depend strongly on the case-definition used. Case definitions may be based on clinical symptoms, on laboratory data or a combination thereof. The most efficient approach could be to choose a clinical case definition, and validate it with a sample of cases that are confirmed by laboratory tests. This may include some non-specific illnesses among the cases. In investigations that are limited to culture-confirmed cases, or cases infected with a specific subtype of the pathogen, investigators may miss many of the milder or non-diagnosed illness occurrences, and thus underestimate the risk. The purpose of the outbreak investigation may lead the investigators to choices that are not necessarily the best for hazard characterization.

4.1.2 Surveillance and annual health statistics

Countries and several international organizations compile health statistics for infectious diseases, including those that are transmitted by foods and water. Such data are critical to adequately characterize microbial hazards. In addition, surveillance-based data have been used in conjunction with food survey data to estimate dose-response relations. It must be noted that, usually, analysis of such aggregated data requires many assumptions to be made, thus increasing uncertainty in results.

Annual health statistics provide one means of both anchoring and validating dose-response models. The effectiveness of dose-response models is typically assessed by combining them with exposure estimates and determining if they approximate the annual disease statistics for the hazard.
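A minimal sketch of that anchoring step follows, using an exponential dose-response model, P(ill | dose) = 1 - exp(-r x dose), together with an invented exposure distribution and parameter values; the point is the workflow, not the numbers.

```python
import numpy as np

# Sketch of "anchoring" a dose-response model against annual statistics:
# combine an assumed exposure distribution with an exponential dose-response
# model and compare the predicted case count with the reported annual count.
# All parameter values below are invented.
rng = np.random.default_rng(0)

servings_per_year = 5_000_000                              # assumed contaminated servings
doses = rng.lognormal(mean=2.0, sigma=1.5, size=100_000)   # CFU per serving (assumed)
r = 1e-4                                                   # assumed dose-response parameter

p_ill = 1 - np.exp(-r * doses)
predicted_cases = servings_per_year * p_ill.mean()
print(f"Predicted annual cases: {predicted_cases:,.0f}")
# If this is far from the surveillance count, revisit r or the exposure model.
```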

Using annual disease statistics to develop dose-response models has the advantage that it implicitly considers the entire population and the wide variety of factors that can influence the biological response. Also, illness results from exposure to a variety of different strains. These data also allow for the relatively rapid initial estimation of the dose-response relationship. This approach is highly cost-effective, since the data are generated and compiled for other purposes. Available databases often have sufficient detail to allow consideration of special subpopulations.

The primary limitation of the data is that they are highly dependent on the adequacy and sophistication of the surveillance system used to collect the information. Typically, public health surveillance for foodborne diseases depends on laboratory diagnosis. Thus it only captures those who were ill enough to seek care (and able to pay for it), and who provided samples for laboratory analysis. This can lead to a bias in hazard characterizations toward health consequences associated with the developed nations that have an extensive disease surveillance infrastructure. Within developed countries, the bias may be towards diseases with relatively high severity, that more frequently lead to medical diagnoses than mild, self-limiting diseases. International comparisons are difficult because a set of defined criteria for reporting is lacking at an international level. Another major limitation in the use of surveillance data is that it seldom includes accurate information on the attribution of disease to different food products, on the levels of disease agent in food and the number of individuals exposed. Use of such data to develop dose-response relations is also dependent on the adequacy of the exposure assessment, the identification of the portions of the population actually consuming the food or water, and the estimate of the segment of the population at increased risk.

4.1.3 Volunteer feeding studies

The most obvious means for acquiring information on dose-response relations for foodborne and waterborne pathogenic microorganisms is to expose humans to the disease agent under controlled conditions. There have been a limited number of pathogens for which feeding studies using volunteers have been carried out. Most have been in conjunction with vaccine trials.

Using human volunteers is the most direct means of acquiring data that relates an exposure to a microbial hazard with an adverse response in human populations. If planned effectively, such studies can be conducted in conjunction with other clinical trials, such as the testing of vaccines. The results of the trials provide a direct means of observing the effects of the challenge dose on the integrated host defence response. The delivery matrix and the pathogen strain can be varied to evaluate food matrix and pathogen virulence effects.

There are severe ethical and economic limitations associated with the use of human volunteers. These studies are generally conducted only with primarily healthy individuals between the ages of 18 and 50, and thus do not examine the segments of the human population typically most at risk. Pathogens that are life threatening or that cause disease only in high-risk subpopulations are not amenable to volunteer studies. Typically, the studies investigate a limited number of doses with a limited number of volunteers per dose. The dose ranges are generally high to ensure a response in a significant portion of the test population, i.e. the doses are generally not in the region of most interest to risk assessors.

The process of (self-)selection of volunteers may induce bias that can affect interpretation of findings. Feeding studies are not a practical means to address strain virulence variation. The choice of strain is therefore a critical variable in such studies. Most feeding studies use only rudimentary immunological testing prior to exposure. More extensive testing could be useful in developing susceptibility biomarkers.

Usually, feeding studies involve only a few strains, which are often laboratory domesticated or collection strains and may not represent wild-type strains. In addition, the conditions of propagation and preparation immediately before administration are not usually standardized or reported, though these may affect tolerance to acid, heat or drying, as well as altering virulence. For example, passage of Vibrio cholerae through the gastrointestinal tract induces a hyperinfectious state, which is perpetuated even after purging into natural aquatic reservoirs. This phenotype is expressed transiently, and lost after growth in vitro (Merrell et al., 2002). In many trials with enteric organisms, they are administered orally with a buffering substance, specifically to neutralize the effect of gastric acidity, which does not directly translate into what the dose response would be if ingested in food or water.

In the development of experimental design, the following points need to be considered:

How is dose measured (both units of measurement and the process used to measure a dose)?

How do the units in which a dose is measured compare with the units of measurement for the pathogen in an environmental sample?

Total units measured in a dose may not all be viable units or infectious units.

Volunteers given replicate doses may not all receive the same amount of inoculum.

How is the inoculum administered? Does the protocol involve simultaneous addition of agents that alter gastric acidity or promote the passage of microorganisms through the stomach without exposure to gastric acid?

How do you know you dosed a naive volunteer (serum antibodies may have dropped to undetectable levels or the volunteer may have been previously infected with a similar pathogen that may not be detected by your serological test)?

What is the sensitivity and specificity of the assay used to determine infection?

When comparing the dose-response of two or more organisms, one must compare similar biological end-points, e.g. infection vs illness.
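For grouped feeding-study data of the kind discussed in this section (a handful of dose groups, each with a number of responders), a dose-response model can be fitted by maximum likelihood. The sketch below fits the approximate beta-Poisson model, a form commonly used in microbial risk assessment, to invented data; the doses, group sizes and response counts are not from any real study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Fit the approximate beta-Poisson dose-response model,
#   P(response | dose) = 1 - (1 + dose / beta) ** (-alpha),
# to hypothetical feeding-study group data by maximum likelihood.
doses     = np.array([1e2, 1e3, 1e4, 1e5, 1e6])   # administered doses (invented)
subjects  = np.array([10,  10,  10,  10,  10])    # volunteers per group (invented)
responses = np.array([1,   2,   5,   8,   10])    # responders per group (invented)

def neg_log_lik(params):
    log_alpha, log_beta = params
    p = 1 - (1 + doses / np.exp(log_beta)) ** (-np.exp(log_alpha))
    p = np.clip(p, 1e-12, 1 - 1e-12)              # guard against log(0)
    return -binom.logpmf(responses, subjects, p).sum()

fit = minimize(neg_log_lik, x0=[0.0, np.log(1e3)], method="Nelder-Mead")
alpha, beta = np.exp(fit.x)
print(f"alpha = {alpha:.3f}, beta = {beta:.1f}")
```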

4.1.4 Biomarkers

Biomarkers are measurements of host characteristics that indicate exposure of a population to a hazard or the extent of adverse effect caused by the hazard. They are generally minimally invasive techniques that have been developed to assess the status of the host. The United States National Academy of Sciences has classified biomarkers into three classes, as follows:

Biomarker of exposure - an exogenous substance or its metabolite, or the product of an interaction between a xenobiotic agent and some target molecule or cell, that is measured in a compartment within an organism.

Biomarker of effect - a measurable biochemical, physiological or other alteration within an organism that, depending on magnitude, can be recognized as an established or potential health impairment or disease.

Biomarker of susceptibility - an indicator of an inherent or acquired limitation of an organism's ability to respond to the challenge of exposure to a specific xenobiotic substance.

Even though this classification was developed against the background of risk assessment of toxic chemicals, these principles can be useful in interpreting data on pathogens.

These techniques provide a means of acquiring biologically meaningful data while minimizing some of the limitations associated with various techniques involving human studies. Typically, biomarkers are measures that can be acquired with minimum invasiveness while simultaneously providing a quantitative measure of a response that has been linked to the disease state. As such, they have the potential to increase the number of replicates or doses that can be considered, or to provide a means by which objectivity can be improved, and increased precision and reproducibility of epidemiological or clinical data can be achieved. Biomarkers may also provide a means for understanding the underlying factors used in hazard characterization. A biomarker response may be observed after exposure to doses that do not necessarily cause illness (or infection). Biomarkers can be used either to identify susceptible populations or to evaluate the differential response in different population subgroups.

It should also be noted that the most useful biomarkers are linked to illness by a defined mechanism, that is, the biological response has a relationship to the disease process or clinical symptom. If a biomarker is known to correlate with illness or exposure, then this information may be useful in measuring dose-response relationships, even if the subjects do not develop clinical symptoms. Biomarkers such as these can be used to link animal studies with human studies for the purposes of dose-response modelling. This is potentially useful because animal models may not produce clinical symptoms similar to humans; in that case, a biomarker may serve as a surrogate end-point in the animal.

Biomarkers are often indicators of infection, illness, severity, duration, etc. As such, there is a need to establish a correlation between the amplitude of the biomarker response and illness conditions. Biomarkers primarily provide information on the host status, unless protocols are specifically designed to assess the effects of different pathogen isolates or matrices.

The only currently available biomarkers for foodborne and waterborne pathogens are serological assays. The main limitation for such assays is that, in general, the humoral immune response to bacterial and parasitic infections is limited, transient and non-specific. For example, efforts to develop an immunological assay for Escherichia coli O157 infections have shown that a distinctive serological response to the O antigen is seen typically in the most severe cases, but is often absent in cases of culture-confirmed diarrhoea without blood. In contrast, serological assays are often quite good for viruses. Other biomarkers, such as counts of subsets of white blood cells or production of gaseous oxides of nitrogen are possible, but have not been tested extensively in human populations.

4.1.5 Intervention studies

Intervention studies are human trials where the impact of a hazard is evaluated by reducing exposure for a defined sample of a population. The incidence of disease or the frequency of a related biomarker is then compared to a control population to assess the magnitude of the response differential for the two levels of exposure.

Intervention studies have the advantage of studying an actual population under conditions that are identical to or that closely approach those of the general population. In such a study, the range of host variability is accommodated. These studies are particularly useful in evaluating long-term exposures to levels of pathogens to which the consumer is likely to be subjected. Since intervention studies examine the diminution of an effect in the experimental group, the identified parameters would implicitly include the pathogen, host and food-matrix factors that influence the control groups. Potentially, one could manipulate the degree of exposure (dose) by manipulating the stringency of the intervention.

Since exposure for the control group occurs as a result of normal exposure, the pathogen, host and food-matrix effects are not amenable to manipulation. Great care must be given to setting up appropriate controls, and in actively diagnosing biological responses of interest in both the test and control populations. It is often the case that intervention studies result in declines in response that are less than those predicted by the initial exposure. This is often due to the identification of a second source of exposure or an overestimation of the efficacy of the intervention. However, such data by itself is often of interest.
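A minimal sketch of the comparison an intervention study makes, using invented counts; the measures shown (relative risk and the crude risk reduction) are standard epidemiological summaries rather than anything specific to the text above.

```python
# Comparing an intervention group with a control group, as described above.
# Counts are invented for illustration.
ill_control, n_control = 40, 1000
ill_intervention, n_intervention = 24, 1000

risk_control = ill_control / n_control
risk_intervention = ill_intervention / n_intervention

relative_risk = risk_intervention / risk_control
risk_reduction = 1 - relative_risk   # crude estimate of the intervention's effect

print(f"Control risk: {risk_control:.1%}, intervention risk: {risk_intervention:.1%}")
print(f"Relative risk: {relative_risk:.2f}; risk reduced by {risk_reduction:.0%}")
```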

Testable interventions - i.e. feasible in terms of technical, cultural and social issues - are "conservative" in that there are ethical boundaries. Thus they must be implemented within a defined population and, apart from being technically feasible, must be socially acceptable and compatible with the preferences and technical abilities of this population.

4.2 Animal studies

Animal studies are used to overcome many of the logistical and ethical limitations that are associated with human-volunteer feeding studies. There are a large variety of different animal models that are used extensively to understand the pathogen, host and matrix factors that affect characteristics of foodborne and waterborne disease, including the establishment of dose-response relations.

The use of surrogate animals to characterize microbial hazards and establish dose-response relations provides a means for eliminating a number of the limitations of human-volunteer studies while still maintaining the use of intact animals to examine disease processes. A number of animal models are relatively inexpensive, thus increasing the potential for testing a variety of strains and increased numbers of replicates and doses. The animals are generally maintained under much more controlled conditions than human subjects. Immunodeficient animal strains and techniques for suppressing the immune system and other host defences are available and provide a means for characterizing the response in special subpopulations. Testing can be conducted directly on animal subpopulations such as neonates, aged or pregnant populations. Different food vehicles can be investigated readily.

The major limitation is that the response in the animal model has to be correlated with that in humans. There is seldom a direct correlation between the response in humans and that in animals. Often, differences between the anatomy and physiology of humans and animal species lead to substantial differences in dose-response relations and the animal's response to disease. For a number of diseases, there is no good animal model. Several highly effective models (e.g. primates or pigs) can be expensive, and may be limited in the number of animals that can be used per dose group. Some animals used as surrogates are highly inbred and consequently lack genetic diversity. Likewise, they are healthy and usually of a specific age and weight range. As such, they generally do not reflect the general population of animals of that species, let alone the human population. Ethical considerations in many countries limit the range of biological end-points that can be studied.

When surrogate pathogens or surrogate animal models are used, the biological basis for the use of the surrogate must be clear.

Using data obtained with animal models to predict health effects in humans could take advantage of the use of appropriate biomarkers.

It is important to use pathogen strains that are identical or closely related to the strain of concern for humans, because, even within the same species and subspecies, different strains of pathogens may have different characteristics that cause variation in their abilities to enter and infect the host and cause illness.

4.3 In vitro studies

In vitro studies involve the use of cell, tissue or organ cultures and related biological samples to characterize the effect of the pathogen on the host. They are of most use for qualitative investigations of pathogen virulence, but may also be used to evaluate in detail the effects of defined factors on the disease process.

In vitro techniques can readily relate the characteristics of a biological response with specific virulence factors (genetic markers, surface characteristics and growth potential) under controlled conditions. This includes the use of different host cells or tissue cultures to represent different population groups, and manipulation of the environment under which the host cells or tissues are exposed to the pathogen, in order to characterize differences in dose-response relations between general and special populations. In vitro techniques can be used to investigate the relations between matrix effects and the expression of virulence markers. Large numbers of replicates and doses can be studied under highly controlled conditions. These techniques can be used to readily compare multiple species and cell types to validate relationships between humans and surrogate animals. They are particularly useful as a means of providing information concerning the mechanistic basis for dose-response relations.

The primary limitation is the indirect nature of information concerning dose-response relations. One cannot directly relate the effects observed with isolated cells and tissues to disease conditions that are observed within intact humans, such as the effect of integrated host defences. Comparison with humans therefore requires a means of relating the quantitative relations observed in the in vitro system to those observed in the intact host. These types of studies are usually limited to providing details of factors affecting dose-response relations and to augmenting the hazard characterization, but are unlikely to be a direct means of establishing dose-response models useful for risk assessments. For many organisms, the specific virulence mechanisms and markers involved are unknown, and may vary between strains of the same species.

4.4 Expert elicitation

Expert elicitation is a formal approach to the acquisition and use of expert opinions, in the absence of or to augment available data.

When the specific data needed to develop dose-response relations are lacking, but scientific experts with pertinent knowledge and experience are available, expert elicitation provides a means of acquiring and using their information so that consideration of dose-response relations can be initiated. This can involve developing a distribution for a model parameter for which there are no, few or inconsistent numerical data, using accepted processes that document the lines of evidence, or weight of evidence, behind the opinion and the use of its results. It is generally not expensive, particularly in relation to short-term needs.
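As a minimal sketch of how elicited opinion might be converted into a usable distribution, assume, purely for illustration, that experts supply a median and a 95th-percentile value for an uncertain parameter and that a lognormal form is judged acceptable; the numerical values below are invented.

# Minimal sketch: turning two elicited quantiles (median and 95th percentile)
# into a lognormal distribution for an uncertain parameter.
# The elicited values are invented for the example.
import math
from scipy.stats import lognorm, norm

elicited_median = 1e-3   # experts' central estimate of the parameter
elicited_p95 = 1e-2      # experts' plausible upper bound (95th percentile)

mu = math.log(elicited_median)
sigma = (math.log(elicited_p95) - mu) / norm.ppf(0.95)

# scipy parameterizes the lognormal with shape = sigma and scale = exp(mu)
param_dist = lognorm(s=sigma, scale=math.exp(mu))
print("5th / 50th / 95th percentiles:",
      [f"{param_dist.ppf(q):.2e}" for q in (0.05, 0.50, 0.95)])

The fitted distribution can then be propagated through the risk model as parameter uncertainty; the choice of distributional form, like the quantiles themselves, should be documented as part of the elicitation record.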

Results obtained are dependent on the methodology used, and are inherently subjective and thus open to debate. The results are also dependent on the experts selected and may have limited applicability for issues involving an emerging science.

4.5 Data evaluation

Risk assessors must evaluate both the quality of the available sources of data for the purpose of the analysis, and the means of characterizing the uncertainty of all the data used. Formalized quality control of raw data and its subsequent treatment is desirable, but also highly dependent on availability and the use to which the data are applied. There is no formalized system for evaluation of data for hazard characterization. Few generalizations can be made, but the means by which data are collected and interpreted needs to be transparent. "Good" data are complete, relevant and valid: complete data are objective; relevant data are case-specific; and validation is context-specific.

Complete data include such things as the source of the data and the related study information, such as sample size, species studied and immune status. Characteristics of relevant data include: age of the data; region or country of origin; purpose of the study; species of microorganism involved; sensitivity, specificity and precision of the microbiological methods used; and data collection methods. Observations in a database should be "model free" - i.e. reported without interpretation by a particular model - to allow data to be used in ways that the original investigator might not have considered. This may require access to raw data, which may be difficult to achieve in practice. Using the Internet for such purposes should be encouraged, possibly by creating a Web site with data sets associated with published studies.

Valid data are those that agree with other data in terms of comparable methods and test development. In general, human data need less extrapolation and are preferred to animal data, which in turn are preferable to in vitro data. Data on the pathogen of concern are preferred to data on surrogate organisms, which should only be used on the basis of solid biological evidence, such as common virulence factors.

Currently, the recommended practice is to consider all available data as a potential source of information for hazard characterization. Which data can be eliminated from the risk assessment depends on the purpose and stage of the assessment. In the early stages of a risk assessment, small data sets or those with qualitative values may be useful, whereas the later stages may include only those data that meet high quality standards. Excluding data from the analysis should be based on predefined criteria, and not solely on statistical criteria. If the analysis is complicated by extreme heterogeneity or by outliers, it is advisable to stratify the data according to characteristics of the affected population, microbial species, matrix type or any other suitable criterion, as sketched below. This practice should provide increased insight rather than information loss.
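A hypothetical illustration of such stratification follows; the column names, subgroup labels and counts are invented, and the point is simply that heterogeneous pooled dose-response records are summarized separately for each host subgroup rather than discarded.

# Hypothetical illustration: stratifying pooled dose-response records by host
# subgroup before further analysis. All column names and values are invented.
import pandas as pd

records = pd.DataFrame({
    "dose_cfu":   [1e2, 1e4, 1e6, 1e2, 1e4, 1e6],
    "exposed":    [30,  30,  30,  20,  20,  20],
    "infected":   [2,   9,   21,  5,   12,  18],
    "population": ["healthy adults"] * 3 + ["immunocompromised"] * 3,
})

records["attack_rate"] = records["infected"] / records["exposed"]

# Summarize each stratum separately instead of pooling heterogeneous groups.
for group, subset in records.groupby("population"):
    print(f"\n{group}")
    print(subset[["dose_cfu", "attack_rate"]].to_string(index=False))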

Sources of data are either peer-reviewed or non-peer-reviewed literature. Although peer-reviewed data are generally preferable for scientific studies, they also have some important drawbacks as inputs for dose-response modelling. First and foremost, they have limited availability. Also, important information may be missing concerning how dose and response data were obtained, as outlined below. Data presentation in the peer-reviewed literature is usually in an aggregated form, not providing the level of detail necessary for uncertainty analysis. In older papers, the quality control of the measurement process may be poorly documented. For any of these reasons, the analyst might wish to add information from other sources. In that case, the quality of the data should be explicitly reviewed, preferably by independent experts.

An important aspect with regard to dose information is the performance characteristics of the analytical method. Ideally, a measurement reflects with a high degree of accuracy the true number of pathogens in the inoculum. Accuracy is defined as the absence of systematic error (trueness) and of random error (precision). Trueness of a microbiological method is defined by the recovery of target organisms, the inhibitory power against non-target organisms, and the differential characteristics of the method, as expressed in terms of sensitivity and specificity. Precision is related to the nature of the test (plating vs enrichment), the number of colonies counted or the number of positive subcultures, and the dispersion of the inoculum in the test sample (see Havelaar et al., 1993). It is also important to know the variation in ingested dose between individuals, related to the dispersion of the pathogens in the inoculum, but also in relation to different quantities of the inoculum being ingested. These characteristics are of particular relevance when using observational data on naturally occurring infections. A pathogen's infectivity can be affected by both the matrix and the previous history of the pathogen, and this should be taken into account.
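The between-subject variation in ingested dose can be made concrete with a short worked sketch, under the common simplifying assumption that organisms are randomly (Poisson) dispersed in a well-mixed inoculum; the mean doses used below are invented.

# Worked sketch: spread of the actually ingested dose around the nominal mean
# dose when organisms are assumed Poisson-dispersed in the inoculum.
# The mean doses are invented for the example.
from scipy.stats import poisson

for mean_dose in (0.5, 5, 50):
    d = poisson(mean_dose)
    cv = d.std() / d.mean()     # coefficient of variation = 1 / sqrt(mean dose)
    p_zero = d.pmf(0)           # fraction of subjects ingesting no organisms at all
    print(f"mean dose {mean_dose:>4}: CV = {cv:.2f}, P(zero organisms) = {p_zero:.3f}")

At low mean doses a substantial fraction of nominally exposed subjects may ingest no organisms at all, which is one reason that reporting only the nominal mean dose can misrepresent the exposure actually received.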

With regard to response information, it is important to note whether the outcome was represented as a binary or a continuous outcome. Current dose-response models (see Chapter 6) are applicable to binary outcomes, and this requires that the investigator define the criteria for both positive and negative responses. The criteria used for this differentiation may vary between studies, but should be explicitly taken into account. Another relevant aspect is the characteristics of the exposed population (age, immunocompetence, previous exposure, etc.).
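As a minimal sketch of how grouped binary outcome data feed into a dose-response fit, the example below uses the simple exponential model, P(response | dose d) = 1 - exp(-r*d), a form widely used in microbial risk assessment, and estimates r by maximum likelihood; the dose groups and counts are invented.

# Minimal sketch: maximum-likelihood fit of an exponential dose-response model,
# P(response | dose d) = 1 - exp(-r * d), to grouped binary outcome data.
# The dose groups and counts are invented for the example.
import numpy as np
from scipy.optimize import minimize_scalar

doses = np.array([1e2, 1e3, 1e4, 1e5])   # administered dose (organisms)
exposed = np.array([20, 20, 20, 20])     # subjects per dose group
positive = np.array([0, 2, 13, 20])      # subjects meeting the defined "response" criterion

def neg_log_likelihood(log_r: float) -> float:
    r = np.exp(log_r)
    p = np.clip(1.0 - np.exp(-r * doses), 1e-12, 1.0 - 1e-12)  # guard against log(0)
    return -np.sum(positive * np.log(p) + (exposed - positive) * np.log(1.0 - p))

fit = minimize_scalar(neg_log_likelihood, bounds=(-25.0, 0.0), method="bounded")
print(f"estimated r = {np.exp(fit.x):.2e}")

Whatever criterion separates positive from negative responses (infection versus illness, for example) determines what the fitted parameter actually describes, so that criterion must accompany the data.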

The aspects listed in this section are not primarily intended for differentiating "good" from "bad" data for hazard characterization, but rather to guide the subsequent analysis and the use of the dose-response information in a risk assessment model.


Acetaminophen (OTC)

Risk for rare but serious skin reactions that can be fatal; these reactions include Stevens-Johnson syndrome (SJS), toxic epidermal necrolysis (TEN), and acute generalized exanthematous pustulosis (AGEP); symptoms may include skin redness, blisters and rash

Limit the total acetaminophen dose from all sources and routes of administration; do not exceed the recommended maximum daily dose

Pregnancy Categories

A: Generally acceptable. Controlled studies in pregnant women show no evidence of fetal risk.

B: May be acceptable. Either animal studies show no risk but human studies are not available, or animal studies showed minor risks and human studies were done and showed no risk.

C: Use with caution if benefits outweigh risks. Animal studies show risk and human studies are not available, or neither animal nor human studies have been done.

D: Use in LIFE-THREATENING emergencies when no safer drug available. Positive evidence of human fetal risk.

X: Do not use in pregnancy. Risks involved outweigh potential benefits. Safer alternatives exist.