Clinical Illness and Outcomes in Patients with Ebola in Sierra Leone

BACKGROUND Limited clinical and laboratory data are available on patients with Ebola virus disease (EVD). The Kenema Government Hospital in Sierra Leone, which had an existing infrastructure for research regarding viral hemorrhagic fever, has received and cared for patients with EVD since the beginning of the outbreak in Sierra Leone in May 2014.

METHODS We reviewed available epidemiologic, clinical, and laboratory records of patients in whom EVD was diagnosed between May 25 and June 18, 2014. We used quantitative reverse-transcriptase–polymerase-chain-reaction assays to assess the load of Ebola virus (EBOV, Zaire species) in a subgroup of patients.

RESULTS Of 106 patients in whom EVD was diagnosed, 87 had a known outcome, and 44 had detailed clinical information available. The incubation period was estimated to be 6 to 12 days, and the case fatality rate was 74%. Common findings at presentation included fever (in 89% of the patients), headache (in 80%), weakness (in 66%), dizziness (in 60%), diarrhea (in 51%), abdominal pain (in 40%), and vomiting (in 34%). Clinical and laboratory factors at presentation that were associated with a fatal outcome included fever, weakness, dizziness, diarrhea, and elevated levels of blood urea nitrogen, aspartate aminotransferase, and creatinine. Exploratory analyses indicated that patients under the age of 21 years had a lower case fatality rate than those over the age of 45 years (57% vs. 94%, P=0.03), and patients presenting with fewer than 100,000 EBOV copies per milliliter had a lower case fatality rate than those with 10 million EBOV copies per milliliter or more (33% vs. 94%, P=0.003). Bleeding occurred in only 1 patient.

CONCLUSIONS The incubation period and case fatality rate among patients with EVD in Sierra Leone are similar to those observed elsewhere in the 2014 outbreak and in previous outbreaks.
Although bleeding was an infrequent finding, diarrhea and other gastrointestinal manifestations were common. (Funded by the National Institutes of Health and others.)

Trial of the Route of Early Nutritional Support in Critically Ill Adults

BACKGROUND Uncertainty exists about the most effective route for delivery of early nutritional support in critically ill adults. We hypothesized that delivery through the parenteral route is superior to that through the enteral route.

METHODS We conducted a pragmatic, randomized trial involving adults with an unplanned admission to one of 33 English intensive care units. We randomly assigned patients who could be fed through either the parenteral or the enteral route to a delivery route, with nutritional support initiated within 36 hours after admission and continued for up to 5 days. The primary outcome was all-cause mortality at 30 days.

RESULTS We enrolled 2400 patients; 2388 (99.5%) were included in the analysis (1191 in the parenteral group and 1197 in the enteral group). By 30 days, 393 of 1188 patients (33.1%) in the parenteral group and 409 of 1195 patients (34.2%) in the enteral group had died (relative risk in parenteral group, 0.97; 95% confidence interval, 0.86 to 1.08; P=0.57). There were significant reductions in the parenteral group, as compared with the enteral group, in rates of hypoglycemia (44 patients [3.7%] vs. 74 patients [6.2%]; P=0.006) and vomiting (100 patients [8.4%] vs. 194 patients [16.2%]; P<0.001). There were no significant differences between the parenteral group and the enteral group in the mean number of treated infectious complications (0.22 vs. 0.21; P=0.72), in 90-day mortality (442 of 1184 patients [37.3%] vs. 464 of 1188 patients [39.1%]; P=0.40), in rates of 14 other secondary outcomes, or in rates of adverse events. Caloric intake was similar in the two groups, with the target intake not achieved in most patients.
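As a sanity check on the arithmetic, the 30-day relative risk and 95% confidence interval reported above can be reproduced from the event counts using the standard large-sample log-scale approximation. This is an illustrative sketch, not the trial's actual statistical analysis:

```python
import math

# 30-day mortality counts as reported in the CALORIES abstract
deaths_pn, n_pn = 393, 1188   # parenteral group
deaths_en, n_en = 409, 1195   # enteral group

# Relative risk: ratio of the two group-level mortality proportions
rr = (deaths_pn / n_pn) / (deaths_en / n_en)

# Standard error of log(RR) under the usual large-sample approximation
se = math.sqrt(1 / deaths_pn - 1 / n_pn + 1 / deaths_en - 1 / n_en)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# prints "RR = 0.97, 95% CI 0.86 to 1.08"
```

The result matches the published figures (relative risk 0.97; 95% CI, 0.86 to 1.08), confirming that the interval was computed on the log scale in the conventional way.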
CONCLUSIONS We found no significant difference in 30-day mortality associated with the route of delivery of early nutritional support in critically ill adults. (Funded by the United Kingdom National Institute for Health Research; CALORIES Current Controlled Trials number, ISRCTN17386141.)

One-Unit versus Two-Unit Cord-Blood Transplantation for Hematologic Cancers

BACKGROUND Umbilical-cord blood has been used as the source of hematopoietic stem cells in an estimated 30,000 transplants. The limited number of hematopoietic cells in a single cord-blood unit prevents its use in recipients with larger body mass and results in delayed hematopoietic recovery and higher mortality. Therefore, we hypothesized that the greater numbers of hematopoietic cells in two units of cord blood would be associated with improved outcomes after transplantation.

METHODS Between December 1, 2006, and February 24, 2012, a total of 224 patients 1 to 21 years of age with hematologic cancer were randomly assigned to undergo double-unit (111 patients) or single-unit (113 patients) cord-blood transplantation after a uniform myeloablative conditioning regimen and immunoprophylaxis for graft-versus-host disease (GVHD). The primary end point was 1-year overall survival.

RESULTS Treatment groups were matched for age, sex, self-reported race (white vs. nonwhite), performance status, degree of donor–recipient HLA matching, and disease type and status at transplantation. The 1-year overall survival rate was 65% (95% confidence interval [CI], 56 to 74) and 73% (95% CI, 63 to 80) among recipients of double and single cord-blood units, respectively (P=0.17). Similar outcomes in the two groups were also observed with respect to the rates of disease-free survival, neutrophil recovery, transplantation-related death, relapse, infections, immunologic reconstitution, and grade II–IV acute GVHD.
However, improved platelet recovery and lower incidences of grade III and IV acute and extensive chronic GVHD were observed among recipients of a single cord-blood unit.

CONCLUSIONS We found that among children and adolescents with hematologic cancer, survival rates were similar after single-unit and double-unit cord-blood transplantation; however, a single-unit cord-blood transplant was associated with better platelet recovery and a lower risk of GVHD. (Funded by the National Heart, Lung, and Blood Institute and the National Cancer Institute; ClinicalTrials.gov number, NCT00412360.)

Antidepressant Use in Pregnancy and the Risk of Cardiac Defects

BACKGROUND Whether the use of selective serotonin-reuptake inhibitors (SSRIs) and other antidepressants during pregnancy is associated with an increased risk of congenital cardiac defects is uncertain. In particular, there are concerns about a possible association between paroxetine use and right ventricular outflow tract obstruction and between sertraline use and ventricular septal defects.

METHODS We performed a cohort study nested in the nationwide Medicaid Analytic eXtract for the period 2000 through 2007. The study included 949,504 pregnant women who were enrolled in Medicaid during the period from 3 months before the last menstrual period through 1 month after delivery and their liveborn infants. We compared the risk of major cardiac defects among infants born to women who took antidepressants during the first trimester with the risk among infants born to women who did not use antidepressants, with an unadjusted analysis and analyses that restricted the cohort to women with depression and that used propensity-score adjustment to control for depression severity and other potential confounders.

RESULTS A total of 64,389 women (6.8%) used antidepressants during the first trimester.
Overall, 6403 infants who were not exposed to antidepressants were born with a cardiac defect (72.3 infants with a cardiac defect per 10,000 infants), as compared with 580 infants with exposure (90.1 per 10,000 infants). Associations between antidepressant use and cardiac defects were attenuated with increasing levels of adjustment for confounding. The relative risks of any cardiac defect with the use of SSRIs were 1.25 (95% confidence interval [CI], 1.13 to 1.38) in the unadjusted analysis, 1.12 (95% CI, 1.00 to 1.26) in the analysis restricted to women with depression, and 1.06 (95% CI, 0.93 to 1.22) in the fully adjusted analysis restricted to women with depression. We found no significant association between the use of paroxetine and right ventricular outflow tract obstruction (relative risk, 1.07; 95% CI, 0.59 to 1.93) or between the use of sertraline and ventricular septal defects (relative risk, 1.04; 95% CI, 0.76 to 1.41).

CONCLUSIONS The results of this large, population-based cohort study suggested no substantial increase in the risk of cardiac malformations attributable to antidepressant use during the first trimester. (Funded by the Agency for Healthcare Research and Quality and the National Institutes of Health.)

Case–Control Study of Human Papillomavirus and Oropharyngeal Cancer

BACKGROUND Substantial molecular evidence suggests a role for human papillomavirus (HPV) in the pathogenesis of oropharyngeal squamous-cell carcinoma, but epidemiologic data have been inconsistent.

METHODS We performed a hospital-based, case–control study of 100 patients with newly diagnosed oropharyngeal cancer and 200 control patients without cancer to evaluate associations between HPV infection and oropharyngeal cancer. Multivariate logistic-regression models were used for case–control comparisons.
RESULTS A high lifetime number of vaginal-sex partners (26 or more) was associated with oropharyngeal cancer (odds ratio, 3.1; 95% confidence interval [CI], 1.5 to 6.5), as was a high lifetime number of oral-sex partners (6 or more) (odds ratio, 3.4; 95% CI, 1.3 to 8.8). The degree of association increased with the number of vaginal-sex and oral-sex partners (P values for trend, 0.002 and 0.009, respectively). Oropharyngeal cancer was significantly associated with oral HPV type 16 (HPV-16) infection (odds ratio, 14.6; 95% CI, 6.3 to 36.6), oral infection with any of 37 types of HPV (odds ratio, 12.3; 95% CI, 5.4 to 26.4), and seropositivity for the HPV-16 L1 capsid protein (odds ratio, 32.2; 95% CI, 14.6 to 71.3). HPV-16 DNA was detected in 72% (95% CI, 62 to 81) of 100 paraffin-embedded tumor specimens, and 64% of patients with cancer were seropositive for the HPV-16 oncoprotein E6, E7, or both. HPV-16 L1 seropositivity was highly associated with oropharyngeal cancer among subjects with a history of heavy tobacco and alcohol use (odds ratio, 19.4; 95% CI, 3.3 to 113.9) and among those without such a history (odds ratio, 33.6; 95% CI, 13.3 to 84.8). The association was similarly increased among subjects with oral HPV-16 infection, regardless of their tobacco and alcohol use. By contrast, tobacco and alcohol use increased the association with oropharyngeal cancer primarily among subjects without exposure to HPV-16.

CONCLUSIONS Oral HPV infection is strongly associated with oropharyngeal cancer among subjects with or without the established risk factors of tobacco and alcohol use.

CPAP versus Oxygen in Obstructive Sleep Apnea

BACKGROUND Obstructive sleep apnea is associated with hypertension, inflammation, and increased cardiovascular risk. Continuous positive airway pressure (CPAP) reduces blood pressure, but adherence is often suboptimal, and the benefit beyond management of conventional risk factors is uncertain.
Since intermittent hypoxemia may underlie cardiovascular sequelae of sleep apnea, we evaluated the effects of nocturnal supplemental oxygen and CPAP on markers of cardiovascular risk.

METHODS We conducted a randomized, controlled trial in which patients with cardiovascular disease or multiple cardiovascular risk factors were recruited from cardiology practices. Patients were screened for obstructive sleep apnea with the use of the Berlin questionnaire, and home sleep testing was used to establish the diagnosis. Participants with an apnea–hypopnea index of 15 to 50 events per hour were randomly assigned to receive education on sleep hygiene and healthy lifestyle alone (the control group) or, in addition to education, either CPAP or nocturnal supplemental oxygen. Cardiovascular risk was assessed at baseline and after 12 weeks of the study treatment. The primary outcome was 24-hour mean arterial pressure.

RESULTS Of 318 patients who underwent randomization, 281 (88%) could be evaluated for ambulatory blood pressure at both baseline and follow-up. On average, the 24-hour mean arterial pressure at 12 weeks was lower in the group receiving CPAP than in the control group (−2.4 mm Hg; 95% confidence interval [CI], −4.7 to −0.1; P=0.04) or the group receiving supplemental oxygen (−2.8 mm Hg; 95% CI, −5.1 to −0.5; P=0.02). There was no significant difference in the 24-hour mean arterial pressure between the control group and the group receiving oxygen. A sensitivity analysis performed with the use of multiple imputation approaches to assess the effect of missing data did not change the results of the primary analysis.

CONCLUSIONS In patients with cardiovascular disease or multiple cardiovascular risk factors, the treatment of obstructive sleep apnea with CPAP, but not nocturnal supplemental oxygen, resulted in a significant reduction in blood pressure. (Funded by the National Heart, Lung, and Blood Institute and others; HeartBEAT ClinicalTrials.gov number, NCT01086800.)