DLP+ Single Cell Genomic Library A98180 for Diffuse large B-cell lymphoma patient sample TFRI_Pair_13_DLBCL
DLP+ Single Cell Genomic Library A98225 for Diffuse large B-cell lymphoma patient sample TFRIPAIR12_DLBCL_rel
KCL lpWGS samples for copy number analysis
This dataset consists of the exome sequencing data for 30 tumour and germline DNA pairs derived from relapsed/refractory DLBCL.
Targeted capture of exonic and intronic regions of interest for the study of genomic alterations in multiple myeloma.
VCF file with genome-wide data for 62 Iberian Roma samples.
Biomarker data for KATHERINE: Biomarker data include RNA-seq time point, percent of tumor content, PAM50 subtypes, normalized gene expression of ERBB2, CD8 and CD274, and normalized immune signature expression.
Programmatic submissions (XML based)

For further information please check our Submission FAQs, submission quickguide and submission terms!

Introduction

Besides the Submitter Portal tool, EGA supports programmatic submission of sequence and clinical metadata. If you are not sure what this means, you may want to explore our brief metadata introduction. Programmatic submission is recommended for array-based submissions. Moreover, it may be of help if your submission is recurrent or is difficult to manage manually due to its sheer size. Otherwise, we highly recommend using the Submitter Portal to perform submissions.

This page will guide you through the steps required to programmatically submit data to the EGA. Programmatic submissions require your metadata to be structured for easy and straightforward validation and archival. In essence, this consists of formatting your metadata as Extensible Markup Language (XML) files and submitting them to the EGA using WEBIN.

Before submitting metadata to the EGA, it is important to ensure that the information in your XML files is compliant with our standards. You can see further details on how these standards are maintained at our EGA Schemas documentation page. Using WEBIN, you can validate your XML files against EGA's schemas to ensure that your metadata is compliant before submission.

WEBIN services

WEBIN production service
WEBIN test service

We advise you to submit your metadata to the test service before submitting to the production service for the first time. The test service is identical to the production service, except that all submissions are discarded within the following 24 hours. This allows you to learn about the submission process without having to worry about data being archived.

Authentication

Authentication is required each time a submission is made. The submission service uses the HTTPS protocol for metadata encryption and identification, providing a secure submission environment.

Data file upload

Both Runs and Analyses reference files (e.g. FASTQ files) that need to be uploaded to the EGA before these metadata objects are submitted. In other words, if you submit a Run that references a file that we cannot find associated with your account, the metadata submission will fail. See further details on how to upload your files in our File Upload documentation.

Metadata model of the EGA

Our metadata model comprises multiple metadata objects. Check further details in our documentation at our EGA Schema documentation page.

Working with EGA XML files

Now that the basic concepts of the EGA metadata have been described, you can start preparing your programmatic submission through XML. Here you will find guidance on how to prepare the XML files.

Programmatic Submission Tutorial Video

Take a look at the Programmatic Submission Tutorial Video, which explains the workflow of a programmatic submission and goes over an example metadata submission.

When building your XML files, we recommend using text editors (e.g. Sublime Text or Visual Studio) that allow you to visualise the structure of the XML with ease. Furthermore, these editors constantly check the consistency of the XML structure. Alternatively, if the submission consists of a large number of objects (especially analyses), you may find the tool star2xml handy. This tool converts metadata in a tabular format (e.g. a spreadsheet) directly into XMLs.
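Before involving WEBIN at all, a quick local well-formedness check can catch malformed XML early. Below is a supplementary sketch, not part of the EGA tooling: it uses xmllint, a widely available libxml2 utility, and assumes your objects are saved as study.xml and sample.xml:

xmllint --noout study.xml sample.xml   # prints nothing if both files are well-formed XML
echo $?                                # a non-zero exit status signals a parsing error

Note that this only checks XML syntax; schema compliance is still checked by WEBIN as described below.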
Identifying objects: Aliases and center names

Every EGA object must be uniquely identified within the submission account using its alias attribute. Aliases can be used in submissions to make references between EGA objects. Let us dig into EGA's use of aliases and center names:

alias: every object should have a name that is unique within your submission account. Once submitted successfully, every alias will be assigned a unique and permanent accession (EGA ID).

refname: when an object references another by its alias, the alias of the referenced object goes into the "refname" attribute of the referencing object. For example, if a sample has the alias "sample1", and an experiment uses this sample, then the experiment's "EXPERIMENT/SAMPLE/refname" attribute should be "sample1" (a short XML sketch appears at the end of this section).

center_name: the "center_name" attribute is required within the submission XML and, if not provided when the object is submitted, it will be filled automatically using your EGA account's default center_name. This element is the "controlled vocabulary acronym or abbreviation that is provided to the account holder when the account is first generated". If the submitter is brokering a submission for another institute, the submitter should use their special broker account name in broker_name while the data centre acronym remains in center_name. Log-in details should have been provided when you requested a submission account.

run_center: many submitting centers contract out the actual sample sequencing to another center. In these cases, the sequencing center should be acknowledged in the run_center attribute. Again, this is controlled vocabulary, and the acronym should be sought from the EGA Helpdesk before submitting. Please contact our Helpdesk team if you have any questions.

Prepare your XMLs

The goal of this section is to provide enough information to create the metadata XML documents required for programmatic submissions. Please note that the EGA uses the XML schemas maintained at the European Nucleotide Archive (ENA). Because the two archives share this system, parts of the ENA's programmatic submission documentation can also help you with your programmatic submission to the EGA. For example, you can submit programmatically without using a Submission XML by following the steps at Submission actions without submission XML.

A submission does not have to contain all the different types of XMLs. For example, it is possible to submit only a few samples, or a study that is later to be referenced. You can submit each object one by one, or submit all in a batch: choose whichever method of submission works best for you. We do recommend, nevertheless, that you submit the objects to be referenced (e.g. samples or studies) first, and the objects that reference them (e.g. experiments or datasets) afterwards. You can see a graphical view of these objects and their relationships on our EGA Schemas page.

Independently of the submission scenario, you will always require a Dataset XML. The dataset is the entity used to control access to the given data, in the form of runs or analyses. In other words, when a requester is granted access, it is through the dataset, and access to the objects it contains (e.g. runs or analyses) is granted in one go. Given the nature of the EGA, a dataset XML will always be required for data access.
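To make the alias/refname mechanics concrete, here is a minimal sketch. The aliases ("study1", "sample1", "exp1") and center name are invented for illustration, most required elements are elided, and note that in the ENA/EGA experiment schema the referencing element is SAMPLE_DESCRIPTOR:

<SAMPLE alias="sample1" center_name="MY-CENTER">
  <!-- ...taxonomy, title and sample attributes elided... -->
</SAMPLE>

<EXPERIMENT alias="exp1" center_name="MY-CENTER">
  <STUDY_REF refname="study1"/>
  <DESIGN>
    <DESIGN_DESCRIPTION/>
    <SAMPLE_DESCRIPTOR refname="sample1"/>  <!-- matches the sample's alias above -->
    <!-- ...library descriptor elided... -->
  </DESIGN>
  <!-- ...platform and spot descriptor elided... -->
</EXPERIMENT>

Once "sample1" has been archived and accessioned, the experiment could equally reference it by accession instead of by alias.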
First, we will differentiate between submissions of "raw" and "processed" data: Runs and Analyses, respectively.

Run data submissions

Raw data derives from instruments "as is". For example, a plain sequence file (e.g. a FASTQ or unaligned BAM file) would be considered raw data. A typical raw (unaligned) sequence read submission consists of 8 XMLs: Submission, Study, Sample, Experiment, Run, DAC, Policy and Dataset.

When technical reads (e.g. barcodes, adaptors or linkers) are included in the submitted raw sequences, a spot descriptor must be submitted to describe the position of the technical reads so that they can be removed. The following data files can be submitted without providing spot descriptor information in the experiment/run XML:

BAM files (single reads)
SFF files (single reads without barcodes)
FastQ files (single reads without any technical reads)
Complete Genomics files

Analysis data submissions

Processed data is, in some way, refined raw data. This includes raw data that has been processed by some form of analysis method (e.g. alignment, noise reduction, etc.). For example, an aligned sequence file (e.g. a BAM file) created from raw FASTQ files would be a processed file. This category includes most types of data: sequence alignment files (e.g. BAM or CRAM), clinical data (e.g. phenopackets), sequence variation files (e.g. VCF), sequence annotation, etc. A typical EGA analysis data submission consists of 7 XMLs: Submission, Study, Sample, Analysis, DAC, Policy and Dataset.

We accept three different types of analysis data submissions:

BAM files (for multiple read alignments)
VCF files (for sequence variations)
Phenotype files (in any format)

In any case, keep in mind that samples must be created before they can be referenced in the analyses. In other words, the provenance of the information within the BAM, VCF and phenotype files must be traceable to registered Sample objects.

Example XMLs

Below you can find a non-exhaustive list of example XMLs with descriptive fields (i.e. explaining what to provide in each field). Furthermore, you can also find real examples (i.e. with true values in the provided fields) in our GitHub repository.

Submission XML

The submission XML is used to validate, submit or update any number of other objects; it refers to the other XMLs. New submissions use the ADD action to submit new objects. Object updates are done using the MODIFY action, and objects can be validated using the VALIDATE action. Descriptive submission XML example. True values submission XML example.

Study XML

The study XML is used to describe the study, containing a title, a study type and an abstract as it would appear in a publication. Descriptive study XML example. True values study XML example.

Please use the following notation within the "STUDY_LINKS" property when including PubMed citations in the Study XML:

<STUDY_LINKS>
  <STUDY_LINK>
    <XREF_LINK>
      <DB>PUBMED</DB>
      <ID>18987735</ID>
    </XREF_LINK>
  </STUDY_LINK>
</STUDY_LINKS>

Sample XML

The sample XML is used to describe the samples used to obtain the data, whether they were sequenced, measured in any other way, or have an associated phenotype. The mandatory fields include information about the taxonomy of the sample, sex, subject ID and phenotype.
For example, the mandatory attribute fields for each sample would look like these, within the array of "SAMPLE_ATTRIBUTES":

<SAMPLE_ATTRIBUTES>
  <SAMPLE_ATTRIBUTE>
    <TAG>subject_id</TAG>
    <VALUE>free text!</VALUE>
  </SAMPLE_ATTRIBUTE>
  <SAMPLE_ATTRIBUTE>
    <TAG>sex</TAG>
    <VALUE>female/male/unknown</VALUE>
  </SAMPLE_ATTRIBUTE>
  <SAMPLE_ATTRIBUTE>
    <TAG>phenotype</TAG>
    <VALUE>Free text, EFO terms (e.g. EFO:0000574) are recommended</VALUE>
  </SAMPLE_ATTRIBUTE>
</SAMPLE_ATTRIBUTES>

The sample is one of the most important objects to describe biologically, so it is highly recommended that "TAG-VALUE" pairs are generated as SAMPLE_ATTRIBUTES to describe the sample in as much detail as possible. For example, to give the population ancestry of the sample, we could add a new attribute to the array indicating that the sample derives from an individual of "Mende in Sierra Leone" (MSL), with African ancestry:

<SAMPLE_ATTRIBUTE>
  <TAG>Population</TAG>
  <VALUE>MSL</VALUE>
</SAMPLE_ATTRIBUTE>

Given that TAG and VALUE are free text, the combinations are limitless, giving you full flexibility over the information you provide. We recommend you use the Experimental Factor Ontology (EFO) to describe the phenotypes of your samples. You can provide more than one phenotype by adding more items to the array of SAMPLE_ATTRIBUTES. Phenotypes considered essential for understanding the data submission should be provided, and each phenotype should be listed as a separate sample attribute (<SAMPLE_ATTRIBUTE> </SAMPLE_ATTRIBUTE>). There is no limit to the number of phenotypes that can be submitted. If a suitable EFO accession cannot be found for your phenotype attribute, please consider using another controlled ontology database (e.g. HPO, MONDO, etc.) before resorting to free text. Descriptive sample XML example. True values sample XML example.

Experiment XML

The experiment XML is used to describe the experimental setup, including instrument platform and model details, library preparation details, and any additional information required to correctly interpret the submitted data. Where any of these values differ between runs, a new experiment object must exist, since runs are grouped by experiments. Each experiment references a study and a sample by alias or, if previously submitted, by accession. Pooled data must be demultiplexed by barcode for submission. Descriptive experiment (Illumina paired read) XML example. True values experiment (Illumina paired read) XML example.

Run XML

The run XML is used to associate data files with experiments and typically comprises a single data file (e.g. a FASTQ file). Please note that pooled samples should be demultiplexed prior to submission and submitted as different runs. Descriptive run XML example. True values run XML example.

Analysis XML

Given that an analysis can be used to submit any type of processed data to the EGA, below we list an example of each of the three most common types of analysis XMLs submitted to the EGA: sequence alignments (e.g. BAM files); sequence variation (e.g. VCF files); and clinical metadata or phenotypes (e.g. phenopackets). Regardless of the type of processed data submitted, the analysis must be associated with a Study and can reference multiple types of other objects, from samples to experiments, if they are available at the EGA, as shown in the sketch below.
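To ground those references, here is a minimal sketch of an Analysis wrapping a single BAM file. The aliases, file name and checksum are invented for illustration, and several required elements (such as the reference assembly and sequence block inside REFERENCE_ALIGNMENT) are elided:

<ANALYSIS alias="analysis1" center_name="MY-CENTER">
  <TITLE>Example read alignment</TITLE>
  <STUDY_REF refname="study1"/>    <!-- alias of a submitted study -->
  <SAMPLE_REF refname="sample1"/>  <!-- alias of a submitted sample -->
  <ANALYSIS_TYPE>
    <REFERENCE_ALIGNMENT/>         <!-- assembly/sequence details elided -->
  </ANALYSIS_TYPE>
  <FILES>
    <FILE filename="alignment1.bam" filetype="bam"
          checksum_method="MD5" checksum="0123456789abcdef0123456789abcdef"/>
  </FILES>
</ANALYSIS>

Note how the FILE element carries the MD5 checksum, which connects to the file-integrity requirement described next.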
Just like with Runs, whenever a file is submitted to the EGA through an analysis object, the file's MD5 checksum must be present so that the EGA can validate file integrity upon transfer. This also includes index files where applicable (e.g. .bai.md5 files). Ideally, any analysis that uses a reference sequence for some kind of alignment (e.g. BAM, CRAM or VCF files) should contain metadata about the alignment, such as the INSDC reference assemblies and sequences, given either as accessions (e.g. CM000663.1) or common labels (e.g. GRCh37).

Read alignment (BAM) Analysis XML

The Analysis can be used to submit BAM alignments to the EGA. Only one BAM file can be submitted in each analysis, and the samples used within the BAM read groups must be associated with Samples. Descriptive bam alignments XML example. True values bam alignments XML example.

Sequence variation (VCF) Analysis XML

The Analysis can be used to submit VCF files to the EGA. Only one VCF file can be submitted in each analysis, and the samples used within the VCF files must be associated with Samples. Download analysis XML (VCF).

Phenotype files

The Analysis XML can be used to submit phenotype files to the EGA. Only one phenotype file can be submitted in each analysis, and the samples used within the phenotype files must be associated with EGA Samples. Download analysis XML (Phenotype).

DAC XML

The DAC XML describes the Data Access Committee (DAC) affiliated with the data submission. The DAC may consist of a group or a single individual and is responsible for data access decisions based on the application procedure described in the Policy XML. As with any other object, if it was already submitted to the EGA, there is no need to submit it again: you can reference the existing object within the EGA. Hence, a DAC XML does not need to be provided if your submission is affiliated with an existing EGA DAC. Further information on DACs can be found here, and you can always contact our Helpdesk team if you have further inquiries. Descriptive dac XML example. True values dac XML example.

Policy XML

The Policy XML describes the Data Access Agreement (DAA) to be affiliated with the named Data Access Committee. Descriptive policy XML example. True values policy XML example.

Dataset XML

The dataset XML describes the data files, defined by the Run XML and Analysis XML, that make up the dataset, and links the collection of data files to a specified Policy. The dataset XML is commonly the last metadata object to be submitted, since it references multiple other entities. Please consider the number of datasets your submission consists of. For example, a case-control study is likely to consist of at least two datasets. In addition, we suggest that separate datasets be described for studies using the same samples but different sequencing technologies. Descriptive dataset XML example. True values dataset XML example.

Validating and submitting your EGA XMLs

Validating EGA's XMLs through Webin

After you have ensured that the XMLs are properly formatted and contain all the required information, you can proceed to validate and submit your data. Once you have prepared your XML files and asserted you have access to Webin, you can validate them programmatically against EGA's schemas using the curl command. There are multiple ways in which you can validate your XMLs.
This variety stems from two facts: (1) there are two instances of Webin (test and production); and (2) validation is a default step during submission. In other words, any time you submit your data through Webin, it will be validated automatically before being accepted. This allows for four possible routes of validation, all with the same validation result: validating or submitting to either the production service or the test service of Webin. For example, directly validating a "study" object XML in the test service (wwwdev…) would look like the following:

curl -u <USERNAME>:<PASSWORD> -F "ACTION=VALIDATE" "https://wwwdev.ebi.ac.uk/ena/submit/drop-box/submit/" -F "STUDY=@study.xml"

In this command, you would replace <USERNAME> and <PASSWORD> with your EGA account username and password, respectively, and study.xml with the path to your XML file. A mock example would look like the following:

curl -u ega-test-data@ebi.ac.uk:egarocks -F "ACTION=VALIDATE" "https://wwwdev.ebi.ac.uk/ena/submit/drop-box/submit/" -F "STUDY=@study.xml"

The validation attempt can have different results depending on the given arguments:

If your XML file is valid according to EGA's schemas, you will see a message indicating that your XML file is compliant. See below the receipt for our mock example, where "success" was "true" (i.e. no validation errors were found). Notice, however, that the "STUDY accession" attribute is empty: since we were only validating, the study did not receive an accession (EGA ID).

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="receipt.xsl"?>
<RECEIPT receiptDate="2023-04-11T15:19:28.850+01:00" submissionFile="submission-EBI-TEST_1681222768850.xml" success="true">
  <STUDY accession="" alias="Mock example" status="PRIVATE"/>
  <SUBMISSION accession="" alias="SUBMISSION-11-04-2023-15:19:28:840"/>
  <MESSAGES>
    <INFO>VALIDATE action has been specified.</INFO>
    <INFO>Submission has been rolled back.</INFO>
    <INFO>This submission is a TEST submission and will be discarded within 24 hours</INFO>
  </MESSAGES>
  <ACTIONS>VALIDATE</ACTIONS>
  <ACTIONS>PROTECT</ACTIONS>
</RECEIPT>

If there are any errors or warnings, the receipt will display them, allowing you to correct them before submitting your data to the EGA. For example, the following response indicates that the object we were trying to validate already exists, and therefore "success" was "false":

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="receipt.xsl"?>
<RECEIPT receiptDate="2023-04-11T15:12:35.609+01:00" submissionFile="submission-EBI-TEST_1681222355609.xml" success="false">
  <STUDY alias="Example!_Human Microbiome Project SP56J" status="PRIVATE" holdUntilDate="2023-03-11Z"/>
  <SUBMISSION alias="SUBMISSION-11-04-2023-15:12:35:576"/>
  <MESSAGES>
    <ERROR>In study, alias: "Example!_Human Microbiome Project SP56J". The object being added already exists in the submission account with accession: "ERP127584".</ERROR>
    <INFO>VALIDATE action has been specified.</INFO>
    <INFO>Submission has been rolled back.</INFO>
    <INFO>This submission is a TEST submission and will be discarded within 24 hours</INFO>
  </MESSAGES>
  <ACTIONS>VALIDATE</ACTIONS>
  <ACTIONS>PROTECT</ACTIONS>
</RECEIPT>

If the curl command returns no response at all, please double-check that your username and password are provided correctly. Also notice the "ACTION=..." argument passed to the curl command.
This specifies the action to be taken during the call to Webin, so no "Submission" XML is needed just for a validation attempt. See more at submission actions without submission XML. Furthermore, multiple files or objects (e.g. sample, experiment, study…) can be validated in a single command by adding more '-F' arguments. For example:

curl -u <USERNAME>:<PASSWORD> -F "ACTION=VALIDATE" "https://wwwdev.ebi.ac.uk/ena/submit/drop-box/submit/" -F "STUDY=@study.xml" -F "SAMPLE=@sample.xml" -F "DATASET=@dataset.xml"

As mentioned above, besides the "validate" action in the test environment, you can also validate your metadata by three other methods:

"Validate" in the production server. From our example above, simply remove the "dev" from the URL:

curl -u <USERNAME>:<PASSWORD> -F "ACTION=VALIDATE" "https://www.ebi.ac.uk/ena/submit/drop-box/submit/" -F "STUDY=@study.xml"

"Add" in the test server. From our example above, simply replace the action "validate" with "add". Whatever is submitted to this service will be discarded within 24 hours, so whether something gets submitted or not does not matter in the long run:

curl -u <USERNAME>:<PASSWORD> -F "ACTION=ADD" "https://wwwdev.ebi.ac.uk/ena/submit/drop-box/submit/" -F "STUDY=@study.xml"

"Add" in the production server. A combination of the previous two methods, which turns the attempt into an actual submission. This route should only be taken when you are sure your metadata is compliant and is what you want to submit:

curl -u <USERNAME>:<PASSWORD> -F "ACTION=ADD" "https://www.ebi.ac.uk/ena/submit/drop-box/submit/" -F "STUDY=@study.xml"

What happens after the submission of a dataset XML?

Once you have completed the registration of your dataset(s), please contact our Helpdesk team to provide a release date for your study. Please note that all datasets affiliated with unreleased studies are automatically placed on hold until the authorised submitter or DAC contact asks the EGA Helpdesk for the study to be released. We strongly advise you not to delete your data until the EGA Helpdesk confirms that your data has been successfully archived.
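For recurring submissions, it can be convenient to wrap the validation call in a small script. Below is a minimal bash sketch, not an official EGA tool: the WEBIN_USER/WEBIN_PASS environment variables and the three file names are assumptions for illustration, and the script simply checks the returned receipt for success="true":

#!/usr/bin/env bash
# Validate a set of EGA metadata XMLs against the Webin TEST service.
# Assumes WEBIN_USER and WEBIN_PASS are exported in the environment.
set -euo pipefail

ENDPOINT="https://wwwdev.ebi.ac.uk/ena/submit/drop-box/submit/"

# Send all objects in one VALIDATE call and capture the XML receipt.
receipt=$(curl -s -u "${WEBIN_USER}:${WEBIN_PASS}" \
  -F "ACTION=VALIDATE" \
  -F "STUDY=@study.xml" \
  -F "SAMPLE=@sample.xml" \
  -F "DATASET=@dataset.xml" \
  "${ENDPOINT}")

if grep -q 'success="true"' <<< "${receipt}"; then
  echo "All objects passed validation."
else
  echo "Validation failed; full receipt below:" >&2
  echo "${receipt}" >&2
  exit 1
fi

Switching the endpoint to the production URL and the action to ADD would turn the same script into a real submission, so keep those changes for the final, verified run.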
The Demographically Diverse Substance Use Disorder Cohorts of Dr. Stanley H. Weiss, which constitute the Epidemiology of the Weiss Cohort Projects, consist of a series of interconnected projects. These build upon a set of cohort projects of various groups, mainly drug users from medication-assisted treatment programs, that Dr. Stanley H. Weiss first developed in the 1980s, plus several newer initiatives, each with an array of collaborators. Beginning in the 1980s, Dr. Weiss started several long-term studies of persons who inject drugs (PWID) across the United States, ultimately enrolling over 10,000 participants through the early 1990s, with an average age at that time in their 30s. About a quarter were enrolled from sites in New Jersey (NJ). These studies included the first testing of PWID for the human immunodeficiency virus (HIV) and the human T-cell lymphotropic viruses (HTLV-I and HTLV-II). Cumulative past support (initiation through ~1999) for these cohort studies included ~$20 million from intramural resources of the National Cancer Institute (NCI) and the National Institute on Drug Abuse (NIDA), plus multiple grants and in-kind support from the New Jersey Department of Health (NJDOH) totaling ~$1 million. The Weiss Cohort Projects include the first large AIDS-era cohorts to include women at high risk for HIV. A high percentage of subjects in these studies are Black or Latino. Thus, this is an ethnically diverse US cohort with a high proportion of women. These subjects are at high risk of parenteral and sexual infection from both drug use and sexual practices. Samples from other studies conducted by Dr. Weiss, in which detailed interviews were conducted, are included as controls (persons documented by us not to have a history of opioid drug use). As one of our groups of subjects has many persons of Haitian ancestry, we specifically included some Haitians who had never used opioids as controls; our documentation includes such ancestry. These cohorts demonstrated high rates of HIV and HTLV-II infection among PWID, including in one study initiated in 1981, with confirmation in the later cohorts. Among the numerous publications from the first two decades of these studies was the first study to show a very high rate of hepatitis C infection among PWID. An example of how the studies' long time horizon proved essential: it first became possible to test whether a person had ever been infected with hepatitis C virus (HCV), as well as how much HCV was in each person's blood, many years after the specimens were collected. This allowed HCV amounts in blood to be compared between subjects who had died of liver disease early in the study and those who survived. A sequence of published papers then culminated in demonstrating, using a nested case-control design, that a high baseline HCV titer was predictive of early progression to death from end-stage liver failure. Outcomes related to HCV (end-stage liver disease and hepatocellular carcinoma) remain under study. In the original cohort studies, the mean age at enrollment was ~33 years, so those still alive in 2022 are now mainly ~60-75 years old. Many participants have already died. The tincture of time has led to subjects reaching ages at which many more are dying from a wide array of outcomes, including many chronic diseases (including cancer) as well as infectious agents (especially HIV and HCV) or drug overdose.
Renewed collaboration with local drug treatment programs has led to new field-based studies, including examination of some currently evolving problems among drug users. Dr. Weiss joined the National Institute on Drug Abuse (NIDA) Genetics Consortium (NGC) in 2017 and, through the NIDA project officer, has had access to NGC contract resources (see below). NIH Certificate of Confidentiality CC-DA-16-214 (attached) protects these studies. Past arrangements related to data on our subjects lead to restrictions on the use of data emanating from our study, such as potential commercialization and restrictions on who may access and use these data. NIDA Genetics Consortium (NGC) resources further support these endeavors and will be used as part of the NGC analyses studying the genetics of substance use. Study participants signed informed consent for the information collected from them to be used with no time limit, and for biologic specimens collected from them to be used without restriction in future research. Serum samples were collected from participants, and from many also plasma, white blood cells and/or urine samples. About 100,000 vials were stored. All specimens have been continuously preserved at sufficiently cold temperatures to prevent deterioration, and for many subjects separated white blood cells were processed and frozen in such a way as to maintain viability. Detailed data from the participants have been accumulated over time and, in general, linkage has been retained in each sub-study in accordance with the consent forms and protocols. For some participants, specimens were collected at multiple times (that is, sequential specimens). Multiple specimens from a single person exist in this database, and efforts at de-duplication remain ongoing. Dr. Weiss should be contacted if an investigator requires unique individuals, since:

• Multiple phases of enrollment occurred, and as our prospective follow-up continues, Dr. Weiss may identify new instances of multiple enrollment.
• Some persons are related to each other.
• In general, only a single specimen/record from a given person is included in this dataset for dbGaP.

Advances in laboratory testing techniques now permit innovative new uses for our linked research biospecimen repository. The ongoing focus of an interdisciplinary research program based on these cohorts relates subjects' diseases, behaviors, medical history, and outcomes to biological and exposure markers. Participants' use of various substances was ascertained at study enrollment, for many serially over time. Quantitative frequency-of-use data, also sometimes sequential over time, were ascertained. Active ascertainment of outcomes is being conducted, including matching to mortality and cancer databases. Investigators interested in collaborations on specific outcomes (which are not part of this dbGaP dataset) or in the use of our stored specimens are encouraged to contact the principal investigator, Dr. Weiss. The processing of the genomic data was done in conjunction with NIDA, and in accordance with longstanding data cleaning steps used by NIDA in the NIDA Genetics Consortium (NGC), a group to which we shall be contributing these data for collaborative analyses. Since these steps have the potential to introduce certain types of bias, we summarize them here. Under contract from NIDA, cryopreserved sera or plasma (-80 C) or cells (in liquid nitrogen) were used, most having been stored for 30 to 40 years in our biorepository.
In the case of serum or plasma, in which only (largely) cell-free DNA fragments were available, DNA was extracted and restored prior to amplification. Industry-standard DNA amplification techniques were applied to all samples prior to genotyping, in accordance with established protocols of the NIDA Genetics Consortium. Our genotype data were run and processed on the Illumina Infinium OmniExpress_v_1.3 array. This array has 714,238 SNPs and was designed many years ago. There were 628 SNPs on the array that do not correspond to any chromosome position, and these were removed. Genotype data were submitted by NIDA's contracted genotyping laboratory in six batches over time to NIDA's contracted dbGaP data management group, which conducted quality control (QC) analyses. QC analysis included an assessment of batch effects for five of the six batches. (One of the batches, with only 12 samples, was too small for QC analysis of batch effects.) Standard NIDA Genetics Consortium cleaning was performed. Samples with a call rate < 0.85 were removed. Only one sample per person was retained: when more than one specimen was genotyped from one subject, only the sample with the higher call rate was retained (provided, of course, that that call rate was ≥ 0.85). We have retained some people we know are related, including some found to be related through genotyping; the pedigree file describes those relationships. In summary, the key cleaning steps include (a command-line sketch follows at the end of this section):

1. Using PLINK to check for sex discrepancies.
2. Using PREST-PLUS and KING (Kinship-based Inference for GWAS) to check relatedness.
3. Using PEDCHECK and PLINK to check for, and zero out, Mendelian errors.
4. Using PLINK to perform sample QC and SNP QC, along with KING to perform chromosome X and chromosome Y QC.
5. SNP QC, batch effects: five batches were compared (one batch, with few samples, was not). These five batches were compared to each other in all ten possible pairs, one batch vs. another, examining SNP allele frequency discrepancies by population (from GRAF) using a Fisher exact allelic test, with a criterion of p < 5e-8 for removal.
6. SNP QC, discordant SNPs in QC duplicates: 25 QC duplicated samples with call rate > 0.95 were compared, and SNPs with 3 or more discordant calls were removed.
7. There were 1,056 monomorphic SNPs; these have been retained so they can be included in analyses in which our dbGaP data are combined with those from other cohorts (in which those SNPs may not be monomorphic).

The final cleaned dataset submitted has 8,898 samples and 606,793 SNPs.
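As a rough illustration of steps 1 and 2 and of the call-rate filter, the commands below are a generic sketch, not the NGC pipeline itself: the binary fileset name "weiss_cohort" is invented, and PLINK 1.9 and KING are assumed to be installed:

# 1. Flag sex discrepancies between the pedigree file and X-chromosome heterozygosity.
plink --bfile weiss_cohort --check-sex --out weiss_sexcheck

# 2. Estimate pairwise relatedness (kinship coefficients) with KING.
king -b weiss_cohort.bed --kinship --prefix weiss_kinship

# Sample QC: drop samples with call rate < 0.85 (i.e. missingness > 0.15).
plink --bfile weiss_cohort --mind 0.15 --make-bed --out weiss_cohort_qc

The --mind threshold expresses the same cutoff described above, since a call rate of at least 0.85 corresponds to a per-sample missingness of at most 0.15.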
Cancer is driven by mutation. Worldwide, tobacco smoking is the major lifestyle exposure that causes cancer, exerting carcinogenicity through 60 chemicals that bind and mutate DNA. Using massively parallel sequencing technology, we sequenced a small cell lung cancer cell line, NCI-H209, to explore the mutational burden associated with tobacco smoking. We identified 22,910 somatic substitutions, including 132 in coding exons. Multiple mutation signatures testify to the cocktail of carcinogens in tobacco smoke and their proclivities for particular bases and surrounding sequence contexts. Effects of transcription-coupled repair and of a second, more general expression-linked repair pathway were evident. We identified a tandem duplication that duplicates exons 3-8 of CHD7 in frame, and another two lines carrying PVT1-CHD7 fusion genes, suggesting that CHD7 may be recurrently rearranged in this disease. These findings illustrate the potential for next-generation sequencing to provide unprecedented insights into the mutational processes, cellular repair pathways and gene networks associated with cancer.