In designing clinical studies, much thought is given to controlling for clinical and demographic variability. For example, a well-designed study will carefully enroll patients of a particular disease type, stage, and treatment regimen. In addition, for each of these variables there will be determinations made as to how many patients of what age, sex, and ethnicity should be included in each of the clinical groups. By carefully considering the sample size needed for each variable, clinicians target specific and meaningful data when analyzing the results.
Unfortunately, the same care is not always applied to planning for meaningful data from the biological samples obtained from patients during the course of these studies. The clinicians directing a study are often somewhat removed from the laboratory that processes and tests the samples, and laboratory staff may be unaware of the many logistical issues in a biological sample's journey from the time it is drawn from the patient until it is processed and analyzed in the laboratory. These logistical and process variables can have a significant impact on the quality of the sample and can limit the types of analyses that can be performed downstream.
These process variables include how the biological sample is obtained (for example, how the blood is drawn, using what gauge needle, into what tube type) as well as differences in shipping time and temperature, which can have a major impact on cell viability. The influence of process does not end with a sample's arrival at the laboratory: what happens after the sample has been processed and the plasma, serum, urine, DNA, and/or RNA has been extracted? What is the optimal storage volume for these constituents?
Not surprisingly, the answer is that it varies depending on the intended downstream use of the sample in question. However, it is clear that in any kind of discovery work (genomics, proteomics, metabolomics) one needs to keep in mind the number of times the sample will be requested for use, that is, pulled from the freezer (or LN2 tank), thawed, an aliquot removed, and the remainder re-frozen. While some analytes appear to be relatively stable over multiple freeze/thaw cycles (generally, the smaller the molecule, the more stable), one cannot make any assumptions when using the samples for discovery research. In particular, when comparisons will be made from one patient to the next, it is very important to limit freeze/thaw cycles to the extent possible.
This is especially critical for larger constituents, such as antibodies, large proteins, and of course DNA, which tend to be more susceptible to degradation over multiple freeze/thaw events. Where these analytes are concerned, it is of major importance to both 1) limit the number of these events and 2) record each event as it happens to the sample.
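Recording each freeze/thaw event as it happens, as recommended above, lends itself to a simple per-aliquot event log. The sketch below is a minimal illustration in Python (the class, field names, and sample ID are hypothetical, not part of any particular biobanking system); the key idea is that the cycle count is derived from the recorded history rather than tracked separately.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Aliquot:
    """Minimal record of one stored aliquot and its freeze/thaw history."""
    sample_id: str
    events: list = field(default_factory=list)  # (event, UTC timestamp) pairs

    @property
    def freeze_thaw_cycles(self) -> int:
        # One cycle = one thaw; the initial freeze does not count as a cycle.
        return sum(1 for e, _ in self.events if e == "thaw")

    def record(self, event: str) -> None:
        self.events.append((event, datetime.now(timezone.utc)))

# Example: an aliquot thawed twice has undergone two freeze/thaw cycles.
a = Aliquot("PT-001-PLASMA-03")
a.record("freeze"); a.record("thaw"); a.record("freeze"); a.record("thaw")
print(a.freeze_thaw_cycles)  # → 2
```

Deriving the count from the log means the two recommendations (limit events, record events) cannot drift apart: the recorded history is the single source of truth.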
For planning purposes, those designing studies that include sample collection should consider a phased approach to managing primary aliquots of the sample. Once the processed sample has been obtained, it should be aliquoted into much smaller volumes before the initial freezing, so that individual aliquots can be removed and sent for further testing "as is," without thawing and re-aliquoting the remainder. As an example, when processing 10 ml of EDTA-treated blood, one might obtain 6 ml of plasma. Freezing the 6 ml of plasma in one tube means that to remove even 200 microliters of the sample, the entire sample must be thawed and re-frozen. However, distributing the larger volume into labeled tubes of 0.5 ml each prior to freezing preserves the analytes in a way that is of far greater value to research. In this scenario, very important and irreplaceable samples can also be split into more than one collection, stored in either a) separate freezers or b) separate freezers at separate locations, thus mitigating risk in case of a severe adverse event at one of the storage facilities.
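The aliquot planning described above is simple arithmetic, but it is worth building into the study design up front. A small sketch (the function name and the two-site split are illustrative assumptions, not a prescribed workflow):

```python
def plan_aliquots(total_volume_ml: float, aliquot_volume_ml: float) -> int:
    """Number of full aliquots obtainable from a processed sample."""
    return int(total_volume_ml // aliquot_volume_ml)

# 6 ml of plasma distributed as 0.5 ml aliquots:
n = plan_aliquots(6.0, 0.5)
print(n)  # → 12

# Splitting the collection across two storage sites for risk mitigation:
per_site = n // 2
print(per_site)  # → 6 aliquots per site
```

Twelve 0.5 ml aliquots from the 6 ml example means a 200-microliter request can be filled by shipping one tube, leaving the other eleven unthawed.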
In addition, if the anticipated use of the sample involves a large high-throughput screening program with robotics, it may be even more worthwhile to aliquot the samples into "robot-ready" plates for testing. When working with DNA (or RNA) this usually means normalization, that is to say, adjusting each sample to a given concentration in a standard final volume, for example 50 ng/µl in 100 microliters across all DNA samples. By processing and storing samples in a "research-ready" format, the number of freeze/thaw cycles can be minimized and sample integrity ensured.
Our Director of Commercial Laboratory, Abdul Ally, walks you through how a sample travels from processing to subsequent data retrieval in our recent video, The Journey of a Sample… in the Biobank Laboratory.