# 9.7E: Exercises

## Practice Makes Perfect

Simplify Expressions with Higher Roots

In the following exercises, simplify.

Example \(\PageIndex{46}\)

1. \(\sqrt[3]{216}\)
2. \(\sqrt[4]{256}\)
3. \(\sqrt[5]{32}\)

Example \(\PageIndex{47}\)

1. \(\sqrt[3]{27}\)
2. \(\sqrt[4]{16}\)
3. \(\sqrt[5]{243}\)

Answer

1. 3
2. 2
3. 3

Example \(\PageIndex{48}\)

1. \(\sqrt[3]{512}\)
2. \(\sqrt[4]{81}\)
3. \(\sqrt[5]{1}\)

Example \(\PageIndex{49}\)

1. \(\sqrt[3]{125}\)
2. \(\sqrt[4]{1296}\)
3. \(\sqrt[5]{1024}\)

Answer

1. 5
2. 6
3. 4

Example \(\PageIndex{50}\)

1. \(\sqrt[3]{-8}\)
2. \(\sqrt[4]{-81}\)
3. \(\sqrt[5]{-32}\)

Example \(\PageIndex{51}\)

1. \(\sqrt[3]{-64}\)
2. \(\sqrt[4]{-16}\)
3. \(\sqrt[5]{-243}\)

Answer

1. −4
2. not a real number
3. −3

Example \(\PageIndex{52}\)

1. \(\sqrt[3]{-125}\)
2. \(\sqrt[4]{-1296}\)
3. \(\sqrt[5]{-1024}\)

Example \(\PageIndex{53}\)

1. \(\sqrt[3]{-512}\)
2. \(\sqrt[4]{-81}\)
3. \(\sqrt[5]{-1}\)

Answer

1. −8
2. not a real number
3. −1

Example \(\PageIndex{54}\)

1. \(\sqrt[5]{u^5}\)
2. \(\sqrt[8]{v^8}\)

Example \(\PageIndex{55}\)

1. \(\sqrt[3]{a^3}\)

Answer

1. \(a\)
2. \(|b|\)

Example \(\PageIndex{56}\)

1. \(\sqrt[4]{y^4}\)
2. \(\sqrt[7]{m^7}\)

Example \(\PageIndex{57}\)

1. \(\sqrt[8]{k^8}\)
2. \(\sqrt[6]{p^6}\)

Answer

1. \(|k|\)
2. \(|p|\)

Example \(\PageIndex{58}\)

1. \(\sqrt[3]{x^9}\)
2. \(\sqrt[4]{y^{12}}\)

Example \(\PageIndex{59}\)

1. \(\sqrt[5]{a^{10}}\)
2. \(\sqrt[3]{b^{27}}\)

Answer

1. \(a^2\)
2. \(b^9\)

Example \(\PageIndex{60}\)

1. \(\sqrt[4]{m^8}\)
2. \(\sqrt[5]{n^{20}}\)

Example \(\PageIndex{61}\)

1. \(\sqrt[6]{r^{12}}\)
2. \(\sqrt[3]{s^{30}}\)

Answer

1. \(r^2\)
2. \(s^{10}\)

Example \(\PageIndex{62}\)

1. \(\sqrt[4]{16x^8}\)
2. \(\sqrt[6]{64y^{12}}\)

Example \(\PageIndex{63}\)

1. \(\sqrt[3]{-8c^9}\)
2. \(\sqrt[3]{125d^{15}}\)

Answer

1. \(-2c^3\)
2. \(5d^5\)

Example \(\PageIndex{64}\)

1. \(\sqrt[3]{216a^6}\)
2. \(\sqrt[5]{32b^{20}}\)

Example \(\PageIndex{65}\)

1. \(\sqrt[7]{128r^{14}}\)
2. \(\sqrt[4]{81s^{24}}\)

Answer

1. \(2r^2\)
2. \(3s^6\)

Use the Product Property to Simplify Expressions with Higher Roots

In the following exercises, simplify.

Example \(\PageIndex{66}\)

1. \(\sqrt[3]{r^5}\)
2. \(\sqrt[4]{s^{10}}\)

Example \(\PageIndex{67}\)

1. \(\sqrt[5]{u^7}\)
2. \(\sqrt[6]{v^{11}}\)

Answer

1. \(u\sqrt[5]{u^2}\)
2. \(v\sqrt[6]{v^5}\)

Example \(\PageIndex{68}\)

1. \(\sqrt[4]{m^5}\)
2. \(\sqrt[8]{n^{10}}\)

Example \(\PageIndex{69}\)

1. \(\sqrt[5]{p^8}\)
2. \(\sqrt[3]{q^8}\)

Answer

1. \(p\sqrt[5]{p^3}\)
2. \(q^2\sqrt[3]{q^2}\)

Example \(\PageIndex{70}\)

1. \(\sqrt[4]{32}\)
2. \(\sqrt[5]{64}\)

Example \(\PageIndex{71}\)

1. \(\sqrt[3]{625}\)
2. \(\sqrt[6]{128}\)

Answer

1. \(5\sqrt[3]{5}\)
2. \(2\sqrt[6]{2}\)

Example \(\PageIndex{72}\)

1. \(\sqrt[6]{64}\)
2. \(\sqrt[3]{256}\)

Example \(\PageIndex{73}\)

1. \(\sqrt[4]{3125}\)
2. \(\sqrt[3]{81}\)

Answer

1. \(5\sqrt[4]{5}\)
2. \(3\sqrt[3]{3}\)

Example \(\PageIndex{74}\)

1. \(\sqrt[3]{108x^5}\)
2. \(\sqrt[4]{48y^6}\)

Example \(\PageIndex{75}\)

1. \(\sqrt[5]{96a^7}\)
2. \(\sqrt[3]{375b^4}\)

Answer

1. \(2a\sqrt[5]{3a^2}\)
2. \(5b\sqrt[3]{3b}\)

Example \(\PageIndex{76}\)

1. \(\sqrt[4]{405m^{10}}\)
2. \(\sqrt[5]{160n^8}\)

Example \(\PageIndex{77}\)

1. \(\sqrt[3]{512p^5}\)
2. \(\sqrt[4]{324q^7}\)

Answer

1. \(8p\sqrt[3]{p^2}\)
2. \(3q\sqrt[4]{4q^3}\)

Example \(\PageIndex{78}\)

1. \(\sqrt[3]{-864}\)
2. \(\sqrt[4]{-256}\)

Example \(\PageIndex{79}\)

1. \(\sqrt[5]{-486}\)
2. \(\sqrt[6]{-64}\)

Answer

1. \(-3\sqrt[5]{2}\)
2. not a real number

Example \(\PageIndex{80}\)

1. \(\sqrt[5]{-32}\)
2. \(\sqrt[8]{-1}\)

Example \(\PageIndex{81}\)

1. \(\sqrt[3]{-8}\)
2. \(\sqrt[4]{-16}\)

Answer

1. −2
2. not a real number

Use the Quotient Property to Simplify Expressions with Higher Roots

In the following exercises, simplify.

Example \(\PageIndex{82}\)

1. \(\sqrt[3]{\frac{p^{11}}{p^2}}\)
2. \(\sqrt[4]{\frac{q^{17}}{q^{13}}}\)

Example \(\PageIndex{83}\)

1. \(\sqrt[5]{\frac{d^{12}}{d^7}}\)
2. \(\sqrt[8]{\frac{m^{12}}{m^4}}\)

Answer

1. \(d\)
2. \(|m|\)

Example \(\PageIndex{84}\)

1. \(\sqrt[5]{\frac{u^{21}}{u^{11}}}\)
2. \(\sqrt[6]{\frac{v^{30}}{v^{12}}}\)

Example \(\PageIndex{85}\)

1. \(\sqrt[3]{\frac{r^{14}}{r^5}}\)
2. \(\sqrt[4]{\frac{c^{21}}{c^9}}\)

Answer

1. \(r^3\)
2. \(|c^3|\)

Example \(\PageIndex{86}\)

1. \(\frac{\sqrt[4]{64}}{\sqrt[4]{2}}\)
2. \(\frac{\sqrt[5]{128x^8}}{\sqrt[5]{2x^2}}\)

Example \(\PageIndex{87}\)

1. \(\frac{\sqrt[3]{-625}}{\sqrt[3]{5}}\)
2. \(\frac{\sqrt[4]{80m^7}}{\sqrt[4]{5m}}\)

Answer

1. −5
2. \(2m\sqrt[4]{m^2}\)

Example \(\PageIndex{88}\)

1. \(\sqrt[3]{\frac{1050}{2}}\)
2. \(\sqrt[4]{\frac{486y^9}{2y^3}}\)

Example \(\PageIndex{89}\)

1. \(\sqrt[3]{\frac{162}{6}}\)
2. \(\sqrt[4]{\frac{160r^{10}}{5r^3}}\)

Answer

1. 3
2. \(2|r|\sqrt[4]{2r^3}\)

Example \(\PageIndex{90}\)

1. \(\sqrt[3]{\frac{54a^8}{b^3}}\)
2. \(\sqrt[4]{\frac{64c^5}{d^2}}\)

Example \(\PageIndex{91}\)

1. \(\sqrt[5]{\frac{96r^{11}}{s^{3}}}\)
2. \(\sqrt[6]{\frac{128u^7}{v^3}}\)

Answer

1. \(\frac{2r^2\sqrt[5]{3rs^2}}{s}\)
2. \(\frac{2u\sqrt[6]{2uv^3}}{v}\)

Example \(\PageIndex{92}\)

1. \(\sqrt[3]{\frac{81s^8}{t^3}}\)
2. \(\sqrt[4]{\frac{64p^{15}}{q^{12}}}\)

Example \(\PageIndex{93}\)

1. \(\sqrt[3]{\frac{625u^{10}}{v^3}}\)
2. \(\sqrt[4]{\frac{729c^{21}}{d^8}}\)

Answer

1. \(\frac{5u^3\sqrt[3]{5u}}{v}\)
2. \(\frac{3c^5\sqrt[4]{9c}}{d^2}\)

Add and Subtract Higher Roots

In the following exercises, simplify.

Example \(\PageIndex{94}\)

1. \(\sqrt[7]{8p}+\sqrt[7]{8p}\)
2. \(3\sqrt[3]{25}−\sqrt[3]{25}\)

Example \(\PageIndex{95}\)

1. \(\sqrt[3]{15q}+\sqrt[3]{15q}\)
2. \(2\sqrt[4]{27}−6\sqrt[4]{27}\)

Answer

1. \(2\sqrt[3]{15q}\)
2. \(−4\sqrt[4]{27}\)

Example \(\PageIndex{96}\)

1. \(3\sqrt[5]{9x}+7\sqrt[5]{9x}\)
2. \(8\sqrt[7]{3q}−2\sqrt[7]{3q}\)

Example \(\PageIndex{97}\)

1.

2.

Answer

1.

2.

Example \(\PageIndex{98}\)

1. \(\sqrt[3]{81}−\sqrt[3]{192}\)
2. \(\sqrt[4]{512}−\sqrt[4]{32}\)

Example \(\PageIndex{99}\)

1. \(\sqrt[3]{250}−\sqrt[3]{54}\)
2. \(\sqrt[4]{243}−\sqrt[4]{1875}\)

Answer

1. \(2\sqrt[3]{2}\)
2. \(−2\sqrt[4]{3}\)

Example \(\PageIndex{100}\)

1. \(\sqrt[3]{128}+\sqrt[3]{250}\)
2. \(\sqrt[5]{729}+\sqrt[5]{96}\)

Example \(\PageIndex{101}\)

1. \(\sqrt[4]{243}+\sqrt[4]{1250}\)
2. \(\sqrt[3]{2000}+\sqrt[3]{54}\)

Answer

1. \(3\sqrt[4]{3}+5\sqrt[4]{2}\)
2. \(13\sqrt[3]{2}\)

Example \(\PageIndex{102}\)

1. \(\sqrt[3]{64a^{10}}−\sqrt[3]{−216a^{12}}\)
2. \(\sqrt[4]{486u^7}+\sqrt[4]{768u^3}\)

Example \(\PageIndex{103}\)

1. \(\sqrt[3]{80b^5}−\sqrt[3]{−270b^3}\)
2. \(\sqrt[4]{160v^{10}}−\sqrt[4]{1280v^3}\)

Answer

1. \(2b\sqrt[3]{10b^2}+3b\sqrt[3]{10}\)
2. \(2v^2\sqrt[4]{10v^2}−4\sqrt[4]{5v^3}\)

Mixed Practice

In the following exercises, simplify.

Example \(\PageIndex{104}\)

\(\sqrt[4]{16}\)

Example \(\PageIndex{105}\)

\(\sqrt[6]{64}\)

Answer

2

Example \(\PageIndex{106}\)

\(\sqrt[3]{a^3}\)

Example \(\PageIndex{107}\)

Answer

\(|b|\)

Example \(\PageIndex{108}\)

\(\sqrt[3]{-8c^9}\)

Example \(\PageIndex{109}\)

\(\sqrt[3]{125d^{15}}\)

Answer

\(5d^5\)

Example \(\PageIndex{110}\)

\(\sqrt[3]{r^5}\)

Example \(\PageIndex{111}\)

\(\sqrt[4]{s^{10}}\)

Answer

\(s^2\sqrt[4]{s^2}\)

Example \(\PageIndex{112}\)

\(\sqrt[3]{108x^5}\)

Example \(\PageIndex{113}\)

\(\sqrt[4]{48y^6}\)

Answer

\(2|y|\sqrt[4]{3y^2}\)

Example \(\PageIndex{114}\)

\(\sqrt[5]{-486}\)

Example \(\PageIndex{115}\)

\(\sqrt[6]{-64}\)

Answer

not a real number

Example \(\PageIndex{116}\)

\(\frac{\sqrt[4]{64}}{\sqrt[4]{2}}\)

Example \(\PageIndex{117}\)

\(\frac{\sqrt[5]{128x^8}}{\sqrt[5]{2x^2}}\)

Answer

\(2x\sqrt[5]{2x}\)

Example \(\PageIndex{118}\)

\(\sqrt[5]{\frac{96r^{11}}{s^3}}\)

Example \(\PageIndex{119}\)

\(\sqrt[6]{\frac{128u^7}{v^3}}\)

Answer

\(\frac{2u\sqrt[6]{2uv^3}}{v}\)

Example \(\PageIndex{120}\)

\(\sqrt[3]{81}−\sqrt[3]{192}\)

Example \(\PageIndex{121}\)

\(\sqrt[4]{512}−\sqrt[4]{32}\)

Answer

\(2\sqrt[4]{2}\)

Example \(\PageIndex{122}\)

\(\sqrt[3]{64a^{10}}−\sqrt[3]{−216a^{12}}\)

Example \(\PageIndex{123}\)

\(\sqrt[4]{486u^7}+\sqrt[4]{768u^3}\)

Answer

\(3u\sqrt[4]{6u^3}+4\sqrt[4]{3u^3}\)

## Everyday Math

Example \(\PageIndex{124}\)

Population growth The expression \(10·x^n\) models the growth of a mold population after \(n\) generations. There were 10 spores at the start, and each had \(x\) offspring. So \(10·x^5\) is the number of offspring at the fifth generation. At the fifth generation there were 10,240 offspring. Simplify the expression \(\sqrt[5]{\frac{10,240}{10}}\) to determine the number of offspring of each spore.

Example \(\PageIndex{125}\)

Spread of a virus The expression \(3·x^n\) models the spread of a virus after \(n\) cycles. There were three people originally infected with the virus, and each of them infected \(x\) people. So \(3·x^4\) is the number of people infected on the fourth cycle. At the fourth cycle 1875 people were infected. Simplify the expression \(\sqrt[4]{\frac{1875}{3}}\) to determine the number of people each person infected.

Answer

5

## Writing Exercises

Example \(\PageIndex{126}\)

Explain how you know that \(\sqrt[5]{x^{10}}=x^2\).

Example \(\PageIndex{127}\)

Explain why \(\sqrt[4]{-64}\) is not a real number but \(\sqrt[3]{-64}\) is.

## Self Check

ⓐ After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section.

ⓑ What does this checklist tell you about your mastery of this section? What steps will you take to improve?

In the game data, rooms and doors are designated as follows (a 3×3 grid of rooms; each door is named after an adjacent room plus the compass direction it faces from that room):

```
          2n
    1  1e  2  2e  3
    1s     2s     3s
4w  4  4e  5  5e  6  6e
    4s     5s     6s
    7  7e  8  8e  9
          8s
```

Initially, all doors are closed except the outer ones - 2n, 4w, 6e and 8s.

The computers open and close the following doors:

| # | Open | Close |
|---|------|-------|
| 1 | 1s 2s 3s | 1e 2e 7e |
| 2 | 1e 2e 4e 7e | 2n 4w 6e 8s |
| 3 | 3s 4e 4s | 3s 4w 4e 6e |
| 4 | 4w 4e 5e 6e | 1s 2s 3s 4s 5s 6s |
| 5 | 3s 4w 5e | 1s 2s 4e |
| 6 | 2s 4w 5e 6e | 3s 4e 4s 5s 6s |
| 7 | 4w 5s 7e | 1s 2s 5e |
| 8 | 6s 6e 8e | 3s 5s 5e 7e |
| 9 | 2n 4w 8s 6e | None |

Computer 3's logic both opens and closes 3s and 4e. In practice, this means it toggles those two doors every time it is used, making it the only computer that has an effect when used repeatedly.
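The door logic can be sketched as a small simulator (a hypothetical reconstruction, not the game's actual script; the rule that a door listed in both columns gets toggled is inferred from computer 3's observed behavior):

```python
def use_computer(open_doors, opens, closes):
    """Apply one computer's effect to the set of currently open doors."""
    for door in opens | closes:
        if door in opens and door in closes:
            # Listed in both columns: the game toggles the door.
            if door in open_doors:
                open_doors.discard(door)
            else:
                open_doors.add(door)
        elif door in opens:
            open_doors.add(door)
        else:
            open_doors.discard(door)
    return open_doors

# Initially, only the outer doors are open.
doors = {"2n", "4w", "6e", "8s"}
# Computer 3 (opens 3s 4e 4s, closes 3s 4w 4e 6e) toggles 3s and 4e:
use_computer(doors, {"3s", "4e", "4s"}, {"3s", "4w", "4e", "6e"})
print(sorted(doors))  # 3s and 4e are now open; using it again closes them
```

Running the same call twice shows why only computer 3 matters when used repeatedly: the doors it only opens or only closes reach a fixed state, while 3s and 4e flip each time.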

### Quick solution

The fastest sequence to get through is 2-3-1-7-[visit left side room]-8-9-[visit right side room].

Optionally, after that, 6-2-3-3-9 opens up a complete route through the maze and access to all rooms.

2500 XP is granted upon exiting the map.

### Analysis

• 2 is the only computer accessible initially. It starts the puzzle by closing the maze exit doors (2n and 8s), which only computer 9 reopens, thus trapping the character in the area until they solve the puzzle.
• Likewise, 9, which opens the outer doors and closes none, is the final step.
• 3 is the only computer that grants access to the 3rd row from the rows above it, via 4s (and, incidentally, to room 4).
• 4 grants access to both side rooms but closes all inter-row doors, forcing one to start over afterward.

### Easter Egg

It is possible to repair one of the computers (randomly selected) with a successful Science check at a -20 modifier. Success results in obtaining a reward based on one's passive and diplomatic skills (internally, this is referred to as finding an "easter egg", but it has nothing to do with the eponymous item). If a skill is tagged and over 99%, the reward will be (in this order):

• Opens all the doors in the maze
• Restores power if the wires are cut, so they can be cut again for 1000 XP; 1 magic 8-ball
• 1 pulse rifle, 10 microfusion cells
• $2000-$5000, 1 pack of marked cards, 1 loaded dice
• 10-20 plasma grenades
• 1 Psycho, 1 Buffout, 1 first aid kit, 1 Mentats, 1 stimpak
• 2-4 Buffout, 1 medical supplies, 1 poison, 2-4 jet, 2-4 super stimpaks, 2-4 Psycho, 1-2 doctor's bags, 2-4 stimpaks
• 2-4 plastic explosives, 2-4 dynamite, 2-4 electronic lock picks Mk II, 1 motion sensor
• 1 alien blaster, 30 small energy cells

### Electrified floor

The electrified floor does 20-40 points of electrical damage every 10 seconds. Surprisingly, rubber boots do not help here.

In room 9, some wires stick out from the wall. They can be cut with the multipurpose tool's pliers by passing a Repair skill check at a -30 modifier. That stops the damage and grants 1000 XP.

## The OpenSSL CLI (command line interpreter)

1. To prove the underlying routines and libraries work properly (the developers of OpenSSL use this feature all the time).
2. To create certificate signing requests (and associated keys), self-signed certificates, etc.
3. To test client-server connectivity:
   1. Can you connect at all?
   2. Is there an SSL handshake after you connect?
   3. Is there something wrong with the far-end server certificate?
   4. Is the far-end server requesting a client certificate? This is usually optional, but it would be an error if you have not configured one.
   5. Are near-end certificates usable?
   6. Is your client trusted-certificate chain usable?
   7. Is a cipher missing?

Two more points in favor of the CLI:

1. Browsers hide error information, presenting only an error icon. Using the OpenSSL CLI as a test tool will present details so you can begin failure-mode analysis.
2. Modern OpenVMS servers have no modern browsers, so the OpenSSL CLI will be the only tool to get you out of most jams.

P.S. One of our systems sits behind a firewall with the server's IP address gen'd into a firewall ACL (Access Control List). If we wanted to use a PC to debug some weird connectivity problem, we would need to unplug the server from the network and replace it with a properly configured PC. Since we run 24/7, this is not possible.

Some things to try:

• Use s_client to connect to Google or Twitter (they won't mind).
• Use s_client to connect to your Apache web server.
• Hacker Heaven: connect any two platforms end-to-end, one running s_server and the other running s_client.
• If you are on a really tight budget, run the client and server sessions from two different command-line sessions on the same computer system.
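As a concrete starting point, these are the kinds of command lines the bullets above describe (the host name and the `cert.pem`/`key.pem` file names are placeholders; the flags shown are standard `s_client`/`s_server` options):

```shell
# Probe a public HTTPS server; -servername sets SNI for virtual hosts.
openssl s_client -connect www.google.com:443 -servername www.google.com

# Tight-budget lab: two sessions on one machine.
# Session 1 -- a test TLS server on port 4433 (bring your own test cert):
openssl s_server -accept 4433 -cert cert.pem -key key.pem

# Session 2 -- point s_client at it and watch the handshake details:
openssl s_client -connect localhost:4433
```

When the handshake fails, s_client prints the verification errors and the certificate chain that a browser would hide behind an icon.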

### Official OpenSSL Documentation

Apps and Tools

### HP OpenSSL Documentation for OpenVMS

#### Example 1: OpenSSL introductory stuff

Like most security software, OpenSSL can seem deliberately unfriendly: it does not include verbose help. You must read the official CLI docs, or type something illegal and hope the resulting error message points the way.

## How it works

Like the other solutions mentioned, my solution is based on continued fractions. Other solutions, like the one from Eppstein or those based on repeating decimals, proved to be slower and/or gave suboptimal results.

Continued fractions
Solutions based on continued fractions mostly build on two algorithms, both described in an article by Ian Richards published here in 1981. He called them the “slow continued fraction algorithm” and the “fast continued fraction algorithm”. The first is known as the Stern-Brocot algorithm, while the latter is known as Richards’ algorithm.

My algorithm (short explanation)
To fully understand my algorithm, you need to have read the article by Ian Richards, or at least to understand what a Farey pair is. Furthermore, read the algorithm with comments at the end of this article.

The algorithm uses a Farey pair, containing a left and a right fraction. By repeatedly taking the mediant, it closes in on the target value. This works just like the slow algorithm, but with two major differences:

1. Multiple iterations are performed at once, as long as the mediant stays on one side of the target value.
2. The left and right fractions cannot come closer to the target value than the given accuracy.

The right and left sides of the target value are checked alternately. If the algorithm cannot produce a result closer to the target value, the process ends. The resulting mediant is the optimal solution.
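To make the idea concrete, here is a minimal sketch (not my optimized routine) of the fast, Richards-style approach in its recursive form: find the smallest-denominator fraction inside the interval [target − accuracy, target + accuracy]:

```python
from fractions import Fraction
import math

def simplest_between(lo: Fraction, hi: Fraction) -> Fraction:
    """Smallest-denominator fraction in the closed interval [lo, hi]."""
    n = math.ceil(lo)
    if n <= hi:
        return Fraction(n)          # an integer lies in the interval
    a = math.floor(lo)              # lo and hi share the same integer part
    # Recurse on the reciprocals of the fractional parts (the interval flips),
    # peeling off one continued-fraction term per call.
    return a + 1 / simplest_between(1 / (hi - a), 1 / (lo - a))

def approximate(target: Fraction, accuracy: Fraction) -> Fraction:
    """Optimal fraction no further than `accuracy` from `target`."""
    return simplest_between(target - accuracy, target + accuracy)

print(approximate(Fraction(314159, 100000), Fraction(1, 100000)))  # 355/113
```

Each recursion step corresponds to the batched mediant steps described above: instead of taking mediants one at a time, the integer part of the interval consumes all the steps that stay on one side of the target.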

## Current challenges and future directions

In recent years, there has been enormous interest in radiomic modeling of tumor immune biology and immunotherapy response. Imaging features have been associated with similar immune phenotypes across studies and cancer types (Fig. 2, Fig. 3, Fig. 4). These associations involved not only first-order, shape, and size features but higher-order texture features as well. For example, sphericity (roundness) and sharper borders positively associated with immune activity both at a gene expression level and on histology [55], [63], and predicted response to ICI. Of note, while first-order features recapitulated well across cancers (Fig. 2), texture feature associations did not demonstrate similar consistency. For example, measures of heterogeneity (GLCM information measure of correlation 1, GLRLM run length nonuniformity, GLSZM gray level nonuniformity) associated negatively with expression of immune gene signatures in head and neck cancer [73], whereas the opposite association appeared to be true of breast cancer and glioma [53], [62] (Fig. 3). While this may be due to tumor type-specific biological differences when measuring more complex, textural qualities of tumor images, it also raises concerns about the reproducibility and biological relevance of the calculated textures themselves. In general, PET feature associations were consistent between studies both within and between cancers (Fig. 4). FDG uptake features like SUVmax associated positively with expression of PD-L1 in esophageal [77] and lung cancer [29], [30], [34], [36], [40]. SUVmax, MTV, and TLG associated with ICI response and survival outcomes in melanoma [70], [71] and lung cancer [30], [34], [35], [42]. Furthermore, many of the radiomic features and models discussed were independent predictive variables for response to immunotherapy and outperformed established biomarkers like PD-L1 [42], [43]. Overall, radiogenomics studies have yielded many promising results.
However, there are numerous issues that the field faces. Whether radiogenomic models will be effective independently or only as an adjunct to histology and genomic profiling will depend on how these challenges, as discussed below, are met in the future.

### Tumor cohort size and validation

Among the major limiting factors for radiogenomic associations and modeling are the size of tumor cohorts and the lack of validation. Only 11 of 42 studies of radiogenomic associations and 8 of 18 studies of imaging feature associations with immunotherapy response had validation cohorts ( Fig. 5 ). Median cohort sizes were 56 (IQR: 42.25�.5) and 84 (IQR: 58.5�.5) for primary and validation cohorts respectively for radiogenomic studies. Median cohort sizes were 56.5 (IQR: 36.75�.5) and 79.5 (IQR: 23.25�.75) for primary and validation cohorts respectively for immunotherapy response studies. Smaller study sizes and lack of external validation increased the risk that many of the reported associations were either false positives or relevant only to the training cohort (poor generalizability). The lack of overlap between most associations analyzed in these studies may be a consequence not only of the limited number of studies specifically focused on immune phenotypes but also the lack of effort to reproduce findings of existing studies (both internally and by outside groups). Issues of reproducibility may improve over time with better access to data via shared resources like the Imaging Data Commons as well as a community effort to make validation a standard part of radiogenomic studies.

Size distributions of primary and validation cohorts for studies reporting radiogenomic associations and associations between imaging features and immunotherapy response.

### Biomarker selection

The types of immune phenotypes associated with radiomic features should also be scrutinized carefully. While many of these associations will help elucidate the biological basis for otherwise complex imaging phenotypes, it is unclear how to interpret certain associations. For instance, the outputs of several studies were radiogenomic heatmaps, in which modules represented associations between subsets of features and genes. In these cases, it was uncertain which modular features were most robust and driving the associations, hence more likely to be validated. Furthermore, not all immune phenotypes have a pre-defined clinical significance in terms of prognosis and response prediction. Even widely validated biomarkers like PD-L1 do not always predict response to immune checkpoint inhibition [89]. Therefore, while radiomic models can predict TIL status and expression of important immune genes and proteins, they might be better used as direct biomarkers [42]. Rather than being limited to use as proxies of existing molecular markers, the future of radiogenomic modeling may lie in its longitudinal applications.

Currently, few biomarkers exist to measure the temporal response of tumors to immunotherapy. Recent studies have demonstrated the importance of radiomic measurements before and after treatment in predicting response to immunotherapy, specifically for checkpoint inhibition and dendritic cell therapy [34], [42], [43], [54]. Changes to FDG uptake features, rADCmin, and perinodular Gabor were shown to be associated with response and prognosis, and ought to be validated in future studies. Delta radiomic features have also provided new insights into pseudoprogression [69]. Moreover, interval radiomic biomarkers have unexplored utility for understanding therapeutic resistance. In tumors that are recalcitrant to therapy, changes to the tumor and surrounding TME may underlie potential mechanisms of resistance. Few studies have captured the biology of these changes as they are occurring, as this would typically entail repeated invasive procedures like tumor biopsies. Non-invasive imaging taken sequentially during the course of treatment may help identify radiomic biomarkers of resistance early on, thereby allowing for real-time decision support and changes to clinical management.

### Feature reproducibility

Feature reproducibility has been an ongoing concern in radiomics. For semantic features like “necrosis” and “edema,” variability of subjective assessment can result in opposing observations. For example, in GBM, “necrosis” was reported to be both positively [46] and negatively [38] correlated with expression of immune pathways. The automation of radiomic feature extraction bypasses concerns of clinician-reader bias but is not without issues. Variation in image acquisition parameters and pre-processing techniques has been shown to have a significant effect on imaging feature calculations, not only affecting their reproducibility but also making the features inconsistent within a single dataset. Where appropriate, methods for intensity standardization, such as referencing of healthy tissue or more robust methods based on statistical learning, should be considered [90].

Subsequently, after image acquisition and processing, there are two major sources of bias that can lead to a lack of uniform reproducibility in radiomic feature extraction. The first is the implementation of the mathematical feature definitions used to quantify information within ROIs. Concerns have been raised regarding the actual implementation even when the same feature definitions were used [91], which has been evident amongst open source frameworks [92], [93], [94], [95], [96]. Efforts like MITK Phenotyping [97] and the Image Biomarker Standardization Initiative [91] have been developed to standardize test data and centralize these platforms, showing promise for unified feature extraction. The second, and arguably larger, source of radiomic feature bias arises from manual tumor segmentations. While manual contours are often performed by individuals with expert domain knowledge and are often seen as gold standards for radiomic analysis, they are nonetheless prone to inter- and intra-observer variability [98]. Moreover, tumors can be notoriously difficult to contour due to their unclear borders. This bias should be kept in consideration during radiogenomic analysis interpretation [99], [100]. Ongoing efforts to auto-segment tumors may alleviate these issues [101], but there are concerns over the accuracy of these techniques.

Recently, there has been growing support for deep learning approaches that mitigate many of the aforementioned biases and reproducibility issues [102]. Deep learning allows the algorithm to define its own features instead of relying on pre-defined ones, which can obviate the need for manual segmentation. However, a major drawback of these deep learning methods is their need for copious training data. Additionally, the underlying meaning of the self-defined features derived from deep learning algorithms is unclear and the subject of active investigation [103], [104]. Several approaches attempt to bridge the gap between traditional radiomics and deep learning [105], and these may be important studies for the future of radiogenomic approaches that integrate deep learning algorithms into their workflows.

### Data analysis and modeling

Current use and integration of large scale “omics” data is primarily retrospective and aimed at hypothesis generation. This underscores the importance of taking a rigorous approach to exploratory analysis and model generation. Some of the goals are to reduce user bias and also to mitigate issues of multiple hypothesis testing (including p-hacking) and overfitting. A study by Chalkidou et al. demonstrated an alarming number of published radiomic studies at the time had high type 1 error probabilities and did not reach statistical significance [106]. Therefore, appropriate statistical methodology, such as correcting for multiple comparisons, is crucial in radiomic analysis. Feature selection is also a key issue for model generation, as radiomics often leads to the creation of high-dimensional feature spaces that can lead to overfitting. Therefore, proper dimensionality reduction techniques should be employed [107].
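For instance, the multiple-comparison correction recommended above can be as simple as a Benjamini-Hochberg pass over the per-feature p-values (a generic sketch, not tied to any specific study discussed here):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Flag p-values significant at false-discovery rate q (BH procedure)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * q ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k_max = rank
    # ... then every p-value at or below that rank is declared significant.
    significant = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            significant[i] = True
    return significant
```

With hundreds of candidate radiomic features, an uncorrected 0.05 threshold alone would be expected to produce many false positives; controlling the false-discovery rate is one standard remedy.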

An increasing number of radiomic and radiogenomic studies are utilizing machine learning in their workflows. While machine learning avoids certain errors common to conventional statistics, it is prone to its own pitfalls. This is particularly true when considering relatively small data sets, on the order of hundreds of training samples or smaller, which forms the bulk of current imaging studies. Improper use of training datasets and data leakage from training to evaluation datasets are factors that have led to overly optimistic results in many studies. It is crucial to use hold-out data sets for evaluation of models and to prevent any data set leakage. Finally, it is highly encouraged to use validation sets of data that are independent from the training set and ideally from multi-institutional sources so as to maximize model generalizability.

As described previously, there is considerable optimism regarding the advent of deep learning in quantitative imaging. Deep learning has made significant strides in the field of genomics and radiomics independently, so it is natural to believe an artificial intelligence framework rooted in deep learning will play a considerable role in radiogenomics as well. However, the key issues of interpretability and overall model transparency have been at the forefront of whether these techniques are suitable for clinical implementation. Fortunately, the deep learning interpretability issue is becoming an increasingly studied problem domain [108]. Solutions can include the analysis of the algorithm itself [109], as well as utilization of novel algorithmic structures that inherently lend themselves to higher levels of interpretability [110]. Therefore, future radiogenomic studies that implement deep learning in their analysis should keep these interpretability considerations in mind.

### Building composite models and data integration

One of the key takeaways from current radiogenomic studies is the value of composite models, which combine radiomic models with other covariates, such as clinical and molecular features. Currently, composite models utilizing imaging data are limited by data completeness. Certain efforts have been directed at standardizing multi-omic and imaging data, including the development of public data repositories through the support of the NCI Moonshot program. The Imaging Analysis Working Group has compiled high-resolution hematoxylin and eosin (H&E) imaging to allow for quantization of lymphocytic infiltration patterns across 13 of 33 cancer types in TCGA [111]. Progress in machine learning in digital tumor pathology is being accompanied by advances in multiplexed IHC as well as immunostaining approaches like mass cytometry (CyTOF), which provide in-depth spatial characterization of tumor immune composition.

Building better composite models requires integrating the next generation of multi-omic data. In areas where genomic and transcriptomic data do not sufficiently capture tumor biology by linking genotype to tumor phenotype, these next generation “omic” approaches frequently provide better insight [112], [113], [114], [115]. The integration of these large, disparate data sources will necessitate better software and workflow management. Platforms like MultiAssayExperiment have been developed with the express purpose of providing data objects and structures for this type of integrative analysis [116].

### Data transparency, reporting and best practices

To foster better study design and data transparency, there has been a recent trend toward pre-registering studies in databases such as the Open Science Framework (OSF) to promote data use integrity and encourage standard practices [117]. Additionally, the utilization of curated public repositories for open-sourcing of computational analysis tools (e.g. GitHub) and datasets (e.g. NCI Cancer Research Data Commons) [118] will further foster transparency and reproducibility of radiogenomic studies.

Radiogenomics has much to gain in consolidating and standardizing methods of analysis in an effort to accurately compare studies, as has been previously described for quantitative imaging biomarkers [119]. Success in this arena will depend on developing good practice guidelines. Some of the factors that ought to be considered as part of good practices have been discussed previously and agglomerated into a radiomics quality score (RQS) [1], [4]. The reporting of radiogenomic studies aimed at model development and validation can benefit from broader guidelines, such as those recommended by the TRIPOD group [120].

Moving forward, the utility of radiomic features and radiogenomic models with regards to tumor immunology will continue to be twofold: 1) predicting response to immunotherapy and 2) comprehending and modeling immune biology. Based on these considerations and the major challenges discussed above, our recommendations for best practice guidelines for future studies are summarized in ( Table 3 ) [1], [120], [121], [122].

### Table 3

Recommendations for conducting and reporting studies that investigate radiogenomic associations with tumor immune phenotypes.

| Process | Considerations | Recommendations |
|---|---|---|
| Study design | Study registration | Pre-register studies in databases such as the Open Science Framework (OSF) |
| | Cohort selection | Focus on specific molecular subtypes or subclasses of cancers may enable more accurate radiogenomic models. Meta-analysis of multiple cohorts can be used to achieve more generalizable models |
| | Study design | Prospective study design to enable longitudinal feature assessment may be ideal for generating models to predict immunotherapy response and identify biomarkers of resistance. For retrospective study design, statistical and modeling approaches should be decided a priori |
| Evaluating molecular data | Tumor and TME gene expression data procurement and processing | RNA-seq for assessing gene expression; refer to Conesa et al. 2016 for a review of good data practices [121]. RNA-seq may eventually be supplanted by single-cell RNA-seq, which can improve the ability to distinguish tumor versus immune cell gene expression |
| | Pathway and immune infiltration analysis | Software like Gene Set Enrichment Analysis (GSEA), Ingenuity Pathway Analysis, DAVID, and Metascape are standard for pathway enrichment analysis. Approaches including single sample GSEA (ssGSEA), CIBERSORT, and Immunoscore are useful for more specific quantification of types of tumor immune cell infiltration |
| | Cell markers by IHC | Specific staining of cell surface markers remains the gold standard for quantifying immune cell infiltration. To increase staining throughput, consider using tissue microarrays and multiplexed IHC |
| | Quantifying TILs by H&E | H&E allows for good quantitation of TILs, but is often subject to clinician-reader bias. Best clinical practices are outlined in Salgado et al. 2015 [122] |
| Image acquisition, processing, and extraction | Image acquisition parameters | Use standardized acquisition parameters |
| | Image pre-processing | Normalize voxel intensities of images, particularly MRI, to more accurately and reproducibly extract features |
| | Feature definition and extraction | Use feature standardization platforms, such as MITK Phenotyping and the Image Biomarker Standardization Initiative |
| | Tumor segmentation | Use multiple independent observers if segmenting manually, or consider semi-automatic/automatic approaches to maximize reproducibility |
| | Deep learning | Utilize algorithm visualization methodology, such as saliency maps, to increase interpretability/explainability/transparency |
| Modeling and data analysis | Feature selection | Reduce feature dimensionality, such as through regression modeling (e.g. LASSO Cox, Elastic Net) or intra-class feature similarity measures (e.g. intra-class correlation coefficient), to prevent overfitting and improve feature reliability |
| | Model design | Best performing models for predicting prognosis and immunotherapy response are likely achieved by combining radiogenomic models with other covariates into composite models. Correct for multiple hypothesis testing where appropriate |
| | Machine learning | Use hold-out data sets for evaluation of models and to prevent any data leakage from training to evaluation sets. Validate on data that are independent from the training set and ideally from multi-institutional sources |
| Data transparency and reporting | Public data and code repositories | Share code in open-source repositories like GitHub. Share imaging data in public repositories like the Imaging Data Commons (IDC) |
| | Radiomics quality score (RQS) | Report the RQS score (out of 36) developed by Sanduleanu et al. 2018 [1] |
| | Study reporting checklists | Use the TRIPOD 22-item checklist for model development and validation [120] |

Legend: DAVID: database for annotation, visualization, and integrated discovery; IHC: immunohistochemistry; TIL: tumor-infiltrating lymphocyte; H&E: hematoxylin and eosin; MITK: medical imaging interaction toolkit; LASSO: least absolute shrinkage and selection operator; TRIPOD: Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis.

## Results

### Discovery meta-analysis

A Q-Q plot of P values from the fixed-effects meta-analysis of study-specific, adjusted logistic regression model results shows the rank-ordered observed −log10(P value) plotted against the rank-ordered expected −log10(P value) (Figure 3, Panel A). The P values above the diagonal line in the upper-right quadrant of the plot demonstrate that there are a number of SNPs with associations more statistically significant than expected by chance alone, assuming a uniform distribution of P values. Further, the associated genomic control lambda value of 1.033 suggests that the selected covariates and PCs provide reasonable control for population stratification. The Manhattan plot displays summary results from the meta-analysis by chromosomal position and highlights a peak on chromosome 4 with six variants in tight LD (pairwise R² ≥ 0.975 for five SNPs in 1000GP June 2011 Europeans) reaching genome-wide significance at P < 5 × 10⁻⁸ (Figure 3, Panel B).

rs17042479 was the SNP with the most statistically significant P value on chromosome 4 (risk allele = G; OR per risk allele = 1.67; P = 1.5 × 10⁻⁸). The directions of effect for MECC and CFR were consistent, with CFR exhibiting a slightly attenuated effect (Table II). Study-specific estimates demonstrate that the result was not heavily driven by either MECC or CFR findings, and the average minor allele frequency across studies was approximately 9%. This SNP was directly measured in the MECC discovery samples, CFR Set 1, and MECC replication samples (imputed only in CFR Set 2). The SNP is located approximately 800 kb from the gene FSTL5 (follistatin-like 5) and approximately 720 kb downstream of NAF1 [nuclear assembly factor 1 homolog (Saccharomyces cerevisiae)]. The complete list of six discovery-stage genome-wide significant association findings (P ≤ 5 × 10⁻⁸) between effect allele dosage and CRC status, visually indicated by the inflated tail of observed −log10(P values) in the Q-Q plot (Figure 3, Panel A) and as SNPs above the blue line in the Manhattan plot (Figure 3, Panel B), is summarized in Table II. Further, we also demonstrated that 14 out of 29 previously identified CRC risk alleles that were imputed with high quality and analyzed in this meta-analysis had nominally significant associations with P < 0.05 (Supplementary Table 2, available at Carcinogenesis Online). Twenty-six out of 29 known susceptibility markers had a risk allele and direction of effect consistent with the previously published result. The most statistically significant risk locus was located at chromosomal region 8q24, as described from the same source population (9).

Summary of genetic variants with combined P < 5 × 10⁻⁸ in the combined discovery (MECC + CFR) + replication (MECC) meta-analysis

| SNP | Chromosome | Position | Average freq | Effect allele | Alt allele | MECC discovery OR (SEᵃ) | CFR discovery OR (SEᵃ) | Discovery OR (SEᵃ) | Discovery P | Replication OR (SEᵃ) | Replication P | Meta OR (SEᵃ) | Meta P | I² statistic |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| rs17042479ᵇ | 4 | 163325411 | 0.09 | G | A | 2.08 (0.22) | 1.59 (0.10) | 1.67 (0.09) | 1.5E-08 | 1.25 (0.12) | 6.6E-02 | 1.52 (0.07) | 1.7E-08 | 57.5 |
| rs79783178ᶜ | 4 | 163325957 | 0.09 | C | CAT | 2.08 (0.22) | 1.59 (0.10) | 1.67 (0.09) | 1.8E-08 | 1.25 (0.12) | 6.8E-02 | 1.52 (0.07) | 2.0E-08 | 57.3 |
| rs35509282ᶜ | 4 | 163333405 | 0.09 | A | T | 1.89 (0.22) | 1.61 (0.10) | 1.67 (0.09) | 3.0E-08 | 1.32 (0.13) | 3.3E-02 | 1.53 (0.07) | 8.2E-09 | 12.2 |
| rs9998942ᶜ | 4 | 163340404 | 0.09 | T | C | 1.82 (0.22) | 1.61 (0.10) | 1.64 (0.09) | 3.9E-08 | 1.32 (0.13) | 3.1E-02 | 1.53 (0.07) | 9.7E-09 | 0.7 |
| rs57336275ᶜ | 4 | 163341215 | 0.09 | C | T | 1.82 (0.21) | 1.61 (0.10) | 1.64 (0.09) | 4.2E-08 | 1.32 (0.13) | 3.5E-02 | 1.54 (0.07) | 1.9E-08 | 3.2 |
| rs11736440ᶜ | 4 | 163336693 | 0.09 | A | G | 1.85 (0.22) | 1.61 (0.10) | 1.65 (0.09) | 4.5E-08 | 1.33 (0.13) | 2.5E-02 | 1.53 (0.07) | 8.3E-09 | 0.0 |

All variants fall upstream of FSTL5 and represent a newly identified genetic susceptibility locus for colorectal cancer.

ᵃ SE, standard error of the beta estimate.

ᵇ Directly genotyped in the MECC discovery samples, CFR Set 1, and MECC replication samples (imputed only in CFR Set 2 with info = 0.975).

ᶜ Imputed with imputation quality info score ≥ 0.976 in all sample sets.

### Replication and joint meta-analysis

The six genetic markers from the MECC + CFR discovery meta-analysis with P < 5 × 10⁻⁸ were carried forward into this stage. Four out of six variants replicated in the independent set of MECC samples with P < 0.05 (Table II). Further, the combined meta-analysis of MECC discovery and CFR discovery samples together with MECC replication samples demonstrated that the region on chromosome 4q32.2 remains statistically significant at a genome-wide threshold. In the combined analysis, rs35509282 was the most strongly associated meta-analysis finding (risk allele = A; OR per risk allele = 1.53; P = 8.2 × 10⁻⁹), with the MECC replication-specific result consistent in direction, with a P value of 0.033 (Table II). All findings with P < 5 × 10⁻⁸ were located within this same region on chromosome 4, and the OR estimates and average allele frequencies indicate strong LD among all top SNPs. Because several associated SNPs on chromosome 4 reached genome-wide significance upon replication and combined meta-analysis, we removed the discovery P value filter of < 5 × 10⁻⁸ and examined the combined three-study meta-analysis results in this chromosomal location. A regional LocusZoom plot summarizes the fine mapping achievable via 1000GP imputation. The association finding at 4q32.2 localizes to an approximately 250 kb region upstream of FSTL5 (Figure 4).

LocusZoom plot of regional association results for the novel 4q32.2 genome-wide significant locus (rs17042479 ± 1 Mb). The x-axis represents chromosomal position, and the y-axis shows the −log10(P value) from the meta-analysis of MECC discovery + CFR discovery + MECC replication. Each circle represents one SNP's association with CRC. Purple = index SNP. Correlation (r²) between the index SNP and each other SNP was calculated based on 1000GP Phase I March 2012 European samples.

We also conducted colon- and rectum-specific analyses for our top findings. The combined discovery-replication meta-analysis effect sizes were comparable with the overall CRC ORs for both colon and rectum (data not shown). However, the sample sizes for rectal cancers were quite limited (MECC discovery: 456 colon, 29 rectum; CFR discovery: 1248 colon, 729 rectum; MECC replication: 793 colon, 296 rectum).

UPDATE 2021

I've modified the benchmark code as follows:

• std::chrono used for timing measurements instead of boost
• C++11 <random> used instead of rand()
• Avoid repeated operations that can get hoisted out. The base parameter is ever-changing.

I get the following results with GCC 10 -O2 (in seconds):

GCC 10 -O3 is almost identical to GCC 10 -O2.

Clang 12 -O3 is almost identical to Clang 12 -O2.

With Clang 12 -O2 -ffast-math:

Clang 12 -O3 -ffast-math is almost identical to Clang 12 -O2 -ffast-math.

Machine is Intel Core i7-7700K on Linux 5.4.0-73-generic x86_64.

• With GCC 10 (no -ffast-math), x*x*x is always faster
• With GCC 10 -O2 -ffast-math, std::pow is as fast as x*x*x for odd exponents
• With GCC 10 -O3 -ffast-math, std::pow is as fast as x*x*x for all test cases, and is around twice as fast as -O2.
• With GCC 10, C's pow(double, double) is always much slower
• With Clang 12 (no -ffast-math), x*x*x is faster for exponents greater than 2
• With Clang 12 -ffast-math, all methods produce similar results
• With Clang 12, pow(double, double) is as fast as std::pow for integral exponents
• Writing benchmarks without having the compiler outsmart you is hard.

I'll eventually get around to installing a more recent version of GCC on my machine and will update my results when I do so.

Here's the updated benchmark code:

I tested the performance difference between x*x*… and pow(x, i) for small i using this code:

Note that I accumulate the result of every pow calculation to make sure the compiler doesn't optimize it away.

If I use the std::pow(double, double) version, and loops = 1000000l , I get:

This is on an Intel Core Duo running Ubuntu 9.10 64-bit. Compiled using gcc 4.4.1 with -O2 optimization.

So in C, yes, x*x*x will be faster than pow(x, 3), because there is no pow(double, int) overload. In C++, it will be roughly the same. (Assuming the methodology in my testing is correct.)

This is in response to the comment made by An Markm:

Even if a using namespace std directive was issued, if the second parameter to pow is an int , then the std::pow(double, int) overload from <cmath> will be called instead of ::pow(double, double) from <math.h> .

This test code confirms that behavior:

That's the wrong kind of question. The right question would be: "Which one is easier to understand for human readers of my code?"

If speed matters (later), don't ask, but measure. (And before that, measure whether optimizing this actually will make any noticeable difference.) Until then, write the code so that it is easiest to read.

Edit
Just to make this clear (although it already should have been): breakthrough speedups usually come from things like using better algorithms, improving locality of data, reducing the use of dynamic memory, pre-computing results, etc. They rarely come from micro-optimizing single function calls, and where they do, they do so in very few places, which will only be found by careful (and time-consuming) profiling. More often than not, those places can only be sped up by doing very non-intuitive things (like inserting noop statements), and what's an optimization for one platform is sometimes a pessimization for another (which is why you need to measure instead of asking, because we don't fully know/have your environment).

Let me underline this again: Even in the few applications where such things matter, they don't matter in most places they're used, and it is very unlikely that you will find the places where they matter by looking at the code. You really do need to identify the hot spots first, because otherwise optimizing code is just a waste of time.

Even if a single operation (like computing the square of some value) takes up 10% of the application's execution time (which IME is quite rare), and even if optimizing it saves 50% of the time necessary for that operation (which IME is even much, much rarer), you still made the application take only 5% less time.
Your users will need a stopwatch to even notice that. (I guess in most cases anything under 20% speedup goes unnoticed by most users. And that means you would need to find four such spots.)

## An Introduction to the OpenSSL command line tool

OpenSSL is a C library that implements the main cryptographic operations, like symmetric encryption, public-key encryption, digital signatures, hash functions and so on. OpenSSL also, of course, implements the famous Secure Sockets Layer (SSL) protocol. OpenSSL is available for a wide variety of platforms. The source code can be downloaded from www.openssl.org. A Windows distribution can be found here. This tutorial shows some basic functionalities of the OpenSSL command line tool. After the installation has been completed you should be able to check for the version.

OpenSSL has got many commands. Here is the way to list them:
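The listing command is not preserved here; on OpenSSL 1.1 and later it is the following (older releases use `openssl list-standard-commands` instead):

```shell
# List the available OpenSSL subcommands (OpenSSL 1.1+ syntax).
openssl list -commands
```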

Let’s see a brief description of each command:

• ca To create certificate authorities.
• dgst To compute hash functions.
• enc To encrypt/decrypt using secret key algorithms. The secret key can be derived from a password or supplied directly from a file.
• genrsa To generate a public/private key pair for the RSA algorithm.
• pkcs12 Tools to manage information according to the PKCS #12 standard.
• pkcs7 Tools to manage information according to the PKCS #7 standard.
• rand Generation of pseudo-random bit strings.
• rsa RSA data management.
• rsautl To encrypt/decrypt or sign/verify signatures with RSA.
• verify Certificate verification for X.509.
• x509 X.509 certificate data management.

## 2  Secret key encryption algorithms

OpenSSL implements numerous secret key algorithms. To see the complete list:
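The command is missing here; on OpenSSL 1.1 and later the cipher commands usable with `enc` can be listed with (older releases use `openssl list-cipher-commands`):

```shell
# List the cipher commands available to `openssl enc` (OpenSSL 1.1+ syntax).
openssl list -cipher-commands
```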

The list contains the algorithm base64, which is a way to encode binary information with alphanumeric characters. It is not really a secret key algorithm, as there is no secret key! Let's see an example:
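The example itself is not preserved; a base64 round trip might look like this (the file name is illustrative):

```shell
# Encode a short message with base64 (no key involved) ...
echo "I love OpenSSL!" | openssl enc -base64 > message.b64
# ... and decode it again with -d.
openssl enc -base64 -d -in message.b64
```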

But indeed we really want to use a secret key algorithm to protect our information, don't we? So, if I want, for example, to encrypt the text "I love OpenSSL!" with the AES algorithm using CBC mode and a key of 256 bits, I simply write:
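The command itself is missing here; a sketch with illustrative file names (the -pass option supplies the password non-interactively; omit it to be prompted):

```shell
# Encrypt plain.txt with AES-256 in CBC mode; the key is derived from the
# password "hello" (insecure, as noted below -- for demonstration only).
echo "I love OpenSSL!" > plain.txt
openssl enc -aes-256-cbc -salt -in plain.txt -out encrypted.bin -pass pass:hello
```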

The 256-bit secret key is computed from the password. Note that of course the choice of the password "hello" is really INSECURE! Please take the time to choose a better password to protect your privacy! The output file encrypted.bin is binary. If I want to decrypt this file I write:
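A matching decryption sketch (same algorithm and password as the encryption step; -d switches `enc` to decryption):

```shell
# Decrypt encrypted.bin back to plain text with the same password.
openssl enc -aes-256-cbc -d -in encrypted.bin -out decrypted.txt -pass pass:hello
```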

## 3  Public Key Cryptography

To illustrate how OpenSSL manages public key algorithms we are going to use the famous RSA algorithm. Other algorithms exist of course, but the principle remains the same.

### 3.1  Key generation

First we need to generate a public/private key pair. In this example we create an RSA key pair of 1024 bits.
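The command is not shown here; it is presumably along these lines (1024 bits matches the tutorial; use 2048 bits or more in practice):

```shell
# Generate a 1024-bit RSA key pair and store it in key.pem.
openssl genrsa -out key.pem 1024
```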

The generated file contains both the public and the private key. Obviously the private key must be kept in a secure place, or better, must be encrypted. But first let's have a look at the file key.pem. The private key is encoded using the Privacy Enhanced Mail (PEM) standard.

The next command shows the details of the RSA key pair (the modulus and the public and private exponents, among others).
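The command referred to is presumably:

```shell
# Show the key's components in hex; -noout suppresses the base64 key block.
openssl rsa -in key.pem -text -noout
```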

The -noout option suppresses the display of the key in base64 format. Numbers in hexadecimal format can be seen (except the public exponent, which by default is always 65537): the modulus, the public exponent, the private exponent, the two primes that compose the modulus, and three other numbers that are used to optimize the algorithm.

So now it’s time to encrypt the private key:

The key file will be encrypted using a secret key algorithm whose secret key will be derived from a password provided by the user. In this example the secret key algorithm is triple DES (3-DES). The private key alone is not of much interest, as other users need the public key to be able to send you encrypted messages (or to check whether a piece of information has been signed by you). So let's extract the public key from the file key.pem.
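Sketches of the two steps just described (the file names enc_key.pem and pub_key.pem and the -passout password are illustrative; the tutorial's exact names are not preserved):

```shell
# Encrypt the private key with triple DES; -passout supplies the passphrase
# non-interactively (omit it to be prompted).
openssl rsa -in key.pem -des3 -out enc_key.pem -passout pass:mypassword
# Extract the public key, to be shared with other users.
openssl rsa -in key.pem -pubout -out pub_key.pem
```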

### 3.2  Encryption

We are now ready to perform encryption or produce digital signatures.

• input_file is the file to encrypt. This file must be no longer than 116 bytes (928 bits), because RSA is a block cipher and this is a low-level command, i.e. it does not do the work of cutting your text into pieces of 1024 bits (less, in fact, because a few bits are used for special purposes).
• key is the file that contains the public key. If this file contains only the public key (not both private and public), then the option -pubin must be used.
• output_file is the encrypted file.

To decrypt, simply replace -encrypt by -decrypt, and swap the input and output files, as for decryption the input is the encrypted text and the output the plain text.
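Putting the two directions together, a sketch with illustrative file names (assuming key.pem and pub_key.pem from the key-generation section above):

```shell
# Encrypt a short file (within the size limit discussed above) with the
# recipient's public key; -pubin marks the key file as public-only.
openssl rsautl -encrypt -in input.txt -pubin -inkey pub_key.pem -out encrypted.bin
# Decrypt with the corresponding private key.
openssl rsautl -decrypt -in encrypted.bin -inkey key.pem -out output.txt
```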

### 3.3  Digital signatures

The next step is to create a digital signature and to verify it. It is not very efficient to sign a big file directly using a public key algorithm. That is why we first compute the digest of the information to sign. Note that in practice things are a bit more complex. The security provided by this scheme (hashing and then signing directly with RSA) is in fact lower than that of signing the whole document directly with the RSA algorithm. The scheme used in real applications is called RSA-PSS, which is efficient and proven to preserve the best level of security.

• hash_algorithm is the hash algorithm used to compute the digest. Among the available algorithms there are SHA-1 (option -sha1, which computes a 160-bit digest), MD5 (option -md5) with a 128-bit output length, and RIPEMD-160 (option -ripemd160) with a 160-bit output length.
• digest is the file that contains the result of applying the hash to input_file.
• input_file is the file that contains the data to be hashed.

This command can be used to check the hash values of some archive files like the openssl source code for example. To compute the signature of the digest:
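A sketch of the digest-then-sign sequence (file names are illustrative; key.pem is the private key generated earlier):

```shell
# Compute a SHA-1 digest of the file ...
openssl dgst -sha1 -out digest input.txt
# ... then sign the (small) digest file with the private key.
openssl rsautl -sign -in digest -inkey key.pem -out signature
```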

To check the validity of a given signature:

-pubin is used as before when the key is the public one, which is natural as we are verifying a signature. To complete the verification, one needs to compute the digest of the input file and compare it to the digest obtained in the verification of the digital signature.
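The verification steps just described might be sketched as follows (file names illustrative, continuing from the signing step):

```shell
# Recover the digest embedded in the signature using the public key ...
openssl rsautl -verify -in signature -pubin -inkey pub_key.pem -out recovered_digest
# ... then recompute the digest of the file and compare the two.
openssl dgst -sha1 -out digest2 input.txt
cmp recovered_digest digest2 && echo "signature OK"
```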

## 4  Public Key Infrastructure

### 4.1  What is a PKI? (in short)

#### 4.1.1  The Problem: Man in the Middle Attack

One of the major breakthroughs of public key cryptography is to solve the problem of key distribution. Secret key cryptography supposes the participants have already agreed on a common secret. But how do they manage this in practice? Sending the key through an encrypted channel seems the most natural and practical solution, but once again we need a common secret key to do this. With public key cryptography things are a lot simpler: if I want to send a message to Bob, I only need to find Bob's public key (on his homepage, in a public key directory, …), encrypt the message using this key, and send the result to Bob. Then Bob, using his own private key, can recover the plain text.

However a big problem remains. What happens if a malicious person called The Ugly makes me believe that the public key he owns is in fact Bob's? I will simply send an encrypted message using The Ugly's public key, thinking I'm communicating with Bob. The Ugly will receive the message, decrypt it, and will then encrypt the plaintext with Bob's (real) public key. Bob will receive the encrypted message and will probably answer with another message encrypted with The Ugly's public key (The Ugly having once again managed to convince Bob that this public key belongs to me). Afterwards The Ugly will decrypt the message and re-encrypt it with my public key, so I really will receive Bob's answer. I will indeed be communicating with Bob, but without confidentiality. This attack is called the "man-in-the-middle attack", where the man is of course The Ugly of our little story.

So we need a mechanism to associate, in a trustworthy way, a public key with the identity of a person (name, identity card number, …). One such mechanism is implemented in PGP. The idea is that everyone builds his own web of trust, by keeping a list of trusted public keys and by sharing these keys. The other solution is the use of a PKI.

#### 4.1.2  A solution: Public Key Infrastructure

A Public Key Infrastructure is a centralized solution to the problem of trust. The idea is to have a trusted entity (organization, corporation) that will do the job of certifying that a given public key really belongs to a given person. This person must be identified by name, address and other useful information that may allow one to know who this person is. Once this work is done, the PKI issues a public certificate for this person. This certificate contains, among others:

• All the information needed to identify this person (name, birth date, …).
• The public key of this person.
• The expiration date of the certificate (a certificate is valid for 1 to 3 years in practice).
• The digital signature of all the previous information, issued by the PKI.

So now, if I want to send a private message to Bob, I can ask for his certificate. When I receive the certificate, I must check the signature of the PKI that issued it and the expiration date. If the verifications pass, then I can safely use the public key in the certificate to communicate with Bob. Indeed, in practice the way a PKI works is much more complicated. For example, sometimes a certificate may be revoked before its end-of-validity date has been reached, so a list of revoked certificates has to be maintained and consulted every time you want to use a certificate. The problem of certificate revocation is really difficult in practice.

### 4.2  My first PKI with OpenSSL

This section will show how to create your own small PKI. Obviously this is only a tutorial and you SHOULD NOT base a real application only on the information contained in this page!


#### 4.2.1  openssl.cnf: let's configure a few things

Before starting to create certificates it is necessary to configure a few parameters. That can be done by editing the file openssl.cnf, which is usually located in the bin directory of OpenSSL. This file looks like this:

If you want to simplify your work you should use the default openssl.cnf file with the demoCA directory (also in the bin directory of OpenSSL) that contains all the necessary files. You should ensure that all the directories are valid ones, and that the private key that will be created in the next section (cakey.pem) is correctly referenced. Also check for the presence of a file .rand or .rnd that will be created along with cakey.pem. For the certificate database you can create an empty file index.txt. Also create a serial-number file serial containing, for example, the text 011E; 011E is the serial number for the next certificate.
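The database setup just described can be sketched as follows (the demoCA layout is the one assumed by the default openssl.cnf):

```shell
# Prepare the CA database files described above.
mkdir -p demoCA/private
touch demoCA/index.txt        # empty certificate database
echo "011E" > demoCA/serial   # serial number for the next certificate
```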

#### 4.2.2  PKI creation

First we must create a certificate for the PKI that will contain a pair of public / private key. The private key will be used to sign the certificates.
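The command is not preserved here; a non-interactive sketch would be as follows (the -subj fields and -passout password are illustrative placeholders; omit them to be prompted for the fields and password as described below):

```shell
# Create the CA's key pair and self-signed certificate in one step.
openssl req -new -x509 -days 365 -keyout cakey.pem -out cacert.pem \
    -passout pass:capassword -subj "/C=US/O=Demo PKI/CN=Demo CA"
```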

The key pair will be in cakey.pem, and the certificate (which does NOT contain the private key, only the public one) is saved in cacert.pem. During the execution you will be asked for information about your organization (name, country, and so on). The private key contained in cakey.pem is encrypted with a password. This file should be put in a very secure place (although it is encrypted). -x509 refers to a standard that defines how the information of the certificate is encoded. It can be useful to export the certificate of the PKI in DER format, so as to be able to load it into your browser.

#### 4.2.3  Creation of a user certificate

Now that the PKI has its own key pair and certificate, let's suppose a user wants to get a certificate from the PKI. To do so he must create a certificate request, which will contain all the information needed for the certificate (name, country, … and the public key of the user, of course). This certificate request is then sent to the PKI.
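A sketch of the request creation (file names and the -subj/-passout values are illustrative; omit them to be prompted interactively):

```shell
# The user generates his own key pair and a certificate request to send
# to the PKI.
openssl req -new -keyout userkey.pem -out user_req.pem \
    -passout pass:userpassword -subj "/C=US/O=Demo PKI/CN=Alice"
```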


Wolf, F. A., Angerer, P. & Theis, F. J. SCANPY: large-scale single-cell gene expression data analysis. Genome Biol. 19, 15 (2018).

Soneson, C. & Robinson, M. D. Bias, robustness and scalability in single-cell differential expression analysis. Nat. Methods 15, 255–261 (2018).

Cleary, B., Cong, L., Cheung, A., Lander, E. S. & Regev, A. Efficient generation of transcriptomic profiles by random composite measurements. Cell 171, 1424–1436.e18 (2017).

Crow, M., Paul, A., Ballouz, S., Huang, Z. J. & Gillis, J. Characterizing the replicability of cell types defined by single cell RNA-sequencing data using MetaNeighbor. Nat. Commun. 9, 884 (2018).

Hie, B., Cho, H., DeMeo, B., Bryson, B. & Berger, B. Geometric sketching compactly summarizes the single-cell transcriptomic landscape. Cell Syst. (in the press) preprint at https://doi.org/10.1101/536730

Allaire, J., Ushey, K., Tang, Y. & Eddelbuettel, D. Reticulate: R interface to Python (RStudio, 2017).

Gierahn, T. M. et al. Seq-Well: portable, low-cost RNA sequencing of single cells at high throughput. Nat. Methods 14, 395–398 (2017).

Kang, H. M. et al. Multiplexed droplet single-cell RNA-sequencing using natural genetic variation. Nat. Biotechnol. 36, 89–94 (2018).

Oliphant, T. E. SciPy: open source scientific tools for Python. Comput. Sci. Eng. 9, 10–20 (2007).

Loh, P. R., Baym, M. & Berger, B. Compressive genomics. Nature Biotech. 30, 627–630 (2012).

Van Der Maaten, L. J. P. & Hinton, G. E. Visualizing high-dimensional data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008).

Pedregosa F. & Varoquaux G. Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).

Buttner, M., Miao, Z., Wolf, A., Teichmann, S. A. & Theis, F. J. A test metric for assessing single-cell RNA-seq batch correction. Nat. Methods 16, 43–49 (2017).

Macosko, E. Z. et al. Highly parallel genome-wide expression profiling of individual cells using nanoliter droplets. Cell 161, 1202–1214 (2015).

Hagberg, A. A., Schult, D. A. & Swart, P. J. Exploring network structure, dynamics, and function using NetworkX. in Proc. 7th Python Sci. Conf. (ed. Varoquaux, G. et al.) 11–15 (SciPy, 2008).

Eden, E., Navon, R., Steinfeld, I., Lipson, D. & Yakhini, Z. GOrilla: a tool for discovery and visualization of enriched GO terms in ranked gene lists. BMC Bioinformatics 10, 48 (2009).

Skipper, S. & Perktold, J. Statsmodels: econometric and statistical modeling with Python. in Proc. 9th Python Sci. Conf. (eds. van der Walt, S. & Millman, J.) 57–61 (SciPy, 2010).

Hunter, J. D. Matplotlib: a 2D graphics environment. Comput. Sci. Eng. 9, 90–95 (2007).

## Claims

1. A nutritional supplement for an adult human for supporting endogenous systems associated with increasing life span, the nutritional supplement comprising:

an antioxidant mixture that includes each of the following ingredients: between 24 mg and 26 mg of alpha lipoic acid; between 9 mg and 11 mg of resveratrol; between 17 mg and 19 mg of curcumin; between 16.5 mg and 18.5 mg of epigallocatechin gallate (EGCG); between 6.5 mg and 8.5 mg of olive fruit extract; between 9 mg and 11 mg of rutin; between 14 mg and 16 mg of quercetin; and 10 mg of hesperidin.

2. The nutritional supplement of claim 1, wherein the antioxidant mixture includes:

25 mg of alpha lipoic acid; 10 mg of resveratrol; 18.06 mg of curcumin; 17.5 mg of EGCG; 7.5 mg of olive fruit extract; 10 mg of rutin; 15 mg of quercetin; and 10 mg of hesperidin.

3. The nutritional supplement of claim 1, wherein the antioxidant mixture further includes each of the following ingredients:

between 0.01 and 1 mg of mixed carotenoids; between 0.01 and 3 mg of beta carotene; between 0.01 and 1 mg of retinyl acetate; between 10 mg and 200 mg of vitamin C; between 0.001 and 1 mg of vitamin D3; between 10 and 100 mg of vitamin E; between 0.01 and 1 mg of vitamin K1; between 0.0001 and 1 mg of vitamin K2; between 1 and 20 mg of vitamin B1; between 1 and 20 mg of vitamin B2; between 1 and 20 mg of niacin; between 1 and 20 mg of niacinamide; between 1 and 20 mg of vitamin B6; between 0.01 and 2 mg of folic acid; between 0.001 and 2 mg of vitamin B12; between 0.001 and 2 mg of biotin; between 1 and 50 mg of pantothenic acid; between 1 and 50 mg of mixed tocopherols; between 1 and 100 mg of inositol; between 1 and 200 mg of choline bitartrate; between 0.1 and 20 mg of coenzyme Q10; between 0.01 and 2 mg of lutein; and between 0.01 and 2 mg of lycopene.

4. The nutritional supplement of claim 3, further comprising:

a mineral mixture that includes each of the following ingredients: between 10 and 200 mg of calcium; between 0.001 and 10 mg of iodine; between 1 and 200 mg of magnesium; between 0.1 and 50 mg of zinc; between 0.001 and 2 mg of selenium; between 0.01 and 10 mg of copper; between 0.01 and 10 mg of manganese; between 0.001 and 1 mg of chromium; between 0.001 and 1 mg of molybdenum; between 0.01 and 10 mg of boron; between 0.1 and 10 mg of silicon; between 0.001 and 1 mg of vanadium; between 0.01 and 10 mg of ultra-trace minerals; and between 1 and 100 mg of N-acetyl L-cysteine.

5. The nutritional supplement of claim 4, wherein the antioxidant mixture is contained in one or more first tablets and the mineral mixture is contained in one or more second tablets.

6. The nutritional supplement of claim 4, wherein the antioxidant mixture is contained in a single first tablet and the mineral mixture is contained in a single second tablet.

7. The nutritional supplement of claim 1, further comprising:

a mineral mixture that includes each of the following ingredients: between 10 and 200 mg of calcium; between 0.001 and 10 mg of iodine; between 1 and 200 mg of magnesium; between 0.1 and 50 mg of zinc; between 0.001 and 2 mg of selenium; between 0.01 and 10 mg of copper; between 0.01 and 10 mg of manganese; between 0.001 and 1 mg of chromium; between 0.001 and 1 mg of molybdenum; between 0.01 and 10 mg of boron; between 0.1 and 10 mg of silicon; between 0.001 and 1 mg of vanadium; between 0.01 and 10 mg of ultra-trace minerals; and between 1 and 100 mg of N-acetyl L-cysteine.

8. The nutritional supplement of claim 7, wherein the antioxidant mixture is contained in one or more first tablets and the mineral mixture is contained in one or more second tablets.

9. The nutritional supplement of claim 7, wherein the antioxidant mixture is contained in a single first tablet and the mineral mixture is contained in a single second tablet.

10. The nutritional supplement of claim 9, wherein the antioxidant mixture includes:

25 mg of alpha lipoic acid; 10 mg of resveratrol; 18.06 mg of curcumin; 17.5 mg of EGCG; 7.5 mg of olive fruit extract; 10 mg of rutin; 15 mg of quercetin; and 10 mg of hesperidin.

11. The nutritional supplement of claim 1, wherein the antioxidant mixture is contained in a first tablet, the antioxidant mixture further including each of the following ingredients:

mixed carotenoids; beta carotene; retinyl acetate; vitamin C; vitamin D3; vitamin E; vitamin K1; vitamin K2; vitamin B1; vitamin B2; niacin; vitamin B6; folic acid; vitamin B12; biotin; pantothenic acid; mixed tocopherols; inositol; choline bitartrate; coenzyme Q10; lutein; and lycopene; the nutritional supplement further comprising a second tablet that contains a mineral mixture, the mineral mixture including each of the following ingredients: calcium; iodine; magnesium; zinc; selenium; copper; manganese; chromium; molybdenum; boron; silicon; vanadium; ultra-trace minerals; and N-acetyl L-cysteine.

12. A nutritional supplement for an adult human comprising:

a first tablet that includes an antioxidant mixture, the antioxidant mixture including each of the following ingredients: between 24 mg and 26 mg of alpha lipoic acid; between 9 mg and 11 mg of resveratrol; between 17 mg and 19 mg of curcumin; between 16.5 mg and 18.5 mg of EGCG; between 6.5 mg and 8.5 mg of olive fruit extract; between 9 mg and 11 mg of rutin; between 14 mg and 16 mg of quercetin; 10 mg of hesperidin; mixed carotenoids; beta carotene; retinyl acetate; vitamin C; vitamin D3; vitamin E; vitamin K1; vitamin K2; vitamin B1; vitamin B2; niacin; niacinamide; vitamin B6; folic acid; vitamin B12; biotin; pantothenic acid; mixed tocopherols; inositol; choline bitartrate; coenzyme Q10; lutein; and lycopene; and a second tablet that includes a mineral mixture, the mineral mixture including each of the following ingredients: calcium; iodine; magnesium; zinc; selenium; copper; manganese; chromium; molybdenum; boron; silicon; vanadium; ultra-trace minerals; and N-acetyl L-cysteine.

13. The nutritional supplement of claim 12, wherein the first tablet includes:

25 mg of alpha lipoic acid; 10 mg of resveratrol; 18.06 mg of curcumin; 17.5 mg of epigallocatechin gallate (EGCG); 7.5 mg of olive fruit extract; 10 mg of rutin; 15 mg of quercetin; and 10 mg of hesperidin.

14. The nutritional supplement of claim 12, wherein the first tablet includes:

between 0.01 and 1 mg of mixed carotenoids; between 0.01 and 3 mg of beta carotene; between 0.01 and 1 mg of retinyl acetate; between 10 mg and 200 mg of vitamin C; between 0.001 and 1 mg of vitamin D3; between 10 and 100 mg of vitamin E; between 0.01 and 1 mg of vitamin K1; between 0.0001 and 1 mg of vitamin K2; between 1 and 20 mg of vitamin B1; between 1 and 20 mg of vitamin B2; between 1 and 20 mg of niacin; between 1 and 20 mg of niacinamide; between 1 and 20 mg of vitamin B6; between 0.01 and 2 mg of folic acid; between 0.001 and 2 mg of vitamin B12; between 0.001 and 2 mg of biotin; between 1 and 50 mg of pantothenic acid; between 1 and 50 mg of mixed tocopherols; between 1 and 100 mg of inositol; between 1 and 200 mg of choline bitartrate; between 0.1 and 20 mg of coenzyme Q10; between 0.01 and 2 mg of lutein; and between 0.01 and 2 mg of lycopene.

15. The nutritional supplement of claim 14, wherein the second tablet includes:

between 10 and 200 mg of calcium; between 0.001 and 10 mg of iodine; between 1 and 200 mg of magnesium; between 0.1 and 50 mg of zinc; between 0.001 and 2 mg of selenium; between 0.01 and 10 mg of copper; between 0.01 and 10 mg of manganese; between 0.001 and 1 mg of chromium; between 0.001 and 1 mg of molybdenum; between 0.01 and 10 mg of boron; between 0.1 and 10 mg of silicon; between 0.001 and 1 mg of vanadium; between 0.01 and 10 mg of ultra-trace minerals; and between 1 and 100 mg of N-acetyl L-cysteine.

16. A nutritional supplement for an adult human for supporting endogenous systems associated with increasing life span, the nutritional supplement consisting essentially of:

an antioxidant mixture that includes each of the following ingredients: 25 mg of alpha lipoic acid; 10 mg of resveratrol; 18.06 mg of curcumin; 17.5 mg of EGCG; 7.5 mg of olive fruit extract; 10 mg of rutin; 15 mg of quercetin; 10 mg of hesperidin; between 0.01 and 1 mg of mixed carotenoids; between 0.01 and 3 mg of beta carotene; between 0.01 and 1 mg of retinyl acetate; between 10 mg and 200 mg of vitamin C; between 0.001 and 1 mg of vitamin D3; between 10 and 100 mg of vitamin E; between 0.01 and 1 mg of vitamin K1; between 0.0001 and 1 mg of vitamin K2; between 1 and 20 mg of vitamin B1; between 1 and 20 mg of vitamin B2; between 1 and 20 mg of niacin; between 1 and 20 mg of niacinamide; between 1 and 20 mg of vitamin B6; between 0.01 and 2 mg of folic acid; between 0.001 and 2 mg of vitamin B12; between 0.001 and 2 mg of biotin; between 1 and 50 mg of pantothenic acid; between 1 and 50 mg of mixed tocopherols; between 1 and 100 mg of inositol; between 1 and 200 mg of choline bitartrate; between 0.1 and 20 mg of coenzyme Q10; between 0.01 and 2 mg of lutein; and between 0.01 and 2 mg of lycopene; and a mineral mixture.

17. The nutritional supplement of claim 16, wherein the mineral mixture includes calcium, magnesium, potassium and zinc.