Articles

1.3: Rates of Change and Behavior of Graphs


Skills to Develop

  • Find the average rate of change of a function.
  • Use a graph to determine where a function is increasing, decreasing, or constant.
  • Use a graph to locate local maxima and local minima.
  • Use a graph to locate the absolute maximum and absolute minimum.

Gasoline costs have experienced some wild fluctuations over the last several decades. Table (PageIndex{1}) lists the average cost, in dollars, of a gallon of gasoline for the years 2005–2012. The cost of gasoline can be considered as a function of year.

Table (PageIndex{1})
| (y) | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 |
|---|---|---|---|---|---|---|---|---|
| (C(y)) (dollars) | 2.31 | 2.62 | 2.84 | 3.30 | 2.41 | 2.84 | 3.58 | 3.68 |

If we were interested only in how the gasoline prices changed between 2005 and 2012, we could compute that the cost per gallon had increased from $2.31 to $3.68, an increase of $1.37. While this is interesting, it might be more useful to look at how much the price changed per year. In this section, we will investigate changes such as these.

The price change per year is a rate of change because it describes how an output quantity changes relative to the change in the input quantity. We can see that the price of gasoline in Table (PageIndex{1}) did not change by the same amount each year, so the rate of change was not constant. If we use only the beginning and ending data, we would be finding the average rate of change over the specified period of time. To find the average rate of change, we divide the change in the output value by the change in the input value.

\[\begin{align*} \text{Average rate of change}&=\dfrac{\text{Change in output}}{\text{Change in input}} \\[5pt] &=\dfrac{\Delta y}{\Delta x}\\[5pt] &=\dfrac{y_2-y_1}{x_2-x_1}\\[5pt] &=\dfrac{f(x_2)-f(x_1)}{x_2-x_1}\end{align*} \label{1.3.1}\]

The Greek letter (Delta) (delta) signifies the change in a quantity; we read the ratio as “delta-(y) over delta-(x)” or “the change in (y) divided by the change in (x).” Occasionally we write (Delta f) instead of (Delta y), which still represents the change in the function’s output value resulting from a change to its input value. It does not mean we are changing the function into some other function.

In our example, the gasoline price increased by $1.37 from 2005 to 2012. Over 7 years, the average rate of change was

\[\dfrac{\Delta y}{\Delta x}=\dfrac{\$1.37}{7\text{ years}}\approx 0.196 \text{ dollars per year.} \label{1.3.2}\]

On average, the price of gas increased by about 19.6¢ each year. Other examples of rates of change include:

  • A population of rats increasing by 40 rats per week
  • A car traveling 68 miles per hour (distance traveled changes by 68 miles each hour as time passes)
  • A car driving 27 miles per gallon (distance traveled changes by 27 miles for each gallon)
  • The current through an electrical circuit increasing by 0.125 amperes for every volt of increased voltage
  • The amount of money in a college account decreasing by $4,000 per quarter

Definition: Rate of Change

A rate of change describes how an output quantity changes relative to the change in the input quantity. The units on a rate of change are “output units per input units.”

The average rate of change between two input values is the total change of the function values (output values) divided by the change in the input values.

\[\dfrac{\Delta y}{\Delta x}=\dfrac{f(x_2)-f(x_1)}{x_2-x_1}\]

Given the value of a function at different points, calculate the average rate of change of a function for the interval between two values (x_1) and (x_2).

  1. Calculate the difference (y_2−y_1=Delta y).
  2. Calculate the difference (x_2−x_1=Delta x).
  3. Find the ratio (dfrac{Delta y}{Delta x}).
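For readers who want to check these steps with software, here is a minimal Python sketch of the three-step procedure applied to the gasoline data in Table (PageIndex{1}); the function and variable names are illustrative choices, not part of the text.

```python
def average_rate_of_change(f, x1, x2):
    """Return (f(x2) - f(x1)) / (x2 - x1), the average rate of change of f on [x1, x2]."""
    delta_y = f(x2) - f(x1)   # Step 1: change in output
    delta_x = x2 - x1         # Step 2: change in input
    return delta_y / delta_x  # Step 3: ratio of the two changes

# Gasoline cost data from Table 1 (dollars per gallon, keyed by year)
cost = {2005: 2.31, 2006: 2.62, 2007: 2.84, 2008: 3.30,
        2009: 2.41, 2010: 2.84, 2011: 3.58, 2012: 3.68}

print(average_rate_of_change(lambda y: cost[y], 2005, 2012))  # ≈ 0.196 dollars per year
```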

Example (PageIndex{1}): Computing an Average Rate of Change

Using the data in Table (PageIndex{1}), find the average rate of change of the price of gasoline between 2007 and 2009.

Solution

In 2007, the price of gasoline was $2.84. In 2009, the cost was $2.41. The average rate of change is

\[\begin{align*} \dfrac{\Delta y}{\Delta x}&=\dfrac{y_2-y_1}{x_2-x_1} \\[5pt] &=\dfrac{\$2.41-\$2.84}{2009-2007} \\[5pt] &=\dfrac{-\$0.43}{2 \text{ years}} \\[5pt] &\approx-\$0.22 \text{ per year} \end{align*}\]

Analysis

Note that a decrease is expressed by a negative change or “negative increase.” A rate of change is negative when the output decreases as the input increases or when the output increases as the input decreases.

(PageIndex{1})

Using the data in Table (PageIndex{1}), find the average rate of change between 2005 and 2010.

Solution

\(\dfrac{\$2.84-\$2.31}{5 \text{ years}} =\dfrac{\$0.53}{5 \text{ years}} =\$0.106 \text{ per year}\)

Example (PageIndex{2}): Computing Average Rate of Change from a Graph

Given the function (g(t)) shown in Figure (PageIndex{1}), find the average rate of change on the interval ([−1,2]).

Figure (PageIndex{1}): Graph of a parabola.

Solution

At (t=−1), Figure (PageIndex{2}) shows (g(−1)=4). At (t=2), the graph shows (g(2)=1).

Figure (PageIndex{2}): Graph of a parabola with a line from points (-1, 4) and (2, 1) to show the changes for g(t) and t.

The horizontal change (Delta t=3) is shown by the red arrow, and the vertical change (Delta g(t)=−3) is shown by the turquoise arrow. The output changes by –3 while the input changes by 3, giving an average rate of change of

\[\dfrac{1-4}{2-(-1)}=\dfrac{-3}{3}=-1\]

Analysis

Note that the order we choose is very important. If, for example, we use (dfrac{y_2−y_1}{x_1−x_2}), we will not get the correct answer. Decide which point will be 1 and which point will be 2, and keep the coordinates fixed as ((x_1,y_1)) and ((x_2,y_2)).

Example (PageIndex{3}): Computing Average Rate of Change from a Table

After picking up a friend who lives 10 miles away, Anna records her distance from home over time. The values are shown in Table (PageIndex{2}). Find her average speed over the first 6 hours.

Table (PageIndex{2})

| (t) (hours) | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| (D(t)) (miles) | 10 | 55 | 90 | 153 | 214 | 240 | 292 | 300 |

Solution

Here, the average speed is the average rate of change. She traveled 282 miles in 6 hours, for an average speed of

\[\begin{align*}\dfrac{292-10}{6-0}&=\dfrac{282}{6}\\[5pt] &=47\end{align*}\]

The average speed is 47 miles per hour.

Analysis

Because the speed is not constant, the average speed depends on the interval chosen. For the interval ([2,3]), the average speed is 63 miles per hour.
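As a rough numerical check of this analysis (a sketch, not part of the original example), the same table lookups can be scripted in Python:

```python
# Anna's distance from home, in miles, at hours t = 0 through 7 (Table 2)
D = [10, 55, 90, 153, 214, 240, 292, 300]

def average_speed(t1, t2):
    """Average rate of change of distance (miles per hour) between hours t1 and t2."""
    return (D[t2] - D[t1]) / (t2 - t1)

print(average_speed(0, 6))  # 47.0 mph over the first 6 hours
print(average_speed(2, 3))  # 63.0 mph on the interval [2, 3]
```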

Example (PageIndex{4}): Computing Average Rate of Change for a Function Expressed as a Formula

Compute the average rate of change of (f(x)=x^2−frac{1}{x}) on the interval ([2, 4]).

Solution

We can start by computing the function values at each endpoint of the interval.

\[\begin{align*}f(2)&=2^2-\frac{1}{2} & f(4)&=4^2-\frac{1}{4} \\[5pt] &=4-\frac{1}{2} & &=16-\frac{1}{4} \\[5pt] &=\frac{7}{2} & &=\frac{63}{4}\end{align*}\]

Now we compute the average rate of change.

\[\begin{align*} \text{Average rate of change} &=\dfrac{f(4)-f(2)}{4-2} \\[5pt] &=\dfrac{\frac{63}{4}-\frac{7}{2}}{4-2} \\[5pt] &=\dfrac{\frac{49}{4}}{2} \\[5pt] &= \dfrac{49}{8}\end{align*}\]
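As an optional check of this arithmetic (an illustrative sketch only), exact rational arithmetic in Python reproduces the value 49/8:

```python
from fractions import Fraction

def f(x):
    # f(x) = x^2 - 1/x, evaluated with exact rational arithmetic
    return Fraction(x)**2 - Fraction(1, x)

print((f(4) - f(2)) / (4 - 2))  # 49/8, matching the computation above
```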

(PageIndex{2})

Find the average rate of change of (f(x)=x−2sqrt{x}) on the interval ([1, 9]).

Solution

(frac{1}{2})

Example (PageIndex{5}): Finding the Average Rate of Change of a Force

The electrostatic force (F), measured in newtons, between two charged particles can be related to the distance between the particles (d), in centimeters, by the formula (F(d)=frac{2}{d^2}). Find the average rate of change of force if the distance between the particles is increased from 2 cm to 6 cm.

Solution

We are computing the average rate of change of (F(d)=dfrac{2}{d^2}) on the interval ([2,6]).

\[\begin{align*} \text{Average rate of change}&=\dfrac{F(6)-F(2)}{6-2} \\[5pt] &=\dfrac{\frac{2}{6^2}-\frac{2}{2^2}}{6-2} && \text{Simplify.} \\[5pt] &=\dfrac{\frac{2}{36}-\frac{2}{4}}{4} \\[5pt] &=\dfrac{-\frac{16}{36}}{4} && \text{Combine numerator terms.} \\[5pt] &=-\dfrac{1}{9} && \text{Simplify.}\end{align*}\]

The average rate of change is (−frac{1}{9}) newton per centimeter.

Example (PageIndex{6}): Finding an Average Rate of Change as an Expression

Find the average rate of change of (g(t)=t^2+3t+1) on the interval ([0, a]). The answer will be an expression involving (a).

Solution

We use the average rate of change formula.

\[\begin{align*} \text{Average rate of change} &=\dfrac{g(a)-g(0)}{a-0} && \text{Evaluate.} \\[5pt] &=\dfrac{(a^2+3a+1)-(0^2+3(0)+1)}{a-0} && \text{Simplify.} \\[5pt] &=\dfrac{a^2+3a+1-1}{a} && \text{Simplify and factor.}\\[5pt] &= \dfrac{a(a+3)}{a} && \text{Divide by the common factor } a.\\[5pt] &= a+3 \end{align*}\]

This result tells us the average rate of change in terms of a between (t=0) and any other point (t=a). For example, on the interval ([0,5]), the average rate of change would be (5+3=8).
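If the SymPy library is available, the same symbolic computation can be checked in a few lines (a sketch; the symbol names are illustrative):

```python
import sympy as sp

t, a = sp.symbols('t a')
g = t**2 + 3*t + 1

# Average rate of change of g on [0, a], simplified symbolically
avg_rate = sp.simplify((g.subs(t, a) - g.subs(t, 0)) / (a - 0))
print(avg_rate)             # a + 3
print(avg_rate.subs(a, 5))  # 8, the average rate of change on [0, 5]
```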

(PageIndex{3})

Find the average rate of change of (f(x)=x^2+2x−8) on the interval ([5, a]).

Solution

(a+7)

As part of exploring how functions change, we can identify intervals over which the function is changing in specific ways. We say that a function is increasing on an interval if the function values increase as the input values increase within that interval. Similarly, a function is decreasing on an interval if the function values decrease as the input values increase over that interval. The average rate of change of an increasing function is positive, and the average rate of change of a decreasing function is negative. Figure (PageIndex{3}) shows examples of increasing and decreasing intervals on a function.

Figure (PageIndex{3}): The function (f(x)=x^3−12x) is increasing on ((−infty, −2)cup (2,infty)) and is decreasing on ((−2, 2)).

While some functions are increasing (or decreasing) over their entire domain, many others are not. A value of the input where a function changes from increasing to decreasing (as we go from left to right, that is, as the input variable increases) is called a local maximum. If a function has more than one, we say it has local maxima. Similarly, a value of the input where a function changes from decreasing to increasing as the input variable increases is called a local minimum. The plural form is “local minima.” Together, local maxima and minima are called local extrema, or local extreme values, of the function. (The singular form is “extremum.”) Often, the term local is replaced by the term relative. In this text, we will use the term local.

Clearly, a function is neither increasing nor decreasing on an interval where it is constant. A function is also neither increasing nor decreasing at extrema. Note that we have to speak of local extrema, because any given local extremum as defined here is not necessarily the highest maximum or lowest minimum in the function’s entire domain.

For the function whose graph is shown in Figure (PageIndex{4}), the local maximum is 16, and it occurs at (x=−2). The local minimum is −16 and it occurs at (x=2).

Figure (PageIndex{4}): Graph of a polynomial that shows the increasing and decreasing intervals and local maximum and minimum.

To locate the local maxima and minima from a graph, we need to observe the graph to determine where the graph attains its highest and lowest points, respectively, within an open interval. Like the summit of a roller coaster, the graph of a function is higher at a local maximum than at nearby points on both sides. The graph will also be lower at a local minimum than at neighboring points. Figure (PageIndex{5}) illustrates these ideas for a local maximum.

Figure (PageIndex{5}): Definition of a local maximum

These observations lead us to a formal definition of local extrema.

Local Minima and Local Maxima

  • A function (f) is an increasing function on an open interval if (f(b)>f(a)) for every (a), (b) in the interval where (b>a).
  • A function (f) is a decreasing function on an open interval if (f(b)<f(a)) for every (a), (b) in the interval where (b>a).

A function (f) has a local maximum at a point (b) in an open interval ((a,c)) if (f(b)) is greater than or equal to (f(x)) for every point (x) ((x) does not equal (b)) in the interval. Likewise, (f) has a local minimum at a point (b) in ((a,c)) if (f(b)) is less than or equal to (f(x)) for every (x) ((x) does not equal (b)) in the interval.
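On a computer, a rough way to apply this definition to sampled function values is to flag points that are higher (or lower) than both of their neighbors. The sketch below is only an approximation on a discrete grid, not a replacement for the definition; it uses f(x) = x^3 - 12x from Figure (PageIndex{3}) as a test case.

```python
def local_extrema(xs, ys):
    """Flag interior sample points that are strictly higher or lower than both neighbors.

    This only approximates local extrema of the underlying function; the grid of
    sample points must be fine enough to capture the shape of the graph.
    """
    maxima, minima = [], []
    for i in range(1, len(ys) - 1):
        if ys[i] > ys[i - 1] and ys[i] > ys[i + 1]:
            maxima.append((xs[i], ys[i]))
        elif ys[i] < ys[i - 1] and ys[i] < ys[i + 1]:
            minima.append((xs[i], ys[i]))
    return maxima, minima

# Sample f(x) = x^3 - 12x on [-4, 4]; expect a local maximum near (-2, 16)
# and a local minimum near (2, -16), as in Figures 3 and 4.
xs = [i / 10 for i in range(-40, 41)]
ys = [x**3 - 12 * x for x in xs]
print(local_extrema(xs, ys))  # ([(-2.0, 16.0)], [(2.0, -16.0)])
```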

Example (PageIndex{7}): Finding Increasing and Decreasing Intervals on a Graph

Given the function (p(t)) in Figure (PageIndex{6}), identify the intervals on which the function appears to be increasing.

Figure (PageIndex{6}): Graph of a polynomial.

Solution

We see that the function is not constant on any interval. The function is increasing where it slants upward as we move to the right and decreasing where it slants downward as we move to the right. The function appears to be increasing from (t=1) to (t=3) and from (t=4) on.

In interval notation, we would say the function appears to be increasing on the interval ((1,3)) and the interval ((4,infty)).

Analysis

Notice in this example that we used open intervals (intervals that do not include the endpoints), because the function is neither increasing nor decreasing at (t=1), (t=3), and (t=4). These points are the local extrema (two minima and a maximum).

Example (PageIndex{8}): Finding Local Extrema from a Graph

Graph the function (f(x)=frac{2}{x}+frac{x}{3}). Then use the graph to estimate the local extrema of the function and to determine the intervals on which the function is increasing.

Solution

Using technology, we find that the graph of the function looks like that in Figure (PageIndex{7}). It appears there is a low point, or local minimum, between (x=2) and (x=3), and a mirror-image high point, or local maximum, somewhere between (x=−3) and (x=−2).

Figure (PageIndex{7}): Graph of a reciprocal function.

Analysis

Most graphing calculators and graphing utilities can estimate the location of maxima and minima. Figure (PageIndex{8}) provides screen images from two different technologies, showing the estimate for the local maximum and minimum.

Figure (PageIndex{8}): Graph of the reciprocal function on a graphing calculator.

Based on these estimates, the function is increasing on the interval ((−infty,−2.449)) and ((2.449,infty)). Notice that, while we expect the extrema to be symmetric, the two different technologies agree only up to four decimal places due to the differing approximation algorithms used by each. (The exact location of the extrema is at (pmsqrt{6}), but determining this requires calculus.)
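A numerical check of these on-screen estimates is also possible. The following sketch assumes SciPy is installed and simply searches for the low point on the positive branch; since f(−x) = −f(x), the high point on the negative branch is its mirror image.

```python
from scipy.optimize import minimize_scalar

f = lambda x: 2 / x + x / 3

# Locate the local minimum on the positive branch of the graph
res = minimize_scalar(f, bounds=(0.1, 10), method='bounded')
print(res.x, f(res.x))  # ≈ 2.449 and ≈ 1.633, i.e., close to (sqrt(6), 2*sqrt(6)/3)
```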

(PageIndex{4})

Graph the function (f(x)=x^3−6x^2−15x+20) to estimate the local extrema of the function. Use these to determine the intervals on which the function is increasing and decreasing.

Solution

The local maximum appears to occur at ((−1,28)), and the local minimum occurs at ((5,−80)). The function is increasing on ((−infty,−1)cup(5,infty)) and decreasing on ((−1,5)).

Graph of a polynomial with a local maximum at (-1, 28) and local minimum at (5, -80).

Example (PageIndex{9}): Finding Local Maxima and Minima from a Graph

For the function f whose graph is shown in Figure (PageIndex{9}), find all local maxima and minima.

Figure (PageIndex{9}): Graph of a polynomial.

Solution

Observe the graph of (f). The graph attains a local maximum at (x=1) because it is the highest point in an open interval around (x=1). The local maximum is the y-coordinate at (x=1), which is 2.

The graph attains a local minimum at (x=−1) because it is the lowest point in an open interval around (x=−1). The local minimum is the y-coordinate at (x=−1), which is −2.

We will now return to our toolkit functions and discuss their graphical behavior in Figure (PageIndex{10}), Figure (PageIndex{11}), and Figure (PageIndex{12}).

Figure (PageIndex{10})


Figure (PageIndex{11})


Figure (PageIndex{12})

There is a difference between locating the highest and lowest points on a graph in a region around an open interval (locally) and locating the highest and lowest points on the graph for the entire domain. The y-coordinates (output) at the highest and lowest points are called the absolute maximum and absolute minimum, respectively. To locate absolute maxima and minima from a graph, we need to observe the graph to determine where the graph attains its highest and lowest points on the domain of the function (Figure (PageIndex{13})).

Figure (PageIndex{13}): Graph of a segment of a parabola with an absolute minimum at (0, -2) and absolute maximum at (2, 2).

Not every function has an absolute maximum or minimum value. The toolkit function (f(x)=x^3) is one such function.

Absolute Maxima and Minima

  • The absolute maximum of (f) at (x=c) is (f(c)) where (f(c)≥f(x)) for all (x) in the domain of (f).
  • The absolute minimum of (f) at (x=d) is (f(d)) where (f(d)≤f(x)) for all (x) in the domain of (f).

Example (PageIndex{10}): Finding Absolute Maxima and Minima from a Graph

For the function f shown in Figure (PageIndex{14}), find all absolute maxima and minima.

Figure (PageIndex{14}): Graph of a polynomial.

Solution

Observe the graph of (f). The graph attains an absolute maximum in two locations, (x=−2) and (x=2), because at these locations, the graph attains its highest point on the domain of the function. The absolute maximum is the y-coordinate at (x=−2) and (x=2), which is 16.

The graph attains an absolute minimum at (x=3), because it is the lowest point on the domain of the function’s graph. The absolute minimum is the y-coordinate at (x=3), which is −10.

  • Average rate of change: (dfrac{Delta y}{Delta x}=dfrac{f(x_2)-f(x_1)}{x_2-x_1})
  • A rate of change relates a change in an output quantity to a change in an input quantity. The average rate of change is determined using only the beginning and ending data. See Example.
  • Identifying points that mark the interval on a graph can be used to find the average rate of change. See Example.
  • Comparing pairs of input and output values in a table can also be used to find the average rate of change. See Example.
  • An average rate of change can also be computed by determining the function values at the endpoints of an interval described by a formula. See Example and Example.
  • The average rate of change can sometimes be determined as an expression. See Example.
  • A function is increasing where its rate of change is positive and decreasing where its rate of change is negative. See Example.
  • A local maximum is where a function changes from increasing to decreasing and has an output value larger (more positive or less negative) than output values at neighboring input values.
  • A local minimum is where the function changes from decreasing to increasing (as the input increases) and has an output value smaller (more negative or less positive) than output values at neighboring input values.
  • Minima and maxima are also called extrema.
  • We can find local extrema from a graph. See Example and Example.
  • The highest and lowest points on a graph indicate the maxima and minima. See Example.

Types of Data

There are different types of data that can be collected in an experiment. Typically, we try to design experiments that collect objective, quantitative data.

Objective data is fact-based, measurable, and observable. This means that if two people made the same measurement with the same tool, they would get the same answer. The measurement is determined by the object that is being measured. The length of a worm measured with a ruler is an objective measurement. The observation that a chemical reaction in a test tube changed color is an objective measurement. Both of these are observable facts.

Subjective data is based on opinions, points of view, or emotional judgment. Subjective data might give two different answers when collected by two different people. The measurement is determined by the subject who is doing the measuring. Surveying people about which of two chemicals smells worse is a subjective measurement. Grading the quality of a presentation is a subjective measurement. Rating your relative happiness on a scale of 1-5 is a subjective measurement. All of these depend on the person who is making the observation – someone else might make these measurements differently.

Quantitative measurements gather numerical data. For example, measuring a worm as being 5cm in length is a quantitative measurement.

Qualitative measurements describe a quality, rather than a numerical value. Saying that one worm is longer than another worm is a qualitative measurement.

|  | Quantitative | Qualitative |
|---|---|---|
| Objective | The chemical reaction has produced 5 cm of bubbles. | The chemical reaction has produced a lot of bubbles. |
| Subjective | I give the amount of bubbles a score of 7 on a scale of 1-10. | I think the bubbles are pretty. |

After you have collected data in an experiment, you need to figure out the best way to present that data in a meaningful way. Depending on the type of data, and the story that you are trying to tell using that data, you may present your data in different ways.


Why do you need to use charts, graphs, and diagrams?

A lot of presentations are focused on data and numbers. Sounds boring, right? Apart from essential business presentation phrases, charts, graphs, and diagrams can also help you draw and keep the attention of your listeners. Add them to your presentation, and you will have a solid, evidence-based piece of work.

When it comes to presenting and explaining data with charts, graphs, and diagrams, you should help people understand and memorize at least the main points from them. As for use cases, diagrams and other visuals are a perfect fit for describing trends, making comparisons, or showing relationships between two or more items. In other words, you take your data and give it a comprehensible visual form.

What is better to choose

There are so many different types of charts, diagrams, and graphs that it becomes difficult to choose the right one. The chart options in your spreadsheet program can also be puzzling.

When should you use a flow chart? Can you apply a diagram to presenting a trend? Is a bar chart useful for showing sales data? To figure out what to select, you must have a good understanding of the specific features of each type.

The rest of this article will show examples of different types of presentation visuals and explain in detail how to describe charts and diagrams.




KEY FACTORS THAT EXPLAIN INCREASED OBESITY WITH POTENTIAL TO BE CONSIDERED FOR FUTURE PROGRAMMATIC AND POLICY CHANGES?

Ultimately, obesity reflects energy imbalance, so the major areas for intervention relate to dietary intake and energy expenditure, for which the main modifiable component is physical activity. It is clear that large shifts in access to technology have reduced energy expenditure at work in the more labor-intensive occupations, such as farming and mining, as well as in the less energy-intensive service and manufacturing sectors 23 . Changes in transportation 24 , leisure, and home production 25 relate to reduced physical activity. In addition the complex interplay between biological factors operating during fetal and infant development and these energy imbalances exacerbates many health problems 26 . Such changes have been well documented for China and are also found in varying manifestations in many countries.

Finding ways to increase physical activity across all age groups is important for public health, but options for increasing energy expenditure through physical activity may be limited in low- and middle-income countries. For instance, to offset any increase of about 110 kcal of food or beverage in average daily energy intake, a woman weighing 54 kg must walk moderately fast for 30 minutes and a man weighing 82 kg for about 25 minutes. Such levels of physical activity may be too much to expect, and so diet modification is a key approach to lower obesity prevalence, particularly with the ongoing decline in physical activity and increase in sedentary time (unpublished data). The dietary dynamics represent a major set of complex issues. On the global level, new access to technologies (e.g., cheap edible oils, foods with excessive ‘empty calories’, modern supermarkets, and food distribution and marketing) and regulatory environments (e.g., the World Trade Organization [WTO] and freer flow of goods, services, and technologies) are changing diets in low- and middle-income countries. Accompanying this are all the critical issues of food security and global access to adequate levels of intake. Many populations focus on basic grain and legume food supplies, while the overall transition has shifted the structure of prices and food availability and created a nutrition transition linked with obesity as well as hunger. We have used detailed time-use data along with energy expenditures and other data to examine past patterns and trends and to predict patterns of physical activity and sedentary time through 2020 and 2030 in the US, UK, Brazil, China, and India (unpublished data).

Prior to exploring the dietary dimension, we consider an important biological factor affecting obesity and chronic diseases in rapidly developing countries in Asia and Africa. This factor is the biological insults suffered during fetal and infant development that may influence susceptibility to the changes described above, thus influencing the development and severity of chronic disease trends for these countries.

Developmental origins of health and disease: Special concerns for low- and middle-income countries

The patterns of change in dietary intake and energy expenditure related to the global nutrition transition are particularly important in the context of current theories of the developmental origins of adult disease. Based on three decades of research, we now recognize that susceptibility to obesity and chronic diseases is influenced by environmental exposures from the time of conception to adulthood. An extensive literature demonstrates that fetal nutritional insufficiency triggers a set of anatomical, hormonal, and physiological changes that enhance survival in a “resource poor” environment 27 . However, in a postnatal environment with plentiful resources, these developmental adaptations may contribute to the development of disease. Some of the strongest evidence on the long-term effects of moderate to severe nutrition restriction during pregnancy comes from follow-up of infants born after maternal exposure to famine conditions, such as those experienced in parts of Europe during World War II. For example, A. C. Ravelli and colleagues 28, 29 found higher rates of obesity in 50-year-old men and women whose mothers were exposed to the Dutch famine in the first half of their pregnancies, and G. P. Ravelli and colleagues (Ravelli, Stein et al. 1976) found obesity in 19-year-old men whose mothers experienced famine during their pregnancies. Similarly, a follow-up of Hmong refugee immigrants shows higher rates of central obesity among those raised in a war zone, with effects amplified in those who migrated to the United States compared to those living in a traditional rural setting 30 .

The developmental origins theory of mismatch fits closely with the broader issues of mismatch discussed below, issues that emerged in our early research 1, 2 and later work 31. This theory of “mismatch,” that is, early nutritional deficits followed by excesses 34 , may be particularly important in low- and middle-income countries undergoing rapid social and economic changes, because economic progress amplifies mismatch 35 . Much of the literature on developmental origins of health and disease (DOHAD) focuses on chronic diseases. However, given the strong association of chronic diseases with obesity and in particular with central obesity, this evidence is highly relevant and provides a strong rationale for obesity prevention in populations that have experienced dramatic changes in the nutritional environment as a consequence of the nutrition transition.

Mechanisms are varied but may include effects on the number of nephrons in the kidney 36 ; glucocorticoid exposure subsequent to maternal stress or poor nutritional status, which may program the insulin and hypothalamic-pituitary axes for high levels of metabolic efficiency 27 ; and epigenetic changes. Maternal stress and specific aspects of diet (for example, intake of folate and other methyl donors) can affect DNA methylation and gene expression 37, 38, 39 . Ongoing studies in places such as India are examining the role of maternal micronutrient intake in epigenetic changes that affect child adiposity 39, 40 . Research in India has provided other important insights. Indian infants with poorly nourished mothers are born with weight deficits, but in relative terms the deficits in lean mass are greater than those in adiposity. In later life, when consuming modern high-energy and high-fat diets, the previously “thin-fat” babies also have greater central adiposity 41, 42 .

It is apparent from studies of developmental origins of disease that there is a strong intergenerational component to health. While much of the literature on early origins of obesity and associated risk has focused on undernutrition, there is also substantial evidence that maternal overweight and obesity in pregnancy influence disease risk among offspring. For example, gestational diabetes is related to offspring body composition and increased risk of insulin resistance and diabetes in offspring 43, 44 . Thus there is concern about an intergenerational amplification of diabetes risk. Women who were malnourished as children are at increased risk of being centrally obese and having impaired glucose tolerance as adults. If these conditions affect a woman’s pregnancy, her offspring are now at increased risk of early development of obesity and diabetes. As obesity develops at younger and younger ages, the likelihood that adolescents and young women will experience pregnancy complications associated with gestational diabetes and hypertension will increase dramatically. There is growing evidence that maternal obesity, even without gestational diabetes, is a risk factor for child obesity through a pathway related to fetal overnutrition (see the review by CH Fall 45 ).

On the other end of the nutrition spectrum, short maternal stature acts as a physical constraint on fetal growth 46, 47 . And vitamin and mineral deficiencies and stunting may in turn relate to increased obesity risk 48 .

Beyond the fetal period, nutrition and other input to health in infancy, childhood, and adolescence are important determinants of adult body composition and obesity risk. In light of the large increases in overweight and obesity in children as well as adults, attempts have been made to determine the ages at which faster weight gain relates to later obesity. A large literature relates “rapid growth” during infancy to risk of obesity in later childhood and into adulthood 49 . In addition, rapid weight gain, particularly from mid-childhood on, is related to increased risk of elevated blood pressure or impaired fasting glucose in young adulthood in low- and middle-income countries 50 . Concerns have been raised about the promotion of rapid weight gain in children who are malnourished. In low-income countries catch-up or compensatory growth following a period of faltering growth is desirable, because it is associated with reduced morbidity and improved survival 51,52, 53 and better cognitive development 54 . A key concern is whether the benefits of faster growth in these settings outweigh the possible long-term risks. Based on the COHORTS analysis of children from five low- and middle-income countries, faster weight gain in the first two years of life has a number of benefits. It is associated with the development of lean body mass but not with increased risk of impaired fasting glucose or diabetes in young adulthood (papers of team under journal review: e.g., Kuzawa et al 55 ). Given observations that patterns of child growth have important consequences for the development of obesity and chronic diseases, another line of research focuses on factors that contribute to or protect against early development of adiposity. In this regard, the potential programming roles of early diet have been explored, including the roles of breast-feeding and high intake of dietary protein, fat, and sodium. These topics are important in light of the dramatic changes in diet composition that characterize many populations in the developing world.

Of course, early feeding issues are important. Some studies show a protective effect of breast-feeding on later development of obesity and chronic diseases 56, 57 , while other studies show no effects 58 . Similarly consistently high protein intake during complementary feeding in the first two years of life has been associated with a higher mean BMI and percentage body fat at age seven in cohort studies of German children 59 , and other researchers have suggested a strong link between high protein intake and obesity 60 .

Dietary fat may also play a role in the development of NCDs in terms of both the amount of fat and the composition of fats. The STRIP study in Finland demonstrates that lower total and saturated dietary fat intake in infancy results in lower serum cholesterol, LDL-c, and triglycerides (as well as lower blood pressure) in children up to age 14, even without effects on height, weight, or BMI 61, 62 . Worldwide, the increase in plant oil consumption has increased the intake of n-6 fatty acids and the ratio of n-6 to n-3 fatty acids. This is a concern, because high intake of n-6 fatty acids is associated with altered immune function, differentiation of preadipocytes into mature fat cells, and changes in fat deposition patterns. Another study relates high sodium intake from infant formula and weaning foods to increased blood pressure in adulthood 63 .

Dietary changes

The knowledge emerging with the developmental origins research provides only one dimension of the shift toward greater obesity. While early life exposures and biological insults appear to enhance the adverse effects of dietary change, in the end shifts in energy balance and the entire structure of the diet have played major concomitant and separate roles. We speak first of broad trends and then return to the issues of poverty and availability. These link the set of dynamic changes in our food supply with food security.

It is useful to understand how vastly diets have changed across the low- and medium-income world to converge on what we often term the “Western diet.” This is broadly defined by high intake of refined carbohydrates, added sugars, fats, and animal-source foods. Data available for low- and middle-income countries document this trend in all urban areas and increasingly in rural areas. Diets rich in legumes, other vegetables, and coarse grains are disappearing in all regions and countries. Some major global developments in technology have been behind this shift.

Edible oil–vegetable oil revolution

Fats have major benefits in improving flavor. Some scientists suggest that the selection of fat- as opposed to carbohydrate-rich foods is primarily determined by brain mechanisms that may include central levels of neurotransmitters, hormones, or neuropeptides 1 . In the 1950s and 1960s in the United States and Japan, technology was developed to cheaply remove oils from oilseeds (corn, soybean, cottonseed, red palm seeds, etc.) 1 . Breeding techniques to increase the oil content of these seeds accompanied the shifts, and higher-income countries saw a large increase in the availability of cheap vegetable oils. This was followed by removal of the erucic acid from rapeseed oil to create healthier canola oil accompanied by extensive research on the good and bad components of each edible oil (e.g., trans fats and specific fatty acids). By 2010 inexpensive oils were available throughout the developing world. Between 1985 and 2010 individual intake of vegetable oils increased threefold to sixfold, depending on the subpopulation studied. In China, which has moderate but not high vegetable oil intake, persons age two and older now consume on average almost 300 calories and more than 30 grams of vegetable oil daily 64 .

Caloric sweeteners

The globe’s diet is much sweeter today than heretofore 65 . For example, 75 percent of foods and beverages bought in the US contain added caloric sweeteners, and the average American aged 2 and older consumes about 375 kcal/day 66, 67 . In the United States, one of the few countries where the added sugar in the diet is estimated 68 , research has shown a remarkable stability of added sugar intake from food over the last 30 years, while added sugar from beverages has increased significantly 66 . In 1977–1978, two-thirds of added sugar in the US diet came from food, but today two-thirds comes from beverages. However, this may be an underestimate, as the USDA added-sugar estimate excludes fruit juice concentrate, a source of sugar that has seen major increases in consumption in the last decade and is now found in over 10 percent of US foods (unpublished data). Mexico, which experienced a doubling of caloric beverage intake to more than 21 percent of the kilocalories/day for all age groups from 1996 to 2002, is one of the few developing countries with data on caloric beverage patterns and trends 31, 69, 70 . While individual dietary intake data are not available for most low-income countries, national aggregate data on sugar available for consumption (food disappearance or food balance data) suggest that this is a major concern in all regions of the world 65 .

Shift toward increased animal-source food intake

Earlier research by C. L. Delgado and others at the International Food Policy Research Institute (IFPRI) found the beginning of a livestock revolution in the developing world 71 . Subsequent research by Popkin and others has shown major increases in production of beef, pork, dairy products, eggs, and poultry across low-and middle-income countries 72, 73 . Most of the global increases in animal-source foods have been in low- and middle-income countries. For example, India has had a major increase in consumption of dairy products and China in pork and eggs, among others.

The increase in animal-source food products has both positive and adverse health effects. On the one hand, for poor individuals throughout the developing world a few extra grams of animal-source foods can significantly improve the micronutrient profile of food consumed. On the other hand, excessive consumption of animal-source foods is linked with excessive saturated fat intake and increased mortality 74,75 .

Reduced intake of legumes, coarse grains, and other vegetables

While significant systematic research on the reduced consumption of these nutritionally important foods has not been undertaken, it is clear from case studies that consumption of beans, a vast array of bean products, and what we often term ‘coarse’ grains such as sorghum and millet has declined significantly 6, 76, 77 . This occurred from the 1960s through the 1980s in the United States and more recently across Asia and the rest of the Americas 78 .

Understanding the reasons for the trend toward increased consumption of animal-source food, oils, and caloric sweeteners and reduced consumption of legumes, coarse grains, and other vegetables begins with understanding the relative price structure shifts since World War II. Most of these changes are purposeful and relate to agricultural policies across the globe 6, 79 .

Food system changes

In the past 10 to 15 years, several factors have influenced the food supply of each country. The food system characterizing most urban and an increasing proportion of rural areas across low- and middle-income countries has changed drastically with globalized distribution of technology related to food production, transportation and marketing, mass media, and the flow of capital and services. Access to many new empty-calorie foods and beverages relates to current economic and social development. Modern food technology has provided enormous benefits in reducing food waste, enhancing sanitation, and reducing many adverse effects of seasonality, among myriad other benefits. The same is true of the modern supermarket. Here we highlight some of the potential adverse effects of these important changes while acknowledging critical benefits to producers and consumers.

A key component is modern food distribution and sales. This reflects the enormous penetration of super- and mega-market companies throughout the developing world 80 . Most countries also have large convenience store chains. The fresh market (wet or open public market) is disappearing as the major source of food in the developing world. These markets are being replaced by large regional and local supermarkets, which are usually part of multinational chains (e.g., Carrefour or Walmart) or, in countries such as South Africa and China, by domestic chains that function and look like the global chains. Increasingly, hypermarkets (megastores) are the major force driving changing food expenditures in a country or a region. For example, in Latin America supermarkets’ share of all retail food sales increased from 15 percent in 1990 to 60 percent by 2000. In comparison, supermarkets accounted for 80 percent of retail food sales in the United States in 2000. This process is also occurring at varying rates in Asia, Eastern Europe, the Middle East, and all urban areas of Africa. We will undertake a national survey of diet and related factors in India in 2012.

One study suggests that the shifts in the food environment might enhance intake of processed, lower-quality foods 81 . Carlos Monteiro has been particularly clear in his concern that this modern food environment has impacted diets 4, 5, 82 . Indeed his concern regarding processing meshes well with the vast shift away from consumption of legumes and coarse grains to consumption of refined grains purchased at modern supermarkets and convenience stores, which have penetrated urban Africa and Asia and most of the Middle East and Latin America.

The potential adverse effects of these trends are increased access to cheaper processed, high-fat, added-sugar, and salt-laden foods in developing countries. At the same time, they are the purveyors of some good. For example, supermarkets were instrumental in the development of ultra-heat treatment (UHT) for pasteurization of milk, giving it a long shelf life (not requiring refrigeration) and providing a safe source of milk for all income groups. Supermarkets were also key players in establishing food safety standards 83 . Most importantly, they solved the cold chain problem and in many instances have brought higher-quality produce to the urban consumer throughout the year. Other factors include the liberalization of direct foreign investment, trade liberalization, and the saturation of Western markets that has pushed growing companies into other locales. Improvements in the logistics and procurement systems used by supermarkets have allowed them to compete, on cost, with the more typical outlets in developing countries—the small mom-and-pop stores and wet markets (fresh or open public markets) for fruits, vegetables, and all other products.

Another result of the global changes in food consumption is the freer flow in food trade linked with the WTO. For instance, barriers to edible oil imports have been reduced, and vegetable oil production has been centralized to compete with imports and to significantly lower prices of vegetable oil in countries such as China.

These changes along with global investments in agriculture over the last half century have produced a large shift in relative prices to favor animal-source foods, edible oils, and other key global commodities, including sugar 79 . Supplemental Figure 3, reproduced from research at the IFPRI, highlights some of the global trends that have resulted from the vast investment in the animal foods sector and feed crops across the globe 71, 79 . Supplemental Figure 4 highlights the real shifts in China in relative costs of selected foods based on data from 330 communities and their food markets 84 .

Food security and the dual burden of undernutrition and obesity

This rapid transition in income and diet and the large shift toward animal-source food consumption creates major demands for basic grains to feed livestock, disregarding the needs of the poor for the same food supply. While drought, climate change, and increased demand for ethanol have contributed to global food prices, the longer-term structural shift relates to demand for animal-source food and its impact on corn, rice, and wheat prices. In the face of the need for basic foods for the poor, the marketing, desirability, and availability of low-cost edible oils, empty calorie foods, and such have encouraged urban poor people to consume lower-quality foods that are obesogenic (most likely more processed foods, but this has not yet been documented). These complex changes are reflected in the emergence of obesity alongside hunger even in the same households.

Families faced with an inability to grow food or inadequate income to purchase food will likely opt for the cheapest cost per calorie from the available choices. When food prices for basic grains double or triple, the pressures to adjust food purchases increase. Among the most salient issues are the vulnerability of poor female-headed households 85 and the combination of price increases and volatility in global food markets (linked also with climate change issues). It is also important to note that the relative price changes matter most. If prices of fatty foods, oils, sugar, and animal-source foods go down relative to legumes, fruits, and other vegetables, the latter items become less attractive.

Despite substantial economic growth, large inequalities remain in many low- and middle-income countries, and it is common to see problems of underweight, stunting, and micronutrient deficiencies side by side with increasing rates of obesity. This “dual burden” of undernutrition and obesity exists not only in countries and communities 86 but in households 87, 88 and even in individuals, who may have excess adiposity along with micronutrient deficiencies, such as iron deficiency anemia 87, or stunting and overweight. Dual-burden households are most common in countries undergoing the nutrition transition 87, 88 and may reflect gender or generation differences in food allocation related to social norms. For example, high-quality foods may be given preferentially to adult males rather than to children. But other patterns may exist. In China, it is common to indulge children in the wake of the one-child population control strategy 91, 92 . Individuals of different generations may also respond differently to social and economic changes, with the younger generation adopting new dietary patterns more quickly while the elderly continue to eat in more traditional (and sometimes healthier) ways.

A challenge for programs and policies is the need to address food insecurity and hunger without adding to the burden of overweight and obesity. This is particularly challenging given the relatively low cost and high availability of energy-dense but low-micronutrient-content foods. Again it is relative prices that matter. The lack of focus on coarse grains, legumes, and other vegetables and the vast attention to sugar crops, oilseeds, vegetable oil technologies, and cheaper animal-source foods have contributed to the global shift in diets.

In countries such as Mexico, Brazil, Chile, and China, where great strides have been made to minimize acute malnutrition through programs targeting vulnerable subpopulations, hunger and malnutrition have been reduced. An example is Oportunidades in Mexico, the conditional cash transfer program that provides a stipend and complementary food for preschoolers 93, 94 . These countries recognize that the programs must be tailored to address malnutrition while not accelerating energy imbalance and obesity among the recipients, as has occurred in some programs 93, 95 . For instance, Chile continued to feed young children in its various feeding programs even when most were adequately nourished and did not revise the programs to deal with energy imbalance issues for some time after they reduced undernutrition 95 . The Mexican government found a need to reduce the fat content of the milk along with other changes in its feeding programs to address problems of child obesity.



Chemical kinetics

Kinetics is the study of the rates of chemical processes.

The rate of a reaction is defined as the change in concentration over time:

\[\text{rate}=\dfrac{\Delta(\text{concentration})}{\Delta t}\]

Rate Expressions describe reactions in terms of the change in reactant or product concentrations over the change in time. The rate of a reaction can be expressed by any one of the reactants or products in the reaction.

There are a couple of rules to writing rate expressions:

1) Expressions for reactants are given a negative sign. This is because the reactant is being used up or decreasing.

2) Expressions for products are positive. This is because they are increasing.

3) All of the rate expressions for the various reactants and products must equal each other to be correct. (This means that the stoichiometry of the reaction must be compensated for in the expression)

Example: For a reaction written as 2X + 3Y → 5Z, the rate expression would be

\[\text{rate}=-\dfrac{1}{2}\dfrac{\Delta[\text{X}]}{\Delta t}=-\dfrac{1}{3}\dfrac{\Delta[\text{Y}]}{\Delta t}=+\dfrac{1}{5}\dfrac{\Delta[\text{Z}]}{\Delta t}\]

The mathematical way of looking at it: the rate may vary with time (and concentration), so it is usual to define the rate over a very small time interval, Δt. We think of the rate as the derivative of concentration with respect to time, for example d[A]/dt for a species A. This derivative is the slope of a graph of concentration against time, taken at a particular time. On the graph, an exponential fit is used to create a best-fit curve that allows you to calculate the rate at any point.

We have already established that a change in concentration can affect the rate at which a reaction proceeds (collision theory). As a reaction progresses, the concentrations of both reactants and products change, and thus the rate of the reaction changes. This also means that the rate of a reaction can be expressed in terms of the diminishing concentrations of its reactants or the increasing concentrations of its products. The expressions used to describe these relationships are called Rate Laws or Rate Equations.

Three ways to quantitatively determine rate:

Initial Rate: The Method of Initial Rates involves measuring the rate of reaction, r, at very short times, before any significant changes in concentration occur. Consider, for example, the reaction A + 2B → 3C.

While the form of the differential rate law might be very complicated, many reactions have a rate law of the following form: \(r = k[A]^a[B]^b\)

The initial concentrations of A and B are known; therefore, if the initial reaction rate is measured, the only unknowns in the rate law are the rate constant, k, and the exponents a and b. One typically measures the initial rate for several different sets of concentrations and then compares the initial rates.
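As an illustration of how this comparison works (the numbers below are made up for the hypothetical reaction A + 2B → 3C, not measured data), the exponents can be extracted by comparing trials in which only one initial concentration changes:

```python
import math

# Hypothetical initial-rate data: ([A]0 in M, [B]0 in M, initial rate r in M/s)
trials = [
    (0.10, 0.10, 2.0e-3),
    (0.20, 0.10, 4.0e-3),   # doubling [A] doubles r    -> a = 1
    (0.10, 0.20, 8.0e-3),   # doubling [B] quadruples r -> b = 2
]

# r2 / r1 = ([A]2 / [A]1)^a, so a = log(r2/r1) / log([A]2/[A]1); likewise for b.
a = math.log(trials[1][2] / trials[0][2]) / math.log(trials[1][0] / trials[0][0])
b = math.log(trials[2][2] / trials[0][2]) / math.log(trials[2][1] / trials[0][1])
k = trials[0][2] / (trials[0][0] ** a * trials[0][1] ** b)

print(round(a), round(b), k)  # 1, 2, and k = r / ([A]^a [B]^b) ≈ 2.0
```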




Substitution Effect

It follows from the law of demand that the quantity demanded of a product increases if the product price decreases and vice versa. One reason for this phenomenon is substitution, i.e., consumers discontinue consumption of the product whose price increases and switch over to other similar products. This happens when the increase in price renders the product more expensive than its substitutes and rational consumers decide that it is not worthwhile to continue consuming the product at its increased price.

To determine the magnitude of the substitution effect, we ignore the income effect, i.e., the rotation of the budget line. We do this by shifting the budget line outwards such that it intersects the initial indifference curve at Point S. The decrease in quantity demanded of movies that occurs due to the movement along the initial indifference curve IC1 from Point E to Point S represents the substitution effect.

Please note that the substitution effect is at play in changing quantity demanded when all other determinants of demand, i.e., the price of substitute goods, income level, etc., are constant. The rotation of the budget line in the current example is due to an imputed change in real income and not an actual change in income. When there is an actual change in income level, it shifts the demand curve, i.e., it causes a change in demand at all price levels.


Establishing a Process to Guide Decision Making at Tier 3

As noted earlier, within a multi-tier RTI approach it is important to establish a process for a) determining which students are experiencing difficulties, b) selecting intervention strategies or supports and matching these supports to students, and c) evaluating whether the intervention strategies are helpful. At each tier along the continuum, the process may vary in its intensity, yet it will always follow a consistent series of questions or steps. Practitioners can guide their decision making by adhering to a self-questioning process in which they ask themselves a consistent series of guiding questions.

This self-questioning process is familiar to most educators and is used formally or informally by many effective teachers as they proactively work to assess the progress of students in their classrooms. For example, teachers who are responsive to the individual needs of students in their classrooms regularly assess students’ skills and responsiveness to instructional strategies, providing additional supports and remediation at a whole-class, small-group, or individual level as necessary.

In school-wide, multi-tier approaches to RTI, a similar, but often more formalized, process is applied at a whole-school, classroom, and individual student level. Across tiers, the nature of services and support provided are differentiated on the basis of the intensity of the problems and the magnitude of need. At Tier 3, efforts focus on the needs of individual students who are experiencing significant problems in academic, social, and/or behavioral domains. Thus, the process at this level is more intensive and individualized than it is at other levels. In the sections that follow, considerations during each step of a Tier 3 self-questioning process are discussed.

Step 1: Who is experiencing a problem and what, specifically, is the problem?

The first step in the process is to define the problem, and embedded within this step is noting who is experiencing the problem and what level of support (i.e., Tier 1, Tier 2, or Tier 3) is warranted. When defining a problem, it is important to clearly describe what the problem “looks like” in objective, observable terms, so that all persons involved know they are talking about the same thing. Measurement of a problem should be direct and occur within the context (e.g., classroom setting or situation) in which the problem occurs. To quantify how much of a problem exists, the problem should be described in measurement terms (e.g., frequency, rate, duration, magnitude). Furthermore, to stay focused on working toward improving problem situations, it is helpful to describe problems as discrepancies between a student’s actual or current performance (i.e., “what is”) and desired or expected performance (i.e., “what should be”). Thus, in addition to measuring a student’s actual performance, criteria regarding expected levels of performance need to be established. By quantifying problems as discrepancies, educators can use this information to determine the magnitude or severity of a problem. This information can be useful in formalizing goals (i.e., a reduction in the discrepancy) and in prioritizing problems within and across students.

To illustrate this process, consider reading as an example. One measure of “reading health” shown to be predictive of later reading fluency and comprehension is the number of words a student reads correctly per minute, or oral reading fluency (Hosp & Fuchs, 2005). The Dynamic Indicators of Basic Early Literacy Skills (DIBELS; http://www.dibels.uoregon.edu) is a research-based, standardized, norm-referenced measure of pre-reading and reading skills that includes a measure of oral reading fluency for Grades 1 to 6 (Good, Gruba, & Kaminski, 2002). The DIBELS measures were designed for use as screening and evaluation tools, and scores on the DIBELS can be used to place students in categories of reading risk. Prespecified, research-based goal rates have been established for the DIBELS and are available on the Web site just mentioned. These goal rates might be used as “expected performance” standards against which to compare actual student performance in an RTI model. Specifically, students who read at or above recommended (i.e., benchmark) rates are considered to be at low risk of reading problems. In contrast, students who perform below benchmark rates are considered to be either at “some risk” or “at risk” of developing reading problems.

The DIBELS benchmark criteria suggest, for example, that a 3rd grade student is expected to read 77 or more words correctly per minute in the beginning (fall term) of 3rd grade, 92 or more words in the middle (winter term), and 110 or more at the end (spring term). Thus, a student who reads fewer words correctly per minute than the specified benchmark amount (i.e., 77 words in the fall of Grade 3) might be viewed as experiencing a reading problem and, depending on his or her scores, might be viewed as in need of strategic (Tier 2) or intensive (Tier 3) reading intervention supports. To illustrate this more clearly, consider hypothetical data taken in the fall from all 3rd grade students at one elementary school. Imagine that all of the students in Grade 3 were screened for reading difficulties using the DIBELS. As with any screening device, the DIBELS is designed to be sensitive enough to identify students who may be at risk of experiencing reading problems. Thus, to determine who might be at risk of experiencing reading difficulties, the team of 3rd grade teachers would look to see which students scored below the expected goal rate of 77 words read correctly per minute. For example, let’s assume that Ben read at a rate of 67 words correctly per minute, which means he read 10 fewer words correctly per minute than the desired rate (i.e., 77 – 67 = 10). Ella, who read 30 words correctly per minute, read 47 fewer words correctly per minute than the desired rate (i.e., 77 – 30 = 47). Both children are reading at rates less than the desired rate of 77 and may be in need of additional reading supports, but the quantified problem (i.e., discrepancy between actual and expected performance) is greater for Ella. Of course, this is not to suggest that a student should be placed in a category of Tier 2 or Tier 3 support on the basis of a single score. Instead, screening devices like the DIBELS, which can be administered repeatedly and are time-efficient measures, are useful because they can help identify students who may be in need of additional intervention supports or further assessment to determine need for support. See the Jenkins and Johnson article in the Universal Screening section of this Web site for more information.
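The discrepancy arithmetic in this example is simple enough to carry out at scale for an entire grade level. The following minimal sketch assumes a hypothetical set of fall screening scores and uses only the Grade 3 fall benchmark of 77 words correct per minute cited above; the student names, scores, and function names are illustrative and are not part of DIBELS or of any published decision rule.

```python
# Minimal sketch: quantify reading problems as discrepancies from a benchmark.
# The benchmark (77 words correct per minute, fall of Grade 3) comes from the
# text above; the student names and scores are hypothetical examples.

FALL_GRADE3_BENCHMARK = 77  # expected words read correctly per minute (wcpm)

scores = {"Ben": 67, "Ella": 30, "Avery": 82}  # hypothetical screening data


def discrepancy(actual, expected=FALL_GRADE3_BENCHMARK):
    """Return how far a student's actual performance falls below expectation.

    A positive value means the student is below benchmark; zero or a negative
    value means the student met or exceeded it.
    """
    return expected - actual


# Sort students by the size of the quantified problem so that the largest
# discrepancies (e.g., Ella's 47 words) can be prioritized first.
for name, wcpm in sorted(scores.items(), key=lambda kv: discrepancy(kv[1]), reverse=True):
    gap = discrepancy(wcpm)
    status = "below benchmark" if gap > 0 else "at/above benchmark"
    print(f"{name}: {wcpm} wcpm, discrepancy {gap:+d} ({status})")
```

Sorting by discrepancy mirrors the prioritization described above, where Ella’s larger gap (47 words) would be flagged ahead of Ben’s (10 words), while still leaving placement decisions to further assessment rather than a single score.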

One important question that schools need to consider is whether a student should receive Tier 1, 2, or 3 services. Tier 3 services are designed to address the needs of students who are experiencing significant problems and/or are unresponsive to Tier 1 and Tier 2 efforts. Schools should establish guidelines for determining how students will enter into Tier 1, 2, or 3 levels of support. Although guidelines may vary from school to school, students in need of Tier 3 services should be able to access these services in one of two ways. First, students receiving Tier 1 or Tier 2 supports who are not making adequate progress and are unresponsive to the continuum of supports available at Tier 1 or Tier 2 might be moved into Tier 3 to receive more intensive intervention supports. Second, there should be a mechanism through which students who are experiencing very severe or significant academic, behavioral, or social-emotional problems can be triaged directly into Tier 3 to receive necessary intensive and individualized intervention supports. For some students, the second option is necessary to provide needed supports in a timely fashion rather than delaying access to these supports by making students wait to go through Tier 1 and Tier 2 intervention services. Thus, in contrast to a fixed multi-gating system wherein students would only be able to receive more intensive services (i.e., Tier 3) following some time period of less intensive (i.e., Tier 1 or 2) services, the RTI approach should allow some flexibility to serve students based on their level of need in a timely and efficient manner.

Step 2: What intervention strategies can be used to reduce the magnitude or severity of the problem?

When a student has been identified as being in need of Tier 3 intervention supports, the next step in the self-questioning process is the selection and implementation of appropriate intervention supports. One option in this step is to move directly into intervention by selecting an evidence-based intervention strategy that has a standard protocol for implementation. There are many intervention strategies from which to choose. For example, several Web sites provide teacher-friendly intervention resources (e.g., http://www.interventioncentral.com, http://www.free-reading.net, http://ies.ed.gov/ncee/wwc/).

A second option at this stage is to collect more information before moving to intervention. To assist in the development and selection of an intervention for a specific problem, it may be important to conduct an analysis of the problem’s context and function. To do so, we must ask what factors are contributing to the problem and in what ways we can alter those factors to promote learning and reduce the magnitude or severity of the problem. One end goal of this stage in the process is to “diagnose the conditions under which students’ learning is enabled” (Tilly, 2002, p. 29). This goal is accomplished by gathering information (e.g., direct observation, interviews, rating scales, curriculum-based measures of academic skills, review of records) from a number of sources (e.g., the student, teacher, parent, peers, administrator) to answer questions helpful in furthering our understanding of why (i.e., under what conditions) the problem is occurring. Specifically, we want to know where, when, with whom, and during what activities the problem is likely or unlikely to occur.

Although many questions can be asked at this stage, it is important to stay focused on identifying the factors that we can change (e.g., instructional strategies, curriculum materials) in attempting to mitigate the problem situation. For example, when a child’s classroom performance is below our expectations, we might ask whether the problem is a skill (i.e., can’t do) or a performance (i.e., won’t do) problem (for more information on this process, see Daly, Chafouleas, & Skinner, 2005; Daly, Martens, Witt, & Dool, 1997; Witt, Daly, & Noell, 2000). Another important, and related, question to ask concerning learning problems is whether the alignment between the student’s skill level, the curriculum materials, and instructional strategies is appropriate (Howell & Nolet, 2000). When the problem involves performance that falls below what is expected, it is important to ask whether this is because the student a) does not want to perform the task or activity, b) would rather be doing something else, c) gets something (e.g., attention, access to a preferred activity) by not doing the task, d) does not have the prerequisite skills to perform the task, e) is given work that is too difficult or presented in a manner that the student hasn’t seen before, or f) has been given insufficient time to practice the skill to fluency.

In answering the above questions, there is a direct link between our questioning and the development of a solution. For example, if the information we collect suggests that the student has the prerequisite skills needed to decode connected text but does so slowly, one hypothesis we might have is that the student has not had sufficient time to practice reading to develop fluency. An appropriate intervention for this student might focus on building reading fluency through an intervention that involves increased reading practice, such as repeated reading (see Daly et al., 2005, for a description of repeated reading). Alternatively, if we suspect that a student’s reading problem is related to not having enough assistance to acquire the skill and/or a deficit in pre-reading skills (e.g., problems with phonemic awareness), our hypothesized intervention strategy might focus on direct skill development of prerequisite skills, with prompting and corrective feedback. In each example, the reading problem was related to a skill issue, and the solutions were linked to the type of skill problem (e.g., acquisition, fluency).

If the information we gather suggests that the reading problem is not a skill problem, but rather a performance (i.e., won’t do) issue, then the intervention should focus on addressing the function (e.g., escape from the task) of the behavior. Much has been written about linking assessment to intervention through functional behavioral assessment, and when problems are performance issues, interventions can address behavior function in several ways. When a student’s behavior is maintained by escape from a task, for example, the intervention might reduce the student’s motivation to escape the task by making the task less aversive (e.g., adjusting the choice of materials to increase interest), teach the student a more appropriate way to communicate that the task is aversive (e.g., requesting a brief break), or allow escape from the task following performance of the task for a prespecified time period.

Regardless of whether educators decide to move directly to intervention or to collect more information to analyze the problem, the focus of this step in the self-questioning process is on selecting a solution (intervention strategy) that reduces the magnitude or severity of the problem (i.e., reduces the discrepancy between the student’s current and expected performance). Interventions should be selected on the basis of their functional relevance to the problem (i.e., match to why the problem is occurring), contextual fit (i.e., match to the setting and situation in which the problem occurs), and likelihood of success (i.e., demonstrated success within the research literature). Tier 3 interventions are designed to address significant problems for which students are in need of intensive interventions. As a result, Tier 3 interventions require careful planning. Specifically, an intervention plan should describe the following:

In addition, an intervention plan should specify timelines for implementing objectives and for achieving desired goals. The end goal of this stage of the process is a clearly delineated intervention plan. (For examples of evidence-based intervention strategies, see the What Works Clearinghouse at http://www.whatworks.ed.gov, a resource developed by the Institute of Education Sciences, U.S. Department of Education.)

Step 3: Did the student’s problem get resolved as a result of the intervention?

An individual’s RTI can only be known following actual implementation of an intervention and careful (i.e., reliable and valid), repeated measurement of his or her behavior over time. Although a thorough description and analysis of the problem, why it is occurring, and what interventions are likely to be effective is important to the self-questioning process at Tier 3, the process is incomplete until practitioners ask if the student’s problem was resolved as a result of the intervention. The best way to determine whether a student is making progress toward the desired goals in RTI is to collect ongoing information regarding the integrity with which the intervention was implemented and, relative to intervention implementation, the discrepancy between desired and actual performance. The intervention process does not end until the problem (i.e., discrepancy between what is and what should be) is resolved. Thus, continuous monitoring and evaluation are essential parts of an effective RTI process. Specifically, information should be collected on targeted student outcomes (i.e., measurement of change in behavior relative to desired goals), proper implementation of the intervention (i.e., measure whether the intervention is implemented as planned), and social validity (practicality and acceptability of the intervention and outcome). When data are reviewed and analyzed, a decision should be made regarding whether the intervention plan should be revised or goals adjusted. Single-subject design methods are key to determining a student’s RTI (for further information, see Olson, Daly, Andersen, Turner, & LeClair, 2007).
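One hedged way to operationalize the discrepancy between desired and actual performance during ongoing monitoring is to compare the slope of a trend line fitted to the student’s repeated measurements against the slope of an aimline drawn from the starting level to the goal. The sketch below assumes hypothetical weekly oral reading fluency probes, a hypothetical goal, and a simple slope comparison; it is one illustrative approach, not a prescribed RTI decision rule, and single-subject design considerations (Olson et al., 2007) would still apply.

```python
# Minimal sketch: compare a fitted trend line to an aimline for one student.
# All numbers here are hypothetical; actual decision rules vary by school.
import numpy as np

weeks = np.arange(8)                                  # eight weekly probes
scores = np.array([30, 32, 31, 35, 36, 38, 37, 40])   # words correct per minute
goal_week, goal_score = 12, 77                        # assumed goal: benchmark by week 12

# Slope of the student's actual growth (ordinary least-squares fit).
actual_slope, _ = np.polyfit(weeks, scores, 1)

# Slope of the aimline from the first observed score to the goal.
aim_slope = (goal_score - scores[0]) / (goal_week - weeks[0])

print(f"actual growth: {actual_slope:.2f} wcpm/week")
print(f"needed growth: {aim_slope:.2f} wcpm/week")
if actual_slope < aim_slope:
    print("Progress is slower than the aimline; consider revising the plan or goals.")
else:
    print("Progress is at or above the aimline; continue the plan and keep monitoring.")
```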


1.3: Rates of Change and Behavior of Graphs

In Section 14.3 "Methods of Determining Reaction Order", you learned that the integrated rate law for each common type of reaction (zeroth, first, or second order in a single reactant) can be plotted as a straight line. Using these plots offers an alternative to the methods described for showing how reactant concentration changes with time and determining reaction order.

We will illustrate the use of these graphs by considering the thermal decomposition of NO2 gas at elevated temperatures, which occurs according to the following reaction:

2NO2(g) → 2NO(g) + O2(g)

Experimental data for this reaction at 330°C are listed in Table 14.5 "Concentration of NO2 as a Function of Time at 330°C"; they are provided as [NO2], ln[NO2], and 1/[NO2] versus time to correspond to the integrated rate laws for zeroth-, first-, and second-order reactions, respectively. The actual concentrations of NO2 are plotted versus time in part (a) in Figure 14.15 "The Decomposition of NO2". Because the plot of [NO2] versus t is not a straight line, we know the reaction is not zeroth order in NO2. A plot of ln[NO2] versus t (part (b) in Figure 14.15) shows us that the reaction is not first order in NO2 because a first-order reaction would give a straight line. Having eliminated zeroth-order and first-order behavior, we construct a plot of 1/[NO2] versus t (part (c) in Figure 14.15). This plot is a straight line, indicating that the reaction is second order in NO2.

Table 14.5 Concentration of NO2 as a Function of Time at 330°C

Time (s) [NO2] (M) ln[NO2] 1/[NO2] (M⁻¹)
0 1.00 × 10⁻² −4.605 100
60 6.83 × 10⁻³ −4.986 146
120 5.18 × 10⁻³ −5.263 193
180 4.18 × 10⁻³ −5.477 239
240 3.50 × 10⁻³ −5.655 286
300 3.01 × 10⁻³ −5.806 332
360 2.64 × 10⁻³ −5.937 379

Figure 14.15 The Decomposition of NO2

These plots show the decomposition of a sample of NO2 at 330°C as (a) the concentration of NO2 versus t, (b) the natural logarithm of [NO2] versus t, and (c) 1/[NO2] versus t.

We have just determined the reaction order using data from a single experiment by plotting the concentration of the reactant as a function of time. Because of the characteristic shapes of the lines shown in Figure 14.16 "Properties of Reactions That Obey Zeroth-, First-, and Second-Order Rate Laws", the graphs can be used to determine the reaction order of an unknown reaction. In contrast, the method described in Section 14.3 "Methods of Determining Reaction Order" required multiple experiments at different NO2 concentrations as well as accurate initial rates of reaction, which can be difficult to obtain for rapid reactions.
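To make the graphical test concrete, the following sketch fits a straight line to each transformed data set from Table 14.5 and reports how linear each relationship is. This is an illustrative computation layered on the tabulated data, not part of the original discussion; the near-perfect linearity of 1/[NO2] versus t is what identifies the reaction as second order.

```python
# Minimal sketch: test which transformation of the Table 14.5 data is linear.
# The most linear plot (correlation coefficient closest to ±1) indicates the order.
import numpy as np

t = np.array([0, 60, 120, 180, 240, 300, 360], dtype=float)                        # s
no2 = np.array([1.00e-2, 6.83e-3, 5.18e-3, 4.18e-3, 3.50e-3, 3.01e-3, 2.64e-3])    # M

candidates = {
    "zeroth order ([NO2] vs t)": no2,
    "first order (ln[NO2] vs t)": np.log(no2),
    "second order (1/[NO2] vs t)": 1.0 / no2,
}

for label, y in candidates.items():
    r = np.corrcoef(t, y)[0, 1]        # Pearson correlation with time
    slope, _ = np.polyfit(t, y, 1)     # slope of the least-squares line
    print(f"{label}: r = {r:.4f}, slope = {slope:.3e}")

# The 1/[NO2] data are essentially perfectly linear in t (r very close to 1),
# consistent with the second-order behavior described in the text; for a
# second-order reaction the slope of this line estimates k.
```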

Figure 14.16 Properties of Reactions That Obey Zeroth-, First-, and Second-Order Rate Laws

Example 9

Dinitrogen pentoxide (N2O5) decomposes to NO2 and O2 at relatively low temperatures in the following reaction:

2N2O5 → 4NO2 + O2

This reaction is carried out in a CCl4 solution at 45°C. The concentrations of N2O5 as a function of time are listed in the following table, together with the natural logarithms and reciprocal N2O5 concentrations. Plot a graph of the concentration versus t, ln concentration versus t, and 1/concentration versus t and then determine the rate law and calculate the rate constant.

Time (s) [N2O5] (M) ln[N2O5] 1/[N2O5] (M⁻¹)
0 0.0365 −3.310 27.4
600 0.0274 −3.597 36.5
1200 0.0206 −3.882 48.5
1800 0.0157 −4.154 63.7
2400 0.0117 −4.448 85.5
3000 0.00860 −4.756 116
3600 0.00640 −5.051 156

Given: balanced chemical equation, reaction times, and concentrations

Asked for: graph of data, rate law, and rate constant

A Use the data in the table to separately plot concentration, the natural logarithm of the concentration, and the reciprocal of the concentration (the vertical axis) versus time (the horizontal axis). Compare the graphs with those in Figure 14.16 "Properties of Reactions That Obey Zeroth-, First-, and Second-Order Rate Laws" to determine the reaction order.

B Write the rate law for the reaction. Using the appropriate data from the table and the linear graph corresponding to the rate law for the reaction, calculate the slope of the plotted line to obtain the rate constant for the reaction.

A The plot of ln[N2O5] versus t gives a straight line, whereas the plots of [N2O5] versus t and 1/[N2O5] versus t do not. This means that the decomposition of N2O5 is first order in [N2O5].

B The rate law for the reaction is therefore

rate = k[N2O5]

Calculating the rate constant is straightforward because we know that the slope of the plot of ln[A] versus t for a first-order reaction is −k. We can calculate the slope using any two points that lie on the line in the plot of ln[N2O5] versus t. Using the points for t = 0 and 3000 s,

\[\text{slope}=\dfrac{\ln[\mathrm{N_2O_5}]_{3000}-\ln[\mathrm{N_2O_5}]_{0}}{3000\text{ s}-0\text{ s}}=\dfrac{(-4.756)-(-3.310)}{3000\text{ s}}=-4.820\times 10^{-4}\text{ s}^{-1}\]

Because the slope equals −k, the rate constant is k = 4.820 × 10⁻⁴ s⁻¹.
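As a supplementary check (not part of the original example), one can fit all seven ln[N2O5] points from the table by least squares instead of using only the two endpoints; the fitted slope should agree closely with the two-point value above.

```python
# Minimal sketch: estimate k for the first-order decomposition of N2O5
# by fitting ln[N2O5] versus t with least squares (slope = -k).
import numpy as np

t = np.array([0, 600, 1200, 1800, 2400, 3000, 3600], dtype=float)              # s
n2o5 = np.array([0.0365, 0.0274, 0.0206, 0.0157, 0.0117, 0.00860, 0.00640])    # M

slope, _ = np.polyfit(t, np.log(n2o5), 1)
k = -slope
print(f"slope = {slope:.3e} s^-1, so k is approximately {k:.3e} s^-1")  # near 4.8e-4 s^-1
```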

1,3-Butadiene (CH2=CH—CH=CH2; C4H6) is a volatile and reactive organic molecule used in the production of rubber. Above room temperature, it reacts slowly to form products. Concentrations of C4H6 as a function of time at 326°C are listed in the following table along with ln[C4H6] and the reciprocal concentrations. Graph the data as concentration versus t, ln concentration versus t, and 1/concentration versus t. Then determine the reaction order in C4H6, the rate law, and the rate constant for the reaction.

Time (s) [C4H6] (M) ln[C4H6] 1/[C4H6] (M⁻¹)
0 1.72 × 10⁻² −4.063 58.1
900 1.43 × 10⁻² −4.247 69.9
1800 1.23 × 10⁻² −4.398 81.3
3600 9.52 × 10⁻³ −4.654 105
6000 7.30 × 10⁻³ −4.920 137

Summary

For a zeroth-order reaction, a plot of the concentration of any reactant versus time is a straight line with a slope of −k. For a first-order reaction, a plot of the natural logarithm of the concentration of a reactant versus time is a straight line with a slope of −k. For a second-order reaction, a plot of the inverse of the concentration of a reactant versus time is a straight line with a slope of k.
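For reference, the straight-line relationships summarized here are the standard integrated rate laws, written for a generic reactant A with initial concentration [A]₀:

\[\begin{align*}
[A] &= [A]_0 - kt && \text{(zeroth order)}\\[4pt]
\ln[A] &= \ln[A]_0 - kt && \text{(first order)}\\[4pt]
\dfrac{1}{[A]} &= \dfrac{1}{[A]_0} + kt && \text{(second order)}
\end{align*}\]

In each case the quantity on the left is linear in t, which is why the corresponding plots have slopes of −k, −k, and k, respectively.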

Key Takeaway

  • Plotting the concentration of a reactant as a function of time produces a graph with a characteristic shape that can be used to identify the reaction order in that reactant.

Conceptual Problems

Compare first-order differential and integrated rate laws with respect to the following. Is there any information that can be obtained from the integrated rate law that cannot be obtained from the differential rate law?

  1. the magnitude of the rate constant
  2. the information needed to determine the order
  3. the shape of the graphs

In the single-step, second-order reaction 2A → products, how would a graph of [A] versus time compare to a plot of 1/[A] versus time? Which of these would be the most similar to the same set of graphs for A during the single-step, second-order reaction A + B → products? Explain.

For reactions of the same order, what is the relationship between the magnitude of the rate constant and the reaction rate? If you were comparing reactions with different orders, could the same arguments be made? Why?

Answers

  1. For a given reaction under particular conditions, the magnitude of the first-order rate constant does not depend on whether a differential rate law or an integrated rate law is used.
  2. The differential rate law requires multiple experiments to determine reactant order; the integrated rate law needs only one experiment.
  3. Using the differential rate law, a graph of concentration versus time is a curve with a slope that becomes less negative with time, whereas for the integrated rate law, a graph of ln[reactant] versus time gives a straight line with slope = −k. The integrated rate law allows you to calculate the concentration of a reactant at any time during the reaction; the differential rate law does not.

The reaction rate increases as the rate constant increases. We cannot directly compare reaction rates and rate constants for reactions of different orders because the rate constants have different units and are therefore not mathematically equivalent.

Numerical Problems

One method of using graphs to determine reaction order is to use relative rate information. Plotting the log of the relative rate versus log of relative concentration provides information about the reaction. Here is an example of data from a zeroth-order reaction:

Varying [A] does not alter the reaction rate. Using the relative rates in the table, generate plots of log(rate) versus log(concentration) for zeroth-, first- and second-order reactions. What does the slope of each line represent?
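As an illustrative sketch (not a solution using the omitted data table), the code below generates idealized relative rates from rate = k[A]ⁿ for n = 0, 1, and 2 and fits log(rate) against log(concentration); the concentrations and rate constant are arbitrary assumptions chosen only to show how such log-log plots behave.

```python
# Minimal sketch: log(rate) vs log([A]) for idealized zeroth-, first-, and
# second-order rate laws (rate = k * [A]**n). All numeric values are arbitrary.
import numpy as np

k = 1.0
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # relative concentrations (arbitrary)

for n in (0, 1, 2):
    rate = k * conc**n
    slope, _ = np.polyfit(np.log10(conc), np.log10(rate), 1)
    print(f"order {n}: fitted log-log slope = {slope:.2f}")
```

Printing the fitted slope for each case shows directly what the slope of a log(rate) versus log(concentration) line encodes about the reaction.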

The table below follows the decomposition of N2O5 gas by examining the partial pressure of the gas as a function of time at 45°C. What is the reaction order? What is the rate constant? How long would it take for the pressure to reach 105 mmHg at 45°C?

