
17.5: Qualitative Analysis - Mathematics


Often all that we know about actors and events is simple co-presence. That is, either an actor was, or wasn't, present, and our incidence matrix is binary. Binary data of this kind are not well served by scaling approaches: the various dimensional methods operate on similarity/distance matrices, and measures like correlations (as used in two-mode factor analysis) can be misleading with binary data. Even correspondence analysis, which is friendlier to binary data, can be troublesome when data are sparse.

An alternative approach is block modeling. Block modeling works directly on the binary incidence matrix by trying to permute rows and columns to fit, as closely as possible, idealized images. This approach doesn't involve any of the distributional assumptions that are made in scaling analysis.

In principle, one could fit any sort of block model to actor-by-event incidence data. We will examine two models that ask meaningful (alternative) questions about the patterns of linkage between actors and events. Both of these models can be directly calculated in UCINET. Alternative block models, of course, could be fit to incidence data using more general block-modeling algorithms.

Two-Mode Core-Periphery Analysis

The core-periphery structure is an ideal typical pattern that divides both the rows and the columns into two classes. One of the blocks on the main diagonal (the core) is a high-density block; the other block on the main diagonal (the periphery) is a low-density block. The core-periphery model is indifferent to the density of ties in the off-diagonal blocks.

When we apply the core-periphery model to actor-by-actor data (see Network>Core/Periphery), the model seeks to identify a set of actors who have high density of ties among themselves (the core) by sharing many events in common, and another set of actors who have very low density of ties among themselves (the periphery) by having few events in common. Actors in the core are able to coordinate their actions; those in the periphery are not. As a consequence, actors in the core are at a structural advantage in exchange relations with actors in the periphery.

When we apply the core-periphery model to actor-by-event data (Network>2-Mode>Categorical Core/Periphery) we are seeking the same idealized "image" of a high and a low density block along the main diagonal. But, now the meaning is rather different.

The "core" consists of a partition of actors that are closely connected to each of the events in an event partition; and simultaneously a partition of events that are closely connected to the actors in the core partition. So, the "core" is a cluster of frequently co-occurring actors and events. The "periphery" consists of a partition of actors who are not co-incident to the same events; and a partition of events that are disjoint because they have no actors in common.

Network>2-Mode>Categorical Core/Periphery uses numerical methods to search for the partition of actors and of events that comes as close as possible to the idealized image. Figure 17.16 shows a portion of the results of applying this method to participation (not partisanship) in the California donors and initiatives data.

Figure 17.16: Categorical core-periphery model of California $1M donors and ballot initiatives (truncated)

The numerical search method used by Network>2-Mode>Categorical Core/Periphery is a genetic algorithm, and the measure of goodness of fit is stated in terms of a "fitness" score (0 means bad fit, 1 means excellent fit). You can also judge the goodness of the result by examining the density matrix at the end of the output. If the block model was completely successful, the 1,1 block should have a density of one, and the 2,2 block should have a density of zero. While far from perfect, the model here is good enough to be taken seriously.
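To make the density-matrix check concrete, the following sketch computes block densities for a candidate two-class partition of a small, entirely hypothetical incidence matrix. This is not UCINET's genetic-algorithm search; the function block_densities and the toy data are invented for illustration.

```python
# A minimal sketch: given a binary actor-by-event incidence matrix and a
# candidate two-class partition of rows and columns, compute the 2x2
# block-density matrix. In the idealized categorical core/periphery image,
# the (core, core) block has density near 1 and the (periphery, periphery)
# block has density near 0. All data here are toy values.
import numpy as np

def block_densities(incidence, row_core, col_core):
    """incidence: 0/1 array (actors x events);
    row_core, col_core: boolean masks marking the 'core' rows and columns."""
    densities = np.zeros((2, 2))
    for i, rows in enumerate([row_core, ~row_core]):      # core rows, periphery rows
        for j, cols in enumerate([col_core, ~col_core]):  # core cols, periphery cols
            block = incidence[np.ix_(rows, cols)]
            densities[i, j] = block.mean() if block.size else np.nan
    return densities

A = np.array([[1, 1, 1, 0],     # hypothetical donors x initiatives
              [1, 1, 1, 0],
              [1, 1, 1, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0]])
core_rows = np.array([True, True, True, False, False])
core_cols = np.array([True, True, True, False])
print(block_densities(A, core_rows, core_cols))
# dense (core, core) block, empty (periphery, periphery) block
```

A search procedure would score many candidate partitions in this way (or with a correlation-based fitness) and keep the best one.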

The blocked matrix shows a "core" composed of the Democratic Party, a number of major unions, and the building industry association who are all very likely to participate in a considerable number of initiatives (proposition 23 through proposition 18). The remainder of the actors are grouped into the periphery as both participating less frequently, and having few issues in common. A considerable number of issues are also grouped as "peripheral" in the sense that they attract few donors, and these donors have little in common. We also see (upper right) that core actors do participate to some degree (0.179) in peripheral issues. In the lower left, we see that peripheral actors participate somewhat more heavily (0.260) in core issues.

Two-Mode Factions Analysis

An alternative block model is that of "factions". Factions are groupings that have high density within the group, and low density of ties between groups. Network>Subgroups>Factions fits this block model to one-mode data (for any user-specified number of factions). Network>2-Mode>2-Mode Factions fits the same type of model to two-mode data (but for only two factions).

When we apply the factions model to one-mode actor data, we are trying to identify two clusters of actors who are closely tied to one another by attending all of the same events, but very loosely connected to members of other factions and the events that tie them together. If we were to apply the idea of factions to events in a one-mode analysis, we would be seeking to identify events that were closely tied by having exactly the same participants.

Network>2-Mode>2-Mode Factions applies the same approach to the rectangular actor-by-event matrix. In doing this, we are trying to locate joint groupings of actors and events that are as mutually exclusive as possible. In principle, there could be more than two such factions. Figure 17.17 shows the results of applying the two-mode factions block model to the participation of top donors in political initiatives.

Figure 17.17: Two-mode factions model of California $1M donors and ballot initiatives (truncated)

Two measures of goodness-of-fit are available. First, we have the "fitness" score, which is the correlation between the observed scores (0 or 1) and the scores that "should" be present in each block. The densities in the blocks also inform us about goodness of fit. For a factions analysis, an ideal pattern would be dense 1-blocks along the diagonal (many ties within groups) and zero-blocks off the diagonal (few ties between groups).
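As a sketch of how such a correlation-based fitness could be computed for a given two-faction assignment (this is not UCINET's implementation; the faction labels and toy matrix are hypothetical):

```python
# A minimal sketch: correlation "fitness" for a two-mode, two-faction blocking.
# The ideal image has 1s where an actor's faction matches an event's faction
# (the diagonal blocks) and 0s elsewhere; the fitness is the correlation
# between that image and the observed 0/1 ties.
import numpy as np

def factions_fitness(incidence, actor_faction, event_faction):
    """actor_faction / event_faction: arrays of 0/1 faction labels."""
    ideal = (actor_faction[:, None] == event_faction[None, :]).astype(float)
    obs = incidence.astype(float)
    return np.corrcoef(obs.ravel(), ideal.ravel())[0, 1]

# Hypothetical toy data: four actors, four events, two factions each
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 1, 1]])
fit = factions_fitness(A, np.array([0, 0, 1, 1]), np.array([0, 0, 1, 1]))
print(round(fit, 3))   # closer to 1 means the blocking fits the ideal image better
```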

The fit of the two factions model is not as impressive as the fit of the core-periphery model. This suggests that an "image" of California politics as one of two separate and largely disjoint issue-actor spaces is not as useful as an image of a high intensity core of actors and issues coupled with an otherwise disjoint set of issues and participants.

The blocking itself also is not very appealing, placing most of the actors in one faction (with modest density of 0.401). The second faction is small, and has a density (0.299) that is not very different from the off-diagonal blocks. As before, the blocking of actors by events is grouping together sets of actors and events that define one another.


Qualitative Analysis

Very often it is almost impossible to find, explicitly or implicitly, the solutions of a system (especially nonlinear ones). The qualitative approach, as well as the numerical one, is important since it allows us to draw conclusions whether or not we know the solutions.

Recall what we did for autonomous equations. First we looked for the equilibrium points and then, in conjunction with the existence and uniqueness theorem, we concluded that non-equilibrium solutions are either increasing or decreasing. This is the result of looking at the sign of the derivative. So what happens for autonomous systems? First recall that the components of the velocity vector are dx/dt and dy/dt. These vectors give the direction of the motion along the trajectories. We have the four natural directions (left-down, left-up, right-down, and right-up) and the other four directions (left, right, up, and down). These directions are obtained by looking at the signs of dx/dt and dy/dt and whether they are equal to 0. If both are zero, then we have an equilibrium point.

Example. Consider the model describing two species competing for the same prey

dx/dt = x(1 - x - y)
dy/dt = y(2 - 3x - y)

Let us focus only on the first quadrant, x >= 0 and y >= 0. First, we look for the equilibrium points. We must have

x(1 - x - y) = 0 and y(2 - 3x - y) = 0.

Algebraic manipulations imply

x = 0 or 1 - x - y = 0, and y = 0 or 2 - 3x - y = 0.

The equilibrium points are (0,0), (0,2), (1,0), and (1/2, 1/2).
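The equilibrium computation can be checked symbolically. The sketch below assumes the right-hand sides reconstructed above, dx/dt = x(1 - x - y) and dy/dt = y(2 - 3x - y):

```python
# A minimal sketch using SymPy to recover the equilibrium points of the
# competition system (right-hand sides as assumed above).
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x * (1 - x - y)          # dx/dt
g = y * (2 - 3*x - y)        # dy/dt

equilibria = sp.solve([sp.Eq(f, 0), sp.Eq(g, 0)], [x, y], dict=True)
print(equilibria)
# four solutions: (0,0), (0,2), (1,0), and (1/2, 1/2)
```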
Consider the region R delimited by the x-axis, the y-axis, the line 1 - x - y = 0, and the line 2 - 3x - y = 0.

In fact, looking at the first quadrant, we have three more regions to add to the one above. The direction of the motion depends on which region we are in (see the picture below).

The boundaries of these regions are very important in determining the direction of the motion along the trajectories. In fact, they help us visualize the trajectories, much as the slope field did for autonomous equations. These boundaries are called nullclines.

Consider the autonomous system

dx/dt = f(x, y)
dy/dt = g(x, y)

The x-nullcline is the set of points where dx/dt = 0, and the y-nullcline is the set of points where dy/dt = 0. Clearly the points of intersection between the x-nullcline and the y-nullcline are exactly the equilibrium points. Note that along the x-nullcline the velocity vectors are vertical, while along the y-nullcline the velocity vectors are horizontal. Note also that as long as we are traveling along a nullcline without crossing an equilibrium point, the direction of the velocity vector must stay the same. Once we cross an equilibrium point, the direction may change (from up to down, from right to left, and vice versa).

Example. Draw the nullclines for the competition system above and the velocity vectors along them.

The x-nullcline is given by

x(1 - x - y) = 0, that is, the union of the lines x = 0 and 1 - x - y = 0,

while the y-nullcline is given by

y(2 - 3x - y) = 0, that is, the union of the lines y = 0 and 2 - 3x - y = 0.

In order to find the direction of the velocity vectors along the nullclines, we pick a point on the nullcline and find the direction of the velocity vector at that point. The velocity vector along the segment of the nullcline delimited by equilibrium points which contains the given point will have the same direction. For example, consider the point (2,0), which lies on the y-nullcline y = 0. The velocity vector there is horizontal with a negative first component, so it points to the left. Therefore the velocity vector at any point (x, 0), with x > 1, is horizontal and points to the left. The picture below gives the nullclines and the velocity vectors along them.
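The sign check described above is easy to automate. The sketch below assumes the same reconstructed right-hand sides and samples one point on each nullcline segment; only the signs of the components matter, not their magnitudes:

```python
# A minimal sketch: sample a point on a nullcline segment and read off the
# signs of the velocity there. Along the x-nullcline the motion is vertical;
# along the y-nullcline it is horizontal.
import numpy as np

def velocity(x, y):
    # assumed right-hand sides of the competition system
    return np.array([x * (1 - x - y), y * (2 - 3*x - y)])

for point in [(2.0, 0.0),     # on the y-nullcline y = 0, x > 1
              (0.0, 3.0),     # on the x-nullcline x = 0, y > 2
              (0.25, 0.75)]:  # on the x-nullcline 1 - x - y = 0, left of (1/2, 1/2)
    vx, vy = velocity(*point)
    print(point, "->", (round(vx, 3), round(vy, 3)))
# The signs tell us whether the arrow points left/right (on the y-nullcline)
# or up/down (on the x-nullcline).
```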

In this example, the nullclines are lines. In general, they may be any kind of curves.

Example. Draw the nullclines for the autonomous system

The x-nullcline is given by

while the y-nullcline is given by

Hence the y-nullcline is the union of a line with the ellipse

Information from the nullclines

For most nonlinear autonomous systems, it is impossible to find the solutions explicitly. We may use numerical techniques to get an idea about the solutions, but qualitative analysis may be able to answer some questions at lower cost and faster than numerical techniques can; for example, questions related to the long-term behavior of solutions. The nullclines play a central role in the qualitative approach. Let us illustrate this with the following example.

Example. Discuss the behavior of the solutions of the competition system above.

We have already found the nullclines and the direction of the velocity vectors along these nullclines.

These nullclines give birth to four regions in which the direction of the motion is constant. Let us discuss the region bordered by the x-axis, the y-axis, the line 1 - x - y = 0, and the line 2 - 3x - y = 0. In this region the direction of the motion is left-down, so a moving object starting at a position in this region will follow a path going left-down. We have three possibilities. First: the trajectory dies at the equilibrium point (1/2, 1/2). Second: the starting point is above the trajectory which dies at (1/2, 1/2); then the trajectory will hit the triangle defined by the points (1/2, 1/2), (0,1), and (0,2), go up-left, and die at the equilibrium point (0,2). Third: the starting point is below the trajectory which dies at (1/2, 1/2); then the trajectory will hit the triangle defined by the points (1/2, 1/2), (1,0), and (2/3, 0), go down-right, and die at the equilibrium point (1,0).

For the other regions, look at the picture below. We included some solutions for every region.

Remarks. We see from this example that the trajectories which die at the equilibrium point (1/2, 1/2) are crucial to predicting the behavior of the solutions. These two trajectories are called separatrices because they separate the regions into different subregions with a specific behavior. Finding them is a very difficult problem. Notice also that the equilibrium points (0,2) and (1,0) behave like sinks. The classification of equilibrium points will be discussed using the approximation by linear systems.
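A quick numerical experiment supports this qualitative picture. The sketch below assumes the reconstructed right-hand sides and integrates a few trajectories with SciPy; the starting points are arbitrary:

```python
# A minimal numerical check that trajectories approach the "sinks" (0,2) or
# (1,0), as the qualitative argument predicts (assumed right-hand sides).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    x, y = z
    return [x * (1 - x - y), y * (2 - 3*x - y)]

for start in [(0.2, 1.5), (1.2, 0.2), (2.0, 2.0)]:
    sol = solve_ivp(rhs, (0, 60), start, rtol=1e-8)
    print(start, "->", np.round(sol.y[:, -1], 3))
# Typical output: each trajectory settles near (0, 2) or (1, 0).
```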


What is quantitative analysis?

Quantitative analysis is often associated with numerical analysis where data is collected, classified, and then computed for certain findings using a set of statistical methods. Data is chosen randomly in large samples and then analyzed. The advantage of quantitative analysis is that the findings can be applied to a general population using research patterns developed from the sample. By contrast, limited generalization of findings is a shortcoming of qualitative data analysis.

Quantitative analysis is more objective in nature. It seeks to understand the occurrence of events and then describe them using statistical methods. However, more clarity can be obtained by using qualitative and quantitative methods concurrently. Quantitative analysis normally leaves random and rare events out of research results, whereas qualitative analysis considers them.

Quantitative analysis is generally concerned with measurable quantities such as weight, length, temperature, speed, width, and many more. The data can be expressed in a tabular form or any diagrammatic representation using graphs or charts. Quantitative data can be classified as continuous or discrete, and it is often obtained using surveys, observations, experiments or interviews.

There are, however, limitations in quantitative analysis. For instance, it can be challenging to uncover relatively new concepts using quantitative analysis and that is where qualitative analysis comes into the equation to find out “why” a certain phenomenon occurs. That is why the methods are often used simultaneously.


Understanding People and Qualitative Analysis

Qualitative analysis can sound almost like "listening to your gut," and indeed many qualitative analysts would argue that gut feelings have their place in the process. That does not mean, however, that it is not a rigorous approach. Indeed, it can consume much more time and energy than quantitative analysis.

People are central to qualitative analysis. An investor might start by getting to know a company's management, including their educational and professional backgrounds. One of the most important factors is their experience in the industry. More abstractly, do they have a record of hard work and prudent decision-making, or are they better at knowing—or being related to—the right people? Their reputations are also key: do their colleagues and peers respect them? Their relationships with business partners are also worth exploring since these can have a direct impact on operations.


Qualitative Data Analysis

Qualitative data refers to non-numeric information such as interview transcripts, notes, video and audio recordings, images and text documents. Qualitative data analysis can be divided into the following five categories:

1. Content analysis. This refers to the process of categorizing verbal or behavioural data to classify, summarize and tabulate the data.

2. Narrative analysis. This method involves the reformulation of stories presented by respondents, taking into account the context of each case and the different experiences of each respondent. In other words, narrative analysis is the revision of primary qualitative data by the researcher.

3. Discourse analysis. A method of analysis of naturally occurring talk and all types of written text.

4. Framework analysis. This is a more advanced method that consists of several stages, such as familiarization, identifying a thematic framework, coding, charting, mapping and interpretation.

5. Grounded theory. This method of qualitative data analysis starts with an analysis of a single case to formulate a theory. Then, additional cases are examined to see if they contribute to the theory.

Qualitative data analysis can be conducted through the following three steps:

Step 1: Developing and Applying Codes. Coding can be explained as categorization of data. A ‘code’ can be a word or a short phrase that represents a theme or an idea. All codes need to be assigned meaningful titles. A wide range of non-quantifiable elements such as events, behaviours, activities, meanings etc. can be coded.

There are three types of coding (a brief illustration follows the list):

  1. Open coding. The initial organization of raw data to try to make sense of it.
  2. Axial coding. Interconnecting and linking the categories of codes.
  3. Selective coding. Formulating the story through connecting the categories.
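As a purely hypothetical illustration of how coded data might be organized (the excerpts and code labels below are invented; real projects would typically use dedicated software or a formal coding frame):

```python
# Open codes attached to interview excerpts, axial coding grouping related
# codes into categories, and a selective core category. All content invented.
open_codes = {
    "I never know which charity to trust": ["distrust", "information gap"],
    "My friends all give to the same cause": ["peer influence"],
    "Giving monthly feels manageable": ["routine giving", "affordability"],
}

axial_codes = {  # interconnecting and linking related open codes
    "barriers to giving": ["distrust", "information gap"],
    "drivers of giving": ["peer influence", "routine giving", "affordability"],
}

selective_core_category = "how trust and social context shape charitable giving"

for excerpt, codes in open_codes.items():
    print(f"{excerpt!r} -> {codes}")
```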

Coding can be done manually or using qualitative data analysis software such as NVivo, Atlas ti 6.0, HyperRESEARCH 2.8, Max QDA and others.

When using manual coding you can use folders, filing cabinets, wallets etc. to gather together materials that are examples of similar themes or analytic ideas. The manual method of coding in qualitative data analysis is rightly considered labour-intensive, time-consuming and outdated.

In computer-based coding, on the other hand, physical files and cabinets are replaced with computer-based directories and files. When choosing software for qualitative data analysis you need to consider a wide range of factors, such as the type and amount of data you need to analyse, the time required to master the software and cost considerations.

Moreover, it is important to get confirmation from your dissertation supervisor prior to application of any specific qualitative data analysis software.

The following table contains examples of research titles (such as a study on supporting charitable causes), elements to be coded, and identification of relevant codes.

Table: Qualitative data coding

Step 2: Identifying themes, patterns and relationships. Unlike quantitative methods, in qualitative data analysis there are no universally applicable techniques that can be applied to generate findings. The analytical and critical thinking skills of the researcher play a significant role in data analysis in qualitative studies. Therefore, no qualitative study can be repeated to generate exactly the same results.

Nevertheless, there is a set of techniques that you can use to identify common themes, patterns and relationships within the responses of sample group members in relation to the codes specified in the previous stage.

Specifically, the most popular and effective methods of qualitative data interpretation include the following:

  • Word and phrase repetitions – scanning primary data for words and phrases most commonly used by respondents, as well as words and phrases used with unusual emotion (see the sketch after this list)
  • Primary and secondary data comparisons – comparing the findings of interviews/focus groups/observations/any other qualitative data collection method with the findings of the literature review, and discussing differences between them
  • Search for missing information – discussing which aspects of the issue were not mentioned by respondents, although you expected them to be mentioned
  • Metaphors and analogies – comparing primary research findings to phenomena from a different area and discussing similarities and differences.
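As a small sketch of the first technique, word and phrase repetition, the following counts the most frequent content words across a set of invented interview responses; the stopword list and the helper top_terms are illustrative assumptions only:

```python
# A minimal sketch: count the words most commonly used across responses,
# ignoring a small set of stopwords. Responses here are invented examples.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "i", "it", "is", "in", "that"}

def top_terms(responses, n=10):
    words = []
    for text in responses:
        words += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    return Counter(words).most_common(n)

responses = [
    "I donate because I trust the charity and my friends donate too",
    "Trust matters most; without trust I would not donate",
]
print(top_terms(responses, n=5))  # e.g. [('donate', 3), ('trust', 3), ...]
```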

Step 3: Summarizing the data. At this last stage you need to link the research findings to the hypotheses or to the research aim and objectives. When writing the data analysis chapter, you can use noteworthy quotations from the transcripts in order to highlight major themes within the findings and possible contradictions.

It is important to note that the process of qualitative data analysis described above is general and different types of qualitative studies may require slightly different methods of data analysis.

My e-book, The Ultimate Guide to Writing a Dissertation in Business Studies: a step by step approach contains a detailed, yet simple explanation of qualitative data analysis methods. The e-book explains all stages of the research process starting from the selection of the research area to writing personal reflection. Important elements of dissertations such as research philosophy, research approach, research design, methods of data collection and data analysis are explained in simple words. John Dudovskiy


17.5: Qualitative Analysis - Mathematics

Learning Objective

Differentiate between qualitative and quantitative approaches.

Hong is a physical therapist who teaches injury assessment classes at the University of Utah. With the recent change to online learning for the remainder of the semester, Hong is interested in the impact on students’ skills acquisition for injury assessment. He wants to utilize both quantitative and qualitative approaches—he plans to compare previous student test scores to current student test scores, and he also plans to interview current students about their experiences practicing injury assessment skills virtually. What specific study design methods will Hong use?

Making sense of the evidence

When conducting a literature search and reviewing research articles, it is important to have a general understanding of the types of research and data you anticipate from different types of studies.

In this article, we review two broad categories of study methods, quantitative and qualitative, and discuss some of their subtypes, or designs, and the type of data that they generate.

Quantitative vs. qualitative approaches

Quantitative

Quantitative is measurable. It is often associated with a more traditional scientific method of gathering data in an organized, objective manner so that findings can be generalized to other persons or populations. Quantitative designs are based on probabilities or likelihood—they utilize ‘p’ values, power analysis, and other scientific methods to ensure the rigor and reproducibility of the results to other populations. Quantitative designs can be experimental, quasi-experimental, descriptive, or correlational.

Qualitative

Qualitative is usually more subjective, although like quantitative research, it also uses a systematic approach. Qualitative research is generally preferred when the clinical question centers around life experiences or meaning. Qualitative research explores the complexity, depth, and richness of a particular situation from the perspective of the informants—referring to the person or persons providing the information. This may be the patient, the patient’s caregivers, the patient’s family members, etc. The information may also come from the investigator’s or researcher’s observations. At the heart of qualitative research is the belief that reality is based on perceptions and can be different for each person, often changing over time.

Study design differences

Quantitative

  • Experimental – cause and effect (if A, then B)
  • Quasi-experimental – also examines cause, used when not all variables can be controlled
  • Descriptive – examine characteristics of a particular situation or group
  • Correlational – examine relationships between two or more variables

Qualitative

  • Phenomenological – examines the lived experience within a particular condition or situation
  • Ethnographic – examine the culture of a group of people
  • Grounded theory – using a research problem to discover and develop a theory

Quantitative design methods

Quantitative designs typically fall into four categories: experimental, quasi-experimental, descriptive, or correlational. Let’s talk about these different types. But before we begin, we need to briefly review the difference between independent and dependent variables.

The independent variable is the variable that is being manipulated, or the one that varies. It is sometimes called the ‘predictor’ or ‘treatment’ variable.

The dependent variable is the outcome (or response) variable. Changes in the dependent variables are presumed to be caused or influenced by the independent variable.

Experimental

In experimental designs, there are often treatment groups and control groups. This study design looks for cause and effect (if A, then B), so it requires having control over at least one of the independent, or treatment, variables. Experimental design administers the treatment to some of the subjects (called the ‘experimental group’) and not to others (called the ‘control group’). Subjects are randomly assigned—meaning that they would have an equal chance of being assigned to the control group or the experimental group. This is the strongest design for testing cause and effect relationships because randomization reduces bias. In fact, most researchers believe that a randomized controlled trial is the only kind of research study where we can infer cause (if A, then B). The difficulty with a randomized controlled trial is that the results may not be generalizable in all circumstances with all patient populations, so as with any research study, you need to consider the application of the findings to your patients in your setting.
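As an illustration of the random assignment at the heart of an experimental design, here is a minimal sketch; the subject IDs and group sizes are hypothetical:

```python
# A minimal sketch of simple random assignment: each subject has an equal
# chance of landing in the experimental or control group.
import random

subjects = [f"S{i:02d}" for i in range(1, 21)]   # hypothetical subject IDs
random.seed(42)                                  # for a reproducible illustration
random.shuffle(subjects)
experimental, control = subjects[:10], subjects[10:]
print("experimental:", experimental)
print("control:", control)
```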

Quasi-experimental

Quasi-experimental studies also seek to identify a cause and effect (causal) relationship, although they are less powerful than experimental designs. This is because they lack one or more characteristics of a true experiment. For instance, they may not include random assignment or they may not have a control group. As is often the case in the ‘real world’, clinical care variables often cannot be controlled due to ethical, practical, or fiscal concerns. So, the quasi-experimental approach is utilized when a randomized controlled trial is not possible. For example, if it was found that a new treatment stopped disease progression, it would no longer be ethical to withhold it from others by establishing a control group.

Descriptive

Descriptive studies give us an accurate account of the characteristics of a particular situation or group. They are often used to determine how often something occurs, the likelihood of something occurring, or to provide a way to categorize information. For example, let’s say we wanted to look at the visiting policy in the ICU and describe how implementing an open-visiting policy affected nurse satisfaction. We could use a research tool, such as a Likert scale (5 = very satisfied and 1 = very dissatisfied), to help us gain an understanding of how satisfied nurses are as a group with this policy.

Correlational

Correlational research involves the study of the relationship between two or more variables. The primary purpose is to explain the nature of the relationship, not to determine the cause and effect. For example, if you wanted to examine whether first-time moms who have an elective induction are more likely to have a cesarean birth than first-time moms who go into labor naturally, the independent variables would be ‘elective induction’ and ‘go into labor naturally’ (because they are the variables that ‘vary’) and the outcome variable is ‘cesarean section.’ Even if you find a strong relationship between elective inductions and an increased likelihood of cesarean birth, you cannot state that elective inductions ‘cause’ cesarean births because we have no control over the variables. We can only report an increased likelihood.
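To see why such a finding is reported as a likelihood rather than a cause, consider the sketch below with entirely hypothetical counts; it simply compares the proportion of cesarean births in the two groups:

```python
# A minimal sketch with hypothetical counts (not real study data): report the
# relative likelihood of cesarean birth for elective induction vs. spontaneous
# labor. This describes an association only; it does not establish cause.
induced = {"cesarean": 80, "vaginal": 220}
spontaneous = {"cesarean": 60, "vaginal": 340}

def risk(group):
    return group["cesarean"] / (group["cesarean"] + group["vaginal"])

print(f"risk if induced: {risk(induced):.2f}")          # 0.27
print(f"risk if spontaneous: {risk(spontaneous):.2f}")  # 0.15
print(f"relative likelihood: {risk(induced) / risk(spontaneous):.2f}")  # ~1.78
```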

Qualitative design methods

Qualitative methods delve deeply into experiences, social processes, and subcultures. Qualitative study generally falls under three types of designs: phenomenology, ethnography and grounded theory.

Phenomenology

In this approach, we want to understand and describe the lived experience or meaning of persons with a particular condition or situation. For example, phenomenological questions might ask “What is it like for an adolescent to have a younger sibling with a terminal illness?” or “What is the lived experience of caring for an older house-bound dependent parent?”

Ethnography

Ethnographic studies focus on the culture of a group of people. The assumption behind ethnographies is that groups of individuals evolve into a kind of ‘culture’ that guides the way members of that culture or group view the world. In this kind of study, the research focuses on participant observation, where the researcher becomes an active participant in that culture to understand its experiences. For example, nursing could be considered a professional culture, and the unit of a hospital can be viewed as a subculture. One example specific to nursing culture was a study done in 2006 by Deitrick and colleagues. They used ethnographic methods to examine problems related to answering patient call lights on one medical surgical inpatient unit. The single nursing unit was the ‘culture’ under study.

Grounded theory

Grounded theory research begins with a general research problem, selects persons most likely to clarify the initial understanding of the question, and uses a variety of techniques (interviewing, observation, and document review, to name a few) to discover and develop a theory. For example, one nurse researcher used a grounded theory approach to explain how African American women from different socioeconomic backgrounds make decisions about mammography screening. Because African American women historically have fewer mammograms (and therefore lower survival rates owing to later-stage detection), understanding their decision-making process may help the provider support more effective health promotion efforts.

Conclusion

Being able to identify the differences between qualitative and quantitative research and becoming familiar with the subtypes of each can make a literature search a little less daunting.


Math uncovers hidden patterns in these historic art masterpieces

Art historians have missed something incredibly important lurking behind the canvases of art's greatest works.

Whether it's Michelangelo's Sistine Chapel ceiling, Andy Warhol's Campbell's soup cans, or ancient humans' lustrous cave paintings, the creation of art is an inherently human story.

Art historians have dedicated their lives to dissecting and discussing these influential works. But according to a new study published this week in the journal Proceedings of the National Academy of Sciences, scholars have missed something incredibly important lurking behind the canvases of some of the most celebrated masterpieces.

By applying a mathematical formula to nearly 15,000 works of art created across 500 years, a team of scientists has uncovered hidden patterns in these masterworks. In doing so, they not only change how we see this art, but also upend long-standing historic theories behind some of the world's most famous paintings.

Using math to understand something as fundamentally human as art may sound counter-intuitive, but artists themselves have actually been doing it for centuries. By applying ratios, including the so-called "golden ratio," artists have long sought to recreate absolute beauty using mathematical harmony. In this study, the researchers do something a little different: They used mathematical ratios to uncover a hidden "metanarrative" within the painting process.

"From individual qualitative work of art historians emerges a metanarrative that remains difficult to evaluate," write the authors.

Individual interpretation is a kind of qualitative analysis that has reigned supreme in art history. As a result, more subtle, or less easily perceived quantitative patterns have been overlooked, the researchers say. To get around that problem, they argue that a systematic, quantitative approach could be used alongside such qualitative analyses.

With any luck, this metanarrative could uncover previously ignored, or glossed over, anomalies in the world of art history.

Mathematical Masters — To build their mathematical framework, the researchers sourced digital scans of 14,912 landscape paintings spanning the Western renaissance to contemporary works.

With these paintings (digitally) in-hand, the researchers applied a mathematical algorithm that draws smaller and smaller partitions between areas with bigger differences between colors used. For example, the entire image would represent partition zero, and then partition number one would be drawn either horizontally or vertically between the two areas of the painting with the most color difference. While this operation could theoretically be done ad nauseam, the researchers chose to focus on partitions one and two.
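The published analysis has its own implementation, but the idea of the first partition can be sketched as follows: try every horizontal and vertical cut and keep the one that maximizes the difference in mean color between the two resulting regions. The function first_partition and the toy "landscape" below are illustrative assumptions, not the authors' code:

```python
# A minimal sketch of the partitioning idea: find the single horizontal or
# vertical cut that maximizes the difference in mean color between the two
# halves of an image.
import numpy as np

def first_partition(image):
    """image: H x W x 3 array of RGB values. Returns ('h' or 'v', cut index, score)."""
    best = ('h', 0, -1.0)
    h, w, _ = image.shape
    for axis, label, size in ((0, 'h', h), (1, 'v', w)):
        for cut in range(1, size):
            if axis == 0:
                a, b = image[:cut], image[cut:]
            else:
                a, b = image[:, :cut], image[:, cut:]
            # color difference = distance between the mean RGB vectors
            diff = np.linalg.norm(a.reshape(-1, 3).mean(0) - b.reshape(-1, 3).mean(0))
            if diff > best[2]:
                best = (label, cut, diff)
    return best

# Toy "landscape": bright sky over dark ground, so we expect a horizontal cut
img = np.zeros((40, 60, 3))
img[:18] = [0.8, 0.9, 1.0]   # sky
img[18:] = [0.2, 0.4, 0.1]   # ground
print(first_partition(img))   # e.g. ('h', 18, ...)
```

Repeating the same search inside each resulting region would yield the second partition, which is as far as the study takes the procedure.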

They discovered that the overall composition of the landscapes, as determined by the direction of the partition, changed over time independently of the nationality of the artist.

"[T]he long-standing division of landscape art history by nation must be put into question, in favor of a broader comparative perspective," write the authors. "[And] library classification, metadata in visual resource collections, and the categorization of encyclopedia entries should follow suit."

They also uncovered instances when an individual artist's expression truly deviated from the overarching trends of the era. These iconoclasts may be overlooked in favor of a smoother, more "artificial," narrative of art history.

"[A] proper grand narrative of art history requires a multiplicity of perspectives, both qualitative and quantitative, as a great amount of nonlinear detail gets lost in an overly conventional mainstream narrative," the authors write in the study.

Room for improvement — While their samples did include a few works from Japanese and Chinese artists, the collection largely draws from the "canon of Western European art" — a shortcoming the researchers acknowledge and hope to broaden beyond in future studies.

In addition to the limited diversity of artworks used in this study, the authors write that the algorithm itself is also limited because it can only recognize partitions of straight lines and is not yet capable of handling something as seemingly mundane as a curve. The researchers hope their approach can be used as a fundamental background for future research that will expand on their initial techniques.

The technique could be used to investigate other forms of art as well, including photography, architecture, and typography.


Increasingly powerful computers and simplification algorithms permit us to obtain answers for increasingly complex computer-algebra problems. Consequently, we will continue to get results which often are incomprehensibly lengthy and complicated. However, a user of computer-algebra systems need not abandon hope when faced with such results. Often the user is interested in qualitative properties of a result rather than details of an analytical representation of the result. For example, is the result real? bounded? even? continuous? positive? monotonic? differentiable? or convex? Where, if any, are the singularities, zeros, and extrema? What are their orders? What is the local behavior in the neighborhood of these notable features, as exhibited perhaps by series expansions? Are there simple asymptotic representations as certain variables approach infinity?

This paper describes a program which automatically analyzes expressions for some of these properties. The user may enquire about a specific property, such as monotonicity, or he may simply invoke a single function which attempts to determine all of the properties addressed by the collection of more specific functions. The specific functions are appropriate when a user knows which properties are important for his application, but frequently he is ignorant of the most decisive questions or ignorant of specific available functions which automatically investigate the desired properties. The collective qualitative analysis function is intended as a sort of panic button, which hopefully will provide some pleasantly surprising results that serve as a point of departure for further analysis. This function is a tool for deciphering unwieldy expressions that otherwise defy understanding.
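The paper describes its own program; as a rough modern analogue, the following sketch uses SymPy to probe a few of the listed properties of an expression (realness, evenness, zeros, singularities, and local and asymptotic behavior). The expression chosen is arbitrary:

```python
# A minimal sketch (not the paper's program) of automatically probing some
# qualitative properties of an expression with SymPy.
import sympy as sp

x = sp.symbols('x', real=True)
expr = (x**2 - 1) / (x**2 + 1)

report = {
    'real-valued': expr.is_real is not False,
    'even in x': sp.simplify(expr - expr.subs(x, -x)) == 0,
    'zeros': sp.solveset(expr, x, domain=sp.S.Reals),
    'singularities': sp.singularities(expr, x),
    'series at x=0': sp.series(expr, x, 0, 6).removeO(),
    'limit as x -> oo': sp.limit(expr, x, sp.oo),
}
for name, value in report.items():
    print(f"{name}: {value}")
```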

Many of the above qualitative properties have numerous testable characterizations, and only a few have been explored here. However, the results of this initial effort indicate that qualitative analysis programs are a promising means of extending the utility of computer algebra.


From insights to action: Suggesting, supposing, and steering

Thus far, our machine-human “bake-off” has demonstrated the important role humans play in data analysis and what value they bring to the activity. Every step of the way—in screening, sorting, and sensing—there is a need for not only human involvement but also human know-how to ensure the analysis’s accuracy and completeness. And the human’s job doesn’t typically end there. The whole point of data analysis is to provide not only insights, but also actionable recommendations—which our algorithm showed only limited capacity to do. In addition, research and insight collection are typically not one-off activities but components of a bigger ongoing research effort or portfolio. Human analysts, with their in-depth knowledge of the data, can help drive the company’s research agenda and sift through and prioritize various implementation plans, communications, and research strategy recommendations. An ideal human-machine research team drives the process by suggesting which data sets are usable and meaningful, moves on to supposing, weighing considerations based on contextual understanding, and finally steers informed actions to meet key business objectives.



Qualitative analysis of a degenerate fixed point of a discrete predator–prey model with cooperative hunting

Shengfu Deng, School of Mathematical Sciences, Huaqiao University, Quanzhou, Fujian 362021, China.

School of Mathematical Sciences, Huaqiao University, Quanzhou, China

Minnan Science and Technology University, Quanzhou, China

Abstract

This paper investigates the qualitative properties near a degenerate fixed point of a discrete predator–prey model with cooperative hunting derived from the Lotka–Volterra model, where the eigenvalues of the corresponding linear operator are ±1. Applying the theory of normal forms and Takens's theorem, we convert the problem for this discrete model into one for an ordinary differential system. Using the technique of desingularization to blow up the degenerate equilibrium of the ordinary differential system, we obtain its qualitative properties. Utilizing the conjugacy between the discrete model and the time-one mapping of the vector field, we obtain the qualitative structures of this discrete model.

