Columbia College
Study Guide: Research

Material beyond your textbook

Beyond the Study Help Pages:
"Discovering knowledge"

There are many ways to "learn" or "know" something. Most of us rely very heavily on our personal experiences as to "what really is." We decide that since it happened to us, it must be part of the human condition, and therefore "truth." Many of these personal experiences of individuals have been written down or orally transmitted as platitudes (ex: "early to bed, early to rise, makes a man healthy, wealthy and wise"). Some people hold these sayings as truth---but not the scientist doing research.

Other means by which people might seek truth include star gazing (your Zodiac sign) or magic or intuition--but not the scientist doing research. Some might turn to religion or philosophy for truth---but not the scientist doing research.

Does this mean that these "non-scientific ways" of "knowing" are invalid? No; every day you use some of these means to discover enough of the world around you to deal in it successfully. But the scientist doing research has a systematic method for searching for answers. If he cannot apply that method to a given phenomenon, that does not rule the phenomenon out of existence; it only means that our current methodology does not yet have a means of dealing with it.

One example is the debate about "science vs. religion." Some feel that science has disproven religion. But actually many aspects of religion are outside the current measurement capacity of science, so science is simply silent on those subjects. Science likewise remains silent on some aspects of magic and intuition. How do you measure a premonition? How can you set the stage to have a premonition when you want to study it? These and many other things are still outside our methodology.

Even in areas where scientists do research, there are differences of opinion about what scientific methodology to use. Positivism was a popular methodology a century ago; it was the "scientific" method of the day. Under positivism, the scientist tried to prove his hypothesis correct. An example was Karl Marx, who proposed "dialectical materialism" and constantly tried to prove it as the underlying truth of society.

But more recent science claims "empiricism," coupled with Popper's propositions, as science. This is the methodology supported by your textbook. Empiricism demands objective measurements that are replicable. In the debate over "cold fusion," the replicability question constantly surfaces: if Pons's experiments are not replicable, then the work is not "good science." Popper's proposition is also strongly held. It suggests that, to reduce researcher bias, the scientist should try to prove himself WRONG. The idea is to try to blow holes in the findings to sink them; if the ship does not go under, the findings "currently stand." This does not give the findings the validity of absolute truth; it simply allows their continued existence as our "current state of knowledge." Your textbooks are based upon these types of findings, and, when and if these findings are proven wrong, they will quickly be discarded.

Current scientific research, therefore, is constantly testing and retesting our current body of knowledge, trying to knock it out in favor of a better set of conclusions. This is taking small, careful steps towards "truth."

But even today's accepted scientific methodology is not accepted by all who call themselves scientists. One school of thought, found among the ethnomethodologists, feels that bias is inherent in the current way of doing research, and they want to change the program. As a partial evolution from this group, the relatively new "Ecological Paradigm" has been born. It seems to combine the empiricists with the ethnomethodologists and add some new ideas. We will hold further discussion of this change until later. Suffice it to say that "doing science" has changed and most likely will continue to change. What we will cover in this course is the most prevalent methodology of our day.

Deductive vs Inductive Reasoning

When doing science, the choice between inductive and deductive reasoning on a given project is basic to the planning of the research. Inductive reasoning, often called "qualitative" or "descriptive" research, is used when science embarks on a phenomenon that is new or that has had little or no previous work done on it. Simply put, there is not enough in the literature to begin doing deductive research; there is not enough knowledge to begin to ask the pertinent questions the researcher wants to ask.

A major methodology in inductive research is field work or observation research (which may be participant or non-participant observation, and overt or covert observation). The idea is to get into the field and find out what is going on, without taking in preconceived notions of what will be found. This method has been used often in anthropology and, by trial and error, many strengths and weaknesses have been uncovered. Because most textbooks on social research cover this methodology only lightly, this guide will emphasize it through both readings and a project. It can be a valuable tool in your future.

Deductive logic is most often used in sociology research projects. To better understand this methodology, the following funnel is presented:

General Sociological Imagination or Curiosity

Over-all Paradigm (ex: nature or nurture)

Chosen theoretical perspective

Hypotheses which follow

Defined Questions

Instrument

Findings

At the top of the funnel is your imagination or curiosity. C. Wright Mills talks about this concept. A scientist needs to brainstorm an idea to see it in the panorama of other phenomena and to tease out the real area of interest.

Next comes the paradigm, or set of assumptions, to be used. Yes, science has to start by assuming, and, yes, if those assumptions are wrong, the whole project is in trouble. The researcher is, indeed, showing his bias when he chooses the paradigm. One choice often made in social research is "nature or nurture?" The psychologist, sociobiologist, botanist, and medical doctor are more likely to choose "nature" as the paradigm to assume. In this, they assume that such physical characteristics as genetic make-up, outside invasion of bacteria, brain structure and function, or other tangible attributes are prominent in outcomes.

Those who choose "nurture" are more likely sociologists, family clinicians, criminologists, philosophers, and teachers. They assume that the conditions surrounding the phenomenon have the greater influence on outcomes; change the environment and you change the outcomes.

Theories next flow from these paradigms. One theory flowing from the nurture paradigm involves crime. It suggests that the "relative deprivation" of lower-SES people striving for the American Dream helps create conditions conducive to crime; it may even "compel" some to crime. Obviously, the theory chosen needs to conform to the paradigm's assumptions. If a researcher chose nurture but then used a nature theory, the research project would be doomed to fail.

Once a specific theoretical perspective is chosen, the hypotheses for the particular study can be decided. Not only should each hypothesis be in harmony with the theory, but it should also be stated as a "null hypothesis," in compliance with Popper's propositions. The researcher may feel that there is a difference in income potential between males and females (his hypothesis), but he must try to disprove himself, so he states the hypothesis as: "There is no difference between the income potential of college-trained males and college-trained females working at the same job."
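The null hypothesis above can be examined with a simple statistical test. The following is a minimal sketch, not part of the course materials: it computes Welch's t statistic for two groups, using entirely hypothetical income figures. Under the null hypothesis of "no difference," the statistic should land near zero; a real study would use a statistics package and compare the statistic to critical values.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for the null hypothesis that two
    group means are equal (e.g., male vs. female incomes)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    se = math.sqrt(va / na + vb / nb)  # standard error of the mean difference
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical incomes (thousands of dollars) for illustration only.
males = [41, 44, 39, 47, 43, 40]
females = [42, 45, 38, 46, 44, 41]
t = welch_t(males, females)  # a small |t| fails to reject the null
```

Note the Popperian posture built into this design: the researcher is not trying to confirm a difference, but to see whether the data can fail to sink the "no difference" claim.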

Please note that the hypothesis needs to be quite specific. The one just given needs to be further specified to make a good study with reliable findings; the next step helps this happen. The actual research questions need to harmonize with the hypothesis and be very specific (to help ensure that we have reliability and validity). Doing a project only to find that you asked the wrong questions is poor science.

The instrument we use, say a survey questionnaire, must therefore comply with our exacting research questions. The types of scales used need to reflect the type of data needed to really test the phenomenon. Likert scales, for example, have strengths and weaknesses; use this type of scale where it best suits the specific question being asked. Carefully choose open-ended questions for some types of inquiry and closed-ended, forced-response questions for others.
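To make the Likert-scale idea concrete, here is a small illustrative sketch (the items, wording, and reverse-keying are hypothetical, not from any actual instrument): responses are coded 1 through 5, reverse-keyed items are flipped, and the codes are summed into a scale score.

```python
# Standard 5-point Likert coding; higher numbers mean more agreement.
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def score_scale(responses, reverse_items=()):
    """Sum item codes for one subject. Reverse-keyed items are
    flipped (6 - code) so higher totals always mean more agreement."""
    total = 0
    for i, answer in enumerate(responses):
        code = LIKERT[answer.lower()]
        total += (6 - code) if i in reverse_items else code
    return total

# One hypothetical subject; the third item is reverse-keyed.
one_subject = ["agree", "strongly agree", "disagree"]
score = score_scale(one_subject, reverse_items={2})  # 4 + 5 + 4 = 13
```

This also shows one of the trade-offs mentioned above: a closed-ended Likert item is easy to code and compare, but it cannot capture the nuance an open-ended question would.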

The findings, or the "bottom line," are what all this expended effort has been about. But now the analysis of those findings is as critical as any of the other steps. Many researchers fall short at this point, perhaps feeling after the long trail that this step will take care of itself. It will not. Ethical, painstaking analysis is needed, not "lying with statistics." It is also a tenet of science that the research is not completed without the proper presentation of findings to others, for this is how science is done: "brick upon brick."

----------

One key to qualitative research should be obvious: you must go where and when the phenomenon occurs. The meaning of the behavior observed, within the context it is found, is the treasured data of this type of study. The general types of observation are as follows:

1) structured observation: the purpose is to document specific behaviors as they occur. The length, frequency, and conditions under which these behaviors occur are recorded for later analysis.

2) participant observation: all behavior occurring is of interest as you immerse yourself in the actual field experience. This is time-consuming but very rich in detail; you may discover things that cannot come out in any other research design.

This type of field work is usually overt (the research subjects know that you are there, and you have asked their permission to do research on them). But it can use some covert methods. With covert research, you may be seen in person by the subjects, but your intent of doing research is unknown to them. Another method is to be around the subjects without being noticed by them. This might be research done as an "ancillary person" (such as a waiter who presumably does not listen in on the dinner conversation).

At some point in participant observation, you probably will want to include ethnographic interviews in which you question "why?" as to the beliefs and behaviors of the subjects. This will allow you to ask what the subjects are feeling or thinking in addition to what they are doing. Don't be shocked if their behavior does not align with their beliefs!

Qualitative designs are rich in detail and strong in internal validity as long as observer bias is under reasonable control. The main weakness is that you learn a lot about a particular setting and particular subjects, but the findings are not necessarily generalizable to other situations. The ecological fallacy is easy to fall into with this design.
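The structured observation described above, documenting the length and frequency of specific behaviors, can be sketched very simply. The behavior categories and durations below are hypothetical stand-ins, not from any real field study: each log entry is one occurrence of a behavior with how long it lasted, and the tallies summarize frequency and total time per behavior for later analysis.

```python
from collections import defaultdict

# Hypothetical structured-observation log: (behavior, duration in seconds),
# recorded in the order the behaviors occurred.
log = [("question", 10), ("interruption", 4), ("question", 7),
       ("interruption", 6), ("question", 12)]

frequency = defaultdict(int)   # how often each behavior occurred
total_time = defaultdict(int)  # total seconds spent in each behavior

for behavior, seconds in log:
    frequency[behavior] += 1
    total_time[behavior] += seconds
```

Even this toy tally shows why structured observation is the most quantifiable of the field methods: it yields counts and durations that can later feed the statistical analyses used in deductive designs.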

Empirical science is:

1) Logical--a rational activity based on logical reasoning

2) Deterministic--all phenomena have antecedent causes that are subject to identification and logical understanding

3) General--scientists are concerned with being able to generalize their findings to other situations and populations

4) Parsimonious--it uses as few explanatory factors as possible, disregarding what appear to be irrelevant factors

5) Specific--it is always necessary to clearly specify both the research problem and the methods and procedures which are used to measure the concepts under study

6) Empirically verifiable--the results must be open to evaluation and verification by others for additional study

7) Intersubjective--even though no two scientists are exactly alike with regard to subjective orientations, they would still arrive at the same conclusion upon doing the same experiment

8) Open to modification--science must be ready to accept revision and change, since science is a process of trial and error--no single research design will provide the "ultimate answer."

---------

The Eleven Stages of a Quantitative Research Project

1) Define a research topic

2) Intensify knowledge about the topic

3) Clarify concepts and their measurements

4) Select a data collection method

5) Consider the purpose, value, and ethics of the study

6) Operationalize concepts and design the data collection instruments

7) Select a sample

8) Collect the data

9) Process the data

10) Analyze the data

11) Write up the results.

-----------

Qualitative Research. "What to look for when you read a field study:"

1- Why did the researcher choose the particular setting?

2- Does the researcher begin with a refined set of hypotheses or a set of vague orienting questions?

3- What are different means of gaining access to a research site?

4- What is the researcher's degree of involvement or immersion in the setting?

5- What constitutes the data (the observations which the researcher takes away from the field)?

6- How does the author begin to develop generalizations from the data?

7- How elaborate was the preplanning to determine how the study would be carried out?

8- Can you anticipate ethical problems in doing this kind of research?

-------

Evaluative research in the real world has some pitfalls for you to watch out for.

It is usually the case that evaluators are employed too late to save the company, program, or campaign. This may mean that you, as the messenger, deliver a death blow of reality instead of the hoped-for miracle cure. This may not greatly help your popularity standings!

You may work for a governmental agency, like the State Legislative Auditor, and have great pay and benefits, but lack job security. This may not be popular with your spouse! You may find that the people you are evaluating despise being evaluated. This may mean little or no cooperation! But you may find the job fun and somewhat "James Bondish" as you uncover "truth, justice, and the American Way."

There are, of course, varying levels of evaluative research. In one type of study, you measure the actual behavior of the unit against its stated goals and objectives. This is easy to imagine with regard to the Driver's License Department or some other government agency. You have probably spent agonizing hours waiting in lines and wading through red tape, yet you read on the wall that these are public servants dedicated to serving you, the public. You might get a certain enjoyment out of work that evaluates and changes these types of programs.

Yet another use of evaluative research, called structural evaluation, does not concern itself with goals and objectives but with comparing units to other known programs. This type of study employs such devices as organizational charts of differing programs to suggest changes. It works at comparing actual performance against the norms of like programs.

Cost-benefit evaluations take an economic approach, comparing increasing costs with possible increasing productivity, as well as decreasing costs while still trying to increase productivity. Changes may or may not pay off as intended, and this type of research tries to predict future changes as well as evaluate former changes with respect to efficiency.
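The arithmetic behind a cost-benefit comparison can be sketched in a few lines. The figures below are hypothetical, invented purely for illustration: a program's benefit-cost ratio before and after a proposed change is computed, and the change is judged worthwhile only if it improves the ratio.

```python
def benefit_cost_ratio(benefits, costs):
    """Ratio of total benefits to total costs. A program 'pays off'
    when the ratio exceeds 1.0; a change improves efficiency when
    it raises the ratio."""
    return sum(benefits) / sum(costs)

# Hypothetical annual figures (dollars) for one program.
current = benefit_cost_ratio(benefits=[120_000], costs=[100_000])    # 1.20
proposed = benefit_cost_ratio(benefits=[150_000], costs=[110_000])   # ~1.36
worthwhile = proposed > current  # did the proposed change improve efficiency?
```

The evaluator's real difficulty, of course, is not this division but deciding which benefits and costs to count and how to price them.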

Process evaluation, often employed in manufacturing, appeals to concerns about improvement of the various stages of production. Its focus is to clearly describe what is going on (in contrast to what is thought to be going on) and how it can be improved upon. This type of evaluative research is employed heavily in car manufacturing in Japan where a different management style allows the floor worker to be part of the evaluation of his team's work.

Here in the US this idea is being implemented somewhat and needs third-party influence (yours) to get it implemented.

Outcomes evaluation focuses on the benefits that need to be realized by the consumer of goods or services to keep them satisfied. One important example of this research is the business community's view of students graduating from the University of Utah. It was found that the local business people who do the hiring did not perceive U students as graduating with real-world understanding, especially in research! One result of this study was to expand undergraduates' experience in doing research, so that prospective employers were placated.

Impact evaluative research is broader than outcomes evaluation. It involves the entire community's viewpoint; it considers the opinions and attitudes of even those not directly benefitted by the program under study. This is the world of PR and is a major area that you may decide to work in.

Whatever level of evaluative research is chosen, the personal attributes of the researcher affect these types of studies more than regular research projects. Working with people in an "auditor" way, in a way designed to "criticize" their work or program, is usually a delicate matter.

The information obtained can be very important to an organization, yet organizations shy away from employing an evaluator, or from listening to the evaluation, citing some of the following common complaints:

1) "the evaluation is of poor quality" (usually when the findings are not favorable to the employer's mind)

2) "we were not involved enough or the outcome would differ"

3) "your recommendations are arguable, too vague to use, more research must be needed"

4) "your language in the presentation was too difficult for all of us to understand, let alone to implement"

5) "you were in an unreal vacuum--you missed the political and/or environmental concerns"

6) "you lack credibility"

7) "you cooperated too much with special interests"

8) "your evaluation comes too late to be of value"

To help improve the value of your evaluative study, try to:

1) Involve personnel who would do the actual implementation of your suggestions. Be careful that this does not unduly bias your work; you are still the researcher, and they should not taint the findings.

2) Present your findings to outsiders as well, especially those that would have an interest in the outcomes and influence implementation

3) Get very timely information

4) Clearly admit the limitations of the study early on to reduce the number of possible objections

A Social Impact Assessment is a part of the larger Environmental Impact Assessment process. Herein lies yet another career possibility. There are 5 types of social impact that are studied:

1) Economic: changes in business activity, jobs, employment, personal income, and in the economic "base" of the community.

2) Demographic: changes in population (not just local: regional) and in population characteristics (gender ratio, age differential)

3) Fiscal: changes in public costs (ex: a school district's tax base)

4) Community Service: changes in demand, distribution and quality of public services

5) Social: changes in community organizations, perceptions, lifestyles, and life satisfaction, especially changes in specific groups such as the elderly, minorities, or other subgroups.

As these assessments are mandated by law, jobs in these areas can be found. Yet the same regulations require the study to be completed in a given time frame, so use of existing data is necessary.

 
