IDENTIFYING COMPONENTS OF A RESEARCH ARTICLE

  1. Identifying Components of a Research Article Assignment: Students are to bring one article to class each day and be prepared to work on this assignment throughout the semester. See Page 17 in the syllabus.

 

  2. Literature Review: Students will choose a social work topic, select three (3) research articles published since April 2015 related to that topic, and write a literature review of these articles. Specific guidelines will be provided in class as to the structure of the literature review.

 

  3. Self-Assessment Instrument: Students will develop a personal instrument to assess interpersonal relationship/communication skills. In SOWK 480, students will modify the instrument with content focused on practice skills. Guidelines for this assignment will be distributed in class.

 

  4. Research Article Critique: See attachment to the syllabus (Pages 18-20).

IDENTIFYING COMPONENTS OF A RESEARCH ARTICLE

 

This assignment is based on a social work evidence-based research article you have chosen in an area of interest.  The article you choose for this assignment MUST report empirical research AND be published after April 2015.

While conceptual pieces and literature reviews are certainly useful to your understanding of your area, these types of publications may NOT be used as the basis for this assignment. Similarly, articles presenting instrument reliability and/or validity studies are inappropriate.

 

This assignment must address the following questions:

  1. Identify the purpose of this research study (exploratory, descriptive, explanatory, or some combination).

 

  2. Identify the study’s hypothesis(es) and/or research question(s).

 

  3. List the major variables involved in the study (i.e., independent variables, dependent variables, and control variables). For each, identify the conceptual definition. [NOTE: Indicate whether these were presented in the article.]

 

  4. For each of the variables listed in #3, identify the operational definition and how it is being measured in the study. Specifically, for standardized measurement instruments, identify any reliability and validity information provided.

 

  5. Describe the sample, sample size, and sampling method used (i.e., probability vs. nonprobability, stating the specific technique). State the population to whom the results may be generalized. Identify any stated limitations of the sample or sample size.

 

  6. Describe the research design (i.e., experimental, quasi-experimental, single-subject, survey, etc.). Identify any stated limitations of the research design.

 

  7. Identify the statistical analyses conducted. Discuss any stated limitations of the statistical analyses that were presented.

 

  8. State the procedures used to ensure that the study was sensitive to diversity of race/ethnicity, gender, class, and sexual orientation.

 

  9. Summarize the findings and conclusions of the study.

 

  10. Describe your overall reaction to the research study (i.e., how did it contribute to or fail to contribute to your understanding of your area of interest?). Give two suggestions for making the study “better.”

 

NOTE: You must turn in a copy of the article with your assignment.

 

Research Article Critique

 

This assignment is based on a social work research article you have chosen in an area of interest. The article you choose for this assignment MUST report empirical research and be published since April 2015.

 

The article critique MUST address the following:

 

Evaluate the Research Title:

  1. Is the title sufficiently specific?
  2. Does the title indicate the nature of the research without describing the results?
  3. Has the author avoided a “yes-no” question as a title?
  4. If there is a main title and a subtitle, do both provide important information about the research?
  5. Are the primary variables referred to in the title?
  6. Does the title indicate what types of people participated?
  7. If the title implies causality, does the method of research justify it?
  8. Has the author avoided using jargon and acronyms that might be unknown to his/her audience?
  9. Overall, is the title effective and appropriate?

 

Evaluate the Abstract:

  1. Is the purpose of the study referred to or at least clearly implied?
  2. Does the abstract highlight the research methodology?
  3. Has the researcher omitted the titles of measures (except when these are the focus of the research)?
  4. Are the highlights of the results described?
  5. Has the researcher avoided making vague references to implications and future research directions?
  6. Overall, is the abstract effective and appropriate?

 

Evaluate the Introduction:

  1. Does the researcher begin by identifying a specific problem area?
  2. Does the researcher establish the importance of the problem area?
  3. Is the introduction an essay that logically moves from topic to topic?
  4. Has the researcher provided conceptual definitions of key terms?
  5. Has the researcher indicated the basis for “factual statements”?
  6. Do the specific research purposes, questions, or hypotheses logically flow from the introductory material?
  7. Overall, is the introduction effective and appropriate?

 

Evaluate the Literature Review:

  1. If there is extensive literature on the topic, has the researcher been selective?
  2. Is the literature review critical?
  3. Is the current research cited?
  4. Has the researcher distinguished between research, theory, and opinion?
  5. Overall, is the literature review portion of the introduction appropriate?

 

Evaluate the Sample and Sampling Method When the Researcher’s Aim Is to Generalize:

  1. Was random sampling used?
  2. If random sampling was used, was it stratified?
  3. If the randomness of a sample is impaired by the refusal to participate by some of those selected, is the rate of participation reasonably high?
  4. If the randomness of a sample is impaired by the refusal to participate by some of those selected, is there a reason to believe that participants and non-participants are similar on relevant variables?
  5. If a sample from which a researcher wants to generalize was not selected at random, is it at least drawn from a target group for generalization?
  6. If a sample from which a researcher wants to generalize was not selected at random, is it at least reasonably diverse?
  7. If a sample from which a researcher wants to generalize was not selected at random, does the researcher explicitly discuss this limitation?
  8. Has the author described relevant demographics of the sample?
  9. Is the overall size of the sample adequate? (See the note following this list.)
  10. Are there a sufficient number of participants in each subgroup that is reported on separately?
  11. Has informed consent been obtained?
  12. Overall, is the sample appropriate for generalizing?
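A brief illustrative note on question 9 (a general rule of thumb, not part of Pyrczak’s framework or any particular article): for a simple random sample used to estimate a proportion, the sample size needed for a margin of error e at 95% confidence is roughly

n \approx \frac{z^2 \, p(1-p)}{e^2} = \frac{(1.96)^2 (0.5)(0.5)}{(0.05)^2} \approx 385,

using the most conservative assumption p = 0.5. Smaller samples are not automatically inadequate, but they widen the margin of error and limit how confidently the results generalize.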

 

Evaluate the Sample and Sampling Method When the Researcher’s Aim Is Not to Generalize:

  1. Has the researcher described the sample/population in sufficient detail?
  2. For a pilot study or a developmental test of a theory, has the researcher used a sample with relevant demographics?
  3. Even if the purpose is not to generalize to a population, has the researcher used a sample of adequate size?
  4. If a purposive sample has been used, has the researcher indicated the basis for selecting individuals to include?
  5. If a population has been studied, has it been clearly identified and described?
  6. Has the researcher obtained informed consent?
  7. Overall, is the description of the sample adequate?

 

Evaluate the Measurement Instrument:

  1. Have the actual items, questions, and/or directions (or, at least a sample of them) been provided?
  2. Are any specialized response formats and/or restrictions described in detail?
  3. For published instruments, have sources where additional information can be obtained been cited?
  4. When delving into sensitive matters, is there reason to believe that accurate data were obtained?
  5. Have steps been taken to keep the instrumentation from obtruding on and changing any overt behaviors that were observed?
  6. If the collection and coding of observations is highly subjective, is there evidence that similar results would be obtained if another researcher used the same instrument techniques with the same group at the same time?
  7. If an instrument is designed to measure a single unitary trait, does it have adequate internal consistency? (See the note following this list.)
  8. For stable traits, is there evidence of temporal stability?
  9. When appropriate, is there evidence of content validity?
  10. When appropriate, is there evidence of empirical validity?
  11. Is the instrumentation adequate in light of the research purpose?
  12. Overall, is the instrumentation adequate?
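A brief illustrative note on questions 7 and 8 (general background, not drawn from Pyrczak or any particular article): internal consistency is most often reported as Cronbach’s alpha, which for a k-item scale is

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{i}}{\sigma^2_{X}}\right),

where \sigma^2_{i} is the variance of item i and \sigma^2_{X} is the variance of the total score; values of roughly .70 or higher are conventionally treated as adequate. Temporal stability is typically reported as a test-retest correlation between administrations of the same instrument at two points in time.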

 

Evaluate the Research Design and/or Experimental Procedures:

  1. If two or more groups are compared, were individuals assigned at random to the groups?
  2. If two or more comparison groups were not formed at random, is there evidence that they were initially equal in important ways?
  3. If only a single participant or a single group is used, have the treatments been alternated?
  4. Are the treatments described in sufficient detail?
  5. If the treatments were administered by people other than the researcher, were these people properly trained?
  6. If the treatments were administered by people other than the researcher, was there a check to see if they administered the treatments properly?
  7. If each treatment group had a different person administering a treatment, has the researcher tried to eliminate the “personal effect”?
  8. Except for differences in the treatments, were all other conditions the same in the experimental and control groups?
  9. If necessary, did the researchers disguise the purpose of the experiment from the participants?
  10. Is the setting for the experiment “natural”?
  11. Has the researcher used politically acceptable and ethical treatments?
  12. Has the researcher distinguished between random selection and random assignment?
  13. Overall, was the experiment properly conducted?

 

Evaluate Results Section:

  1. Is the results section a cohesive essay?
  2. Does the researcher refer back to the research hypotheses, purposes, or questions originally stated in the introduction?
  3. When there are a number of statistics, have they been presented in table form?
  4. If there are tables, are their important aspects discussed in the narrative of the results section?
  5. Have the researchers presented descriptive statistics before presenting the results of inferential tests?
  6. If any differences are statistically significant and small, have the researchers noted that they are small?
  7. Have appropriate statistics been selected?
  8. Overall, is the presentation of the results adequate?

 

Evaluate Discussion Section:

  1. In long articles, do the researchers briefly summarize the purpose and results at the beginning of the discussion?
  2. Do the researchers acknowledge their methodological limitations?
  3. Are the results discussed in terms of the literature cited in the introduction?
  4. Have the researchers avoided citing new references in the discussion?
  5. Are specific implications discussed?
  6. Are suggestions for future research specific?
  7. Have the researchers distinguished between speculation and data-based conclusions?
  8. Overall, is the discussion effective and appropriate?

 

Overall Evaluation:

  1. Have the researchers selected an important problem?
  2. Were the researchers reflective?
  3. Is the report cohesive?
  4. Does the report extend the boundaries of our knowledge on a topic?
  5. Are major methodological flaws unavoidable or forgivable?
  6. Is the research likely to inspire additional research?
  7. Is the research likely to help in decision-making (either of a practical or theoretical nature)?
  8. All things considered, is the report worthy of publication in an academic journal?
  9. Would you be proud to have your name on the report as a coauthor?

The above framework for critiquing a research article is based on:

Pyrczak, F. (2008). Evaluating research in academic journals (4th ed.). Glendale, CA: Pyrczak.
