RESEARCH METHODOLOGY

Chapter 1: INTRODUCTION TO THE RESEARCH METHODOLOGY

1.1 Foundation of Research

a) Foundation of research: Meaning, Objectives, Motivation, Utility

Research is fundamentally a systematic, scientific, and objective search for data or facts to solve a specific problem or answer a question. It is a structured inquiry that increases the common pool of knowledge.

Key Points:

· Meaning: A continuous and purposeful investigation to discover and interpret new facts, revise accepted theories, or apply established theories practically.

· Objectives: The primary goal is to discover answers to questions through the application of scientific procedures. Specific objectives often include describing a phenomenon, explaining relationships between variables, predicting future outcomes, and controlling phenomena.

· Motivation: Researchers are motivated by the desire to solve complex problems, gain intellectual satisfaction, earn a reputation, or serve society. It is driven by curiosity and a desire for systematic understanding.

· Utility (Usefulness): Research provides essential information for sound decision-making in business and governance, creates new theories, validates existing knowledge, and helps society address current challenges.

b) Concept of theory, empiricism, deductive and inductive theory

Research relies on the interplay between abstract ideas (theory) and real-world evidence (empiricism).

· Concept of Theory: A theory is a set of systematically organized, interrelated concepts, definitions, and propositions that are advanced to explain or predict a phenomenon. Theories provide a framework for organizing knowledge and guiding research.

· Empiricism: This refers to the belief that knowledge comes primarily from sensory experience and observable evidence. Empirical research means that any conclusions drawn are based on hard evidence gathered through observation or experimentation.

· Deductive Theory (Deduction): This approach starts with a general theory or principle and moves to specific, testable observations. The researcher deduces a hypothesis from the theory and then tests it with data. Example: All swans are white (theory) → this specific bird is a swan (observation) → therefore, this specific bird is white (conclusion).

· Inductive Theory (Induction): This approach starts with specific observations and moves toward developing a broader generalization or theory. The researcher observes patterns in the data and uses those patterns to formulate a new theory. Example: Observing 100 white swans (observations) → concluding that all swans are white (theory).
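
As a rough illustration only (the swans are, of course, hypothetical), the short Python sketch below contrasts the two directions of reasoning: induction builds a tentative theory from observed cases, while deduction applies that theory to a new case.

# Illustrative toy sketch: the swan example above, expressed as code.
observed_swans = ["white", "white", "white"]          # hypothetical observations

def induce_theory(observations):
    # Induction: generalize from specific observations to a broader theory.
    if all(colour == "white" for colour in observations):
        return "All swans are white"                  # tentative, open to refutation
    return "Swans vary in colour"

def deduce(theory, is_swan):
    # Deduction: apply the general theory to a specific new case.
    if theory == "All swans are white" and is_swan:
        return "This bird is white"                   # conclusion follows from the premises
    return "No conclusion can be drawn"

theory = induce_theory(observed_swans)
print(theory)                                         # All swans are white
print(deduce(theory, is_swan=True))                   # This bird is white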

c) Characteristics of scientific methods

Scientific research methods are distinguished by their systematic and rigorous nature, ensuring reliability and validity in findings.

Key Characteristics:

· Purposiveness: The research is conducted for a specific, clear purpose or objective.

· Rigor: Ensures that the research is thorough, meticulous, and carefully executed, and that appropriate methods are followed to minimize bias and maximize accuracy.

· Testability: The hypotheses developed in the research must be capable of being tested using scientific data collection and analysis.

· Replicability: The results of the research should be reproducible if other researchers conduct the same study under similar circumstances.

· Precision and Confidence: Precision refers to the closeness of the findings to reality, while confidence is the probability that the estimation is correct.

· Objectivity: Conclusions are drawn based on facts and data, not on the subjective biases or values of the researcher.

d) Understanding the language of research: concept, construct, definition, variable

Effective research requires precise language to communicate ideas accurately.

· Concept: An abstract idea or term used to categorize or group objects, events, or observations that have common characteristics. Examples include age, income, or leadership.

· Construct: A concept that is invented or adopted for a special theoretical purpose. Constructs are abstract and not directly observable (e.g., intelligence, customer satisfaction, motivation). They are often defined by a set of related concepts.

· Definition: Provides clarity and boundary to concepts and constructs.

o Conceptual Definition: Defines a term by relating it to other concepts (e.g., Motivation is the internal drive that directs behavior).

o Operational Definition: Defines a construct by specifying the procedures used to measure it (e.g., Motivation is defined by the score on the 10-item Motives Scale).

· Variable: A characteristic, trait, or attribute that can be measured or observed and that can take on different values. Variables are the empirical translation of concepts and constructs.
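
To show how an operational definition turns an abstract construct into a measurable variable, the hypothetical Python sketch below scores "motivation" as the total of ten scale items, following the 10-item scale example above; the 1-5 rating range and the respondent's answers are assumptions made purely for illustration.

# Hypothetical sketch: operationalizing the construct "motivation" as the
# total score on a 10-item scale (each item rated 1-5; both are assumptions).
def motivation_score(item_responses):
    if len(item_responses) != 10:
        raise ValueError("Expected responses to all 10 scale items")
    if not all(1 <= r <= 5 for r in item_responses):
        raise ValueError("Each item must be rated on a 1-5 scale")
    return sum(item_responses)                 # the variable takes values from 10 to 50

respondent = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]    # one respondent's invented answers
print(motivation_score(respondent))            # 39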

1.2 Research Process

a) Definition, Importance and limitations of statistics

Statistics is the backbone of quantitative research, providing the tools for data handling and interpretation.

· Definition: Statistics is the science of collecting, organizing, analyzing, interpreting, and presenting data.

· Importance:

o It helps condense large amounts of complex data into a simple, understandable summary.

o It enables researchers to draw valid inferences about a large population based on a smaller sample.

o It is crucial for testing hypotheses and determining the significance of research findings.

· Limitations:

o Statistics deals only with quantitative data, not qualitative phenomena directly.

o It studies group behavior and not individual cases.

o It can be misused or misinterpreted to support incorrect conclusions if not applied carefully.

b) Introduction, types and characteristics of Research

Research can be classified in many ways, but its core purpose remains the discovery of knowledge.

· Introduction: The research process is a systematic sequence of steps taken to answer a research question, moving from defining the problem to reporting the results.

· Types of Research:

o Descriptive vs. Analytical: Descriptive research describes the state of affairs as it exists (e.g., surveys), while analytical research goes beyond describing to analyze the data to explain why or how things are related.

o Applied vs. Fundamental (Basic): Applied research aims to find a solution for an immediate, pressing problem, while fundamental research is driven by a desire for knowledge for its own sake, focusing on generalizing and formulating a theory.

o Quantitative vs. Qualitative: Quantitative research is based on the measurement of quantity or amount and uses statistical analysis. Qualitative research is concerned with phenomena involving quality or kind (e.g., motives, opinions, experiences) and aims for a deeper understanding rather than numerical measurement.

· Characteristics of Research (as a process): It is logical, objective, systematic, empirical, and strives for precision.

c) Types of data

Data are the facts and figures collected for the purpose of the study.

· Primary Data: Data that are collected for the first time by the researcher for the specific purpose of the study.

o Examples: Data collected through original surveys, interviews, or experiments.

o Advantage: More reliable and relevant to the research problem.

o Disadvantage: Time-consuming and costly to collect.

· Secondary Data: Data that have already been collected and published by sources other than the researcher.

o Examples: Government publications, census reports, academic journals, company records.

o Advantage: Less expensive and easier to obtain.

o Disadvantage: May not perfectly fit the research needs or may be outdated.

d) Survey and Experiments

These are two main methods for collecting primary data.

· Survey Research: Involves systematically collecting data from a sample of individuals (respondents) to describe the characteristics of a large population.

o Methods: Questionnaires (mail, email, online), personal interviews, telephone interviews.

o Purpose: To study large populations, describe patterns, and measure attitudes, opinions, and behaviors.

· Experimental Research (Experiments): A method used to establish a cause-and-effect relationship between variables. The researcher manipulates one or more independent variables (the cause) and measures their effect on a dependent variable (the effect), while controlling all other extraneous variables.

o Key Features: Control group, experimental group, random assignment, and manipulation of the independent variable.

o Purpose: To test hypotheses in a controlled environment to determine causality.
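
The following is a minimal sketch, using only Python's standard library, of how random assignment might place participants into experimental and control groups before the independent variable is manipulated; the participant IDs are hypothetical.

# Minimal sketch of random assignment (hypothetical participants).
import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
random.shuffle(participants)                   # random assignment reduces selection bias

midpoint = len(participants) // 2
experimental_group = participants[:midpoint]   # receives the treatment (manipulated IV)
control_group = participants[midpoint:]        # does not receive the treatment

print("Experimental:", experimental_group)
print("Control:     ", control_group)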

Chapter 2: VALIDATION OF RESULT

a) Problem identification and formulation

The start of all good research is selecting and clearly defining the problem to be addressed.

· Problem Identification: Involves recognizing a gap in knowledge, a contradiction in findings, or an issue that needs a solution. A researcher must ensure the problem is significant, feasible (can be solved), and researchable.

· Problem Formulation: The process of converting a general, ambiguous topic into a clear, specific, and manageable research question or statement. A well-formulated problem acts as the compass for the entire research project.

b) Research question

A research question is the central question that the research seeks to answer.

· Role: It guides the literature search, dictates the methodology used, and frames the interpretation of results.

· Qualities of a Good Research Question (FINER criteria):

o Feasible: Can be answered within the time, resources, and skills available.

o Interesting: Engages the researcher and the scientific community.

o Novel: Confirms or refutes previous findings, or provides new insights.

o Ethical: Does not cause undue harm or violation of rights.

o Relevant: Is important to scientific knowledge, theory, or practice.

c) Investigation questions and measurement issues

Investigation questions break down the main research question into specific, smaller inquiries that guide the collection of data.

· Investigation Questions: These are specific, highly focused questions about the variables. Example: If the main question is "Does exercise improve mood?", an investigation question could be "What is the average weekly frequency of exercise among participants?"

· Measurement Issues: These are challenges related to accurately and consistently quantifying the concepts in the study. Key issues include:

o Conceptual Clarity: Ensuring the concept being measured is clearly defined.

o Operationalization: Developing specific, objective procedures for measuring the concept (e.g., defining "mood" as a score on a 15-item scale).

o Error: Dealing with random and systematic errors that inevitably creep into the measurement process.

d) Hypothesis

A hypothesis is a tentative statement about the relationship between two or more variables that is capable of being tested. It is an educated guess.

i. Qualities of a good hypothesis

· Clarity and Precision: The hypothesis must be expressed clearly and unambiguously.

· Testability: It must be possible to collect empirical evidence to test the stated relationship.

· Specificity: It should state the expected relationship between variables in a specific way.

· Theoretical Relevance: It should be consistent with known facts and existing theories.

ii. Null hypothesis (H0)

· Definition: The null hypothesis is a statement that there is no significant difference or no relationship between the variables being studied. It is the statement that the researcher attempts to disprove or reject.

· Goal: In statistical testing, the researcher generally assumes the null hypothesis is true until the evidence suggests otherwise. Example: H0: There is no significant difference in test scores between students who study in the morning and those who study at night.

iii. Alternative hypothesis (Ha)

· Definition: The alternative hypothesis is a statement that directly contradicts the null hypothesis. It is the claim about the population that is true if the null hypothesis is rejected.

· Types:

o Non-directional (Ha): States that a relationship or difference exists, but does not specify the direction. Example: Ha: There is a significant difference in test scores.

o Directional (Ha): States that a relationship or difference exists and specifies the direction. Example: Ha: Students who study in the morning will score significantly higher than those who study at night.

e) Hypothesis testing: logic and importance

Hypothesis testing is a formal statistical procedure for making a decision between two competing claims (the null and alternative hypotheses).

· Logic: The process involves setting up H0 and Ha, collecting data, calculating a test statistic (e.g., t-value, F-value), and determining the probability (p-value) of obtaining the observed results if H0 were true. If the p-value is very low (typically less than 0.05), the null hypothesis is rejected, and the alternative hypothesis is supported (a short worked sketch follows the list below).

· Importance:

o It provides objective evidence to support or refute theoretical predictions.

o It is the formal mechanism for drawing statistical inferences about a population from a sample.

o It helps determine if observed differences or relationships are due to genuine effects or merely random chance.
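
Below is a minimal worked sketch of this logic, assuming SciPy is available; the morning and night test scores are made-up values for the example used above, not real data.

# Worked sketch of hypothesis testing (assumes SciPy; scores are invented).
from scipy.stats import ttest_ind

morning_scores = [78, 82, 88, 75, 90, 84, 79, 86]   # hypothetical data
night_scores = [72, 80, 77, 74, 83, 76, 70, 79]     # hypothetical data

t_stat, p_value = ttest_ind(morning_scores, night_scores)

alpha = 0.05                                        # conventional significance level
if p_value < alpha:
    print(f"p = {p_value:.3f}: reject H0; the difference is statistically significant")
else:
    print(f"p = {p_value:.3f}: fail to reject H0; no significant difference detected")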

Chapter 3: RESEARCH DESIGN INTRODUCTION

Research design is the blueprint for the collection, measurement, and analysis of data. It is a detailed plan for how the research will be conducted.

a) Steps in the Process of Research

The research process is a cyclical sequence of activities necessary to carry out a study effectively.

· i. Formulating the Research problem: Defining the broad area of interest and narrowing it down to a specific, researchable question or hypothesis.

· ii. Extensive literature survey: Reviewing existing scholarly work to gain background knowledge, understand current debates, and identify gaps in knowledge.

· iii. Developing hypotheses: Constructing tentative, testable statements about the expected relationship between variables.

· iv. Preparing the Research design: Creating a detailed plan specifying the type of study (e.g., experiment, survey), sampling method, data collection tools, and analysis techniques.

· v. Determining sample design: Selecting the method and size of the sample (a subset of the population) that will be studied.

· vi. Collecting data: Executing the plan by gathering the required primary or secondary data using chosen instruments.

· vii. Execution of the project: Carrying out the research design in a systematic and controlled manner, including fieldwork or laboratory work.

· viii. Analysis of data: Processing, cleaning, coding, and statistically analyzing the collected data to look for patterns, trends, and relationships.

· ix. Hypothesis testing: Applying statistical tests to the data to determine whether the null hypothesis should be rejected or retained.

· x. Generalization and interpretation: Drawing conclusions from the tested hypotheses, relating the findings back to the original research question and theory, and discussing their implications for the broader population.

· xi. Preparation of the report or presentation of the results: Documenting the entire research process, findings, conclusions, and recommendations in a clear, structured, and objective final report.

Chapter 4: INTRODUCTION TO QUALITATIVE & QUANTITATIVE RESEARCH

4.1 Qualitative Research

Qualitative research is an in-depth exploration of phenomena to understand the "why" and "how" of decision-making, experiences, and opinions, often relying on non-numerical data.

a) Essence of Qualitative research

· Focus: Understanding meaning, context, and experience from the participants' perspective. It is exploratory and descriptive.

· Nature: Subjective, interpretivist, and holistic. The researcher is often deeply involved in the process.

· Goal: To generate rich, detailed descriptions and potentially new theories, rather than testing pre-existing ones.

b) Sampling

Sampling in qualitative research is typically non-random and strategic.

· Types:

o Purposive Sampling: Selecting participants because they possess certain characteristics or experiences relevant to the study.

o Snowball Sampling: Asking existing participants to recommend others who fit the criteria.

o Convenience Sampling: Selecting participants who are readily available.

· Size: Sample sizes are usually small and determined by the point of saturation, where no new themes or information emerge from additional data collection.
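
As a rough sketch of the saturation idea (one possible way of tracking it, not a prescribed procedure), the snippet below stops recruiting once an additional interview contributes no new themes; the themes themselves are invented.

# Rough sketch of saturation: stop when a new interview adds no new themes.
interviews = [
    {"cost", "trust"},            # themes coded from interview 1 (invented)
    {"trust", "convenience"},     # interview 2 adds "convenience"
    {"cost", "convenience"},      # interview 3 adds nothing new -> saturation
    {"trust"},
]

seen_themes = set()
for number, themes in enumerate(interviews, start=1):
    new_themes = themes - seen_themes
    seen_themes |= themes
    if not new_themes:
        print(f"Saturation reached after interview {number}")
        break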

c) Collection Techniques: Secondary & Primary Data

Qualitative data can be gathered from various sources.

· Primary Data Collection:

o Interviews: In-depth, semi-structured, or unstructured conversations to elicit detailed narratives.

o Focus Groups: Facilitated discussions among a small group of people to explore a specific topic.

o Observation: Researchers immerse themselves in a setting to observe behaviors and interactions.

· Secondary Data Collection: Analysis of existing documents, diaries, photographs, historical records, and public archives to gain context and insight.

d) Review of literature

The review of literature in qualitative research can be conducted differently than in quantitative studies.

· Purpose: To inform the research design, establish the rationale for the study, and provide a framework for comparing and contrasting the study's findings with existing knowledge.

· Timing: Some qualitative researchers conduct a limited initial review to avoid biasing the exploration, preferring to let themes emerge organically from the data before linking them to established theory.

e) Citations

Citations are formal references to the sources used in the research to acknowledge intellectual debt and allow readers to locate the original works.

· Styles: Common citation styles include APA, MLA, and Chicago. The style chosen must be consistent throughout the report.

· Placement: Citations typically appear parenthetically within the text and refer the reader to a corresponding entry in the bibliography.

f) Bibliography

The bibliography (or reference list/works cited) is a comprehensive list of all the sources consulted or cited in the research paper.

· Format: Each entry must adhere strictly to the chosen citation style (e.g., APA).

· Inclusion: It includes authors, dates, titles, and publication information necessary for others to find the source.

4.2 Interpreting Qualitative Data

a) Qualitative Data Analysis Procedures

Qualitative analysis is a non-statistical process that involves working with text, images, or observations to derive meaning and understanding.

· Procedures:

o Data Preparation: Transcribing interviews or field notes into text format.

o Reading and Memoing: Thoroughly reading the data and writing reflective notes (memos) to identify initial ideas and patterns.

o Coding: Systematically applying labels (codes) to sections of text to categorize and link data points.

o Theme Development: Grouping related codes to form overarching themes or patterns of meaning.

o Interpretation: Constructing a narrative that explains the meaning of the themes in relation to the research question.

b) Coding

Coding is the foundational process of qualitative analysis.

· Action: It involves taking raw textual data and assigning descriptive or conceptual tags (codes) to segments of the text.

· Purpose: It breaks the data into manageable chunks, facilitates comparison, and helps organize the material for interpretation.

· Types of Coding: Open coding (initial breaking down of data), axial coding (linking codes to categories), and selective coding (developing a core theme).

c) Thematic development

Thematic development is the crucial step of synthesis and pattern recognition.

· Process: It involves searching for common threads, ideas, or experiences that weave through the coded data. Codes are grouped together to form broader, meaningful categories known as themes.

· Outcome: Themes represent the essential findings of the study and are used to build the narrative and answer the research question.
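
To make the progression from codes to themes concrete, here is a small hypothetical Python sketch; the quotes, codes, and themes are invented purely for illustration.

# Hypothetical sketch: grouping coded text segments into broader themes.
coded_segments = [
    ("I never know how much it will cost", "price uncertainty"),
    ("The staff explained every step", "clear communication"),
    ("Hidden fees appeared on the bill", "price uncertainty"),
    ("They answered my questions quickly", "clear communication"),
]

themes = {                                      # related codes grouped under themes
    "Transparency of pricing": {"price uncertainty"},
    "Quality of staff contact": {"clear communication"},
}

for theme, codes in themes.items():
    quotes = [quote for quote, code in coded_segments if code in codes]
    print(theme, "->", quotes)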

4.3 Quantitative Research

Quantitative research systematically investigates observable phenomena using mathematical, statistical, or computational techniques.

a) Essence of Quantitative Research

· Focus: Measuring variables, testing relationships, and generalizing findings from a sample to a larger population. It aims for objective, measurable facts.

· Nature: Objective, positivist, and reductionist (breaking phenomena down into measurable variables).

· Goal: To test hypotheses, establish cause-and-effect relationships, and make predictions.

b) Choosing good instruments

The quality of quantitative research hinges on the instruments used for measurement.

· Definition: An instrument is the device or tool used to collect data (e.g., a standardized test, a survey with closed-ended questions, or a blood pressure monitor).

· Criteria for Good Instruments:

o Validity: The instrument accurately measures the specific concept it is intended to measure (e.g., a test for depression truly measures depression, not anxiety).

o Reliability: The instrument yields consistent results when applied repeatedly under the same conditions.

c) Interval and Ratio Scales

These are high-level scales of measurement that permit robust mathematical and statistical analysis.

· Interval Scale: Data are measured on a scale where the intervals between numbers are equal and meaningful. However, the scale does not have a true zero point, meaning ratios are not meaningful. Example: Temperature in Celsius or Fahrenheit. A 40° day is not twice as hot as a 20° day.

· Ratio Scale: Data are measured on a scale where both the intervals between numbers are equal and there is a meaningful, absolute zero point. This allows for all mathematical operations, including multiplication and division (ratios). Example: Weight, height, age, income. A person earning $100,000 earns twice as much as a person earning $50,000.
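
A quick worked check of the "no true zero" point, using the standard Celsius-to-Kelvin conversion together with the 40°/20° and income figures from the examples above.

# Worked check: ratios are misleading on an interval scale (Celsius) but
# meaningful on a ratio scale (income, or temperature in Kelvin).
celsius_hot, celsius_mild = 40.0, 20.0

naive_ratio = celsius_hot / celsius_mild             # 2.0, but meaningless: 0 °C is not "no heat"
kelvin_ratio = (celsius_hot + 273.15) / (celsius_mild + 273.15)   # about 1.07, not 2

print(naive_ratio, round(kelvin_ratio, 2))           # 2.0 1.07
print(100_000 / 50_000)                              # 2.0: twice the income is meaningful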

d) Collection and Analysis Techniques

Quantitative data collection and analysis are highly structured.

· Collection Techniques:

o Surveys: Using structured, closed-ended questionnaires.

o Experiments: Collecting numerical measurements of dependent variables under controlled conditions.

o Content Analysis: Systematically counting the frequency of specific words, themes, or concepts in a text.

· Analysis Techniques: Involves the use of descriptive and inferential statistics.

o Descriptive Statistics: Summarizing data (e.g., mean, median, mode, standard deviation).

o Inferential Statistics: Testing hypotheses and making inferences about the population (e.g., t-tests, ANOVA, regression analysis).
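
Below is a minimal sketch, using only Python's standard library, of the descriptive side of this distinction; the sample values are hypothetical, and an inferential step (a t-test) is sketched under hypothesis testing in Chapter 2.

# Minimal sketch of descriptive statistics (hypothetical sample values).
import statistics

sample = [12, 15, 15, 18, 20, 22, 22, 22, 25, 30]

print("mean:  ", statistics.mean(sample))            # central tendency
print("median:", statistics.median(sample))
print("mode:  ", statistics.mode(sample))
print("stdev: ", round(statistics.stdev(sample), 2)) # spread (sample standard deviation)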

Chapter 5: MEASUREMENT: CONCEPT OF MEASUREMENT

a) What is measured

Measurement in research is the process of assigning numbers or labels to observations, according to rules.

· Variables: The attributes of objects or people that are measured, such as height, intelligence, motivation, sales volume, or public opinion.

· Empirical Indicators: For abstract concepts (constructs) like anxiety or customer satisfaction, researchers measure tangible indicators that stand for the concept (e.g., observable behaviors or responses to specific survey questions). The goal is to measure variables precisely to capture the reality of the concept.

b) Problems in measurement in Research – Validity and Reliability

Two major hurdles in ensuring good measurement are validity and reliability.

· Validity: Refers to the extent to which a measurement instrument truly measures what it is intended to measure. It asks: Are we measuring the right thing?

o Types: Content validity (does the measure cover all relevant aspects?), criterion validity (does the measure correlate with a relevant outcome?), and construct validity (does the measure behave as expected based on theory?).

· Reliability: Refers to the consistency of a measurement. A measure is reliable if it produces the same results under the same conditions repeatedly. It asks: Are we measuring it consistently?

o Types: Test-retest reliability (consistency over time), inter-rater reliability (consistency across different observers), and internal consistency (consistency among different items in a scale).

· Relationship: A measure must be reliable to be valid, but a reliable measure is not necessarily valid. For instance, a clock that always runs exactly ten minutes fast is perfectly consistent (reliable) but never shows the correct time (not valid).
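
One familiar estimate of internal consistency is Cronbach's alpha (not named in the text above, but a standard choice). The sketch below implements the usual textbook formula with Python's standard library; the respondent-by-item scores are invented for illustration.

# Sketch of Cronbach's alpha for a 4-item scale (scores are invented).
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
import statistics

scores = [                      # rows = respondents, columns = scale items
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
]

k = len(scores[0])
item_vars = [statistics.variance(item) for item in zip(*scores)]
total_var = statistics.variance([sum(row) for row in scores])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))          # values near 1 indicate high internal consistency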

c) Levels of measurement – Nominal, Ordinal, Interval, Ratio

The level of measurement dictates what kind of statistical analysis can be performed on the data. These four levels, developed by S.S. Stevens, represent a hierarchy of precision.

· Nominal Scale:

o Definition: The lowest level, used purely for classification or categorization. Numbers act as labels or names.

o Properties: Only classifies; no order, equal intervals, or true zero.

o Examples: Gender (1=Male, 2=Female), Hair Color, Marital Status.

· Ordinal Scale:

o Definition: Ranks observations in terms of 'greater than' or 'less than', but the difference between ranks is not measurable or equal.

o Properties: Classifies and shows order/rank, but no equal intervals or true zero.

o Examples: Finishing order in a race (1st, 2nd, 3rd), educational level (High School, Bachelor's, Master's), Likert scales (Strongly Agree to Strongly Disagree).

· Interval Scale:

o Definition: Provides equal, meaningful distances between measurements, but lacks a true zero point.

o Properties: Classifies, shows order, and has equal intervals.

o Examples: Temperature in Celsius, IQ scores.

· Ratio Scale:

o Definition: The highest level, possessing all properties of the lower scales, including a true zero point, allowing for all mathematical operations.

o Properties: Classifies, shows order, has equal intervals, and a true zero point.

o Examples: Height, weight, income, number of children.
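
As an illustrative summary only (a rough mapping, not an exhaustive rule set), the snippet below pairs each level of measurement with the descriptive statistics it typically permits.

# Rough mapping of measurement levels to typically permissible statistics.
levels = {
    "nominal": (["Male", "Female"], ["count", "mode"]),
    "ordinal": (["1st", "2nd", "3rd"], ["count", "mode", "median", "rank order"]),
    "interval": ([20.0, 40.0], ["count", "mode", "median", "mean", "differences"]),              # e.g. degrees Celsius
    "ratio": ([50_000, 100_000], ["count", "mode", "median", "mean", "differences", "ratios"]),  # e.g. income
}

for name, (example, allowed) in levels.items():
    print(f"{name:8s} e.g. {example} -> {', '.join(allowed)}")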

Important questions for the full subject:

Chapter 1: INTRODUCTION TO THE RESEARCH METHODOLOGY

1.1 Foundation of Research

1. Meaning and Utility: What is the core definition of research, and how does it provide essential utility for decision-making in governance or business?

2. Theory vs. Empiricism: Define "theory" and "empiricism." Explain why both are necessary components of sound research.

3. Deduction vs. Induction: Distinguish between deductive and inductive reasoning, providing a simple example of how each process leads to a conclusion.

4. Scientific Characteristics: List and briefly explain the characteristics of scientific methods, focusing on the roles of Rigor and Replicability.

5. Language of Research: Explain the difference between a Concept and a Construct. Why is an Operational Definition crucial for measuring a construct?

1.2 Research Process

6. Statistics: Define statistics and discuss its primary limitations in research.

7. Types of Research: Compare and contrast Applied research with Fundamental (Basic) research, and distinguish between Descriptive and Analytical research.

8. Data Types: What is the difference between primary and secondary data? What is one major advantage and one major disadvantage of relying on secondary data?

9. Surveys vs. Experiments: What is the primary purpose of survey research? How does the primary goal of experimental research differ, and what features (like the control group) enable this difference?

Chapter 2: VALIDATION OF RESULT

10. Problem Formulation: Explain the difference between identifying a general research problem and formally formulating a clear, specific, and manageable research question.

11. Good Research Question: List and explain the five key criteria (FINER) that define a good research question.

12. Measurement Issues: What are the challenges related to Conceptual Clarity and Operationalization when addressing investigation questions in research?

13. Qualities of Hypothesis: What is a hypothesis, and what four essential qualities must a good hypothesis possess?

14. Null vs. Alternative: Define the Null Hypothesis (H0). Explain how a directional alternative hypothesis (Ha) differs from a non-directional alternative hypothesis.

15. Hypothesis Testing Logic: Explain the core logic of hypothesis testing. If a researcher rejects the null hypothesis, what does this imply about the observed data?

Chapter 3: RESEARCH DESIGN INTRODUCTION

16. Research Blueprint: Explain why the research design is referred to as the "blueprint" of the study.

17. Process Steps: List the 11 sequential steps in the research process, starting from Formulating the Research Problem and ending with Preparation of the report.

18. Interpretation vs. Generalization: In the final stages of the research process, what is the critical difference between the Generalization of findings and the Interpretation of results?

19. Fieldwork: Briefly describe what the step Execution of the project involves, distinguishing it from the preceding steps like Collecting data.

Chapter 4: INTRODUCTION TO QUALITATIVE & QUANTITATIVE RESEARCH

4.1 Qualitative Research

20. Essence and Goal: What is the fundamental essence and primary goal of qualitative research?

21. Qualitative Sampling: What is Purposive Sampling, and how is the necessary sample size determined in qualitative research?

22. Data Collection: Describe three distinct techniques used for primary data collection in qualitative research.

23. Documentation: Differentiate between the functions of Citations within the text and the comprehensive Bibliography at the end of a qualitative report.

4.2 Interpreting Qualitative Data

24. Analysis Process: Outline the key procedures involved in qualitative data analysis, from raw data to themes.

25. Coding and Themes: Define the process of Coding and explain how Thematic Development is achieved by grouping these codes.

4.3 Quantitative Research

26. Essence and Nature: What is the fundamental essence of quantitative research, and what does it mean to say its nature is reductionist?

27. Instrument Quality: What two core criteria—Validity and Reliability—must good measurement instruments satisfy?

28. Scales of Measurement: Explain the difference between the Interval Scale and the Ratio Scale. Why can ratios be meaningfully calculated on one scale but not the other?

29. Analysis Techniques: What is the difference between the information provided by Descriptive Statistics and the conclusions derived from Inferential Statistics?

Chapter 5: MEASUREMENT: CONCEPT OF MEASUREMENT

30. Measuring Constructs: Explain how researchers assign numbers or labels to abstract concepts (constructs) by using Empirical Indicators.

31. Validity Definition: Define Validity in measurement. What is the difference between Content Validity and Construct Validity?

32. Reliability Definition: Define Reliability. Explain the relationship between reliability and validity using a common example (such as a consistently inaccurate measure).

33. Four Levels: List the four levels of measurement.

34. Nominal vs. Ordinal: Compare the Nominal Scale and the Ordinal Scale, focusing on the mathematical property that the Ordinal scale possesses that the Nominal scale does not.

35. Highest Level: Why is the Ratio Scale considered the highest level of measurement, and what mathematical operations does this scale allow that the others do not?
