
The qualitative style of research consists of many decisions that have to be made, creating a high probability of mistakes or errors. Ghanem (2003) proposes that researchers follow the mechanism model when deciding whether to accept or reject a hypothesis; the model comprises three basic steps to minimise these errors. The first step is hypothesis formulation, where the aim is to “produce a proposed scientific hypothesis as a tentative explanation to the phenomenon in question” (ibid). This is followed by the hypothesis evaluation stage, where researchers may reconsider and develop alternative hypotheses if the working hypotheses prove unsuitable when tested. The last step is the hypothesis verification stage, which aims to clarify the final hypothesis by carrying out one, or a combination, of three scientific methods: research, observation or experimentation.

The general flow for formulating hypotheses using the hypothetico-deductive approach is very similar, and the author postulates that its testing step may be comparable, or synonymous, with the verification stage in the mechanism model. The last step in the hypothetico-deductive approach suggests that, although a positively tested hypothesis is more enticing, a negative finding may still be worth the effort spent testing the hypothesis (Fisher, 2010). However, the author would recommend that researchers be very certain about their findings before concluding with a negative result, as two potential errors could arise: a Type I error, in which a true hypothesis is wrongly rejected, and a Type II error, in which a false hypothesis is accepted (ibid). The author regards this as one of the harder challenges when formulating a hypothesis: when will a researcher be able to identify whether a negative result is false? Moreover, if a researcher does not establish that a hypothesis is false and accepts it, the whole research is jeopardised. Therefore, the author fears that drawing the wrong conclusion may be one of the biggest challenges. In addition, not all researchers have the time or capacity to redesign a hypothesis, for example in a Master’s dissertation (Fisher, 2010). The author postulates that this issue may arise because the relationship between variables is more complicated than expected. While there are obstacles to face when verifying hypotheses, there are other challenges in the formulation stage of the mechanism model.

Ghanem (2003) suggests that one way to minimise the probability of wrongly rejecting a hypothesis relates back to the sampling method. For example, if 300 couples on holiday were to provide a researcher with information on booking platforms, each couple should be given the same 1 in 300 chance of being selected to provide that information. This randomness reduces the risk of bias, therefore resulting in more accurate data. The author draws a connection here to the previous chapter’s discussion of false hypotheses: although bias may work in favour of data aligning with the hypothesis desired by the researcher, it breaches research ethics and may result in a false hypothesis. This preventive step may help to avert accepting a “false” hypothesis. Moreover, Newby (2010) supports Ghanem, stating that obtaining poor data may be one of the difficulties a researcher faces, especially as the researcher will draw conclusions from the analysis of the data collected. One way to counter this is to pay close attention to the method of data collection (Ghanem, 2003; Newby, 2010). Even if the researcher has chosen the most appropriate sample size and method, the sample could still be imbalanced. For instance, if a questionnaire contains sensitive questions, respondents may falsify their answers and/or refuse to answer that particular question. One way to counter this issue is to be very careful in phrasing and constructing the questions in the questionnaire, ensuring they are diplomatic and inoffensive.
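The equal-chance selection described above can be sketched as a simple random sample; the couples and sample size below are invented purely for illustration.

```python
import random

# Hypothetical sampling frame: 300 couples on holiday, identified by number.
couples = [f"couple_{i}" for i in range(1, 301)]

random.seed(42)  # fixed seed so this sketch is reproducible

# Simple random sample of 30 couples: every couple has the same
# 30/300 chance of being selected, which reduces selection bias
# in who provides information on booking platforms.
sample = random.sample(couples, k=30)

print(len(sample))       # 30 respondents drawn
print(len(set(sample)))  # 30: sampling without replacement, no duplicates
```

Because `random.sample` draws without replacement, no couple can dominate the data by appearing twice, which matches the equal-chance requirement in the text.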

Quantitative research appears highly circumstantial. As a hypothesis is a test of possible relationships, Newby (2010) suggests that “the nature of proof is the second issue that makes quantitative research distinctive”. The author agrees with this statement: when formulating a hypothesis, how would a researcher really know whether the patterns drawn upon actually exist? How do researchers prove to fellow academics that the findings are true? Perhaps the only possible process of validating a hypothesis is through many rounds of trial and error. A researcher has to look at many aspects before accepting or rejecting hypotheses.

References:

Bryman, A. and Bell, E. (2011) Business Research Methods. 3rd ed. Oxford: Oxford University Press.

Fisher, C. (2010) Researching And Writing A Dissertation: An Essential Guide For Business Students. 3rd ed. England: Pearson Education Limited.

Ghanem, T. (2003) The Process of Formulating Hypotheses and Students’ Difficulties of Hypotheses Formulation in Science Learning. Available from: http://www.academia.edu/10156442/The_Processes_of_Formulating_Hypotheses [Accessed 3 December 2015].

Marshall, C. and Rossman, G. B. (2011) Designing Qualitative Research. 5th ed. Los Angeles: SAGE Publications.

Tahir, S. Z. B. (n.d.) Hypothesis Formulation. Available from: http://www.academia.edu/8665107/HYPOTHESIS_FORMULATION [Accessed 4 December 2015].

The comparison between factor analysis and cluster analysis is about approaching a set of data from two different perspectives. Factor analysis, according to Bryman and Cramer (2006), is a statistical approach that emphasises analysing the interrelationships among a great number of quantitative variables and interpreting them in terms of their common underlying dimensions, which are named factors. Bryman and Bell (2011) stress that factor analysis should be seen as a data reduction and summarisation technique aiming to reduce the number of variables with which the researcher needs to deal to one of more manageable size. A cluster, on the other hand, is defined as a group of similar objects. Correspondingly, cluster analysis or clustering, according to Gorman and Primavera (1983), is a multivariate technique that focuses on grouping objects based on the proximities and similarities in their attributes. Nevertheless, there are clustering procedures that use variables as the basis for classification and, conversely, factor analysis procedures that use objects as the basis for factoring (Krebs et al., 2000). It has therefore been argued that cluster analysis ought not to be identified exclusively with the object-oriented approach to the data matrix (ibid). Moreover, Castro (2002) emphasises the ambiguity surrounding the notion of a “cluster”; in other words, there is no precise definition of the term. Various cluster models and clustering algorithms have been developed as a consequence of this ambiguity, for example hierarchical and non-hierarchical cluster analysis, agglomerative and divisive algorithms, sequential and parallel threshold methods, and optimising procedures. They vary tremendously depending on the notion of a cluster adopted (Gorman and Primavera, 1983).
As the aim of this article is to investigate the drivers for these two distinct methods, factor and cluster analysis, the author will not further elaborate on the details of cluster models and algorithms. To serve the purpose of this article, “objects” will be referred to as “research units” in the following contexts. When plotted geometrically, objects within a specific cluster will appear close to each other, whilst the distance between different clusters will be greater, on the premise that the classification has been done successfully (Stamatis, 2002).

As mentioned above, one can hardly choose the most appropriate tool for an analysis without understanding the drivers behind it. Aiming to investigate the drivers for the two distinct methods, the author will demonstrate the purpose and goal underlying each. Despite their explicit differences, both techniques share the same underlying logic, namely classification built on homogeneity (Krebs et al., 2000). As a result of their distinctive fundamental bases, however, cluster analysis and factor analysis yield different information about the data. While factor analysis emphasises grouping variables, cluster analysis concentrates on classifying research units based on their similarity on the variables, emphasising the homogeneity and heterogeneity within the research units (Chambliss and Schutt, 2015). That is, the procedure of cluster analysis is based on proximity, whilst factor analysis is based on correlation.
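The contrast between correlation and proximity can be made concrete in a short sketch: factor analysis groups variables by how strongly they correlate, whereas cluster analysis groups research units by how close they lie. The two variables and four hypothetical research units below are invented for illustration.

```python
import math

# Invented data matrix: rows are research units (e.g. hotel guests),
# columns are two variables measured on each unit.
comfort = [4.0, 5.0, 2.0, 1.0]   # variable 1, one value per guest
spend   = [4.2, 4.8, 2.1, 1.3]   # variable 2, one value per guest

def pearson(x, y):
    """Correlation between two VARIABLES: the basis of factor analysis."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def euclidean(u, v):
    """Distance between two RESEARCH UNITS: the basis of cluster analysis."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Highly correlated variables would load on a common factor.
r = pearson(comfort, spend)

# Guests 0 and 1 (both high scorers) lie close together; guest 3 is far away.
guests = list(zip(comfort, spend))
d_near = euclidean(guests[0], guests[1])
d_far = euclidean(guests[0], guests[3])

print(r > 0.95)        # True: the two variables move together
print(d_near < d_far)  # True: proximity is what drives clustering
```

The same matrix thus answers two different questions: columns that correlate suggest a common factor, while rows that lie close suggest a common cluster.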

With respect to the purposes and objectives of these two techniques: despite the fact that both can be utilised as tools for data reduction and segmentation, factor analysis aims to reduce the number of variables and to identify the underlying interrelationships between variables, and sometimes an unobservable or latent construct, in a set of data (Rogerson, 2001). It is widely utilised for theory development as it “implies the aspiration of establishing a theoretically based causal relationship between indicators (items) and a latent variable (the factor or dimension)” (Gorman and Primavera, 1983), especially in fields such as marketing, genomics and social science research (Howitt and Cramer, 2014). On the other hand, besides data simplification, the overarching goals of cluster analysis are taxonomy description and relationship identification. Furthermore, cluster analysis can be extremely efficient when researchers wish to develop hypotheses concerning the nature of the data, or to test and examine previously developed hypotheses (Chambliss and Schutt, 2015). To give an example, suppose a hotel company believes that its customers fall into two groups with regard to room comfort and room price per night. Cluster analysis would then be able to classify the customers who prefer comfort over price versus those who prefer price over comfort. The resulting clusters, if any, can be profiled for demographic similarities and differences. A common criticism of cluster analysis is that a cluster structure will always be imposed on a set of data, even when well-separated clusters are unwarranted (Gorman and Primavera, 1983; Saunders et al., 2009; Clark et al., 2010). As for factor analysis, one limitation is that researchers have no choice but to make vital decisions about the factor rotation strategy in advance, and these decisions will strongly affect the eventual outcome (Gorman and Primavera, 1983).
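The hotel example can be illustrated with a minimal k-means sketch, k-means being only one of the many clustering algorithms mentioned earlier. The guest scores and starting centroids below are invented for illustration, not taken from any cited study.

```python
# Invented guest data: (room-comfort preference, price sensitivity),
# both on 0-10 scales.
guests = [(8.5, 2.0), (9.0, 1.5), (8.0, 2.5),   # comfort-over-price segment
          (2.0, 9.0), (1.5, 8.5), (2.5, 8.0)]   # price-over-comfort segment

def dist2(p, q):
    """Squared Euclidean distance between two research units."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        groups = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda i: dist2(p, centroids[i]))
            groups[i].append(p)
        # Update step: each centroid moves to the mean of its group.
        centroids = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            for g in groups if g
        ]
    return groups

# Start the two centroids near opposite corners (a simplification;
# practical implementations choose starting points more carefully).
clusters = kmeans(guests, centroids=[(10.0, 0.0), (0.0, 10.0)])
print([len(c) for c in clusters])  # [3, 3]: both segments recovered
```

Note that this algorithm will always return some partition, echoing the criticism above that a cluster structure is imposed even when well-separated clusters do not exist in the data.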

In summary, the choice of technique is highly relevant to the researcher’s intention. Depending on the purpose of the research, Bacher (1996, cited in Rogerson, 2001) suggests that researchers employ cluster analysis when aiming to classify entities, and exploit factor analysis when aiming to gain insight into the underlying correlations of the variables (Sangren, 1999). Nonetheless, Gorman and Primavera (1983) propose that the two techniques are not mutually exclusive: factor analysis and cluster analysis can approach a data set from two complementary perspectives. In SPSS, one of the most widely utilised software packages for statistical analysis, factor analysis and cluster analysis can be used in a complementary fashion, which enhances the interpretation of the results found using each technique individually.

Reference list:

Bryman, A. (2012) Social Research Methods. 4th ed. Oxford: Oxford University Press.

Bryman, A. and Bell, E. (2011) Business Research Methods. 3rd ed. Oxford: Oxford University Press.

Bryman, A. and Cramer, D. (2006) Quantitative Data Analysis with SPSS 12 and 13 A guide for social scientists. New York: Routledge

Castro, V. E. (2002) Why so many clustering algorithms: a position paper. ACM SIGKDD Explorations Newsletter, 4(1), 65 – 75. Available from: http://dl.acm.org/ [Accessed 30 November 2015].

Chambliss, D. F. and Schutt, K. R. (2015) Making Sense of the Social World: Methods of Investigation. 3rd ed. Available from: https://uk.sagepub.com [Accessed 30 November 2015].

Clark, M., Riley, M., Wilkie, E. and Wood, C. (2010) Researching and Writing Dissertations in Hospitality and Tourism. UK: Thomson

Howitt, D. and Cramer, D. (2014) Introduction to Research Methods in Psychology. 4th ed. Available from: http://pearsoned.co.uk/

Gorman, B. S. and Primavera, L. H. (1983) The Complementary Use of Cluster and Factor Analysis Methods. The Journal of Experimental Education, 51(4), 165 – 168. Available from: http://www.jstor.org/ [Accessed 29 November 2015]

Krebs, D., Berger, M. and Ferligoj, A. (2000) Approaching Achievement Motivation – Comparing Factor Analysis and Cluster Analysis. New Approaches in Applied Statistics, 148 – 171. Available from: http://www.stat-d.si/ [Accessed 29 November 2015].

Rogerson, R. A. (2001) Statistical Methods for Geography. Available from: https://srmo.sagepub.com [Accessed 30 November 2015].

Sangren, S. (1999) A survey of multivariate methods useful for market research. Available from: http://www.quirks.com [Accessed 29 November 2015]

Stamatis, D. H. (2002) Six Sigma and Beyond: Statistics and Probability, Volume III: 003 (Six Sigma and Beyond Series) Available from: https://books.google.ch [Accessed 03 November 2015].

The very first step in hypothesis testing is to set out the null and alternative hypotheses (Good, 2000). In order to explain the process and methods of hypothesis testing, the author will first address the differences between these two forms of hypothesis. A null hypothesis stipulates that there is no relationship between two variables in the population (Bryman and Bell, 2011); in other words, that they are unrelated. For example, suppose the researcher aims to find out the relationship between fear of failure and the tendency towards academic procrastination among students in hotel and tourism management institutes. A null hypothesis would then be H0: fear of failure has no effect on the tendency towards academic procrastination among students in hotel and tourism management institutes. Alternative hypotheses, on the other hand, construct informed predictions about expected outcomes, based on prior literature on the topic that suggests a potential result (Creswell, 2014). Referring to the prior example, an alternative hypothesis would be H1: fear of failure has a positive effect on the tendency towards academic procrastination, or H2: fear of failure has a negative effect on the tendency towards academic procrastination. Because the null hypothesis is a statement of no difference (or no relationship), it is generally the null hypothesis that is referred to when a hypothesis test is conducted (Sridharan, 2015). The reason is that if the null hypothesis is shown to be invalid, the alternative hypothesis is accepted tentatively, with the conclusion that there is a difference or relationship between the two variables discussed.
Only one hypothesis needs to be tested if the chosen hypothesis is a null hypothesis, whereas all alternative hypotheses need to be tested in order to be accepted (Good, 2000). It is clearly less demanding to prove a null hypothesis wrong than to prove all the alternative hypotheses right. Following the construction of the null and alternative hypotheses, establishing the statistical significance level is the second step to be carried out. Tests of statistical significance play an influential role in the process of hypothesis testing. Even though the word “significance” tends to imply the importance of the results, it does not necessarily indicate that the findings are intrinsically important or substantively significant. The level of statistical significance relates solely and directly to how confident a researcher can be that the results of the study generalise from the sample to the population from which it was chosen (Bryman and Bell, 2011). Levels of significance are considered probability levels; to be specific, the probability of rejecting the null hypothesis, set up beforehand, when it is actually expected to be confirmed (Sridharan, 2015). By testing the level of statistical significance, the researcher is then able to establish the degree of risk taken in rejecting the null hypothesis. “The p-value is the probability of obtaining at least as extreme results given that the null hypothesis is true whereas the significance level α is the probability of rejecting the null hypothesis given that it is true” (Sandra, 2007). Conventionally, the universally accepted maximum level of statistical significance is p < 0.05, which is therefore considered the standard significance level.
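Since the section cites Good (2000) on permutation tests, the decision rule above can be sketched with a small exact permutation test on the fear-of-failure example; the procrastination scores below are invented for illustration.

```python
from itertools import combinations

# Invented scores: procrastination ratings for students reporting
# high versus low fear of failure.
high_fear = [7.1, 6.8, 7.4, 6.9]
low_fear = [4.2, 3.9, 4.5, 4.1]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(high_fear) - mean(low_fear)

# Exact two-sided permutation test: under H0 the group labels are
# exchangeable, so we relabel the pooled scores in every possible way
# and count how often a difference at least as extreme as the observed
# one arises purely by chance.
pooled = high_fear + low_fear
n = len(high_fear)
count = total = 0
for idx in combinations(range(len(pooled)), n):
    g1 = [pooled[i] for i in idx]
    g2 = [pooled[i] for i in range(len(pooled)) if i not in idx]
    if abs(mean(g1) - mean(g2)) >= abs(observed):
        count += 1
    total += 1

p_value = count / total
print(round(p_value, 4))  # 0.0286: only the 2 extreme relabellings out of 70
print(p_value < 0.05)     # True: reject H0 at the standard 5% level
```

Here p ≈ 0.029 falls below 0.05, so the null hypothesis of no relationship would be rejected; had p exceeded 0.05, the researcher would fail to reject H0, exactly as the decision rule in the text describes.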
When the p-value is less than 0.05, it indicates that there are fewer than 5 chances in 100 that the researcher would obtain a sample showing a relationship when there is none in the population (Bryman and Bell, 2011). In other words, when the statistical significance of the findings is equal to or less than 0.05, the researcher can reject the null hypothesis and accept the alternative; conversely, the researcher would fail to reject the null hypothesis (ibid). Once the level of statistical significance of the findings has been tested, computing the test statistic is the following stage. Last but not least, the researcher needs to make the eventual decision and interpret the results. Researchers establish hypotheses derived from theories based on the existing literature, and these hypotheses are tested in order to answer the research questions. As the results of hypothesis tests should eventually lead to rejection, confirmation or reformulation of the theory or model (Newby, 2010), hypothesis testing plays a tremendously important role in conducting a statistical research procedure, even though there are disagreements among scholars as to the circumstances under which hypothesis testing corresponds most appropriately with the rest of the research process (Zar, 1984). The author has therefore discussed the procedure of conducting a hypothesis test, and the concepts most relevant to it, in favour of readers who intend to carry out a statistical research project and wish to gain a basic understanding of the method.

References:

Bryman, A. and Bell, E. (2011) Business Research Methods. 3rd ed. Oxford: Oxford University Press.

Creswell, J. W. (2014) Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Available from: https://books.google.ch/ [Accessed 31 November 2015].

Good, P. (2000) Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses. 2nd ed. New York: Springer-Verlag.

Newby, P. (2010) Research Methods for Education. Pearson.

Sridharan, R. (2015) Statistics for Research Projects: IAP 2015. Available from: http://www.mit.edu/ [Accessed 31 November 2015].

Zar, F. (1984) Quantitative Methods (GEO 441) Hypothesis Testing. Available from: http://webspace.ship.edu/ [Accessed 31 November 2015].
