What Are The Assumptions For One Way ANOVA? Essay Sample For College

Question 1: What are the assumptions for one-way ANOVA? Why is it important to check these assumptions? How is assumption checking done? (2 references).

The analysis of variance, or ANOVA, was developed by R. A. Fisher and is a test of significance between or among means (Keppel & Wickens, 2004). Its purpose is to compare two or more means and determine whether they differ. The one-way ANOVA is used when the data contain only one independent variable. For example, if one wanted to test whether there is a significant difference in the SAT math scores of college freshmen, the dependent variable would be the math scores, while the independent variable would be the grouping of the freshmen according to gender, course, or IQ. The ANOVA is a powerful test for comparing groups and determining whether the differences between them are significant. However, the reliability of the one-way ANOVA in detecting true differences between group means depends on whether the data satisfy all of its assumptions.
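
To make the procedure concrete, the following is a minimal sketch of how such a one-way ANOVA might be run in Python with SciPy, using made-up SAT math scores for three IQ groupings (all data and group labels here are hypothetical, not drawn from the cited sources).

```python
# Minimal one-way ANOVA sketch: do mean SAT math scores differ across
# three hypothetical IQ groupings? (All numbers below are made up.)
from scipy import stats

low_iq  = [480, 510, 495, 470, 505, 490]
avg_iq  = [530, 545, 520, 555, 540, 535]
high_iq = [590, 610, 575, 600, 620, 585]

# H0: the three group means are equal.
f_stat, p_value = stats.f_oneway(low_iq, avg_iq, high_iq)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group mean differs significantly from the others.")
else:
    print("No significant difference between the group means.")
```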

Data subjected to ANOVA must be scrutinized to ensure that the following assumptions are met: independence, measurement scale, normality, and homogeneity of variance (Kirk, 1995). A one-way ANOVA can be carried out confidently only if the observations or scores obtained from the subjects are independent; that is, the groups in the study must be independent of one another. For example, if we want to know the difference in math scores among low, average, and high IQ students, the groups are independent because the students in each group are distinct from one another. The test data should also be measured on at least an interval scale, which can be determined from the kind of data-gathering instrument used. The assumption of normality is related to that of independence: using a large number of respondents tends to produce scores that are approximately normally distributed, whereas small samples may not be. The assumption of homogeneity requires that the groups have roughly equal variances, and it is easier to satisfy when the groups contain the same number of observations. It is important to check whether the data satisfy these four assumptions because violating any of them results in less powerful tests of significance, or in ANOVA results that are false and misleading (Keppel & Wickens, 2004).

Checking the data against the assumptions of one-way ANOVA can be done with a number of tests and simple observations. Independence of the groups or observations can be assessed by determining whether the groups are related or whether they are the same group measured in different situations. The measurement scale can be checked by examining the instruments used in the study (Kirk, 1995). One-way ANOVA requires scores on an interval or ratio scale, so if the observation values are non-numerical, ANOVA cannot be used. Normality and homogeneity can be assessed with a number of statistical measures; when the sample size is large, the scores are more likely to be approximately normally distributed. The skewness of the distribution shows how far the data depart from the normal curve. Likewise, checking for outliers and checking whether the groups have similar spreads indicates the extent of the homogeneity of the data. Outliers can be identified by comparing the highest and lowest scores against the mean. Boxplots can also be used to represent the spread of each group graphically; another option is to compute the standard deviation of the scores in each group and compare them, since similar variances across groups indicate homogeneity.
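
As a rough illustration of these checks, the sketch below (using the same hypothetical groups as before) runs a Shapiro-Wilk test for normality, Levene's test for homogeneity of variance, and a simple skewness and outlier screen; these are common choices, not necessarily the specific procedures the cited authors prescribe.

```python
# Assumption checks before a one-way ANOVA (hypothetical data).
import numpy as np
from scipy import stats

groups = {
    "low":  [480, 510, 495, 470, 505, 490],
    "avg":  [530, 545, 520, 555, 540, 535],
    "high": [590, 610, 575, 600, 620, 585],
}

# Normality: Shapiro-Wilk test and skewness for each group.
for name, scores in groups.items():
    w, p = stats.shapiro(scores)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}, skewness = {stats.skew(scores):.2f}")

# Homogeneity of variance: Levene's test across the groups.
stat, p = stats.levene(*groups.values())
print(f"Levene's test p = {p:.3f} (p > 0.05 suggests roughly equal variances)")

# Simple outlier screen: flag scores more than 2 SD from their group mean.
for name, scores in groups.items():
    arr = np.array(scores, dtype=float)
    outliers = arr[np.abs(arr - arr.mean()) > 2 * arr.std()]
    print(f"{name}: potential outliers -> {outliers.tolist()}")
```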

References

Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher’s handbook (4th ed.). NJ: Prentice Hall.

Kirk, R. (1995). Experimental design: Procedures for the behavioral sciences (3rd ed.). Pacific Grove, CA: Brooks/Cole.

Question 2: What is the difference between multiple regression and logistic regression? Can you think of an example for each method? (2 references)

Multiple regression is one of the oldest statistical techniques; it is used to predict an interval-level dependent variable from a set of independent variables. Multiple regression can establish whether the independent variables influence and account for the variance in the dependent variable at a statistically significant level, and it can also determine the predictive validity of each independent variable (Berk, 2003). The assumptions of multiple regression are linear relationships, homoscedasticity, interval data, the absence of outliers, and an untruncated data range; it can therefore be used only on data that satisfy these assumptions. The linearity of the relationships between the variables has to be established, as it is the most important criterion determining whether the regression results are meaningful. For example, a recruiter for a professional sports team might want to determine which characteristics of an athlete best predict success; the recruiter could gather and quantify measures such as health, lifestyle, and status, and then test which of them predict success in professional sports. One could assume that the relationship of these characteristics to success is linear (follows a straight line). In this way, the recruiter would be able to choose the athlete with the greatest chance of making it big in the sports industry.
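
A minimal sketch of such a multiple regression, assuming simulated athlete data and the statsmodels library (the variable names and coefficients below are invented for illustration only):

```python
# Multiple regression sketch: predicting a continuous "success" score
# from several athlete characteristics (all data simulated).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50
health    = rng.normal(70, 10, n)   # hypothetical fitness measure
training  = rng.normal(20, 5, n)    # hours of training per week
lifestyle = rng.normal(50, 15, n)   # hypothetical lifestyle index
success = 0.5 * health + 1.2 * training + 0.1 * lifestyle + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([health, training, lifestyle]))
model = sm.OLS(success, X).fit()
print(model.summary())  # coefficients, R-squared, and a p-value per predictor
```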

Logistic regression, on the other hand, can be binomial or multinomial, depending on the nature of the dependent variable. Binomial logistic regression is used when the dependent variable is dichotomous and the independent variables can take any form, while multinomial logistic regression is used when the dependent variable has more than two categories (Pampel, 2000). Logistic regression is used when a researcher tries to predict a categorical dependent variable from independent variables that may be continuous or categorical. Thus, logistic regression predicts the probability of an outcome based on the independent variables. Generally, multiple regression is used to predict the value of a continuous dependent variable from a number of different variables, while logistic regression is a more specific kind of regression used when the dependent variable is an attribute or category.

Multiple regression can be used when a researcher wants to predict teacher burnout. The researcher may think that burnout is affected by a host of factors, and determining which of them significantly leads to burnout would be useful to school management. The researcher therefore identifies the variables that might affect and predict burnout and asks the teachers, in a self-report survey, to rate factors such as poor working conditions, demotivated students, lack of resources, and workload. Multiple regression analysis can then test which factors most strongly predict burnout; in this example it might show that lack of resources is the strongest predictor. An example of when to use logistic regression is when a researcher wants to predict teacher burnout from age and gender. The dichotomous variables of gender (male or female) and age (young or old) are examined to test which attributes best predict the incidence of burnout among the individuals studied. The goal is to test whether group membership, that is, sharing a certain attribute or category, also leads to certain conditions as indicated by the measured variables. In this case, logistic regression is used to determine whether being male or female, and whether being young or old, predisposes teachers to experience burnout.
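
A sketch of the burnout example as a binomial logistic regression, assuming statsmodels and entirely simulated 0/1 codes for gender, age group, and burnout:

```python
# Binomial logistic regression sketch: does gender or age group predict
# the odds of reporting burnout? (All data simulated.)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
gender = rng.integers(0, 2, n)   # 0 = male, 1 = female
age    = rng.integers(0, 2, n)   # 0 = young, 1 = old
# Simulated outcome: older teachers are somewhat more likely to report burnout.
logit_p = -0.5 + 0.3 * gender + 0.9 * age
burnout = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(np.column_stack([gender, age]))
model = sm.Logit(burnout, X).fit(disp=False)
print(model.summary())       # log-odds coefficients for gender and age
print(np.exp(model.params))  # odds ratios, easier to interpret
```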

References

Berk, R. (2003). Regression analysis: A constructive critique. Thousand Oaks, CA: Sage Publications.

Pampel, F. (2000). Logistic regression: A primer. Sage Quantitative Applications in the Social Sciences Series #132. Thousand Oaks, CA: Sage Publications.

Lessons Relative To ANOVA And Nonparametric Tests

1. Lessons relative to ANOVA and Nonparametric tests

ANOVA and non-parametric tests were used at Praxidike Systems to analyze the effect of different factors on specific parameters of company performance, namely productivity and customer satisfaction. Statistical analysis was used to identify process changes and improve process quality.

ANOVA assumes that each population being studied has a normal distribution and the same variance. It also assumes that errors are random and independent of each other. If these assumptions do not hold for the population to be tested, a non-parametric test such as the Kruskal-Wallis test is used instead to test the effect of different factors on a specific parameter. The Kruskal-Wallis test assumes only that the data are at least ordinal; it is appropriate when the data are not quantitative or cannot be shown to be normally distributed.

A chi-square test is used to check the normality of the data in order to determine whether to use ANOVA or the Kruskal-Wallis test. If the chi-square test indicates that the data are normal, ANOVA is used to analyze the effect of the different factors on the specific parameters; if the data are not found to be normal, the Kruskal-Wallis test is used instead.
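
The decision rule above might look like the following sketch; it uses SciPy's normaltest, whose test statistic follows a chi-square distribution under the null hypothesis of normality, as a stand-in for the chi-square normality check (the data are simulated and purely illustrative).

```python
# Choose between ANOVA and Kruskal-Wallis based on a normality check.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Three hypothetical samples of a performance measure.
groups = [rng.normal(loc, 2.0, 25) for loc in (12.0, 15.0, 13.0)]

# normaltest's statistic is chi-square distributed under the null of normality.
normal = all(stats.normaltest(g).pvalue > 0.05 for g in groups)

if normal:
    stat, p = stats.f_oneway(*groups)   # parametric route
    print(f"One-way ANOVA: F = {stat:.2f}, p = {p:.4f}")
else:
    stat, p = stats.kruskal(*groups)    # non-parametric route
    print(f"Kruskal-Wallis: H = {stat:.2f}, p = {p:.4f}")
```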

2. Concepts and Analytical Tools Learned

The statistical analysis tools learned, ANOVA and the Kruskal-Wallis test, can be used to determine the effect of different factors on different parameters of company performance. Once the effect of each factor is determined, decision makers gain better insight into how to improve performance and process quality.

Continuous analysis and monitoring of the factors affecting company performance parameters is needed to ensure any company's continued success.

3. Suggestions to Decision Makers of Praxidike Systems to solve Given Challenges:

In order to improve company productivity, statistical analysis was conducted to determine the level of competency of the software engineers. A Kruskal-Wallis test of the competency of the three different levels of software engineers indicated that not all software engineers have the same competency level, since they do not produce the same number of lines of code per day. It is suggested that extra training be provided for software engineers whose competency falls below the company mean.
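
A minimal sketch of that comparison, assuming hypothetical lines-of-code-per-day figures for the three engineer levels (the numbers are invented, not Praxidike data):

```python
# Kruskal-Wallis sketch: do the three engineer levels differ in
# lines of code produced per day? (Hypothetical figures.)
from scipy import stats

junior = [35, 40, 38, 42, 37, 41, 39, 36]
mid    = [48, 52, 50, 47, 55, 49, 51, 53]
senior = [60, 66, 63, 58, 65, 62, 64, 61]

h_stat, p_value = stats.kruskal(junior, mid, senior)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Not all levels share the same productivity distribution.")
```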

In order to further investigate the factors affecting the productivity of the company, represented by lines of code produced per day, a correlation analysis was conducted. It found that competency affects productivity most strongly (96%), followed by the type of project (79%) and scope change (61%).
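
Such a correlation screen could be run roughly as below, assuming each factor has been scored numerically for a set of projects (all values are hypothetical and will not reproduce the percentages reported above):

```python
# Correlation sketch: how strongly does each factor track productivity?
import numpy as np
from scipy import stats

productivity = np.array([40, 55, 48, 62, 50, 70, 45, 65])  # LOC per day
competency   = np.array([3, 5, 4, 6, 4, 7, 3, 6])           # skill rating
scope_change = np.array([5, 2, 4, 1, 3, 0, 6, 1])           # change requests

for name, factor in [("competency", competency), ("scope change", scope_change)]:
    r, p = stats.pearsonr(factor, productivity)
    print(f"{name}: r = {r:.2f} (r^2 = {r**2:.2f}), p = {p:.3f}")
```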

An ANOVA test of these factors confirms their effect on the productivity of the company. Based on the results of these tests, it is suggested that project managers align the competency of software engineers with the complexity of the project, and review the project plan and account for scope changes at the planning stage.

In order to improve client satisfaction with completed projects, which varied from 55 to 90, a correlation analysis was conducted; it indicated a strong effect of the number of defects and of schedule variance on customer satisfaction. It is suggested that project managers review project plans and effort estimates, implement a defect tracking system, and review testing procedures.

Foreign Exchange Hedging Strategies At General Motors

Foreign exchange hedging strategies at General Motors: Transactional and Translational Exposure

Translational exposure is the possibility of a company’s liabilities, equities, income, or asset values changing because of a change in exchange rates. This happens if the firm has some assets in foreign currency. On the other hand, transactional exposure is the risk a company faces due to fluctuations in the exchange rates of other currencies after a firm has already entered into a contract.

General Motors was the biggest automobile manufacturer in 2001, controlling 15.1% of the world’s market share by selling more than eight million vehicle units. GM was founded in 1908 and has since grown to have manufacturing operations in several countries, with its vehicles sold in many more. GM is therefore prone to transactional and translational exposures: it has operations in several countries, its employees are not drawn from the US alone, and it has customers in over two hundred nations around the world. GM had an exposure worth one billion dollars against the Canadian dollar, and there was growing concern over the Argentinean peso, since it was expected to depreciate.

The treasurer, Eric Feldstein, was responsible for all money matters at GM, as well as for managing all the risks involved in its transactions. The firm’s executives had put in place various policies for managing exchange risks and hedging processes. Although these policies showed the way forward in treasury operations, there came an occasion when they had to be changed in order to avoid massive losses. This had to be done in view of the transactional and translational exposures to the Argentinean peso and the Canadian dollar, since these currencies were expected to depreciate (Pacific Basin Economic Council, 1992, pp. 1-11).

Financial risk management operations for GM were all under the risk management committee, which was headed by six of the most senior executives in the firm. They met four times a year to set the policies for market risk management, providing specific and detailed tools and appropriate time frames. There were two centers in the treasurer’s office whose sole purpose was managing all the risks involving foreign exchange:

The New York Finance office was in charge of GM’s subsidiaries in Latin America, Africa, the Middle East, and North America, while a regional treasury center covered Europe and the Asia-Pacific region. This meant that each business entity had an office located in a geographical area close to it. GM’s policy for managing foreign exchange risk was set up to meet three objectives:

Decreasing cash flow and income volatility.

Reducing the time and costs devoted to the management of foreign exchange.

Maintaining a consistent method for managing foreign exchange across its operations.

The first objective meant dealing with transactional exposures while leaving translational exposures unhedged, while the second involved conducting an internal study to establish the best policy at any given time, so as to avoid policies that could harm the firm. They thus ended up with a policy of hedging half of the foreign exchange risk. Risks were determined on a regional basis, and an evaluation was then done to establish which ones needed hedging. Where a currency was seen as highly volatile, hedging was done for the coming six months rather than twelve, and this went a long way toward reducing the risk to five million dollars. The hedging policies were followed closely and reviewed to keep them up to the required standards.

GM-Canada was a significant part of the entire GM operation, since it not only supplied Canada but was also relied upon by other countries, including those in North America. GM-Canada dealt with US-based suppliers, which meant that its cash flow was in US dollars despite its having many assets and obligations in Canadian dollars. This created a financial exposure between the US dollar and the Canadian dollar. The exposure was due to outstanding debts to Canadian suppliers and the amounts to be paid as pension and retirement benefits for Canadian employees. A cash flow exposure of 1.7 billion Canadian dollars was projected for the next twelve months, and GM’s objective was to hedge 50% of this. They initially wanted to hedge 75%, but after calculations it emerged that this could result in a loss. Because GM’s hedging involved very large amounts, they wanted to hedge in a cost-effective manner; hence they used forward contracts, which have zero value on the trade date. Specifically, they had to compare the total exposure with a 50% hedge using forward contracts against the total exposure with a 50% hedge using options (Falloon & Ahn, 1991, p. 31).
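
As a very simplified illustration of that comparison (not GM's actual method), the sketch below values a CAD exposure under a 50% forward hedge versus a 50% option hedge across a few spot-rate scenarios; the exchange rates, the option premium, and the treatment of the exposure as a single CAD amount converted into USD are all hypothetical simplifications.

```python
# Hypothetical comparison of a 50% forward hedge vs a 50% option hedge
# on a CAD 1.7 billion exposure. Rates and premium are illustrative only.
exposure_cad = 1_700_000_000
hedge_ratio = 0.50
forward_rate = 0.65              # USD per CAD, locked in on the trade date
option_strike = 0.65             # at-the-money option on the hedged portion
option_premium_usd = 10_000_000  # hypothetical up-front cost of the options

def usd_value(spot_at_maturity):
    """USD value of the exposure under the forward hedge and the option hedge."""
    hedged = exposure_cad * hedge_ratio
    unhedged = exposure_cad * (1 - hedge_ratio)
    # Forward: the hedged half converts at the locked-in rate, no up-front cost.
    fwd = hedged * forward_rate + unhedged * spot_at_maturity
    # Option: exercised only when the spot rate is worse than the strike.
    opt = (hedged * max(spot_at_maturity, option_strike)
           + unhedged * spot_at_maturity - option_premium_usd)
    return fwd, opt

for spot in (0.60, 0.65, 0.70):  # CAD weakens, unchanged, strengthens
    fwd, opt = usd_value(spot)
    print(f"spot {spot:.2f}: forward = ${fwd:,.0f}, option = ${opt:,.0f}")
```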

Thus, hedging of financial exposure was taken very seriously at GM, as evidenced by the fact that the risk management committee was headed by six of the most senior executives in the firm. This went a long way toward minimizing losses that could arise from these exposures.

References

Desai, Mihir A. and Veblen, Mark F. (2006). Foreign Exchange Hedging Strategies at General Motors: Transactional and Translational Exposures. Harvard University Press.

Falloon, William D. and Ahn, Mark J. (1991). Strategic Risk Management: How Global Corporations Manage Financial Risk for Competitive Advantage. California: Probus Publishing Company.

Pacific Basin Economic Council (1992). Managing Exchange Rate Volatility. California: PBEC International Secretariat.
