Children and Youth Services Review, Vol. 21, No. 2, pp. 147-166, 1999. Copyright © 1999 Elsevier Science Ltd. Printed in the USA. All rights reserved. 0190-7409/99/$ - see front matter
Applications of Multidimensional Similarity Scaling (MDS) in Evaluation Research

Francis G. Hare
Ryerson Polytechnic University

Two cases are presented to illustrate the applicability of multidimensional similarity scaling in evaluation research. In the first case, respondents were the Executive Directors of ten children's service agencies in the downtown area of a large metropolitan center. They each made paired-comparison judgements of interagency similarity which were used to produce a mapping of the agencies. Dimensions were interpreted through key informant interviews and analysis of published program descriptions. The mapping not only identified the types of services that were available but also an area of potential need for service. In the second case, students judged the similarity of required courses in a university Bachelor's degree program designed for child welfare/children's mental health practitioners. Dimensions were interpreted by correlating independently-obtained ratings of the courses with the derived similarity dimensions. The information obtained confirmed that the program was perceived to function as designed and also was useful in a required program restructuring. The implications of adopting multidimensional similarity scaling as an analytic tool in evaluation research are discussed.
Multidimensional Similarity Scaling (MDS) is an analytic technique that has been used in a variety of disciplines to map the inter-relationships of complex stimuli (Shepard, Romney and Nerlove, 1972; Davidson, 1983). Examples range from music excerpts (Hare, 1977) and the visual arts (Goude, 1972; Berlyne, 1975) through geographical analysis (Smith and Hanham, 1981; Jones and Simmons, 1993) to marketing research (Aaker and Day, 1990). The goal of the present study is to explore the applicability of multidimensional similarity scaling in the field of program evaluation research by presenting two cases in which the questions are of the type encountered by evaluation researchers. This exercise is conceptually similar to other studies designed to extend the range of methods available to evaluation researchers. Two recent examples are the work of Johnson (1995) using conjoint analysis and that of Osgood and Smith (1995) using hierarchical linear modeling.

Author note: The author wishes to acknowledge the financial support received from the Faculty of Community Services Research and Scholarship Committee, Ryerson Polytechnic University, as well as the research assistance in Case 1 provided by Ms. P. MacKenna, CYW, BAA (CYC). An earlier version of this paper was presented at the joint conference of the American Evaluation Association and the Canadian Evaluation Society, November 1995. Requests for reprints should be addressed to Dr. Francis G. Hare, Director, School of Child and Youth Care, Ryerson Polytechnic University, 350 Victoria Street, Toronto, Ontario, Canada, M5B 2K3. [email protected]

Case 1: Mapping of social services for children and adolescents
Whether one is conducting a needs assessment to identify and address gaps in existing services or undertaking a rationalization of services to eliminate overlap and duplication, it is necessary to have some procedure that will map the range of available services. One common approach is through factor analytic or clustering techniques, in which a set of "service variables" is hypothesized and existing services are arranged on pre-determined variables. Shields (1991), for example, used this procedure to map services in a rapidly-expanding suburban region bordering a large metropolitan area. This first case demonstrates how multidimensional similarity scaling can also be used to address the same issue, without the necessity of initially hypothesizing which factors or dimensions are most salient. The three questions to be addressed in this case are: a) Can child and youth serving agencies be thought of as stimuli that could be compared in terms of their similarity? b) Will the dimensions of similarity that emerge from the analysis be interpretable? c) Are there any particular concerns in using MDS procedures in this area?
Method

Ten child and youth serving agencies in a large metropolitan area were selected for inclusion in the study according to the following criteria: (a) each agency should have some familiarity with each of the other agencies, (b) the set of agencies should be within reasonable geographical proximity to each other, and (c) each agency should have a fax number, to facilitate the transmission of documents and correspondence.
A cover letter explaining the project and introducing the research assistant was sent to the Executive Director of each of the selected agencies. In this letter they were informed that they would be contacted by the research assistant and invited to participate in the project. Included in this initial mailing was a copy of the paired-comparison similarity ratings that they would be requested to complete. A similar letter was sent for information to the Executive Director of the umbrella organisation to which all of the selected agencies belonged. Although the Executive Director of the umbrella organisation could have also been invited to participate as a respondent, the decision was made to restrict respondents to those actively involved in running service agencies. The judgement task was described as involving ratings on a five-point scale of the perceived similarity of each of the ten agencies to each of the other nine agencies. This resulted in 45 non-redundant pairings of agencies. It was explained that each Executive Director was assumed to be most familiar with his or her own agency, but that it was important that they also compare each of the other nine agencies with each other. A rating of "1" on the scale indicated that the two agencies being compared were "Very Similar" and a rating of "5" indicated that the two agencies being compared were "Very Dissimilar". It was stressed that the terms "similar" and "dissimilar" were being left undefined intentionally, because we sought to discover what criteria they were using rather than imposing a set of criteria on their judgments. It was also stressed that these were not "evaluative" judgments and that we were not seeking to identify strengths or weaknesses of particular programs but rather to develop a "map" of available services. A second general letter was sent to the selected agencies one month after the initial communication.
This was in response to concerns expressed by some of the Executive Directors regarding the purpose of the study. Specifically, they suspected that the project had been commissioned by a government body as part of a general cost-cutting agenda. Assurance was given that the study was internally funded by the author’s University and primarily designed to assess the applicability of the multidimensional similarity scaling procedure. They were also assured that individual responses would remain confidential, that the agencies themselves would not be identified and that the overall results would be shared within the group of respondents.
Results of Similarity Scaling

A) Derivation of Similarity Space
Completed paired-comparison scales for the 45 non-redundant pairings of ten agencies were received from seven of the ten potential respondents. Since the returned scales carried no identifying information on the individual respondent, it was not possible to analyze for systematic bias in the pattern of returns. Data were analyzed using the INDSCAL multidimensional similarity scaling model (Carroll and Chang, 1970), which in general not only produces interpretable dimensions without the need for rotation of axes, but also allows for a comparison of respondents through an analysis of subject weights which indicate the relative salience of specific dimensions. This model is readily available through the SPSS package (e.g. SPSS Inc., 1995) via the Scaling / Alscal / Model: INDSCAL menu. Choice of appropriate dimensionality of the solution is governed by both objective and subjective factors. Objectively, the question is one of how much the fit of the data to the model improves by the addition of dimensions. Subjectively, the question is one of whether the dimensions are interpretable. Fewer dimensions generally make interpretation easier, while more dimensions generally capture more of the data. In the current case, RSQ values were compared for 2-, 3-, 4- and 5-dimensional solutions. RSQ values indicate the proportion of the variance of the scaled data which is accounted for by their corresponding distances. RSQ values for the solutions were as follows:

2-dimensional: RSQ = .483
3-dimensional: RSQ = .623
4-dimensional: RSQ = .656
5-dimensional: RSQ = .626
Based on the substantial increase in the proportion of the variance accounted for in moving from a two to a three dimensional solution and the minor increase thereafter, a three-dimensional solution was chosen. The subjective criterion of interpretability must now be applied to the derived similarity space. The values of each of the ten agencies on each of the three dimensions are given in Table 1.
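The logic of comparing fit across candidate dimensionalities can be illustrated outside SPSS. The sketch below uses classical (Torgerson) metric MDS rather than INDSCAL, which is a simplification, and the dissimilarity matrix is simulated rather than taken from the agency data; the proportion of variance captured by the leading dimensions plays a role analogous to the RSQ values reported above.

```python
import numpy as np

def classical_mds(dissim, k):
    """Classical (Torgerson) MDS: embed a symmetric dissimilarity matrix
    in k dimensions and report the proportion of variance captured,
    a rough analogue of ALSCAL's RSQ."""
    n = dissim.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (dissim ** 2) @ J         # double-centered scalar products
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1]          # largest eigenvalues first
    evals, evecs = evals[order], evecs[:, order]
    pos = np.clip(evals, 0.0, None)          # negative eigenvalues carry no variance
    coords = evecs[:, :k] * np.sqrt(pos[:k])
    fit = pos[:k].sum() / pos.sum()
    return coords, fit

# Hypothetical averaged ratings for 10 agencies (1 = very similar,
# 5 = very dissimilar), standing in for the Executive Directors' data.
rng = np.random.default_rng(0)
d = rng.uniform(1.0, 5.0, size=(10, 10))
dissim = (d + d.T) / 2.0
np.fill_diagonal(dissim, 0.0)

for k in (2, 3, 4, 5):
    _, fit = classical_mds(dissim, k)
    print(f"{k}-dimensional solution: variance accounted for = {fit:.3f}")
```

As with the RSQ comparison above, the analyst looks for the point at which adding a dimension stops buying an appreciable improvement in fit.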
Table 1. Values of Agencies on Dimensions (Mapping of Children's Service Agencies)
Spatial representation of the similarities among agencies is given in Figure 1. Only Dimensions I and II are presented, primarily because the essential differentiation on Dimension III is between Agency #1 and everyone else. Practical considerations in the presentation of solutions of more than two dimensions will be discussed below.
Figure 1. Dimensions I and II: Mapping of Children's Service Agencies
B) Interpretation of Similarity Space
Various techniques are available to assist in the interpretation of dimensions derived through MDS, including key informant interviews, the use of published program descriptions, focus group discussions of the obtained dimensions and the correlation of rating scales of hypothesised descriptors with the obtained dimensions. The first two of these methods are used in the present case, while the latter two will be used in the second case in this project. Four of the Executive Directors who initially did the similarity judgement task were interviewed individually. They were asked to look at the results of the scaling and offer their interpretations of what the dimensions might represent. In effect they were being asked to describe what they were thinking of when they made their similarity judgments and what they thought the other respondents were thinking of as indicated by the overall perceptual map. In addition to this data source, program descriptions published in a community services directory (The Community Information Centre of Metropolitan Toronto, 1994) were also consulted.
Dimension I: The interviews with the Executive Directors and the published program descriptions suggest that Dimension I represents a treatment/therapeutic emphasis, with agencies 4 and 10 at one end and agencies 9 and 7 at the other. Agencies 9 and 7 provide residential and support services to pregnant young women and single mothers, while agencies 4 and 10 offer a variety of services to youth (approximately 12 to 21 years of age), including individual, group and family counselling. Agencies 4 and 10 also operate day treatment programs offering "educational and therapeutic learning experiences". Agency 5 is the only other one located in the same quadrant as agencies 9 and 7 (i.e. Dimension I negative, Dimension II positive) and it is also designed to serve young women, albeit with a broader range of services. Most of the other agencies are relatively close to the midpoint on Dimension I.

Dimension II: This appears to be a gender-based residential service dimension. The defining clusters are agencies 3, 8 and 6 at one end and agencies 5, 10 and 2 at the other. The other four agencies are relatively close to the midpoint on Dimension II. The six agencies that define this dimension all have a significant residential component, with the principal difference between the two clusters being one of gender. The three agencies at the negative end (numbers 3, 8 and 6) are primarily if not exclusively services for young men, while the residential services of agencies 5, 10 and 2 are designed for young women.

Dimension III: The fundamental dichotomy on this dimension is agency #1 vs. everyone else, although agency #2 could be considered an anchor at the other end. Both the key informant interviews and the published program descriptions support the hypothesis that this dimension represents the variety of services offered by the agency.
Agency #1 not only has multiple listings in the Directory of Community Services (op. cit.), but it is also known for the broad range of clients served. Considering agency #2 as the other end of this dimension, it is rather narrowly specialized in providing long-term residential care for young women.
Discussion of Case 1: Mapping of social services for children and adolescents
Returning to the questions being addressed in this case, the Executive Directors of the agencies involved in this project were capable of conceiving agencies as stimulus objects and making comparative judgments of their similarity. In addition, selection of a three-dimensional solution was supported by the data and did seem to offer interpretable dimensions of similarity. In short, the primary objective of the project, namely to investigate the applicability of multidimensional similarity scaling in the areas of program evaluation and the mapping of human services, was realized.

An advantage of the MDS approach is that one does not need to predetermine the relevant bases for comparison of the stimulus objects. Instead a "mapping from below" is used, asking respondents to make paired-comparison similarity judgments and then using the resultant configuration of those objects as a basis for generating hypotheses about the conceptually salient dimensions of the stimulus array. In this particular case, one could have started with the Directory of Community Services and attempted to list all of the characteristics of a set of agencies. That list, however, would have been so extensive as to be of relatively little use. Furthermore, two important questions would have been left unanswered by that approach, namely: a) which of those characteristics are most important, and b) are there any important characteristics that are not listed. By starting with the judgments of similarity and then moving to interpretation, these questions do not even arise. The dimensions that emerge from the similarity judgments are by definition the ones that are most salient. Another way of expressing this is to say that the approach represents a form of inductive reasoning in which observations are gathered with minimal preconceptions and patterns of relationships are discovered and interpreted.

One drawback of the technique is the number of paired comparisons that must be done by each respondent.
Given a set of n stimulus objects, the number of non-redundant pairings will be (n × (n - 1)) / 2. In the current case, comparisons among ten agencies required 45 pairings. Had the set consisted of twelve agencies rather than ten, the number of comparisons would have jumped to 66. There is obviously a trade-off between thoroughness of the stimulus array being mapped and the patience level of the respondents. Another potential difficulty with the technique is that of how the multidimensional space is to be depicted. It was fortunate in the current case that most of the information in the three-dimensional solution could be conveyed in the
first two dimensions. The third dimension served to differentiate one multiservice agency with an extremely broad range of client groups from all other agencies that tended to be more narrow in their mandates. A three-dimensional drawing was used to present the results in the key informant interviews, although as much time was spent explaining the drawing as probing for hypotheses about dimensions. Although there is some potential in the use of architectural CAD software, it would probably be necessary to have it installed on a notebook computer and allow those being interviewed to freely rotate the space while they offer their interpretations. An explanation using the idea of people gathered in a room, with similar people standing close together and dissimilar people standing further apart, was also attempted, although to incorporate a third dimension it is necessary to talk about the relative height of individuals as a further indicator of their similarity. The question of how to conceptualize four- or five-dimensional solutions is obviously more problematic. For this reason, the most useful reporting technique would involve multiple presentations of dimensions taken two at a time.

There was some concern expressed by the respondents over the funding source for this project and the potential uses to which the results could be put. Given the current economic and political climate, agencies are understandably reluctant to take part in something that may make them look redundant. Some respondents were suspicious that a government agency was secretly funding the project with an agenda of service elimination. Others were worried that their agency would be named in this report or that a careful reader could divine their identities. Although confidentiality has been maintained in this report, it must be acknowledged and even stressed that these anxieties are reality-based and must be taken into account.
Such concerns are part and parcel of program monitoring and evaluation, although the best form of protection for service providers is to be able to document effective service efficiently delivered. It is also worth noting that the similarity space generated in Case 1 not only mapped existing services but also suggested a gap in service. If Dimension I is thought of as ranging from supportive service to therapeutic intervention and Dimension II is thought of as ranging from services primarily for males to services primarily for females, then the absence of services in the lower left quadrant (supportive services for males) may serve as the basis for a needs assessment. Services that are supportive of males do exist in the broader social service community. For example, one finds programs that are supportive of adolescent fathers (e.g. Gadd, 1996a, 1996b), students at risk for dropping out of high school (e.g. Ham and Federspiel, 1997) and minority
males (e.g. Owens, 1997). However, at least within the group of agencies represented in Case 1, the more common model is of therapeutic intervention for males and females and supportive services for females. It should be noted that these results reflect the perception of agency Executive Directors; in other words, they perceive there to be a lack of supportive services for males within the group of agencies in question. This does not mean that there are no other agencies providing such services, but it does underscore the potential usefulness of the technique in needs assessment.

Case 2: Analysis of an academic program
In this second case the focus shifts from a mapping of existing services to an assessment of whether the components of a program are perceived to function as designed. In the latter part of the 1980s a part-time degree-completion program was developed for experienced child welfare / youth work practitioners who held a community college diploma (known in some jurisdictions as an "Associate's Degree") in the field and wished to upgrade to the baccalaureate level. The goals of the program were to enhance the career options of the workers in this field by presenting a curriculum that emphasized human service organizational issues, research methods and the research literature of the field, and advanced therapeutic strategies. The curriculum was designed through extensive consultation with practitioners and their professional association, faculty in the college diploma programs, and employers. As students began graduating with their bachelor's degrees, the opportunity presented itself to take stock of how the program as implemented measured up to the initial intentions. Funding cutbacks at the university necessitated a thorough re-evaluation of the curriculum, and one reasonable starting point was the perception of the clients (mature students) of the service they were receiving.

Students entered the program in the third year of a four-year bachelor's degree. Through an articulation agreement with the community colleges, credit for the first two years of the degree program was granted on the basis of the students' three-year diploma. Approximately one-third of the credits necessary for graduation came from elective courses, with the other two-thirds being specific required courses. In the present analysis only those courses designated as "required" were included, given the variability of course selection for the elective credits.
Method

A letter was sent to a sample of ten students who had completed all of the required courses in their academic program, although they may have still needed certain elective courses to be eligible to graduate. In other words, all respondents had taken and received credit for all of the courses they would be asked to compare. In the letter these students were invited to take part in a study designed to "develop a map of the program requirements". They were asked to use a seven-point rating scale to judge the similarity of each of eleven courses (consisting of 14 semesters of course work) to each of the other ten courses. A response of "1" was described as indicating that the courses were highly similar, while a response of "7" indicated that they were highly dissimilar. It was explained that the procedure required judgments of 55 non-redundant pairs of courses, with the phrase "non-redundant" explained as follows: "No course is paired with itself, and I assume that the similarity of course A with course B is the same as the similarity of course B with course A." The letter also listed the course title for each of the courses to be compared, to ensure that students would recall the appropriate course, a doubtful prospect if only the course number were given. The paired-comparison sheet attached to the cover letter did not call for any identification of the respondent, although two students did sign the sheet.
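The construction of the non-redundant pairs described in the letter is mechanical and can be sketched in a few lines; the course labels here are placeholders, not the program's actual course numbers.

```python
from itertools import combinations

def nonredundant_pairs(stimuli):
    """Unordered pairs of distinct stimuli: n * (n - 1) / 2 in total.
    No stimulus is paired with itself, and (A, B) is not repeated as (B, A)."""
    return list(combinations(stimuli, 2))

courses = [f"Course {i}" for i in range(1, 12)]    # eleven stimuli
pairs = nonredundant_pairs(courses)
print(len(pairs))                                  # 55 pairs for eleven courses

# The ten agencies of Case 1 yield 45 pairs; twelve stimuli would yield 66.
print(len(nonredundant_pairs(range(10))))          # 45
print(len(nonredundant_pairs(range(12))))          # 66
```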
Results of Similarity Scaling

Derivation of Similarity Space

Responses were received from five of the ten students. Given the anonymity of the responses, it was not possible to analyze for systematic bias in the pattern of returns. As in Case 1, the data were analyzed using the INDSCAL model. Both two- and three-dimensional solutions were generated and either one could be defended as appropriate pending an analysis of their interpretability. RSQ values, as explained in Case 1, represent the proportion of the variance of the scaled data which is accounted for by their corresponding distances. These values are based on subject weights which measure the importance of each dimension to each subject. Squared weights on each dimension sum to the RSQ values. In the three-dimensional solution, the subject weights produce an RSQ value of .525. In the two-dimensional solution the subject weights produce an RSQ value of .458. The values of each of the eleven courses on each dimension of the two- and the three-dimensional solutions are given in Table 2.
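The relationship between subject weights and RSQ stated above can be illustrated numerically. The weights below are invented for illustration only; they are not taken from the study's actual ALSCAL output.

```python
import numpy as np

# Hypothetical INDSCAL subject weights for five respondents on a
# three-dimensional solution (rows = respondents, columns = dimensions).
weights = np.array([
    [0.58, 0.42, 0.24],
    [0.48, 0.45, 0.30],
    [0.60, 0.35, 0.20],
    [0.50, 0.42, 0.28],
    [0.52, 0.38, 0.33],
])

# For each respondent, the squared weights across dimensions sum to that
# respondent's RSQ; larger weights mark the dimensions most salient to them.
rsq_per_subject = (weights ** 2).sum(axis=1)
print(rsq_per_subject.round(3))

# An overall fit figure can be taken as the mean across respondents.
print(round(rsq_per_subject.mean(), 3))
```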
Table 2. Values of Courses on Dimensions (Analysis of Academic Program)
Interpretation of Similarity Space

The interpretation of the obtained dimensions of similarity began with a focus group discussion in which the courses were organized on each of the dimensions and presented to a group of students who had not taken part in the initial judgment task. Several suggestions were made as to what the dimensions might represent, and these were then incorporated into the following 5-point Likert scales with poles Strongly Agree and Strongly Disagree:
Question 1: This course involves doing independent research.
Question 2: This course has a major emphasis on children and youth.
Question 3: This course emphasizes the clinical / therapeutic side of the field.
Question 4: This course emphasizes the management / organizational side of the field.
Question 5: This is a difficult course.
Question 6: This course helps my career.
Ratings of each of the eleven courses on each of the six scales were done by a third group of students (N = 9) who had not taken part in either the similarity scaling or the focus group. The mean value of respondent ratings of each course on each question was calculated; these means are presented in Table 3.
Table 3. Mean Values of Courses on Questions (Analysis of Academic Program)
1 = Strongly agree, 5 = Strongly disagree
Course values on the five derived dimensions were transformed to a set of positive numbers by adding a constant value of 1.80. This linear transformation retains the interval relationships between variables, which alternative techniques such as squaring the numbers would not. A correlation matrix was then calculated for the eleven variables (five dimensions and six questions). Given a relatively small N and a decision to accept only those correlations which were significant at p < .05, the results represent a relatively stringent and conservative test of the relationships between variables. In an exploratory
study such as this, with the goal of testing the applicability of a method in a new context, it is appropriate to emphasize results that are significant in both the statistical and the explanatory senses. Significant correlations are noted in Table 4.
Table 4. Significant Correlations. Correlations listed are significant at p < .05.
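The correlation step described above can be sketched as follows. The data are simulated stand-ins for the Table 2 dimension values and Table 3 question means, and `scipy.stats.pearsonr` is assumed to be available; the shift check illustrates why the +1.80 transformation leaves the correlations untouched.

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated stand-ins for eleven course values on one derived dimension
# and their mean ratings on one Likert question (1 = strongly agree,
# 5 = strongly disagree); these are not the study's actual numbers.
rng = np.random.default_rng(0)
dim_1 = rng.normal(size=11)
question_3 = 3.0 - 0.8 * dim_1 + rng.normal(scale=0.5, size=11)

r, p = pearsonr(dim_1, question_3)
print(f"r = {r:.2f}, p = {p:.3f}")

# Adding a constant (as with the +1.80 shift applied to the dimension
# values) is a linear transformation, so the correlation is unchanged.
r_shifted, _ = pearsonr(dim_1 + 1.80, question_3)
assert abs(r - r_shifted) < 1e-9

# Retain only correlations significant at p < .05, as in Table 4.
if p < 0.05:
    print("retained for interpretation")
```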
Given the significant correlation between Dimension I of the two-dimensional solution and Dimension II of the three-dimensional solution, and between Dimension II of the two-dimensional solution and Dimension I of the three-dimensional solution, and given both the relatively small contribution of Dimension III of the three-dimensional solution and its lack of clear interpretation, a two-dimensional solution is most appropriate in this case. Values of the courses on the dimensions are presented in Figure 2.
Figure 2. Dimensions I and II of the Two-Dimensional Solution (Analysis of Academic Program)
Dimension I is significantly negatively correlated with questions 3 and 6. Question 3 refers to an emphasis on the clinical / therapeutic side of the field and question 6 asks whether the course helps the student's career. Higher values on the scales indicate disagreement with the questions, so the negative correlation with stronger disagreement is in effect a double negative, meaning that courses with higher values on Dimension I are related to career goals and an emphasis on the clinical / therapeutic side of the field. It is also worth noting that these two questions were themselves significantly correlated. The courses that define Dimension I are numbers 6, 7 and 11 at the negative end and numbers 3, 8, 9, 10, 1 and 5 at the positive end. Courses numbered 6, 7 and 11 are designated as "required professionally-related courses" and were designed to offer historical, literary and sociological perspectives on children and adolescents. These were the only three courses that were not considered "professional core" and the students did not see them as contributing to their career goals. At the positive end of the dimension, course number 3 was the independent study / thesis, course 8 was research methodology, course 9 was developmental psychopathology, course 10 was special topics in behaviour disorders, course 1 was advanced therapeutic interventions and course 5 was program evaluation research. The two courses at the mid-point of this dimension, numbers 4 and 2, dealt with organizational issues and
professional identity, respectively. They were rated by students as contributing to career goals but not through an emphasis on clinical / therapeutic issues, as may be seen in Table 3.

The keys to interpreting Dimension II are the significant positive correlation with Question 4 (management / organizational emphasis) and the significant negative correlation with Question 2 (emphasis on children and youth). Because the questions were phrased such that higher values indicate stronger disagreement, higher values on Dimension II indicate less of an organizational emphasis. The courses that define the negative end of Dimension II (i.e. more management / organizationally oriented) are course #4, called Management in Human Service Organizations, and course #2, which deals with professional identity within the human service system. Defining the positive end of the dimension (i.e. less organizationally oriented) are course #1, on therapeutic interventions, and course #10, a special topics course on behaviour disorders. The negative correlation of Dimension II with Question 2 indicates that as one moves from lower to higher values on the dimension the respondents are less likely to disagree that the course emphasizes children and youth. While the courses noted above were described as defining the organizational interpretation of the dimension, it could also be noted that the two lowest values on Question 2 (see Table 3) were courses #9 and #11, both dealing with behavioural disorders among children and adolescents and both located toward the positive end of Dimension II. Similarly, the two highest values on Question 2 were courses #4 and #8, with virtually no child-specific content, #4 being a management course and #8 being research methods.

One somewhat surprising outcome was that Question 5, dealing with course difficulty, was not significantly correlated with either of the two dimensions derived from the similarity judgments.
This question approaches a significant correlation with two variables, namely Dimension I (r = -.55, p = .08) and Question 6 (r = .53, p = .09), which means that students may see their more challenging courses as making a greater contribution to their career goals. A larger scale replication may bear this out. A second surprise was the lack of correlation between Question 1 and either of the dimensions. Question 1 asks whether the course involves "doing independent research", which articulates one of the program goals. Question 1 does approach significance in its correlation with Dimension I (r = -.53, p = .09) and it is significantly correlated with Question 6 ("helps my career"). The explanation of the lack of significant correlation with the two dimensions may be found in Table 3, in which it is clear that almost all of the courses are
rated as involving independent research. The two exceptions to this are the literary and historical perspectives courses (numbers 7 and 6, respectively) and, perhaps ironically, course number 8 on research methods. To summarize this section on interpretation of the similarity dimensions, it appears that Dimension I represents the extent to which the courses are perceived to contribute to students' career goals. Dimension II differentiates an emphasis on the clients of service from an emphasis on the management and organizational aspects of service delivery.

Discussion of Case 2: Analysis of an academic program
In the current case the issue being investigated was whether the multidimensional similarity scaling technique could be used to assess whether the conceptual framework underlying the design of an academic curriculum was in fact perceived by students of that program. The guiding principles in program development were to offer academic upgrading that would enhance career options, and to keep a clear focus on children and youth and on therapeutic strategies while introducing organizational management content and a research orientation. Professional development and a therapeutic orientation defined the first similarity dimension, while the program emphases on children and youth and on organizational issues defined the second dimension. The research orientation appears not to have been picked up by the similarity dimensions, primarily because it was perceived to permeate the curriculum.

The paired-comparison task in this case involved eleven stimuli, resulting in 55 comparisons, which is more than the 45 comparisons called for in Case 1. This may have contributed to the lower response rate in this case, although the technique appears to be sufficiently robust as to produce useful information with relatively few respondents. However, it may be the case that eleven or twelve stimuli for paired comparisons is the upper limit of respondents' tolerance. The interpretation technique used in this case involved focus group discussion to generate hypotheses about possible interpretations of dimensions, followed by a testing of those hypotheses using Likert scales. While this is a somewhat circuitous route, it does ensure that the meanings attached to the dimensions emerge from the stakeholders rather than being imposed by the researcher.

As noted in introducing Case 2, financial belt-tightening within the university served as part of the background of this particular project. It was necessary to reduce and/or restructure the courses in this program. Client
feedback, in this case the judgments of students in the program, assisted the decision-making process. Three of the required courses were deleted from the curriculum, another was restructured, and the range of elective choices was expanded.
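The analytic workflow underlying both cases, collecting paired-comparison dissimilarity judgements and deriving a low-dimensional configuration from them, can be sketched with current open-source tools. The original analyses were run in SPSS; the snippet below is only a minimal illustration of the general technique, using scikit-learn's nonmetric MDS on a small hypothetical dissimilarity matrix, not a reproduction of the study's data or its individual-differences (INDSCAL) model.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical averaged paired-comparison judgements for four stimuli
# (e.g., four courses): a symmetric dissimilarity matrix with a zero
# diagonal. Real data would come from respondents' ratings.
dissim = np.array([
    [0.0, 2.0, 5.0, 6.0],
    [2.0, 0.0, 4.0, 5.0],
    [5.0, 4.0, 0.0, 1.0],
    [6.0, 5.0, 1.0, 0.0],
])

# Nonmetric MDS recovers a two-dimensional configuration that preserves
# the ordinal structure of the dissimilarities.
mds = MDS(n_components=2, metric=False,
          dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

for i, (x, y) in enumerate(coords, start=1):
    print(f"stimulus {i}: ({x:+.2f}, {y:+.2f})")
```

In the resulting configuration, stimuli judged similar (here 1 and 2, or 3 and 4) plot close together, and the researcher's task, as in both cases above, is then to interpret the axes.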
Concluding Comments

The basic rationale for this project was to test the applicability of multidimensional similarity scaling as a research technique in program evaluation. On the whole, the technique appears to hold potential as an additional tool in the kit of evaluation researchers. Two cases were presented to illustrate different types of research questions and different approaches to interpreting the obtained dimensions. The basic task facing respondents is relatively straightforward, although it is sometimes necessary to stress that the goal is to understand how they view the stimulus objects, in response to the question respondents often put to the researcher: "What do you mean by similarity?" The potential of the technique is to allow respondents a greater voice in defining the issues that interest them, rather than having criteria imposed by the researcher. In this sense the approach is one of "evaluation from below", with roots deeply embedded in the traditions of inductive reasoning.

The number of paired comparisons required increases dramatically with the number of stimulus objects in the set. For this reason the technique is probably of limited value when the number of objects exceeds 12, although overlapping partial matrices of comparisons could be constructed to maximize the information obtained while minimizing the demands on the respondents.

As noted in the discussion of Case 1, the issue of how best to depict solutions of three or more dimensions is problematic. It may be best resolved through multiple presentations of two-dimensional solutions, although a well-constructed three-dimensional model could be used. However, as one goes above three dimensions, not only does visualization become a challenge, but the number of graphs taking dimensions two at a time grows rapidly. Fortunately, the scaling procedure generally tends to produce meaningful information in three or fewer dimensions, as it did in each of the two cases discussed here.
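The growth claims in the preceding paragraphs are simple combinatorics: a set of n stimuli requires n(n-1)/2 paired comparisons, and a d-dimensional solution taken two axes at a time yields d(d-1)/2 plots. The short illustration below (arbitrary set sizes, chosen only to show the arithmetic) makes the escalation concrete.

```python
from math import comb

# Paired comparisons needed for n stimuli: C(n, 2) = n(n-1)/2.
# Case 1 used 10 stimuli (45 comparisons); Case 2 used 11 (55).
for n in (10, 11, 12, 15, 20):
    print(f"{n} stimuli -> {comb(n, 2)} paired comparisons")

# Two-dimensional plots needed to depict a d-dimensional solution
# two axes at a time: C(d, 2).
for d in (2, 3, 4, 5):
    print(f"{d}-dimensional solution -> {comb(d, 2)} pairwise plots")
```

At 20 stimuli the task balloons to 190 judgements per respondent, which is why overlapping partial matrices become attractive beyond roughly a dozen stimuli.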
One area of potential applicability that was not explored in this project is that of analyzing different perspectives of different stakeholders. The individual differences component of the model used in this study could be used, for
example, to compare the perspectives of funding sources, agency staff and agency clients on different components of service. Finally, it must be acknowledged that the scaling technique is somewhat complex, computer dependent and probably unfamiliar to many in the field of services for children and youth. Other tests of its applicability are clearly warranted. If on balance it proves to be an effective and useful tool, then the effort required to learn and use the procedure will be worthwhile.
References

Aaker, D. A. & Day, G. S. (1990). Marketing Research (4th ed.). Toronto: John Wiley and Sons.
Berlyne, D. E. (1975). Dimensions of perception of exotic and pre-Renaissance paintings. Canadian Journal of Psychology, 29, 151-173.
Carroll, J. D. & Chang, J. J. (1970). Analysis of individual differences in multidimensional scaling via an N-way generalization of "Eckart-Young" decomposition. Psychometrika, 35, 283-319.
Community Information Centre of Metropolitan Toronto. (1994). Directory of Community Services in Metropolitan Toronto. Toronto, Canada: Author.
Davidson, M. L. (1983). Multidimensional Scaling. Toronto: John Wiley and Sons.
Gadd, J. (1996a, January 18). Investing in teen fathers a hard sell. The Globe and Mail, p. A12.
Gadd, J. (1996b, January 18). Program helped young fathers cope. The Globe and Mail, p. A12.
Goude, G. (1972). A multidimensional scaling approach to the perception of art. Scandinavian Journal of Psychology, 13, 258-284.
Hare, F. G. (1977). Dimensions of music perception. Scientific Aesthetics, 1, 271-290.
Hare, F. G. & Federspiel, B. (1997, November). Developing models of collaboration in a drop-out prevention program. Paper presented at the meeting of the American Evaluation Association, San Diego, CA.
Johnson, R. B. (1995). Estimating an evaluation utilization model using conjoint measurement and analysis. Evaluation Review, 19, 313-338.
Jones, K. & Simmons, J. (1993). Location, Location, Location: Analyzing the retail environment. Toronto: Nelson Canada.
Osgood, D. W. & Smith, G. L. (1995). Applying hierarchical linear modeling to extended longitudinal evaluations: The Boys Town follow-up study. Evaluation Review, 19, 3-38.
Owens, R.G. (1997, Summer). Mentoring provides solutions. National Dropout Prevention Newsletter,
Shepard, R. N., Romney, A. K. & Nerlove, S. B. (1972). Multidimensional Scaling: Theory and applications in the behavioral sciences. New York: Seminar Press.
Shields, C. (1991). Alternatives to traditional residential treatment - Project report. Mississauga, Canada: Ontario Ministry of Community and Social Services.
Smith, C. J. & Hanham, R. Q. (1981). Anyplace but here! Mental health facilities as noxious neighbors. Professional Geographer, 33, 326-334.
SPSS Inc. (1995). SPSS for Windows (Version 6.1.2) [Computer software]. Chicago, IL: Author.