Background and Introduction
In a world where the Internet provides us with instant access to the sum of all human knowledge, traditional fact-based teaching is no longer the most effective way to equip students to succeed in their future endeavours, be they in the workplace or elsewhere. For the students of today to achieve success in the modern world, they require attributes and skills that allow them to take the wealth of information that is readily available and use it in meaningful ways.
Two such advantageous attributes are the possession of high levels of working memory and creative problem-solving skills. Working memory retains, and allows processing of, information relevant to the task at hand, while creative problem-solving is the mental process of creating a solution to a problem independently rather than applying a previously learned solution.
While there is growing recognition of the importance of these attributes (in a recent study carried out by Adobe (2018), 97% of educators and 96% of policymakers agreed that creative problem-solving is important for students to learn in school), they are not consistently tested for within education. Although numerous validated tests for working memory exist, they are typically utilised only when assessing a struggling student rather than applied consistently across all pupils. For creative problem-solving, fewer validated tests exist. Without such instruments, it is more difficult for academic institutions to assess the presence of these desirable attributes within their cohorts, and to tailor interventions to remediate any weaknesses that such assessment might reveal.
Differing theories exist on the relationship between working memory and problem-solving ability. Hambrick and Engle (2003) present arguments to support the contention that subjects with higher levels of working memory enjoy a correlating higher level of problem-solving capability, based in part on an improved capacity to suppress irrelevant or misleading recalled information. However, Wiley and Jarosz (2012) argue that this relationship between levels of working memory and ability to solve problems applies to analytical problem-solving only, and that there is in fact an inverse relationship between a subject’s working memory and their creative problem-solving ability. This could be attributed to the Einstellung effect, a tendency to attempt to solve a given problem in a specific way (based on recollection of past success) even if superior methods for solving the problem are available to those willing to use a more novel approach.
This paper seeks to validate differing methods of assessment of the target attributes through the creation of a multi-trait multi-method (MTMM) matrix, as developed by Campbell and Fiske (1959). The matrix will measure two traits (working memory and creative problem-solving) using two methods (computerised and one-to-one tests), delivered to adult participants of mixed age and gender.
The multi-method aspect of the MTMM matrix provides the opportunity to establish convergent validity between the two pairs of tests that measure the same trait. This is of particular benefit because it allows us to establish whether the increasingly popular computerised method of test delivery displays convergent validity with traditional delivery methods that are more well-established but also more difficult and time-consuming to administer. The multi-trait aspect of the matrix may display divergent validity, indicating that the traits are either not linked or inversely linked, or convergent validity, supporting the contention of Hambrick and Engle (2003) that the traits are linked. This paper supports the view that the two traits are not linked, and as such will seek to provide evidence of divergent validity.
The key benefit of this analysis will be to provide evidence of the validity of multiple instruments for the testing of highly desirable attributes that may be under-assessed in education today, and particularly the validation of newer computerised instruments against those delivered by more traditional means. Another potential benefit of this analysis, should the data build on the theory of an inverse relationship, could be a better understanding of the factors that affect problem-solving ability, and help determine if different approaches should be taken for the nurturing of analytical and creative problem-solving skills.
This paper will define the constructs we wish to measure, detail the relevant validity considerations, explain the method used and the instruments selected, present the results and then draw conclusions.
Constructs
In order to select the most appropriate testing instruments for this study we must first define the constructs that we wish to measure. The two traits we are seeking to measure are working memory and creative problem-solving, so it is necessary to seek a satisfactory definition for each of these traits.
According to Baddeley (1983, 1992, 2000), working memory is “the temporary storage of information in connection with the performance of other cognitive tasks such as reading, problem-solving or learning”. Baddeley and Hitch (1974) proposed the multicomponent model for working memory, comprising:
A central executive responsible for directing attention, coordinating simultaneous cognitive processes, and managing the “slave systems” described below
The phonological loop, handling sound data taking the form of language
The visuo-spatial sketchpad, handling short term storage and manipulation of visual and spatial information
The episodic buffer, added to the model in 2000, integrating phonological and visual information and providing the link between working and long-term memory.
Working memory has a limited capacity (Miller, 1956). Research has found that while a typical adult may be able to recall a random sequence of 7 digits or 6 characters, this capacity can be significantly affected by the performance of a concurrent task alongside memorising (Turner & Engle, 1989).
With these observations in mind, we can conclude that our measures of the working memory construct should engage multiple aspects of the multicomponent model and could include an element of concurrent task processing in order to assess the impact on a participant’s capacity.
Creative problem-solving can be described as the mental process of creating a solution to a problem independently rather than applying a previously learned solution. Compared to other areas of cognitive study there are relatively few widely accepted and validated instruments for the testing of creative problem-solving, perhaps because, as suggested by Torrance and Safter (1990), “Creativity can be expressed among all people in an extremely broad array of areas or subjects, perhaps in a nearly infinite number of ways.”
While there is no single agreed definition of creative problem-solving, we can look instead at attempts to define its two components, namely creativity and problem-solving.
Numerous attempts have been made to define creativity based on two or three criteria:
“Creativity requires both originality and effectiveness” (Runco & Jaeger, 2012)
“A creative response is novel, good, and relevant” (Kaufman & Sternberg, 2010)
“Creativity is the ability to come up with ideas that are new, surprising, and valuable” (Boden 2004)
Problem solving, meanwhile, was defined by Polya (1945) as a 4-stage process:
Understanding the problem
Devising a plan
Carrying out the plan
Looking back
If we apply Runco and Jaeger's definition of creativity to stage 2 of Polya's process, we can conclude, for the purposes of this study, that creative problem-solving requires the plan devised to solve the problem to be both original and effective; this is the operational definition we will use. Given that some of the items within our instruments are designed to accept only one correct answer, we must accept that in this context being "original" means demonstrating the ability to think in an independent and individual manner, rather than being the first to arrive at a given idea.
Validity Considerations
According to Messick (1980), “Validity is an overall evaluative judgment, founded on empirical evidence and theoretical rationales, of the adequacy and appropriateness of inferences and actions based on test scores”.
For a test to be considered valid, we must establish that it adequately assesses the constructs that it is intended to measure. For this study, we are testing against two constructs: working memory and creative problem-solving. In order to ensure our tests are valid, we must ascertain that the tasks we set are indeed measuring those traits, and are not unduly influenced by extraneous factors (for example, if our working memory test were to ask questions using complex wording and numerous steps, it could become a measure of comprehension as well as working memory and thus be considered invalid).
The question of whether a testing instrument appears to measure the construct for which it is intended is known as face validity. To establish face validity, we must rely upon expert opinion, either by seeking the feedback of subject matter experts on our instrument, or by using existing tests and tasks which are widely accepted as being appropriate for the measurement of our construct. In some fields, there may be no consensus over the most valid means of testing a given trait, and it may be necessary to select the testing approach that has the most support amongst experts, accepting that there may also be dissenting opinions.
Similarly, construct validity – the extent to which the instruments measure the desired constructs and are not influenced by other traits – can be difficult to establish in isolation without deferring to the opinion of subject matter experts. However, one of the benefits of creating a multi-trait multi-method matrix is that we will use multiple instruments designed to measure the same trait and can therefore use our results to assess the convergent validity of our instruments.
Convergent validity is a measure of the extent to which instruments seeking to measure the same construct achieve correlating results. In our study, we will utilise two tests designed to measure working memory, and two tests designed to measure creative problem-solving. We would hope and expect to see a high degree of convergent validity between the two tests of working memory, and a similarly high level of convergent validity between our two creative problem-solving tests. While convergent validity provides strong evidence of the overall validity of our instruments, and has the advantage of being scientifically measurable, we should acknowledge that convergent validity should not be the only facet of validity we consider because two tests with flawed construct validity (for example because they are designed to test working memory but also require ability in maths) could still exhibit a high degree of convergent validity.
Having established the validity of our instruments, we can observe whether there is also a high positive correlation between the performance of the participants against our two constructs, suggesting that the two traits being measured are closely linked (as concluded by Hambrick and Engle (2003)), or whether we establish divergent validity – evidence that the traits we are measuring are unrelated (a low level of correlation) or even inversely related (as argued by Wiley and Jarosz (2012)), which would be indicated by a negative correlation of results.
Method
Given that the key aim of this study is to assess the validity of our measurement tools while examining the relationship between working memory and creative problem-solving – furthering our understanding of whether these two traits are closely related, unrelated, or inversely related – the creation of a multi-trait multi-method (MTMM) matrix as developed by Campbell and Fiske (1959) is an eminently suitable method for us to adopt. The use of multiple methods of assessment for each construct under observation will help us establish convergent validity for each of our instruments.
For the purpose of this study, the methods to be used were one-to-one and computer-based assessment. The results of our testing against our first construct, working memory, can then be compared to the data from our testing of our second construct, creative problem-solving, to determine whether the dataset indicates divergent validity between our assessments for the two different constructs.
The sample size for this exercise has been calculated using the following formula, proposed by Hulley et al. (2013) for use in medical research but applicable wherever we seek to ascertain a minimum sample size based on some assumptions about a study:
N = [(Zα + Zβ)/C]² + 3
The terms of the formula are explained in Table 1:
Table 1.
α – Type I error (alpha): the probability of rejecting the null hypothesis when in fact it is true.
β – Type II error (beta): the probability of accepting the null hypothesis when in fact it is false. Directly linked to the "power" of our study, because power is calculated as 1 − β.
r – The expected correlation coefficient.
C – 0.5 × ln[(1 + r)/(1 − r)].
N – The minimum sample size derived.
For our study, we will set the variables as follows:
Table 2.
α – 0.05, a widely accepted alpha value (Fisher, 1925).
β – 0.2, a widely accepted beta value (Cohen, 1988), giving a power of 0.8 or 80%.
r – 0.5/−0.5. While there can be no precise definition, a correlation coefficient of more than 0.5 (or less than −0.5) is typically considered significant, having been labelled a "large" degree of correlation by Cohen (1988).
Based on the above variables, the result of our formula is 29, meaning that 29 is the minimum recommended sample size for our study. To add some contingency in the event of a participant being unable to complete all the tests, the sample size for the study was set at 30.
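As a check on this arithmetic, the calculation can be reproduced with a minimal Python sketch; the z-values used are the conventional standard-normal quantiles for the chosen α and β, and the variable names are illustrative rather than taken from Hulley et al.:

```python
import math

# Inputs as set out in Table 2. The z-values are the standard tabulated
# normal quantiles for a two-tailed alpha of 0.05 and a power of 0.8.
z_alpha = 1.96   # quantile for alpha = 0.05 (two-tailed)
z_beta = 0.84    # quantile for beta = 0.2 (power = 0.8)
r = 0.5          # expected correlation coefficient

c = 0.5 * math.log((1 + r) / (1 - r))  # C = 0.5 * ln[(1+r)/(1-r)], ~0.549

n = ((z_alpha + z_beta) / c) ** 2 + 3  # the Hulley et al. (2013) formula
print(math.ceil(n))                    # -> 29, the minimum sample size
```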
Measurement Tools – Creative Problem-Solving and Working Memory
Creative Problem-Solving
The creative problem-solving measures were administered using both one-to-one and computerised methods.
One-to-one items comprised:
Four creative problem-solving puzzles as documented by Scheerer (1963) in Scientific American:
Klapper and Witkin’s Ring and Peg Problem – Participants were shown an image of a room and asked how they would place two rings on a peg from a position six feet away from the rings and peg, using anything they could see in the image.
The Nine-Dot Problem – Participants were asked to join a series of nine dots arranged in a square by drawing four continuous straight lines, without lifting the pen and without tracing the same line more than once.
Duncker’s Candle Problem – Participants were asked how to fix and light a candle on a wall (a cork board) so that the candle wax would not drip onto the table below, using only the candle itself, a book of matches and a box of thumbtacks.
Perceptual Fixation – The participants were presented with two images of abstract shapes and asked to place one image on the other to create two closed figures.
Participants were given a maximum of 7.5 minutes to solve each puzzle, giving a total test duration of 30 minutes. Participants were given the option to pass on a puzzle they felt unable to complete and move on to the next. Each of the four tasks was deemed to be complete when the correct solution was provided, the allotted time expired, or the participant chose to pass.
Computer-based items comprised:
Remote Associates Test (RAT) – This test, developed by Mednick (Mednick, 1962; Mednick & Mednick, 1967), is widely used in the study of creativity and has the benefit of requiring no subject knowledge beyond familiarity with the English language. For the purposes of this research, a total of 45 items were selected from the item bank created by Bowden and Jung-Beeman (2003), based on the items in the RAT as developed by Mednick. The 45 items were selected based on the normative data presented in their study, taking into account the number of participants achieving the correct solution, to cover a range of levels of difficulty. As the participants in this study were British, items based on American English words and phrases were not selected.
All participants were given verbal instructions and asked to complete three practice items, for which the correct answers were then provided. Each item presented three stimulus words, and participants were required to identify a fourth word which, when combined with each of the three stimulus words, would produce a common compound word or phrase (for example, "cottage", "swiss" and "cake" can each be combined with "cheese"). Once participants had completed the practice questions, they were asked to complete the RAT (45 items). Participants were not given time restrictions per item but were allowed 30 minutes to complete all 45 items.
Working Memory
The working memory measures were administered by both one-to-one and computerised methods, using a series of instruments widely used in prior studies (Conway et al., 2005; Waters & Caplan, 2003; Kane et al., 2004).
One-to-one items comprised:
Reading Span is one of two verbal working memory tests developed by Daneman and Carpenter (1980), designed to measure the ability to store and process information by asking the participant to read unrelated sentences while remembering the last word of each sentence. Participants were initially given three sets of two sentences; at each subsequent level, the number of sentences per set increased, to a maximum of six at the highest level. The task was deemed to be complete once the participant failed all three sets at any given level.
Listening Span is the second of the two verbal working memory tests developed by Daneman and Carpenter (1980). The test facilitator read aloud a series of single statements; as in the reading span, participants were asked to recall the last word of each statement and, in addition, to state whether the statement was true or false. Participants were again initially given three sets of two sentences, with the number of sentences per set increasing at each subsequent level to a maximum of six. The task was deemed to be complete once the participant failed all three sets at any given level.
Digit Span is one of the oldest (Richardson, 2007) and most frequently used working memory tests. The test facilitator read aloud a series of random digits, starting with three. The participant was required to repeat the digits back to the facilitator, firstly in the same order as presented and secondly in reverse order. Each time the participant answered correctly, the number of digits increased by one (a minimal sketch of this adaptive procedure follows this list). The task was deemed to be complete once the participant had failed three times to repeat the digits correctly, in the same order or in reverse.
Spatial Span Task, developed by Shah and Miyake (1996), is a letter rotation task. The test facilitator showed the participant a set of images, each containing a single letter rotated to a different orientation. The participant’s task was to determine whether each letter was normal or mirror-imaged, while also remembering the direction in which each letter pointed, so as to indicate the top of the letter on a grid (the letters could be presented horizontally, vertically, and so on). The number of letters shown increased by one each time the participant answered correctly. The task was deemed to be complete once the participant failed three times to answer correctly.
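The following is a minimal Python sketch of the adaptive digit span procedure described above; ask_participant is a hypothetical stand-in for the facilitator collecting the forward and backward recall, not part of any instrument used in the study:

```python
import random

def digit_span(ask_participant, start_len=3, max_failures=3):
    """Adaptive digit span: lengthen the sequence after each correct
    response; terminate after max_failures incorrect responses."""
    length, failures, best = start_len, 0, 0
    while failures < max_failures:
        digits = [random.randint(0, 9) for _ in range(length)]
        forward, backward = ask_participant(digits)
        if forward == digits and backward == digits[::-1]:
            best = length   # longest sequence recalled correctly so far
            length += 1     # correct: increase the span by one digit
        else:
            failures += 1   # incorrect: count towards termination
    return best

# Illustration: a simulated participant who can recall up to 7 digits.
simulated = lambda d: (d, d[::-1]) if len(d) <= 7 else ([], [])
print(digit_span(simulated))  # -> 7
```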
Computer-based items comprised:
Complex Span requires numbers to be memorised while the participant rapidly identifies whether a separate, interleaved set of numbers is greater or less than a specified constant. This simultaneous memorising and processing of data has been recognised as a valid test of working memory (Redick et al., 2012). Participants were given one set of 3-5 number pairs to allow them to familiarise themselves with the task and the controls, the results of which were discounted, and then a second set of 3-5 pairs for which the results were recorded. The task was then deemed complete.
Task Shifting requires the participant to rapidly switch between two distinct operations: determining whether a number is odd or even, or whether it is greater or less than a given constant. The test assesses one of the key functions of the central executive component of working memory: the ability to coordinate and switch between two distinct simultaneous tasks (Baddeley, 1996). Participants were shown 20 random numbers, each alongside a symbol indicating whether the task was to determine whether the number was odd or even, or whether it was above or below 5. The participants were given a maximum of 2 seconds to determine the correct task and provide a response by pressing one of two keys, representing "odd or less than 5" and "even or more than 5". The task was deemed complete once a response had been received or time had expired for all 20 numbers and symbols.
Picture Recognition requires the participant to memorise a series of images and then review a second set of images, rapidly deciding whether each image appeared in the first series. This assesses the visuo-spatial sketchpad component of working memory identified by Baddeley and Hitch (1974), which allows visual data to be stored and manipulated. A key facet of this test is that the images cannot readily be described in words, ensuring that the phonological loop component of working memory is not engaged. Participants were shown an initial selection of six practice images to become familiar with the exercise and its controls, and were then shown two sets of 30 images. As each image from the second set of 30 was presented, the participant pressed a given key to specify whether the image was "old" (having been present in the first set) or "new".
The complex span, task shifting and picture recognition tests were all delivered online using Tatool Web, a platform created by von Bastian, Locher and Ruflin (2013) to facilitate easier creation and delivery of psychological tests and experiments.
N-Back – This test requires the subject to memorise a series of pictures, specifying whether each image is the same as the one that appeared a given number of items (n) previously in the sequence, and is one of the most popular tests of working memory (Owen et al., 2005). For our study we chose a 2-back test (n = 2), meaning that participants had to determine whether the image displayed was the same as the one appearing two images previously in the sequence. The test was deemed to be complete once 15 matching pairs had been displayed, regardless of whether they were correctly identified by the participant.
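The matching rule at the heart of the 2-back task can be expressed in a few lines. The following is a minimal sketch; the image labels are illustrative placeholders, not the study's actual stimuli:

```python
def n_back_targets(sequence, n=2):
    """Return the indices at which the current item matches the item
    shown n positions earlier, i.e. the trials a participant should
    identify as matches."""
    return [i for i in range(n, len(sequence))
            if sequence[i] == sequence[i - n]]

images = ["A", "B", "A", "B", "C", "B", "C"]
print(n_back_targets(images))  # -> [2, 3, 5, 6]
```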
Study Sample and Ethical Considerations
A diverse range of adults was invited by email to participate in the research. To minimise selection bias and increase internal validity, the first 30 participants to volunteer were selected, with no further selection process applied.
Before conducting the assessments, all 30 participants were given the research information sheet and reminded that participation was voluntary and that, if they did not want to take part in all or some of the assessments, they could withdraw consent at any time without giving a reason, in line with BERA’s Ethical Guidelines for Educational Research (2018), fourth edition.
All study procedures were approved by Durham University’s School of Education Ethics Committee prior to conducting the study.
All 30 volunteers participated in the assessments.
Data Analysis
Internal Consistency
The most common approach to defining reliability is based on the internal consistency of a test’s results (Black and Wiliam, 2012). Therefore, a key consideration in establishing the reliability of the testing instruments used in this study was to assess whether they displayed a good level of internal consistency. The internal consistency of an instrument can be calculated based on the correlations between its various items. As each instrument utilised was designed to measure a single construct, we should expect to see a high level of correlation between the items on each test.
In practice this would mean that participants with similar levels of the measured trait would achieve broadly similar results for each item. An example of an item that would adversely affect internal consistency would be one that was only answered correctly by participants who achieved a low score for the test as a whole. Such an item might be found to be measuring a different trait than that defined in the construct.
A common method for calculating internal consistency is the use of Cronbach’s alpha (Cronbach, 1951). While there can be no true scientific exactitude over the categorisation of the alpha result obtained from a Cronbach’s alpha calculation of internal consistency, the guidelines offered by George and Mallery (2003) are widely adopted:
α > .9 – Excellent
α > .8 – Good
α > .7 – Acceptable
α > .6 – Questionable
α > .5 – Poor
α < .5 – Unacceptable
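The underlying calculation is straightforward. The following is a minimal Python sketch of Cronbach's alpha for a participants-by-items score matrix; the data here are synthetic, not the study's results:

```python
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    for a matrix with participants as rows and items as columns."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Synthetic example: 30 participants x 10 items sharing one ability
# factor, so the items correlate and alpha comes out high.
rng = np.random.default_rng(42)
demo = rng.normal(size=(30, 1)) + rng.normal(scale=1.0, size=(30, 10))
print(round(cronbach_alpha(demo), 2))
```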
The software package Jamovi was used to perform the Cronbach’s alpha analysis of our results. Within this study, the one-to-one creative problem-solving test achieved a Cronbach’s alpha value of α = 0.76 (acceptable). Our computerised creative problem-solving test (RAT) achieved an alpha value of α = 0.93 (excellent). Within this test, there was a single item (Q22) which displayed negative correlation and as such could be considered for either reversal or exclusion.
Further consideration of this item revealed no logical basis for reversing it, and the impact on consistency of removal was minimal, so it was not excluded, although further consideration of the case for exclusion could be made if the same item displayed negative correlation in future tests.
The one-to-one working memory test displayed a Cronbach’s alpha value of α = 0.83 (good), and our computer-based test of working memory achieved α = 0.82 (good). Thus, all four of our testing instruments (or groups of instruments) display levels of internal consistency that would typically be considered as ranging from acceptable to excellent.
We should recognise that internal consistency is only a part of overall test reliability, alongside the external reliability that can be measured by comparing test/re-test stability. However, given that re-testing is outside of the scope of this study, the Cronbach’s alpha values described above give us a satisfactory indication that we can proceed to draw conclusions on the basis of what appear to be reliable testing instruments.
Table 3 – Multitrait Multimethod Matrix of Working Memory and Creative Problem-Solving
The results, represented in the form of a multi-trait multi-method matrix (Table 3), display several clearly defined characteristics. Firstly, as discussed earlier, high values in the reliability diagonal indicate acceptable (or better) levels of reliability across all tests.
As shown above, the heterotrait-monomethod correlation values are low. This means there is little relationship between a participant’s results for the two measured traits, even when the method of measurement is the same. The correlation coefficient (r) between the computer-based tests of working memory and creative problem-solving was just r = 0.06, an effect size that can be categorised as "small" according to the guidelines proposed by Cohen (1988). The correlation between the one-to-one tests of working memory and creative problem-solving was even lower, at r = 0.04. This suggests our tests have a very low method factor (the extent to which the method utilised influences the outcome).
The heterotrait-heteromethod correlation values are also low, as one would expect for traits that are not linked. These values reflect the level of correlation between participants’ performance in tests measuring different traits and delivered using different methods. The correlation coefficient between the computer-based working memory test and the one-to-one creative problem-solving test was r = 0.12, while the correlation coefficient between the one-to-one working memory test and the computer-based creative problem-solving test was r = 0.05.
Both the heterotrait-monomethod and the heterotrait-heteromethod values therefore demonstrate discriminant validity for the two measures, finding no significant correlation between two traits that were not expected to correlate.
The monotrait-heteromethod correlation values were high. The correlation coefficient between our computer-based and one-to-one tests of working memory was r = 0.85. The correlation coefficient between the computer-based and one-to-one tests of creative problem-solving was also r = 0.85. In both cases, this can be categorised as a "large" effect size according to Cohen’s guidelines. This demonstrates convergent validity: different testing instruments intended to measure the same construct have returned results displaying a high degree of correlation.
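To illustrate how such a matrix is assembled, the following minimal sketch builds the same style of matrix from four total-score vectors. The data are simulated under the study's hypothesis (two unrelated traits, each measured by two methods) and the labels are illustrative; these are not the actual results:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30  # participants

wm = rng.normal(size=n)   # latent working memory
cps = rng.normal(size=n)  # latent creative problem-solving (unrelated)
scores = np.vstack([
    wm + rng.normal(scale=0.4, size=n),   # working memory, one-to-one
    wm + rng.normal(scale=0.4, size=n),   # working memory, computer-based
    cps + rng.normal(scale=0.4, size=n),  # creative p-s, one-to-one
    cps + rng.normal(scale=0.4, size=n),  # creative p-s, computer-based
])

labels = ["WM 1:1 ", "WM comp", "CPS 1:1", "CPS cmp"]
mtmm = np.corrcoef(scores)  # 4x4 matrix of Pearson r values

# Monotrait-heteromethod entries (mtmm[0, 1] and mtmm[2, 3]) come out
# high (convergent validity); heterotrait entries sit near zero
# (discriminant validity), mirroring the pattern reported in Table 3.
for label, row in zip(labels, mtmm):
    print(label, np.round(row, 2))
```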
Limitations
While this research provides strong evidence that the testing instruments used are reliable and valid for the defined constructs, we should recognise certain limitations. The scope of the study did not allow for confirmation of external reliability through test/re-test comparison, nor for the execution of the same tests on a second group of participants. While efforts were made to recruit a diverse group of participants, there were several biases within the group in terms of region of residence, age, ethnicity and social class. Future studies should seek to improve upon the level of diversity amongst participants.
Additionally, it must be recognised that creativity is a trait that does not lend itself readily to definitive measurement. For the purposes of our creativity testing using the RAT, all questions allowed for only one correct result, but we should accept the inherent weakness in this approach: the most creative participant may be able to devise several possible solutions to a given problem, each of which may be an equally valid demonstration of the measured trait but only one of which will be accepted as per the marking scheme for the test. This problem could be countered by adjusting the tests to allow for multiple answers to a given question, but the development of an adequate marking scheme would then become significantly more difficult.
Conclusion
This study makes use of the multitrait multimethod matrix devised by Campbell and Fiske (1959) to provide evidence supporting the validity of a number of well-known instruments for measuring working memory and creative problem-solving. The matrix demonstrates strong levels of convergent validity between different methods for measuring the same trait, and similarly marked levels of divergent validity in the lack of correlation between participant results in testing of the two different traits under examination, even when the same method is used to assess the two traits.
The results presented in this paper support in part the findings of Wiley and Jarosz (2012) that high levels of working memory do not confer a correspondingly high level of creative problem-solving ability, but do not corroborate their contention that an inverse relationship exists (such an inverse relationship would be reflected in our matrix as a negative correlation coefficient, which was not found).
Finally, participant feedback indicated that while the tests of working memory appeared to the participants to have high levels of face validity (it was clear to the participants that their working memory was being assessed by the tests), the creative problem-solving tests were not universally recognised as having face validity, largely due to the fact that they accepted only one correct answer in most cases and thus denied more creative participants the opportunity to devise innovative solutions to the questions posed (see the limitations section above). For this construct, additional work may be required to create instruments which have higher levels of face validity while still providing a reliable marking scheme.
References
Adobe (2018). Why creative problem solving and lifelong learning should anchor 21st-century education. Retrieved from https://theblog.adobe.com/why-creative-problem-solving-and-lifelong-learning-should-anchor-21st-century-education/
Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 8, pp. 47-89). New York: Academic Press.
Baddeley, A. (1983). Working memory. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 302(1110), 311-324. Retrieved from http://www.jstor.org/stable/2395996
Baddeley, A. (1992). Working memory: The interface between memory and cognition. Journal of Cognitive Neuroscience, 4(3), 281-288.
Baddeley, A. (1996). Exploring the central executive. The Quarterly Journal of Experimental Psychology Section A, 49(1), 5-28. https://doi.org/10.1080/713755608
Baddeley, A. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4(11), 417-423.
Black, P., & Wiliam, D. (2012). The reliability of assessments. In J. Gardner (Ed.), Assessment and learning (pp. 243-263). London: SAGE Publications Ltd. doi:10.4135/9781446250808.n15
Boden, M. (2004). The creative mind. London: Routledge. https://doi.org/10.4324/9780203508527
Bowden, E. M., & Jung-Beeman, M. (2003). Normative data for 144 compound remote associate problems. Behavior Research Methods, Instruments, & Computers, 35(4), 634-639.
British Educational Research Association [BERA] (2018) Ethical Guidelines for Educational Research, fourth edition, London. https://www.bera.ac.uk/researchers-resources/publications/ethicalguidelines-for-educational-research-2018
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81-105.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Conway, A. R. A., Kane, M. J., Bunting, M. F., Hambrick, D. Z., Wilhelm, O., & Engle, R. W. (2005). Working memory span tasks: A methodological review and user’s guide. Psychonomic Bulletin & Review, 12(5), 769-786.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334. doi:10.1007/BF02310555
Daneman, M., & Carpenter, P. A. (1980). Individual differences in working memory and reading. Journal of Verbal Learning & Verbal Behavior, 19, 450-466.
Fisher, R. A. (1925), Statistical Methods for Research Workers, Edinburgh: Oliver and Boyd.
George, D. & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference. 11.0 update (4th ed.). Boston, MA: Allyn & Bacon.
Hambrick, D. Z., & Engle, R. W. (2003). The role of working memory in problem solving. In J. E. Davidson & R. J. Sternberg (Eds.), The psychology of problem solving (pp. 176-206). New York, NY, US: Cambridge University Press.
Hulley, S. B., Cummings, S. R., Browner, W. S., Grady, D., & Newman, T. B. (2013). Designing clinical research: An epidemiologic approach (4th ed.). Philadelphia, PA: Lippincott Williams & Wilkins. Appendix 6C, p. 79.
Kane, M. J., Hambrick, D. Z., Tuholski, S. W., Wilhelm, O., Payne, T. W., & Engle, R. W. (2004). The generality of working memory capacity: A latent-variable approach to verbal and visuospatial memory span and reasoning. Journal of Experimental Psychology: General, 133(2), 189-217.
Kaufman, J. C., & Sternberg, R. J. (Eds.). (2010). The Cambridge handbook of creativity (Cambridge Handbooks in Psychology). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511763205
Mednick, S. (1962). The associative basis of the creative process. Psychological Review, 69(3), 220-232.
Mednick, M.P. & Andrews, S.M. (1967). Creative thinking and level of intelligence. Journal of Creative Behavior, 1, 428–431
Messick, S. (1980). Test validity and the ethics of assessment. American Psychologist, 35, 1012-1027.
Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review, 63(2), 81-97
Owen, A. M., McMillan, K. M., Laird, A. R., & Bullmore, E. (2005). N-back working memory paradigm: A meta-analysis of normative functional neuroimaging studies. Human Brain Mapping, 25(1), 46-59. doi:10.1002/hbm.20131
Polya, G. (1945). How to solve it; a new aspect of mathematical method. Princeton, NJ, US: Princeton University Press.
Redick, T. S., Broadway, J. M., Meier, M. E., Kuriakose, P. S., Unsworth, N., Kane, M. J., et al. (2012). Measuring working memory capacity with automated complex span tasks. European Journal of Psychological Assessment, 28(3), 164-171.
Richardson, J. T. (2007). Measures of short-term memory: A historical review. Cortex, 43(5), 635-650.
Runco, M. A., & Jaeger, G. J. (2012). The standard definition of creativity. Creativity Research Journal, 24(1), 92-96.
Scheerer, M. (1963). Problem-solving. Scientific American, 208(4), 118-131. Retrieved from http://www.jstor.org/stable/24936537
Shah, P., & Miyake, A. (1996). The separability of working memory resources for spatial thinking and language processing: An individual differences approach. Journal of Experimental Psychology: General, 125(1), 4-27.
Torrance, E. P., & Safter, H. T. (1990). The incubation model of teaching. Buffalo, NY: Bearly Limited.
Turner, M. L., & Engle, R. W. (1989). Is working memory capacity task dependent? Journal of Memory and Language, 28(2), 127-154.
von Bastian, C. C., Locher, A., & Ruflin, M. (2013). Tatool: A Java-based open-source programming framework for psychological studies. Behavior Research Methods, 45(1), 108-115. https://doi.org/10.3758/s13428-012-0224-y
Waters, G. S., & Caplan, D. (2003). The reliability and stability of verbal working memory measures. Behavior Research Methods, Instruments, & Computers, 35(4), 550-564.
Wiley, J., & Jarosz, A. F. (2012). How working memory capacity affects problem solving. Psychology of Learning and Motivation, 56, 185-227. doi:10.1016/B978-0-12-394393-4.00006-6