Reply to these discussions from my classmates separately. Agree or disagree and use references if needed.
1-Spearman’s two-factor theory, with its general (g) factor, was the basis of continuing arguments about intelligence. Spearman described g as representing the general electrochemical mental energy underlying the brain’s cognitive ability (Cohen, 2022). His two-factor theory comprises general intelligence (g) and specific intelligence (s). The g factor is considered innate, something a person is born with, while s factors are learned abilities that remain specific to the individual.
Cattell-Horn’s two-factor theory of 1966. Cattell, and later Horn, expanded on Spearman’s g factor by arguing that g was in fact fluid intelligence, denoted Gf. Building on Spearman’s factors, Cattell-Horn argued that crystallized intelligence accumulates over the lifespan of an individual and is relatively resistant to aging or brain injury, encompassing abilities such as language and reading (Cohen, 2022).
Luria’s information processing approach explains how people take in and process information into short- and long-term memory. In Luria’s approach, information processing is described as either sequential or simultaneous. Simultaneous processing allows a person to take in information and understand it all at once, whereas sequential processing pieces information together step by step, like clues leading to a conclusion.
Cattell-Horn and Carroll’s CHC model of 1997. The CHC model, I would argue, is a compromise of theories. Its foundation agrees with Spearman’s two-factor analysis as well as the Cattell-Horn two-factor theory of 1966. Carroll’s three-stratum theory, however, expands the model downward like a pyramid of descriptive (s) cognitive abilities, adding a three-layer structure with the g factor, by agreement, sitting at the top as the essential intelligence factor.
The WISC-V is a test typically given to gifted children; therefore, as a general measure of intelligence, I feel it is rather exclusive of children not considered “gifted” prior to the exam. The KABC, by contrast, seems to assess general intelligence (g) along with Gf and Gc more equally. According to Cohen (2022), it produced a smaller gap in test scores across racial and ethnic groups because it places less emphasis on verbal communication. Unlike the previous tests, the Stanford-Binet intelligence test was revised to encompass ages 2 to 89. It is a more robust test, historically based on a total IQ computed as mental age divided by chronological age, multiplied by 100. This test is more inclusive of fluid intelligence.
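The ratio-IQ formula described above can be sketched in a few lines of Python; the function name and rounding choice here are my own illustration, not anything from the course material:

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Classic Stanford-Binet ratio IQ: (mental age / chronological age) * 100."""
    if chronological_age <= 0:
        raise ValueError("chronological age must be positive")
    return round(mental_age / chronological_age * 100, 1)

# A 10-year-old performing at the level of a typical 12-year-old:
print(ratio_iq(12, 10))  # -> 120.0
```

Note that modern revisions of the Stanford-Binet report deviation-based scores rather than this simple ratio, so this is only a sketch of the historical formula the post refers to.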
The Cattell-Horn and Carroll CHC model of 1997 is the most appealing of the intelligence theories to me. The foundation of Spearman’s theory remains intact, as I believe we are all born with innate intelligence, or are “gifted.” The constructs of fluid and crystallized intelligence then account for the intelligence gained through experience and education. This is apparent and supported when observing people exposed to different educational and environmental backgrounds.
2-As humans have strived to learn more about the universe around them, from our earliest ancestors mastering fire to our plans to safely land astronauts on Mars, one thing has always been at work, even when we were not aware of it: intelligence. An early theory of intelligence was developed in 1927 by Charles Spearman, which we now call Spearman’s two-factor theory of intelligence. He wanted to find correlations among specific tests, which introduced the concepts of a general factor and a special ability (g and s, respectively). The g factor, which was later disputed by one of his students, is commonly understood as the energy in the brain available for solving any problem, while the s factor covers more specialized skills.

Now his student, Cattell, instead backed the idea that all intelligence cannot be grouped under one label, creating the notions of general fluid intelligence and general crystallized intelligence (Gf and Gc), which became the base of the Cattell-Horn two-factor theory. Where Gf is very similar to Spearman’s definition of g, Gc is more related to the intelligence of past experiences, not current needs. Cattell and Horn’s theory was not the only one spurred by Spearman, as a man named John Carroll would introduce his three-stratum theory, based on the idea of narrow, broad, and general intelligence. Despite more than thirty years separating the Cattell-Horn theory and Carroll’s theory, their similarities led them to be grouped together as the Cattell-Horn and Carroll CHC model. The positive of being grouped in this fashion is that the model embraces both sets of ideas and does not attempt to prove one more right than the other.

The final theory I will discuss is Aleksandr Luria’s information processing approach. What I believe may be the biggest difference between Luria’s theory and the ones I mentioned above is what the theories focus on.
Instead of discussing what is being processed, Luria details how information is processed (which the names may already make clear). Luria also introduces new vocabulary: parallel and sequential processing styles. Where sequential processing gathers information one chunk at a time, parallel processing means understanding everything at once. Now, while all of these theories have been around for decades, we may not fully appreciate how much they have done for the psychology community. We can find each of these theories in our tests: the Wechsler Intelligence Scale for Children and the Woodcock-Johnson Tests of Cognitive Abilities both draw on Spearman’s two-factor theory, the Kaufman Assessment Battery for Children relates to Luria’s information processing approach, and the Cattell-Horn theory informs the Stanford-Binet Intelligence Scales, among others. If I had to pick among the four, I would pick CHC, as not only does it recognize the influence of multiple types of intelligence, but I also believe it could be useful in my future career.
Reference:
Cohen, R. J. (2021). Psychological testing and assessment (10th ed.). McGraw-Hill Higher Education (US).
3-Deception, as most of us would acknowledge, is not a good thing. People do not like to be deceived, but in research it can be used to help gather information. In the field of psychology, deception has been used when it was determined necessary for the purposes of a specific research question. Ethics are of great importance in general, and significantly more so in conducting research, especially research involving human participants. In the helping professions, ethics are everywhere, with guidelines on the ethical treatment of research participants and on ethical research conduct. When psychologists conduct research with clients/patients, students, or subordinates as participants, they take steps to protect prospective participants from adverse consequences of declining or withdrawing from participation (APA, 2016). A researcher is also responsible for adhering to other guidelines, such as the expectation of informed consent: for example, research participants should know the time frame, how the research will be conducted, and so on. When it comes to deception, though, informed consent may not include information about any deception used for the purpose of the research, but participants must be made aware of it at the earliest opportunity that does not compromise the research. Based on the APA guidelines, the ends should essentially justify the means (APA, 2016).
While the code of ethics provides guidelines for helping professionals, there are some disadvantages. For example, the code cannot set specific parameters for the deception used in research, such as how much or to what extent, which leaves the helping professional with the discretion to make that determination. Deception must be used for the greater good, and it can be a great tool in research, especially research measuring behaviors and reactions to different kinds of stimuli, because it aids in the development of treatment plans and any potential medications to address issues. Although there are some limitations in applying the code of ethics to resolve ethical dilemmas, the strengths outweigh those limitations. No set of rules is without limitations, and understanding these limitations allows helping professionals to work within those parameters and to set reasonable expectations.
Although research is important and deception can be used, it would not be ethical to deceive participants about medications or anything else that could result in psychological, physical, or physiological harm. Research should not put participants in unsafe situations where any kind of harm could result.
Jess
References
American Psychological Association. (2016). Ethical principles of psychologists and code of conduct.
4-The topic and prompt I chose to write about for this discussion is developmental theories. We learned this week that developmental psychologists come from many different schools of thought, including behaviorism, psychoanalysis, cognitive psychology, psychobiology, humanistic psychology, and perhaps others as well. The developmental theory I chose to discuss is behaviorism. Behaviorism focuses on the idea that all behaviors are learned through interaction with the environment. Key concepts associated with behaviorism include counterconditioning, stimulus control, reinforcement management, helping relationships, and self-liberation. Under scrutiny of how this theory fits “real-life” situations in our modern world, behaviorism is criticized for modifying behavior at the expense of personal agency. Possible behavior-modification techniques within behaviorism include positive punishment, negative punishment, positive reinforcement, and negative reinforcement.