Practice 3.2 Algorithms with authentic IB Digital Society (DS) exam questions for both SL and HL students. This question bank mirrors Paper 1, 2, 3 structure, covering key topics like systems and structures, human behavior and interaction, and digital technologies in society. Get instant solutions, detailed explanations, and build exam confidence with questions in the style of IB examiners.
Sentencing criminals using artificial intelligence (AI)
In 10 states in the United States, artificial intelligence (AI) software is used for sentencing criminals. Once criminals are found guilty, judges need to determine the lengths of their prison sentences. One factor used by judges is the likelihood of the criminal re-offending*.
The AI software uses machine learning to determine how likely it is that a criminal will re-offend. This result is presented as a percentage; for example, the criminal has a 90% chance of re-offending. Research has indicated that AI software is often, but not always, more reliable than human judges in predicting who is likely to re-offend.
There is general support for identifying people who are unlikely to re-offend, as they do not need to be sent to prisons that are already overcrowded.
Recently, Eric Loomis was sentenced by the state of Wisconsin using proprietary AI software. Eric had to answer over 100 questions to provide the AI software with enough information for it to decide the length of his sentence. When Eric was given a six-year sentence, he appealed and wanted to see the algorithms that led to this sentence. Eric lost the appeal.
On the other hand, the European Union (EU) has passed a law that allows citizens to challenge decisions made by algorithms in the criminal justice system.
* re-offending: committing another crime in the future
Identify two characteristics of artificial intelligence (AI) systems.
Outline one problem that may arise if proprietary software rather than open-source software is used to develop algorithms.
The developers of the AI software decided to use supervised machine learning to develop the algorithms in the sentencing software.
Identify two advantages of using supervised learning.
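For background, supervised learning means training a model on examples that already carry the correct label, then using it to label unseen cases. A minimal sketch with a 1-nearest-neighbour classifier; the data, feature values, and risk labels below are invented purely for illustration:

```python
# Supervised learning in miniature: a 1-nearest-neighbour classifier.
# Training data is a list of (features, label) pairs; prediction returns
# the label of the closest known example. All values are hypothetical.

def predict_1nn(training_data, query):
    """Return the label of the training example closest to `query`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda ex: distance(ex[0], query))
    return nearest[1]

# Labelled training set: each example pairs features with a known outcome.
labelled = [
    ((1.0, 2.0), "low_risk"),
    ((8.0, 9.0), "high_risk"),
    ((2.0, 1.0), "low_risk"),
]

print(predict_1nn(labelled, (7.5, 8.0)))  # closest to (8.0, 9.0) -> high_risk
```

The key point for the question above: the model never decides what the labels mean; it only generalises from outcomes humans have already recorded, which is why labelled historical data is both supervised learning's main strength and its main source of bias.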
The developers of the AI software used visualizations as part of the development process.
Explain one reason why visualizations would be used as part of the development process.
Explain two problems the developers of the AI system could encounter when gathering the data that will be input into the AI system.
To what extent should the decisions of judges be based on algorithms rather than their knowledge and experience?
Define the term “finite” in the context of algorithms.
Identify two reasons why an algorithm should have well-defined inputs and outputs.
Explain why an algorithm must be unambiguous to function correctly.
Describe one example where the feasibility of an algorithm impacts its use in a real-world application.
Cameras in school
The principal at Flynn School has received requests from parents saying that they would like to monitor their children’s performance in school more closely. He is considering extending the school’s IT system by installing cameras linked to facial recognition software that can record student behaviour in lessons.
The facial recognition software can determine a student’s attention level and behaviour, such as identifying if they are listening, answering questions, talking with other students, or sleeping. The software uses machine learning to analyse each student’s behaviour and gives them a weekly score that is automatically emailed to their parents.
The principal claims that monitoring students’ behaviour more closely will improve the teaching and learning that takes place.
Discuss whether Flynn School should introduce a facial recognition system that uses machine learning to analyse each student’s behaviour and give them a score that is automatically emailed to their parents.
In criminal justice, “black box” algorithms are increasingly used to make decisions about bail, parole, and sentencing. However, the lack of transparency and potential for bias raise serious ethical concerns about fairness and accountability.
Evaluate the challenges of implementing algorithmic transparency and accountability in criminal justice, particularly with “black box” algorithms.
Facial recognition algorithms, used for security in airports, rely on large datasets and are sometimes criticized for algorithmic bias. For instance, these algorithms have been known to misidentify individuals of certain racial backgrounds, raising fairness and transparency issues.
Identify two issues related to algorithmic bias in facial recognition software.
Explain why transparency is essential for accountability in facial recognition algorithms used in security.
Discuss one risk associated with “black box” algorithms in facial recognition systems.
Evaluate the impact of algorithmic bias on fairness in facial recognition, particularly concerning racial and ethnic disparities.
Should we completely automate journalism?
Some of the news articles that you read are written by automated journalism software. This software uses algorithms and natural language generators to turn facts and trends into news stories.
Narrative Science, a company that produces automated journalism software, predicts that by 2026 up to 90% of news articles could be generated by machine learning algorithms.
Discuss whether it is acceptable for news articles to be generated by automated journalism software.
Source A
Source B (Asteria University press release excerpt)
Asteria University adopted ScholarSort to manage a growing number of scholarship applications. The university states that the algorithm improves consistency by applying the same scoring rules to all applicants and by reducing manual handling of documents. ScholarSort is described as a set of step-by-step procedures: it checks eligibility, converts grades into a common scale, assigns points for financial need, and adds a “context” score based on school-resourcing indicators and first-generation status. Applicants can submit an appeal if they believe data is incorrect or circumstances are not captured. The university notes that ScholarSort is not designed to “predict future success” but to rank applicants according to published criteria. It also notes that changing weights (for example, increasing need) will change who receives awards, and that such changes are policy decisions.
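The step-by-step procedure Source B describes (eligibility check, grade normalisation, need points, context points) can be sketched as a composite-scoring function. The weights, field names, and the 1–7 grade scale below are hypothetical; Source B does not publish the actual values:

```python
# Hedged sketch of a ScholarSort-style composite score. The weights and
# scales are invented for illustration; Source B only states that the
# algorithm checks eligibility, converts grades to a common scale,
# scores financial need, and adds a context score.

WEIGHTS = {"academic": 0.5, "need": 0.3, "context": 0.2}  # hypothetical

def composite_score(applicant):
    """Return a 0-100 composite score, or None if the applicant is ineligible."""
    if not applicant["eligible"]:
        return None
    # Convert the grade to a common 0-100 scale (assumed 1-7 grading, IB-style).
    academic = applicant["grade"] / 7 * 100
    return (WEIGHTS["academic"] * academic
            + WEIGHTS["need"] * applicant["need_points"]
            + WEIGHTS["context"] * applicant["context_points"])

applicant = {"eligible": True, "grade": 6, "need_points": 80, "context_points": 50}
print(round(composite_score(applicant), 1))  # -> 76.9
```

Note how the university's own caveat shows up in the code: changing a single number in `WEIGHTS` changes who ranks highest, which is exactly why it calls weight changes policy decisions rather than technical ones.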
Source C (usage and audit statistics)
Applications processed: 8,400 (2024) → 14,900 (2026); staff hours spent on initial screening fell by 41% after ScholarSort.
Appeals rose from 3% to 9% of applicants; 28% of appeals resulted in a score change.
“Missing data defaults” affected 12% of applications (most commonly incomplete household documents); these applicants’ average rank dropped by 18 percentile points.
Among applicants with the same academic score band, those from low-resourcing schools received scholarships at 1.6× the rate of those from high-resourcing schools (consistent with the context weighting).
Audit sampling found the tie-break rule (alphabetical by surname when composite scores match) decided 7% of final offers in one faculty.
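The tie-break rule the audit flagged (alphabetical by surname when composite scores match) is a one-line sorting decision. A sketch with invented names and scores:

```python
# The ranking and tie-break described in Source C: order applicants by
# composite score (highest first); when scores match, surname decides.
# Names and scores are invented for illustration.

applicants = [
    ("Zhang", 88.0),
    ("Abara", 88.0),   # same score as Zhang: the surname breaks the tie
    ("Morris", 91.5),
]

# Negate the score for descending order; surname ascending breaks ties.
ranked = sorted(applicants, key=lambda a: (-a[1], a[0]))
print([name for name, _ in ranked])  # ['Morris', 'Abara', 'Zhang']
```

This makes the audit finding concrete: when the cut-off for an award falls between two tied applicants, an arbitrary property of their names, not anything in the published criteria, determines who receives it.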
Source D (criticism by student)
ScholarSort is presented as neutral because it is “just applying published weights,” but the ethical problem is that the weights and defaults quietly become the real gatekeepers. A missing document does not simply mean “unknown”; it becomes a penalty through default scoring, which often hits families with less time, less connectivity, or more complex paperwork. The university celebrates efficiency, yet the rising appeal rate suggests the system is producing more disputable outcomes, not fewer. Even small technical choices, like using alphabetical tie-breaks, create arbitrary winners and losers while still looking objective on paper. And the phrase “the committee approves” can become a shield: humans rubber-stamp what the ranking already decided. If the algorithm is shaping access to education, accountability must include transparency about how each component changes outcomes, and a way to challenge decisions without requiring applicants to become expert auditors of a scoring system.
Identify two steps in ScholarSort’s ranking process shown in Source A.
Analyse how ScholarSort’s design choices described in Source C could affect the efficiency and perceived fairness of scholarship allocation.
Compare what Source C and Source D suggest about the accountability of ScholarSort’s decisions.
Examine whether ScholarSort should be considered an appropriate use of algorithms in education funding. Answer with reference to all the sources (A–D) and your own knowledge of the Digital Society course.
Quantum computers encode information in qubits rather than classical bits. Where an 8-bit register in a conventional computer can hold any one number between 0 and 255 at a time, 8 qubits can exist in a superposition of all of those values simultaneously. In research situations, quantum computers can therefore examine a multitude of possible combinations and run a variety of simulations in parallel.
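The numbers in the stimulus can be checked directly: an n-bit classical register distinguishes 2^n values but holds only one at a time, while n qubits span a superposition over all 2^n basis states.

```python
# The stimulus claim in numbers: an 8-bit register covers 2**8 distinct
# values (0 through 255); 8 qubits span all 256 basis states at once.

n = 8
print(2 ** n)        # 256 distinct values
print(2 ** n - 1)    # largest representable value: 255
```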
To what extent will quantum computing impact research and development in academic environments?
The quest for Artificial General Intelligence (AGI)—machines that can understand and learn like humans—has fueled excitement in the AI community. However, this enthusiasm may be misplaced.
Limitations of Current AI
Despite impressive advancements in models like GPT-4, these systems lack true understanding and depend heavily on training data. They generate responses based on patterns rather than comprehension, raising doubts about their ability to replicate human-like intelligence.
Unrealistic Expectations
Tech leaders often predict AGI's arrival within a few years, overlooking the complexities of human cognition. This has led to skepticism among experts, who emphasize that AI is more likely to progress incrementally rather than suddenly achieve AGI.
A Pragmatic Approach
Focusing on narrow AI applications—designed for specific tasks—will yield more immediate benefits. Recognizing current limitations is essential for responsible AI development. In summary, while AGI is an intriguing goal, its realization is not as imminent as some claim.
Artificial General Intelligence (AGI) represents the pinnacle of AI research, aiming to create machines that can think, learn, and adapt across various tasks like humans. Recent advancements have fueled optimism about its potential.
Technological Breakthroughs
Innovations in deep learning and neural networks have led to significant improvements in machine learning capabilities. Systems like GPT-4 demonstrate impressive problem-solving and language understanding, showcasing the progress toward AGI.
Broad Applications
AGI holds the promise of transforming industries, enhancing productivity, and solving complex global challenges. From healthcare to climate change, AGI could provide solutions that were previously unimaginable, driving innovation and efficiency.
Future Prospects
While the journey toward AGI is complex, ongoing research and development continue to push boundaries. As we build more sophisticated models, the possibility of achieving AGI becomes increasingly plausible.
Identify two characteristics of an ‘AI winter’.
Explain the difference between artificial general intelligence (AGI) and domain-specific AI.
Use the source above, your additional source and your knowledge of the uses of AI to compare the claims and perspectives made by artificial intelligence researchers and the companies creating artificial intelligence solutions, and the impacts that artificial intelligence will have on society.
With reference to the two sources and your knowledge of artificial intelligence, discuss whether changes in artificial intelligence have been incremental or radical.
Selecting candidates for political parties
Political parties often have large numbers of applicants who wish to act as representatives in their various governing bodies. A senior party official must make the final decision about which applicants should be offered which roles. Many roles receive as many as 15 applications, and it is not possible to interview each applicant.
In an attempt to streamline the application process, a prospective representative will need to complete two tasks:
Neither task will involve the team in the political party.
The applicants will be directed to a link provided by the software developer where they can complete both tasks. The responses to the questionnaires and the videos will be analysed using artificial intelligence (AI).
The software will score the questionnaire and video for each applicant and send it to the senior party official’s team. The applicants with the highest scores will then be invited by the political party for an interview.
The software developers claim this will reduce the number of applications the senior party official needs to process and lead to the most appropriate applicants being selected for an interview.
Discuss whether the political party should introduce the digital system to assist the senior party official when deciding which applicants should be offered roles as representatives.