From Data to Dignity: Ethical Challenges in the Age of AI 

For Dublin City University Ethics for Business and Technology Course | March 2024 | Marco Tulio Daza

Introduction

In May 1940, the Nazis invaded the Netherlands. Arthur Seyss-Inquart, appointed Reich Commissioner of the occupied Netherlands, was tasked with identifying opponents of the regime (United States Holocaust Memorial Museum, n.d.), including people of Jewish descent, who were considered enemies of the German state. Thanks to IBM’s Hollerith machines and punch-card technology, the Nazis were able to efficiently catalog Jewish families for deportation to concentration camps, where they were subjected to forced labor and, in many cases, exterminated. Records show that Hollerith machines were also used in at least a dozen concentration camps, including Auschwitz, Buchenwald, and Dachau. Prisoners were assigned individual Hollerith numbers and a designation based on 16 categories, such as 3 for homosexual, 8 for Jewish, and 13 for prisoner of war (Dobbs, 2001).

This technology, a precursor to modern computing, allowed large volumes of personal data to be processed rapidly. Using IBM machines, the occupiers classified demographic data, facilitating the identification and segregation of the Jewish population with unprecedented efficiency. Edwin Black, in his book “IBM and the Holocaust” (2001), recounts how the Nazis, with Dutch help, used punch cards to create lists of Jews destined for deportation. Black highlights that, during the Nazi occupation, 73% of Dutch Jews died, in contrast to 25% in France, a country where the use of this technology was less extensive.

As the end of World War II approached, nations were devastated, and there was a global demand for peace. Delegates from 50 countries met at the United Nations Conference on International Organization in San Francisco, California, in June 1945. The result of these meetings was the signing of the Charter of the United Nations, the founding document of a new international organization intended to prevent a repetition of the conflict they had just experienced. Additionally, in response to the atrocities committed during the war, the United Nations General Assembly adopted the Universal Declaration of Human Rights in 1948. This document establishes a wide range of fundamental rights and freedoms to ensure life, liberty, equality before the law, and the rights to work and education, among others.

The history of the Hollerith machines during World War II is evidence of the ambivalent nature of technology: on the one hand, it is a tool capable of enhancing human progress and facilitating the achievement of our goals; on the other, it can be exploited for destructive or immoral purposes. Although the days of punched cards are long gone and Hollerith machines are now obsolete, the lessons of this episode still apply.

Technology’s capacity has reached new levels, and AI has been deeply integrated into multiple domains, significantly transforming virtually all areas of our lives.

The massive arrival of AI in the consumer products and services market has induced a fundamental transformation in how we perceive technology. Once seen merely as a tool or instrument designed to facilitate the achievement of our objectives, AI is now perceived by many as a subject or agent endowed with a certain autonomy and the ability to make decisions for itself. This conceptual evolution has sparked intense academic debates about the possibility of attributing moral agency to machines, challenging our traditional conceptions of responsibility, ethics, and the very nature of autonomous decision-making.

While AI has the potential to improve our lives, its implementation also poses significant ethical risks. AI’s superhuman capacity in specific fields, such as strategy games, image recognition, natural language processing, and predictive analysis, can create significant disadvantages for humans due to errors, failures, or loss of control over those systems. This situation fuels ethical concerns, which arise when identifying risks that may cause harm to people.

The risks of AI include challenges in aligning its goals with human values and a tendency to anthropomorphize it and over-rely on it, which can lead to the loss of skills and the spread of misinformation (Daza et al., 2023; Sison et al., 2023). It can also harm emotional health (Twenge, 2023), psychologically exploit people for the benefit of companies (Parker, 2017), compromise privacy through its large demand for data (Crawford, 2021; Zuboff, 2018), violate intellectual property (Dixit, 2023; Setty, 2023; Vincent, 2023), and perpetuate prejudices (Angwin et al., 2016). Additionally, automation can displace employees (Acemoglu et al., 2022), create precarious jobs (Cherry, 2016), and enable surveillance at work (Nguyen, 2021). Authoritarian regimes have used it to oppress or persecute dissidents (Rueckert, 2021). Social media algorithms can isolate users from differing ideas (Cinus et al., 2023; Pariser, 2011), increasing polarization (Levy, 2021) and facilitating manipulation (Wylie, 2019).

The study by Daza & Ilozumba (2022) organizes these and other ethical challenges of AI, identified through a review of the scientific literature in the field, into five clusters:

  1. Foundational issues: capabilities, limitations, and autonomy
  2. Privacy, surveillance, and intellectual property
  3. Algorithmic bias
  4. Automation and employment
  5. Algorithms, media, and society

In this article, we will use these categories to explore the ethical challenges presented by AI, especially those with the potential to undermine human dignity and thereby transgress fundamental principles of human rights. We seek to provide a deep reflection on the uses and objectives to which we put technology, underscoring the premise that technology itself lacks moral values: its impact on society and the individual depends entirely on how humanity decides to use it. This analysis aims to encourage a responsible and ethical use of AI, aligned with respect for and promotion of the dignity and inherent rights of all people.

Five Ethical Challenges of AI

Foundational Issues: Capabilities, Limitations, and Autonomy

In 2016, the artificial intelligence program AlphaGo made history by defeating the world champion of the ancient Chinese game of Go, 4 games to 1. In 2017, AlphaZero surpassed AlphaGo’s achievement by beating it 60 games to 40. The critical difference between the two programs is that AlphaGo was trained over several years on data from thousands of games played by top human players, while AlphaZero learned by playing against itself, without any human data, and did so in just 34 hours (Sokol, 2018). This highlights the impressive ability of self-learning AI to acquire skills and knowledge beyond human capacity, raising concerns about the potential for autonomous decision-making without human supervision.
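To make the self-play idea concrete, here is a minimal Python sketch: a tabular Monte Carlo learner that improves at tic-tac-toe purely by playing against itself, with no human game data. It illustrates the principle only; AlphaZero’s actual method combines deep neural networks with Monte Carlo tree search, and the game, rewards, and exploration rate below are arbitrary choices for the example.

```python
import random
from collections import defaultdict

# Self-play sketch: one agent plays both sides of tic-tac-toe and learns
# a value for every (board, move) pair from the final results of its own games.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)   # value of (board, move) from the mover's perspective
N = defaultdict(int)     # visit counts for incremental averaging
EPSILON = 0.2            # exploration rate (arbitrary)

def choose(board, legal):
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(board, m)])

def self_play_game():
    board, player, history = " " * 9, "X", []
    while True:
        legal = [i for i, c in enumerate(board) if c == " "]
        move = choose(board, legal)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        w = winner(board)
        if w or " " not in board:
            # credit every move in the game with the final outcome for its mover
            for state, m, p in history:
                ret = 0.0 if w is None else (1.0 if p == w else -1.0)
                N[(state, m)] += 1
                Q[(state, m)] += (ret - Q[(state, m)]) / N[(state, m)]
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):  # no human games needed: skill emerges from self-play
    self_play_game()
```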

This cluster focuses on exploring the capabilities and limitations of AI, highlighting how its ability to surpass human capabilities in certain areas makes it susceptible to being used to exploit cognitive biases or carry out manipulation attempts. Likewise, the potential negative impact of AI on people’s emotional health under certain circumstances is discussed.

To understand these issues, it is essential to be familiar with the three theoretical levels of AI: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). ANI outperforms human intelligence in specific domains, for example, chess or the analysis of large data sets. AGI, which would match human cognitive abilities across a wide range of tasks, and ASI, which would surpass human intelligence in all respects, have yet to be developed. Experts differ on when AGI and ASI will be possible, with some suggesting they could be decades or even generations away.

An issue that has gained relevance among specialized academics is the so-called “alignment problem” of ASI. It stems from the concern that the objectives of an ASI may not coincide with human interests and values, which could lead to significant conflicts and challenges. Max Tegmark, professor of physics at the Massachusetts Institute of Technology (MIT), illustrates this concern with the extinction of the black rhinoceros in 2011, asking whether some human collective, out of aversion to these animals, deliberately drove them to extinction. The conclusion he reaches is that the extinction was an indirect result of human intellectual superiority and the lack of alignment between human objectives and those of the affected species (Tegmark, 2018). So, if, by definition, an ASI has higher-than-human intelligence, it would be imperative to ensure (or at least hope) that its objectives are fully aligned with those of humanity.

On the other hand, although the idea of sentient robots belongs to science fiction (for now), the deployment of AI systems raises other concerns due to their impact on society.

In 2022, a Google engineer, Blake Lemoine, drew media attention with his claim that LaMDA, one of the company’s language models, had become conscious and acquired a will of its own; he even sought legal representation for it. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told The Washington Post; Google later dismissed him over his claims about the chatbot (Tiku, 2022). As language models advance and their output becomes virtually indistinguishable from human conversation, there is a risk that people will find it difficult to tell whether they are interacting with a machine or a human being. This scenario poses significant challenges to people’s ability to make conscious and informed decisions, which could affect their autonomy.

Predictive AI algorithms, used by platforms such as Facebook, Netflix, and Amazon, personalize the user experience by analyzing online behavior to identify preferences. This allows them to offer targeted content, from friend suggestions and movie recommendations to products likely to interest the user. This customization provides notable benefits: Facebook can sift through billions of users to surface highly relevant connections, and Amazon can recommend products closely aligned with a user’s tastes based on their purchase history. However, this situation raises a question: at what point does a personalized recommendation become an attempt at manipulation?
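As a toy illustration of how such personalization works, the following Python sketch implements item-based collaborative filtering on an invented ratings matrix: it recommends unseen items that are most similar, by cosine similarity, to what a user already rated highly. The item names and ratings are made up, and real platforms use far richer signals and far larger models.

```python
import numpy as np

# Toy item-item collaborative filtering. Rows = users, columns = items;
# 0 means "not watched/rated". All data here is invented for illustration.
items = ["drama", "comedy", "sci-fi", "documentary"]
R = np.array([
    [5, 0, 4, 1],
    [4, 1, 5, 0],
    [0, 5, 1, 4],
    [1, 4, 0, 5],
], dtype=float)

def cosine_sim(A):
    norms = np.linalg.norm(A, axis=0, keepdims=True)
    norms[norms == 0] = 1.0           # avoid division by zero
    X = A / norms
    return X.T @ X                     # item-by-item similarity matrix

sim = cosine_sim(R)

def recommend(user_row, k=2):
    """Score unseen items by similarity to the items the user already liked."""
    scores = sim @ user_row            # weight similarities by the user's ratings
    scores[user_row > 0] = -np.inf     # don't re-recommend what they've seen
    top = np.argsort(scores)[::-1][:k]
    return [items[i] for i in top]

print(recommend(R[0]))  # suggestions for the first user
```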

ANI exceeds human capabilities in specific domains, which could allow it to exploit human cognitive biases to influence or manipulate behavior for commercial, political, or personal purposes.

Sean Parker, the first president of Facebook (now called Meta Platforms Inc.), acknowledged in an interview that the company exploited psychological vulnerabilities to keep users hooked on Facebook as long as possible in order to increase advertising sales (Parker, 2017). In 2021, Frances Haugen, a former employee, revealed internal documents showing that the company knew its AI algorithms were deteriorating the mental health of teenagers on its platforms, but Meta chose to ignore this information, prioritizing profits over the well-being of its users (Paul & Milmo, 2021). In October 2023, Meta was the subject of a lawsuit by 41 United States attorneys general, accused of harming minors through the use of its products (Lima & Nix, 2023). In March 2024, the United States House of Representatives passed legislation that could ban TikTok nationwide if ByteDance, its Chinese owner, refuses to sell its stake in the platform (Shepardson, 2024). This action comes amid growing concerns that the Chinese government could access U.S. user data and use it for political manipulation.

The misuse of ANI, which by definition surpasses human capacity in certain areas, places people at a disadvantage relative to those who use it against them. Additionally, because this AI is trained on personal data, the possibility of using that same data against the individuals to whom it belongs threatens their individual autonomy, reducing people to mere instruments for the objectives of a third party. This violates their dignity by ignoring their inherent value and their right to be recognized and respected as rational beings with the capacity for self-determination. Ultimately, it erodes individual freedom, a fundamental pillar of human dignity.

Privacy, Surveillance, and Intellectual Property

The tension between privacy and transparency has become a dilemma for users of digital platforms. Every time we browse the Internet or use a smartphone, we generate information about our habits and preferences that is then stored and analyzed to build predictions that will likely be used to influence our behavior. While companies leverage our data to deliver personalized advertising and services, the information collected by current systems can potentially fall into the wrong hands, including hackers, unethical organizations, or authoritarian governments.

An illustrative example of the instrumentalization of AI in the exercise of social control and repression can be observed in the actions undertaken by the Chinese Communist Party (CCP) against the ethnic minority of the Uyghurs, residents of the Xinjiang region. The CCP has deployed AI systems to facilitate sophisticated mass surveillance, including facial identification and behavioral monitoring, explicitly targeting this ethnic group. Individuals identified through these systems are frequently detained and sent to re-education camps, where cases of forced labor have been reported (Bhuiyan, 2021).

However, privacy concerns don’t end there.

Implementing algorithms in human resource management often involves covert surveillance of workers, including real-time monitoring of their movements and benchmarking of their performance, creating a climate of constant pressure among employees. One report highlighted how this practice has led Zara workers to experience anxiety over taking breaks, even for basic needs like going to the bathroom, for fear of hurting their productivity metrics (Hirth & Rhein, 2021).

AI systems can classify people by age, gender, race, or sexual orientation, raising ethical concerns. For example, companies that use algorithmic pricing, such as insurers or airlines, may have access to personal data that could lead to discrimination. Researchers from the University of Cambridge and Microsoft were able to predict sexual orientation from a handful of Facebook likes, with an accuracy of 88% for men and 75% for women (Kosinski et al., 2013). The ease of obtaining such predictions is alarming if we consider that in 2024 there are still 65 countries that criminalize consensual same-sex relations, twelve of which can impose the death penalty [1].
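To illustrate the mechanism behind such predictions, here is a sketch in the spirit of the Kosinski et al. methodology: a logistic regression trained on a binary user-by-Like matrix recovers a hidden attribute well above chance. The data is entirely synthetic (a hidden trait shifts the probability of certain Likes); the point is only how little explicit information such a model needs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a user-by-Like matrix: a hidden binary trait
# nudges the probability of certain Likes up or down.
rng = np.random.default_rng(0)
n_users, n_likes = 2_000, 100
trait = rng.integers(0, 2, n_users)           # the private attribute
base = rng.uniform(0.05, 0.3, n_likes)        # baseline Like probability
shift = rng.normal(0, 0.15, n_likes)          # how much the trait moves it
prob = np.clip(base + np.outer(trait, shift), 0, 1)
X = rng.binomial(1, prob)                      # who liked what (0/1 matrix)

# A plain linear model over the Likes recovers the hidden trait.
X_tr, X_te, y_tr, y_te = train_test_split(X, trait, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC from Likes alone: {auc:.2f}")      # well above chance (0.5)
```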

Algorithmic Bias

AI has increased its presence in decision-making in various areas, but the transparency of the criteria it uses to decide remains a challenge. These decision processes, often called “black boxes,” are notable for their opacity, making it difficult to understand how machines reach their conclusions. In some cases, the information used to make these decisions is protected by trade secrets. In others, it is impossible or too costly to isolate the exact factors these algorithms consider.

There is a perception that technology, including AI systems, offers objective and accurate results, and that its decisions are therefore better than those of humans. However, algorithms developed through machine learning are trained by identifying patterns in large databases, so their results reflect the human behavior contained in the data and may incorporate its biases and unfairness. Worse, given their rapid proliferation, AI systems can cause serious harm by reproducing and amplifying these biases at scale.

The gender bias produced by Google’s translation algorithm for Turkish is a clear example. Translating from Turkish, which uses a gender-neutral pronoun, the algorithm rendered men as go-getters and women as lazy (Tousignant, 2017). Similarly, an Amazon recruiting system screened out female candidates. Although the system did not use gender among its decision parameters, it learned to penalize signals such as participation in women’s sports teams or attendance at women’s educational institutions (Wicks et al., 2021).
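The following sketch illustrates, on synthetic data, how this kind of proxy bias arises: the protected attribute is withheld from the model, but a correlated feature combined with biased historical labels reproduces the discrimination anyway. The feature names and numbers are invented for illustration and do not describe Amazon’s actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic hiring data: gender is never shown to the model, but a
# correlated proxy feature (think "women's chess club" on a resume)
# leaks it, and biased historical labels do the rest.
rng = np.random.default_rng(1)
n = 5_000
gender = rng.integers(0, 2, n)                 # 1 = female (hidden from model)
skill = rng.normal(0, 1, n)                    # genuine qualification signal
proxy = (rng.random(n) < np.where(gender == 1, 0.8, 0.05)).astype(float)
# historical hiring decisions penalized women, independent of skill
hired = ((skill + rng.normal(0, 0.5, n) - 0.8 * gender) > 0).astype(int)

X = np.column_stack([skill, proxy])            # note: gender itself excluded
model = LogisticRegression().fit(X, hired)
print("weight on proxy feature:", round(model.coef_[0][1], 2))  # strongly negative

# Two equally skilled candidates, differing only in the proxy feature:
cand = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(cand)[:, 1])         # lower hiring score with proxy=1
```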

Tay, Microsoft’s AI-enabled chatbot, learned from analyzing Twitter feeds and posted politically incorrect messages filled with misogynistic, racist, pro-Nazi, and anti-Semitic content (Kriebitz & Lütge, 2020). The machine was not designed to be racist, but it learned from the human behavior contained in its training data. Joy Buolamwini, a researcher at MIT, revealed shortcomings in several facial recognition programs, particularly in their performance on women and ethnic minorities. Her research showed a significant disparity in error rates: only 0.8% for light-skinned men versus 34.7% for dark-skinned women, evidencing racial and gender bias in these technologies.

Additionally, algorithmic bias can cause far more severe damage. An example is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software, used in some US courts to assess defendants’ risk of recidivism. The software was trained on historical data that reflected preexisting biases against racial minorities, and it produced scores in which African Americans were almost twice as likely as whites to be wrongly labeled as at higher risk of reoffending (Angwin et al., 2016). The biased recommendations of this software affected the right to bail, the length of sentences, and the criminal records of hundreds of citizens, causing unquantifiable harm.
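An audit of the kind ProPublica performed can be sketched in a few lines: compare the false positive rate, the share of people who did not reoffend but were flagged high risk, across groups. The arrays below are invented placeholders; in a real audit they would come from case records.

```python
import numpy as np

# Placeholder data standing in for real case records.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 1])   # actually reoffended?
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 1, 0, 0])   # tool flagged "high risk"?
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def false_positive_rate(y_true, y_pred, mask):
    """Share of actual non-reoffenders in the group wrongly flagged high risk."""
    negatives = (y_true == 0) & mask
    return (y_pred[negatives] == 1).mean()

for g in ("A", "B"):
    fpr = false_positive_rate(y_true, y_pred, group == g)
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

ProPublica’s central finding was precisely a gap of this kind: roughly double the false positive rate for Black defendants, even though the tool never used race as an input.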

The harm caused by algorithmic discrimination may not be deliberate. However, this does not mean that the companies, the developers of the technology, and those responsible for its implementation and use should not be held accountable.

Automation and Employment

The deployment of AI has caused a paradigm shift in the labor market. Robotic arms and automated warehouses have replaced blue-collar workers in the manufacturing sector, while in the administrative sphere, Robotic Process Automation (RPA) systems have taken over tasks previously performed by white-collar employees. Generative AI platforms write essays, produce code, and create art, and have found applications in consulting firms, academic institutions, and, notably, the media and entertainment industry. A notable example of their impact was the prolonged strike by the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA), motivated by members’ concern about being replaced by AI technologies [2].

At the same time, AI is taking on tasks previously exclusive to humans, bringing gains in productivity. This phenomenon accelerated during the COVID-19 pandemic, driven by lockdown restrictions. It is currently unclear whether lost jobs will be replaced with new ones, or how quickly this might happen. However, some economists agree that we are seeing a change in the skills demanded by the labor market, especially those related to AI (Acemoglu et al., 2022; Autor, 2019; Brynjolfsson, 2022).

The World Economic Forum estimates that approximately 40% of the average worker’s skills will need to be updated to meet the demands of future labor markets. The most in-demand skills will include critical and analytical thinking, the ability to work with people, problem-solving, and self-management skills such as resilience, stress tolerance, and flexibility (Masterson, 2023).

Meanwhile, the gig economy, characterized by temporary or freelance jobs arranged through digital platforms such as Uber, Lyft, CrowdFlower, and TaskRabbit, builds its business model on connecting people to perform microtasks. Unlike robots and RPA, this model has created new jobs. However, it is associated with transitory, non-linear careers, and it has devalued work, promoting pay below the legal minimum and serving as a pretext for avoiding social security contributions (Cherry, 2016).

The impact of AI on the labor market is thus ambivalent. On the one hand, AI has significantly boosted productivity and helped workers enhance their skills. On the other, it has displaced employees through robots and automated systems, and the proliferation of the gig economy has devalued work and produced employment conditions that often fall short of equitable and satisfactory standards of remuneration.

Algorithms, Media, and Society

The business model of social media platforms consists of trading users’ attention as a product to advertising companies (Zuboff, 2018). Companies use AI algorithms to personalize content and ads across endless feeds. These platforms are used by governments and political parties as instruments of communication and propaganda (Valdez Zepeda et al., 2024).

However, the personalized algorithms of social networks have been accused of fostering addiction and have been associated with various mental health problems, such as anxiety and depression (Twenge, 2023), as well as with the spread of fake news, harassment, and polarization (Levy, 2021).

In 2016, the company Cambridge Analytica used the information of 87 million Facebook users to build psychological profiles and target them with personalized advertising, with the aim of influencing their vote in the United States presidential election and the Brexit referendum in the United Kingdom (Cadwalladr, 2018).

Additionally, some people exploit social media to spread hate messages and incite outrage against specific individuals. This behavior not only increases engagement on these platforms, but also fuels a vicious cycle. This cycle benefits social media companies by generating more data and publicity around the topic, attracting even more attention. Thus, social networks become tools that can facilitate extremist activities, as evidenced in the terrorist attacks in Christchurch, New Zealand, in 2019 (Rauf, 2021).

On the other hand, the algorithms used by social media platforms can create “filter bubbles” and “echo chambers,” phenomena that contribute significantly to social polarization. Filter bubbles form when algorithms select the content a user sees in their feed based on their previous interactions, preferences, and online behavior, thus limiting their exposure to divergent points of view and reinforcing their pre-existing beliefs (Pariser, 2011). In parallel, echo chambers arise when this filtered information generates homogeneous online environments where opinions, ideas, or beliefs are amplified by repetition within a closed community, minimizing dissent and critical debate (Cinus et al., 2023). This continuous feedback process intensifies polarization: individuals become firmer in their convictions and less willing to consider alternative perspectives, eroding public discourse and mutual understanding. It not only segments the social fabric but also challenges the foundations of democratic deliberation, compromising the ability of individuals to debate, understand, and negotiate with those who hold different points of view.
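The feedback loop described above can be captured in a toy simulation: a feed that shows more of whatever the user clicked before, so that exposure tends to collapse onto a few topics. The parameters are arbitrary and the model deliberately simplistic; it illustrates the mechanism, not any platform’s actual algorithm.

```python
import numpy as np

# Toy filter-bubble loop: the feed mirrors inferred interests, the user
# clicks what the feed shows, and the algorithm reinforces that click.
rng = np.random.default_rng(2)
n_topics = 5
interest = np.ones(n_topics)                 # user starts open to everything

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())     # diversity of what the feed shows

for step in range(200):
    feed = interest / interest.sum()         # feed mirrors inferred interests
    topic = rng.choice(n_topics, p=feed)     # user clicks one recommended item
    interest[topic] += 1.0                   # ...which the algorithm reinforces
    if step % 50 == 0:
        print(f"step {step}: feed diversity (entropy) = {entropy(feed):.2f}")
```

In a typical run, the entropy of the feed drifts downward as one or two topics come to dominate: nothing malicious is coded anywhere, yet the reinforcement loop alone narrows what the user sees.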

It is necessary to situate these phenomena, which damage public discourse and the integrity of democratic life, within a context where disinformation, fake news, and AI-altered audiovisual content are generated at large scale and at a marginal cost close to zero (Sison et al., 2023). Furthermore, AI models specialized in natural language processing exhibit an astonishing ability to generate compelling, fluid conversation, an ability that can be used for manipulation. An illustrative example is the case of Blake Lemoine, who came to believe that a chatbot had become sentient. In this light, it is not difficult to imagine an army of chatbots trying to influence the purchasing decisions or political preferences of citizens. Such practices could put the democratic process and the right to participate in free elections at risk by manipulating public discourse and electoral preferences.

Conclusion

The development and implementation of AI systems present potential risks of human rights violations, whether through their use by actors with immoral purposes or as an unintended consequence of their inherent limitations. Just like IBM’s Hollerith machines during the Holocaust, AI and its applications lack agency and a will of their own: the harm they can cause depends entirely on how individuals, businesses, or governments choose to use them.

This is why it is imperative to approach the development and application of AI from an ethical perspective firmly aligned with the respect and promotion of human rights. However, scandals over the inappropriate use of technological platforms, such as Cambridge Analytica and the Facebook Papers, and personal data breaches such as those at Equifax and Ashley Madison, underscore that relying on companies to self-regulate, and trusting in the invulnerability of their security systems, is misplaced.

That is why legislation and regulation are essential for the development and application of AI. The AI Act, recently approved by the European Parliament, is a step in the right direction. However, regulation must be accompanied by organizations in charge of monitoring and compliance, with the capacity to sanction those responsible when harm is caused; for example, a specialized and autonomous agency to supervise companies that develop and market AI products and services. In liberal democracies, such counterweights are essential to prevent abuses derived from power asymmetries between governments or companies and citizens, and to ensure that democratic orders are not altered and ethical limits are not crossed.


[1] Source: https://www.humandignitytrust.org. Accessed February 19, 2024.

[2] Source: https://www.sagaftrastrike.org. Accessed February 19, 2024.

References

  • Acemoglu, D., Autor, D., Hazell, J., & Restrepo, P. (2022). Artificial Intelligence and Jobs: Evidence from Online Vacancies. Journal of Labor Economics, 40(S1), S293–S340. https://doi.org/10.1086/718327
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  • Autor, D. H. (2019). Work of the Past, Work of the Future. AEA Papers and Proceedings, 109, 1–32. https://doi.org/10.1257/pandp.20191110
  • Bhuiyan, J. (2021, September 30). ‘There’s cameras everywhere’: testimonies detail far-reaching surveillance of Uyghurs in China. The Guardian. https://www.theguardian.com/world/2021/sep/30/uyghur-tribunal-testimony-surveillance-china
  • Black, E. (2001). IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America’s Most Powerful Corporation. Crown Publishers. https://books.google.com/books/about/IBM_and_the_Holocaust.html?hl=es&id=nOXtAAAAMAAJ
  • Brynjolfsson, E. (2022). The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence. Daedalus, 151(2), 272–287. https://doi.org/10.1162/daed_a_01915
  • Cadwalladr, C. (2018, March 18). ‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower. The Guardian. https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump
  • Cherry, M. A. (2016). Beyond Misclassification: The Digital Transformation of Work. Comparative Labor Law & Policy Journal. https://scholarship.law.slu.edu/cgi/viewcontent.cgi?article=1009&context=faculty
  • Cinus, F., Gionis, A., & Bonchi, F. (2023). Rebalancing Social Feed to Minimize Polarization and Disagreement. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM ’23), October 21–25, 2023, Birmingham, United Kingdom. https://doi.org/10.1145/3583780.3615025
  • Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
  • Daza, M. T., Aréchiga, D., & Iñiguez-Carrillo, A. L. (2023). Generative AI in Higher Education: The Case of ChatGPT. In J. Lozoya Arandia, M. C. López De La Madrid, & O. J. Macías Macías (Eds.), Academic Disruptions: Education, Technology, Nature (pp. 131–142). Editorials and Creative Industries of Mexico SA de CV.
  • Daza, M. T., & Ilozumba, U. J. (2022). A survey of AI ethics in business literature: Maps and trends between 2000 and 2021. Frontiers in Psychology, 13. https://doi.org/10.3389/FPSYG.2022.1042661
  • Dixit, P. (2023, January 20). Meet The Trio Of Artists Suing AI Image Generators. BuzzFeed News. https://www.buzzfeednews.com/article/pranavdixit/ai-art-generators-lawsuit-stable-diffusion-midjourney
  • Dobbs, M. (2001, February 10). IBM Technology Aided Holocaust, Author Alleges. The Washington Post. https://www.washingtonpost.com/archive/politics/2001/02/11/ibm-technology-aided-holocaust-author-alleges/6addc414-ecee-4058-bea6-7f4708912d6f/
  • Hirth, J., & Rhein, M. (2021, April 30). Algorithmic assembly lines: digitalization and resistance in the retail sector. Transnational Institute Longreads. https://longreads.tni.org/algorithmic-assembly-lines-digitalization-and-resistance-in-the-retail-sector
  • Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences of the United States of America, 110(15), 5802–5805. https://doi.org/10.1073/PNAS.1218772110
  • Kriebitz, A., & Lütge, C. (2020). Artificial Intelligence and Human Rights: A Business Ethical Assessment. Business and Human Rights Journal, 5(1), 84–104. https://doi.org/10.1017/bhj.2019.28
  • Levy, R. (2021). Social Media, News Consumption, and Polarization: Evidence from a Field Experiment. SSRN Electronic Journal. https://doi.org/10.2139/SSRN.3653388
  • Lima, C., & Nix, N. (2023, October 24). States sue Meta, claiming Instagram, Facebook are addictive, harm kids. The Washington Post. https://www.washingtonpost.com/technology/2023/10/24/meta-lawsuit-facebook-instagram-children-mental-health/
  • Masterson, V. (2023, May 1). Future of Jobs: These are the most in-demand core skills in 2023. World Economic Forum. https://www.weforum.org/agenda/2023/05/future-of-jobs-2023-skills/
  • Nguyen, A. (2021). The Constant Boss: Work Under Digital Surveillance. Data & Society. https://datasociety.net/library/the-constant-boss/
  • Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin UK.
  • Parker, S. (2017, November 12). Facebook Exploits Human Vulnerability. YouTube. https://www.youtube.com/watch?v=R7jar4KgKxs&t=71s
  • Paul, K., & Milmo, D. (2021, October 4). Facebook putting profit before public good, says whistleblower Frances Haugen. The Guardian. https://www.theguardian.com/technology/2021/oct/03/former-facebook-employee-frances-haugen-identifies-herself-as-whistleblower
  • Rauf, A. A. (2021). New Moralities for New Media? Assessing the Role of Social Media in Acts of Terror and Providing Points of Deliberation for Business Ethics. Journal of Business Ethics, 170(2), 229–251. https://doi.org/10.1007/s10551-020-04635-w
  • Rueckert, P. (2021, July 1). Pegasus: The new global weapon for silencing journalists. Forbidden Stories. https://forbiddenstories.org/pegasus-the-new-global-weapon-for-silencing-journalists/
  • Setty, R. (2023, January 17). AI Art Generators Hit With Copyright Suit Over Artists’ Images. Bloomberg Law. https://news.bloomberglaw.com/ip-law/ai-art-generators-hit-with-copyright-suit-over-artists-images
  • Shepardson, D. (2024, March 14). US House passes bill to force ByteDance to divest TikTok or face ban. Reuters. https://www.reuters.com/technology/us-house-vote-force-bytedance-divest-tiktok-or-face-ban-2024-03-13/
  • Sison, A. J. G., Daza, M. T., Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2023). ChatGPT: More than a Weapon of Mass Deception, Ethical Challenges and Responses from the Human-Centered Artificial Intelligence (HCAI) Perspective. International Journal of Human–Computer Interaction. https://doi.org/10.1080/10447318.2023.2225931
  • Sokol, J. (2018, February 21). Why Artificial Intelligence Like AlphaZero Has Trouble With the Real World. Quanta Magazine. https://www.quantamagazine.org/why-alphazeros-artificial-intelligence-has-trouble-with-the-real-world-20180221/
  • Tegmark, M. (2018, April). How to get empowered, not overpowered, by AI. TED. https://www.ted.com/talks/max_tegmark_how_to_get_empowered_not_overpowered_by_ai
  • Tiku, N. (2022, June 11). Google engineer Blake Lemoine thinks its LaMDA AI has come to life. The Washington Post. https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
  • Tousignant, L. (2017, November 30). Google Translate’s algorithm has a gender bias. New York Post. https://nypost.com/2017/11/30/google-translates-algorithm-has-a-gender-bias/
  • Twenge, J. M. (2023). Generations: The Real Differences Between Gen Z, Millennials, Gen X, Boomers, and Silents–and What They Mean for America’s Future. Atria Books.
  • United States Holocaust Memorial Museum. (n.d.). The Netherlands. In Holocaust Encyclopedia. Retrieved February 18, 2024, from https://encyclopedia.ushmm.org/content/en/article/the-netherlands
  • Valdez Zepeda, A., Aréchiga, D., & Daza, M. T. (2024). Artificial intelligence and its use in electoral campaigns in democratic systems. Revista Venezolana de Gerencia (RVG), 29(105), 63–76. https://doi.org/10.52080/rvgluz.29.105.5
  • Vincent, J. (2023, January 17). Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content. The Verge. https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit
  • Wicks, A. C., Budd, L. P., Moorthi, R. A., Botha, H., & Mead, J. (2021). Automated Hiring at Amazon. SSRN Electronic Journal. https://doi.org/10.2139/SSRN.3780423
  • Wylie, C. (2019). Mindfuck: Inside Cambridge Analytica’s Plot to Break the World (1st ed.). Random House.
  • Zuboff, S. (2018). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (1st ed.). PublicAffairs.

 
