Tulio Daza

The Price of Profits: A Brief History of Facebook’s Privacy Practices

This essay examines Facebook's (now Meta) privacy practices, highlighting numerous controversies and scandals. It details the extensive data-collection methods that fuel revenue through targeted ads, as well as significant incidents such as the Cambridge Analytica scandal and internal memos prioritizing growth over user welfare. The analysis reviews the hefty fines Facebook has incurred, the erosion of user trust, and the financial repercussions, arguing that Meta's focus on profits has led to persistent privacy violations and ethical concerns. The essay concludes with a reflection on the implications for shareholders and society.

A Weapon of Mass Deception: Ethical Challenges and Responses of Generative AI and ChatGPT

This report was prepared by Alejo Jose G. Sison, Marco Tulio Daza, Roberto Gozalo-Brizuela, and Eduardo C. Garrido-Merchan as an input for the President of the United States' Council of Advisors on Science and Technology (PCAST) Working Group on Generative AI. The discussion took place on May 19, 2023, and covered two topics: 1) AI Enabling Science and 2) AI Impacts on Society. The meeting was livestreamed and is accessible via the PCAST website.

Understanding the 5 Ethical Challenges of Artificial Intelligence: A Taxonomy Proposal

AI is a fast-evolving technology that is having a significant impact on the world. While AI has the potential to revolutionize our lives for the better, it also poses significant ethical risks that we should all be aware of. From the use of AI-generated deepfakes to the potential for machines to surpass human intelligence, this post presents the five major ethical challenges posed by AI.

My desk according to AI

Unveiling the Limitations of ChatGPT in Education: The Risks of AI Hallucinations, Bias, Misinformation, and Disinformation

Generative AI has revolutionized content creation in diverse fields such as art, music, film, literature, and coding. Large Language Models (LLMs) like ChatGPT have opened up new avenues for answering questions and generating text in response to natural language prompts. However, LLMs carry significant risks, such as AI hallucinations, algorithmic bias, and the production of false or inaccurate information. I propose a few measures to mitigate these risks. By adopting them, we can enhance the trustworthiness and reliability of LLMs, ensuring that they fulfill their potential as powerful tools for content generation and knowledge dissemination.