Abstract
Artificial intelligence (AI) systems now match or exceed human performance on a growing range of cognitive tasks, prompting a reassessment of long-standing assumptions about consciousness and moral status. This article interrogates the opposition between anthropocentrism, which treats rational agency and first-person phenomenality as exclusively human, and machine thinking, which frames cognition as a substrate-independent computational process. Using conceptual, comparative, and critical methods, and following PRISMA-style screening and thematic coding, we analyze 89 journal articles indexed in Scopus and Web of Science alongside canonical texts by Descartes, Kant, Turing, Searle, and Dennett. Three patterns emerge. First, contemporary large language models reinforce anthropocentric intuitions by mimicking phenomenality. Second, formal arguments for machine consciousness increasingly invoke predictive processing, Global Workspace, and Integrated Information accounts to challenge species boundaries through substrate-neutral criteria. Third, hybrid ethical frameworks that combine anthropocentric precaution with machine-oriented functional indicators offer the most coherent path for science and policy. Focusing on Ukraine, a rapidly growing AI hub, we show how this hybrid stance can guide national strategies aligned with the EU AI Act while respecting local narratives of human dignity and security needs. The study clarifies conceptual gaps in current debates and outlines a philosophically grounded roadmap for inclusive, risk-sensitive AI governance. From the methodological standpoint of the meta-anthropology of AI, it shows that, in the future, human beings will be able to communicate with AI not merely as a device they own but as a subject, an other, with its own existence and right to freedom.
References
ABOY, M.; MINSSEN, T.; VAYENA, E. Navigating the EU AI Act: implications for regulated digital medical products. npj Digital Medicine, v. 7, art. 237, 2024. Available at: https://doi.org/10.1038/s41746-024-01232-3. Accessed on: 8 Sep. 2025.
ARU, J.; LARKUM, M. E.; SHINE, J. M. The feasibility of artificial consciousness through the lens of neuroscience. Trends in Neurosciences, v. 46, n. 12, p. 1008–1017, 2023. Available at: https://doi.org/10.1016/j.tins.2023.09.009. Accessed on: 8 Sep. 2025.
BAARS, B. J.; FRANKLIN, S. Consciousness is computational: The LIDA model of Global Workspace Theory. International Journal of Machine Consciousness, v. 1, n. 1, p. 23–32, 2009. Available at: https://doi.org/10.1142/S1793843009000050. Accessed on: 8 Sep. 2025.
BAARS, B. J.; FRANKLIN, S.; RAMSØY, T. Z. Global workspace dynamics: Cortical “binding and propagation” enables conscious contents. Frontiers in Psychology, v. 4, art. 200, 2013. Available at: https://doi.org/10.3389/fpsyg.2013.00200. Accessed on: 8 Sep. 2025.
BOLY, M.; MASSIMINI, M.; TSUCHIYA, N.; POSTLE, B. R.; KOCH, C.; TONONI, G. Are the neural correlates of consciousness in the front or in the back of the cerebral cortex? Clinical and neuroimaging evidence. Journal of Neuroscience, v. 37, n. 40, p. 9603–9613, 2017. Available at: https://doi.org/10.1523/JNEUROSCI.3218-16.2017. Accessed on: 8 Sep. 2025.
BORTHWICK, M.; TOMITSCH, M.; GAUGHWIN, M. From human-centred to life-centred design: Considering environmental and ethical concerns in the design of interactive products. Journal of Responsible Technology, v. 10, art. 100032, 2022. Available at: https://doi.org/10.1016/j.jrt.2022.100032. Accessed on: 8 Sep. 2025.
BOSTROM, N. The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, v. 22, p. 71–85, 2012. Available at: https://doi.org/10.1007/s11023-012-9281-3. Accessed on: 8 Sep. 2025.
BRAUN, V.; CLARKE, V. Using thematic analysis in psychology. Qualitative Research in Psychology, v. 3, n. 2, p. 77–101, 2006. Available at: https://doi.org/10.1191/1478088706qp063oa. Accessed on: 8 Sep. 2025.
COLOMBATTO, C.; FLEMING, S. M. Folk psychological attributions of consciousness to large language models. Neuroscience of Consciousness, v. 2024, n. 1, art. niae013, 2024. Available at: https://doi.org/10.1093/nc/niae013. Accessed on: 8 Sep. 2025.
DAMPER, R. I. The logic of Searle’s Chinese room argument. Minds and Machines, v. 16, n. 2, p. 163–183, 2006. Available at: https://doi.org/10.1007/s11023-006-9031-5. Accessed on: 8 Sep. 2025.
DEHAENE, S.; LAU, H.; KOUIDER, S. What is consciousness, and could machines have it? Science, v. 358, n. 6362, p. 486–492, 2017. Available at: https://doi.org/10.1126/science.aan8871. Accessed on: 8 Sep. 2025.
DESCARTES, R. Meditations on First Philosophy. Cambridge: Cambridge University Press, 2013 [first ed. 1641]. Available at: https://doi.org/10.1017/CBO9781139042895. Accessed on: 8 Sep. 2025.
FLORIDI, L.; COWLS, J.; BELTRAMETTI, M.; et al. AI4People – An ethical framework for a good AI society. Minds and Machines, v. 28, p. 689–707, 2018. Available at: https://doi.org/10.1007/s11023-018-9482-5. Accessed on: 8 Sep. 2025.
FRANKLIN, S.; STRAIN, S.; MCCAULEY, L.; MCCALL, R.; FAGHIHI, U. Global Workspace Theory, its LIDA model and the underlying neuroscience. Biologically Inspired Cognitive Architectures, v. 1, p. 32–43, 2012. Available at: https://doi.org/10.1016/j.bica.2012.04.001. Accessed on: 8 Sep. 2025.
FRISTON, K. The free energy principle: a unified brain theory? Nature Reviews Neuroscience, v. 11, p. 127–138, 2010. Available at: https://doi.org/10.1038/nrn2787. Accessed on: 8 Sep. 2025.
GEORGANTA, E.; ULFERT, A. Would you trust an AI team member? Team trust in human–AI teams. Journal of Occupational and Organizational Psychology, v. 97, n. 3, p. 1212–1241, 2024. Available at: https://doi.org/10.1111/joop.12504. Accessed on: 8 Sep. 2025.
GILBERT, S. The EU passes the AI Act and its implications for digital medicine are unclear. npj Digital Medicine, v. 7, art. 135, 2024. Available at: https://doi.org/10.1038/s41746-024-01116-6. Accessed on: 8 Sep. 2025.
GOMEZ, C.; CHO, S. M.; KE, S.; HUANG, C.-M.; UNBERATH, M. Human–AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review. Frontiers in Computer Science, v. 6, art. 1521066, 2025. Available at: https://doi.org/10.3389/fcomp.2024.1521066. Accessed on: 8 Sep. 2025.
GRAY, H. M.; GRAY, K.; WEGNER, D. M. Dimensions of mind perception. Science, v. 315, n. 5812, p. 619, 2007. Available at: https://doi.org/10.1126/science.1134475. Accessed on: 8 Sep. 2025.
HARNAD, S. The symbol grounding problem. Physica D: Nonlinear Phenomena, v. 42, n. 1–3, p. 335–346, 1990. Available at: https://doi.org/10.1016/0167-2789(90)90087-6. Accessed on: 8 Sep. 2025.
JOBIN, A.; IENCA, M.; VAYENA, E. The global landscape of AI ethics guidelines. Nature Machine Intelligence, v. 1, p. 389–399, 2019. Available at: https://doi.org/10.1038/s42256-019-0088-2. Accessed on: 8 Sep. 2025.
KANT, I. Critique of Pure Reason. Cambridge: Cambridge University Press, 1998 [A 1781 / B 1787]. Available at: https://doi.org/10.1017/CBO9780511804649. Accessed on: 8 Sep. 2025.
KHAMASSI, M.; NAHON, M.; CHATILA, R. Strong and weak alignment of large language models with human values. Scientific Reports, v. 14, art. 15882, 2024. Available at: https://doi.org/10.1038/s41598-024-70031-3. Accessed on: 8 Sep. 2025.
KHAMITOV, N. Philosophy of science and culture: dictionary. Kyiv: KNT, 2024. Available at: https://surl.li/elches. Accessed on: 8 Sep. 2025.
KOCH, C.; MASSIMINI, M.; BOLY, M.; TONONI, G. Neural correlates of consciousness: progress and problems. Nature Reviews Neuroscience, v. 17, n. 5, p. 307–321, 2016. Available at: https://doi.org/10.1038/nrn.2016.22. Accessed on: 8 Sep. 2025.
KOSINSKI, M. Evaluating large language models in theory of mind tasks. Proceedings of the National Academy of Sciences (PNAS), v. 121, n. 29, art. e2405460121, 2024. Available at: https://doi.org/10.1073/pnas.2405460121. Accessed on: 8 Sep. 2025.
KRYLOVA, S. A. The beauty of the human being in the life practices of culture: the experience of social and cultural meta-anthropology and androgynous analysis. 2nd ed. Kyiv: KNT, 2019. Available at: https://surl.li/daqgvs. Accessed on: 8 Sep. 2025.
KUSCHE, I. Possible harms of artificial intelligence and the EU AI act: fundamental rights and risk. Journal of Risk Research, p. 1–14, 2024. Available at: https://doi.org/10.1080/13669877.2024.2350720. Accessed on: 8 Sep. 2025.
LAVRINENKO, O.; DANILEVIČA, A.; JERMALONOKA, I.; RUŽA, O.; SPRŪDE, M. The mobile economy: effect of the mobile computing devices on entrepreneurship in Latvia. Entrepreneurship and Sustainability Issues, v. 11, n. 3, p. 335–347, 2024. Available at: https://doi.org/10.9770/jesi.2024.11.3(23). Accessed on: 8 Sep. 2025.
LEE, M. S. A.; FLORIDI, L.; SINGH, J. Formalising trade-offs beyond algorithmic fairness: Lessons from ethical philosophy and welfare economics. AI and Ethics, v. 1, p. 529–544, 2021. Available at: https://doi.org/10.1007/s43681-021-00067-y. Accessed on: 8 Sep. 2025.
LUO, X.; RECHARDT, A.; SUN, G.; NEJAD, K. K.; YÁÑEZ, F.; et al. Large language models surpass human experts in predicting neuroscience results. Nature Human Behaviour, v. 9, p. 305–315, 2025. Available at: https://doi.org/10.1038/s41562-024-02046-9. Accessed on: 8 Sep. 2025.
MEDIANO, P. A. M.; ROSAS, F. E.; LUPPI, A. I.; JENSEN, H. J.; SETH, A. K.; BARRETT, A. B.; CARHART-HARRIS, R. L.; BOR, D. Greater than the parts: A review of the information decomposition approach to causal emergence. Philosophical Transactions of the Royal Society A, v. 380, n. 2227, art. 20210246, 2022. Available at: https://doi.org/10.1098/rsta.2021.0246. Accessed on: 8 Sep. 2025.
MELLONI, L.; MUDRIK, L.; PITTS, M.; BENDTZ, K.; et al. An adversarial collaboration protocol for testing contrasting predictions of global neuronal workspace and integrated information theory. PLOS ONE, v. 18, n. 2, art. e0268577, 2023. Available at: https://doi.org/10.1371/journal.pone.0268577. Accessed on: 8 Sep. 2025.
MITCHELL, M.; KRAKAUER, D. The debate over understanding in AI’s large language models. Proceedings of the National Academy of Sciences, v. 120, n. 4, art. e2215907120, 2023. Available at: https://doi.org/10.1073/pnas.2215907120. Accessed on: 8 Sep. 2025.
MITCHELL, S.; POTASH, E.; BAROCAS, S.; D’AMOUR, A.; LUM, K. Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, v. 8, p. 141–163, 2021. Available at: https://doi.org/10.1146/annurev-statistics-042720-125902. Accessed on: 8 Sep. 2025.
MÜLLER, V. C.; BOSTROM, N. Future progress in artificial intelligence: A survey of expert opinion. In: MÜLLER, V. C. (ed.). Fundamental Issues of Artificial Intelligence. Cham: Springer, 2016. p. 555–572. Available at: https://doi.org/10.1007/978-3-319-26485-1_33. Accessed on: 8 Sep. 2025.
NAGEL, T. What is it like to be a bat? The Philosophical Review, v. 83, n. 4, p. 435–450, 1974. Available at: https://doi.org/10.2307/2183914. Accessed on: 8 Sep. 2025.
NASS, C.; MOON, Y. Machines and mindlessness: Social responses to computers. Journal of Social Issues, v. 56, n. 1, p. 81–103, 2000. Available at: https://doi.org/10.1111/0022-4537.00153. Accessed on: 8 Sep. 2025.
NEGRO, N. (Dis)confirming theories of consciousness and their predictions: towards a Lakatosian consciousness science. Neuroscience of Consciousness, v. 2024, n. 1, art. niae012, 2024. Available at: https://doi.org/10.1093/nc/niae012. Accessed on: 8 Sep. 2025.
OIZUMI, M.; ALBANTAKIS, L.; TONONI, G. From the phenomenology to the mechanisms of consciousness: Integrated Information Theory 3.0. PLOS Computational Biology, v. 10, n. 5, art. e1003588, 2014. Available at: https://doi.org/10.1371/journal.pcbi.1003588. Accessed on: 8 Sep. 2025.
OLIYCHENKO, I.; DITKOVSKA, M.; KLOCHKO, A. Digital transformation of public authorities in wartime: The case of Ukraine. Journal of Information Policy, v. 14, p. 686–746, 2024. Available at: https://doi.org/10.5325/jinfopoli.14.2024.0020. Accessed on: 8 Sep. 2025.
OVERGAARD, M.; KIRKEBY HINRUP, A. A clarification of the conditions under which Large Language Models could be conscious. Humanities and Social Sciences Communications, v. 11, art. 1031, 2024. Available at: https://doi.org/10.1057/s41599-024-03553-w. Accessed on: 8 Sep. 2025.
OZMEN GARIBAY, O.; WINSLOW, B.; ANDOLINA, S.; ANTONA, M.; BODENSCHATZ, A.; et al. Six human-centered artificial intelligence grand challenges. International Journal of Human–Computer Interaction, v. 39, n. 3, p. 391–437, 2023. Available at: https://doi.org/10.1080/10447318.2022.2153320. Accessed on: 8 Sep. 2025.
PAGE, M. J.; MCKENZIE, J. E.; BOSSUYT, P. M.; et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ, v. 372, art. n71, 2021. Available at: https://doi.org/10.1136/bmj.n71. Accessed on: 8 Sep. 2025.
PEREVOZOVA, I.; GUBERNAT, T.; HONTAR, L.; SHAYBAN, V.; BOCHAROVA, N. Using big data analytics to improve logistics processes and forecast demand. Pacific Business Review (International), v. 17, n. 4, p. 30–39, 2024. Available at: https://www.pbr.co.in/2024/October3.aspx. Accessed on: 8 Sep. 2025.
POLYEZHAYEV, Y.; TERLETSKA, L.; KULICHENKO, A.; VOROBIOVA, L.; SNIZHKO, N. The role of web applications in the development of multilingual competence in CLIL courses in higher education. Revista Eduweb, v. 18, n. 3, p. 106–118, 2024. Available at: https://doi.org/10.46502/issn.1856-7576/2024.18.03.9. Accessed on: 8 Sep. 2025.
RAHWAN, I.; CEBRIAN, M.; OBRADOVICH, N.; et al. Machine behaviour. Nature, v. 568, p. 477–486, 2019. Available at: https://doi.org/10.1038/s41586-019-1138-y. Accessed on: 8 Sep. 2025.
REGGIA, J. A. The rise of machine consciousness: Studying consciousness with computational models. Neural Networks, v. 44, p. 112–131, 2013. Available at: https://doi.org/10.1016/j.neunet.2013.03.011. Accessed on: 8 Sep. 2025.
RIGLEY, E.; CHAPMAN, A.; EVERS, C.; MCNEILL, W. Anthropocentrism and environmental wellbeing in AI ethics standards: A scoping review and discussion. AI, v. 4, n. 4, p. 844–874, 2023. Available at: https://doi.org/10.3390/ai4040043. Accessed on: 8 Sep. 2025.
SEARLE, J. R. Minds, brains, and programs. Behavioral and Brain Sciences, v. 3, n. 3, p. 417–457, 1980. Available at: https://doi.org/10.1017/S0140525X00005756. Accessed on: 8 Sep. 2025.
SEARLE, J. R. Minds, Brains and Science. Cambridge, MA: Harvard University Press, 1984. Available at: https://www.hup.harvard.edu/books/9780674576339. Accessed on: 8 Sep. 2025.
SETH, A. K.; BAYNE, T. Theories of consciousness. Nature Reviews Neuroscience, v. 23, n. 7, p. 439–452, 2022. Available at: https://doi.org/10.1038/s41583-022-00587-4. Accessed on: 8 Sep. 2025.
SGANTZOS, K.; STELIOS, S.; TZAVARAS, P.; THEOLOGOU, K. Minds and machines: evaluating the feasibility of constructing an advanced artificial intelligence. Discover Artificial Intelligence, v. 4, art. 104, 2024. Available at: https://doi.org/10.1007/s44163-024-00216-2. Accessed on: 8 Sep. 2025.
SHANAHAN, M. Talking about Large Language Models. Communications of the ACM, v. 67, n. 2, p. 68–79, 2024. Available at: https://doi.org/10.1145/3624724. Accessed on: 8 Sep. 2025.
SHASHKOVA, L. Scientific communication in complex social contexts: approaches of social philosophy of science and social epistemology. Proceedings of the National Aviation University. Series: Philosophy. Culturology, v. 39, n. 1, p. 23–28, 2024. Available at: https://doi.org/10.18372/2412-2157.39.18442. Accessed on: 8 Sep. 2025.
STRACHAN, J. W. A.; ALBERGO, D.; BORGHINI, G.; et al. Testing theory of mind in large language models and humans. Nature Human Behaviour, v. 8, p. 1285–1295, 2024. Available at: https://doi.org/10.1038/s41562-024-01882-z. Accessed on: 8 Sep. 2025.
TADDEO, M.; BLANCHARD, A. A comparative analysis of the definitions of autonomous weapons systems. Science and Engineering Ethics, v. 28, art. 37, 2022. Available at: https://doi.org/10.1007/s11948-022-00392-3. Accessed on: 8 Sep. 2025.
TAO, Y.; VIBERG, O.; BAKER, R. S.; KIZILCEC, R. F. Cultural bias and cultural alignment of large language models. PNAS Nexus, v. 3, n. 9, 2024. Available at: https://doi.org/10.1093/pnasnexus/pgae346. Accessed on: 8 Sep. 2025.
TRUBA, H.; RADZIIEVSKA, I.; SHERMAN, M.; DEMCHENKO, O.; KULICHENKO, A.; HAVRYLIUK, N. Introduction of Innovative Technologies in Vocational Education Under the Conditions of Informatization of Society: Problems and Prospects. Conhecimento & Diversidade, v. 15, n. 38, p. 443–460, 2023. Available at: https://doi.org/10.18316/rcd.v15i38.11102. Accessed on: 8 Sep. 2025.
TSEKHMISTER, Y.; KONOVALOVA, T.; BASHKIROVA, L.; SAVITSKAYA, M.; TSEKHMISTER, B. Virtual Reality in EU Healthcare: Empowering Patients and Enhancing Rehabilitation. Journal of Biochemical Technology, v. 14, n. 3, p. 23–29, 2023. Available at: https://doi.org/10.51847/r5WJFVz1bj. Accessed on: 8 Sep. 2025.
TURING, A. M. Computing machinery and intelligence. Mind, v. LIX, n. 236, p. 433–460, 1950. Available at: https://doi.org/10.1093/mind/LIX.236.433. Accessed on: 8 Sep. 2025.
VACCARO, M.; ALMAATOUQ, A.; MALONE, T. W. When combinations of humans and AI are useful: a systematic review and meta-analysis. Nature Human Behaviour, v. 8, p. 2293–2303, 2024. Available at: https://doi.org/10.1038/s41562-024-02024-1. Accessed on: 8 Sep. 2025.
VEALE, M.; ZUIDERVEEN BORGESIUS, F. Demystifying the Draft EU Artificial Intelligence Act – Analysing the good, the bad, and the unclear elements. Computer Law Review International, v. 22, n. 4, p. 97–112, 2021. Available at: https://doi.org/10.9785/cri-2021-220402. Accessed on: 8 Sep. 2025.
VERED, M.; LIVNI, T.; HOWE, P. D. L.; MILLER, T.; SONENBERG, L. The effects of explanations on automation bias. Artificial Intelligence, v. 320, art. 103952, 2023. Available at: https://doi.org/10.1016/j.artint.2023.103952. Accessed on: 8 Sep. 2025.
WAYTZ, A.; HEAFNER, J.; EPLEY, N. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, v. 52, p. 113–117, 2014. Available at: https://doi.org/10.1016/j.jesp.2014.01.005. Accessed on: 8 Sep. 2025.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright (c) 2025 Synesis (ISSN 1984-6754)