The Ethical and Editorial Challenges of Artificial Intelligence in Knowledge Production for Urban Studies

Rodrigo Firmino, Associate Editor of the journal urbe, Curitiba, Paraná, Brazil.

The recent advancement of Artificial Intelligence (AI) in scientific research, manuscript writing, and editorial workflows reframes, in new terms, issues that have long permeated the field of Urban Studies: technological mediation, power asymmetries, system opacity, and the unequal production of space and knowledge. In the context of open science, this discussion must be treated not as a mere technical update, but as an epistemological, ethical, and political problem. The SciELO Network’s statement in support of open science with IDEIA articulates this vision by linking impact, diversity, equity, inclusion, and accessibility to the need for sound ethical practices throughout the entire research cycle, within models that acknowledge differences among disciplines, institutions, and regions.

In the urban context, this debate is particularly sensitive. Technologies never function as neutral tools: they shape territorialities, distribute visibility and invisibility, redefine boundaries, and normalize specific forms of control. This formulation helps us consider AI in the realm of scientific output as well: it is a socio-technical infrastructure that, by operating in an opaque and asymmetrical manner, can silently reconfigure the criteria of legibility, authority, and validation of knowledge.

The first ethical problem, therefore, is that of opacity. Generative systems produce texts, abstracts, preliminary reports, linguistic revisions, and bibliographic syntheses with little clarity regarding how they arrived at those results. In Urban Studies, where context, historicity, and the position of enunciation are decisive, this opacity is not irrelevant. It can erase mediations, dissolve conflicts, and transform concrete inequalities into abstract and semantically acceptable descriptions. The risk lies not only in factual “errors” or invented references, but in the fabrication of an appearance of coherence that renders less visible the processes of simplification, exclusion, and framing operated by the system.

The second issue is that of accountability. AI tools do not claim authorship, do not take responsibility for analytical choices, and cannot uphold ethical commitments regarding data, stakeholders, or interpretive consequences. This is why editorial policies generally reject AI as an author and require transparency whenever its use substantially influences textual, analytical, or visual content. There is also a growing consensus that reviewers should not submit manuscripts to generative AI platforms, precisely because this compromises the confidentiality of the review, the integrity of the editorial process, and, potentially, the copyright of unpublished texts. This point is central. If peer review depends on trust, confidentiality, and intellectual responsibility, its outsourcing—even partial—to opaque systems erodes the very legitimacy of the process.

In the recent editorial debate, we observe the consolidation of a minimum set of norms for the use of AI in scholarly communication. There is convergence, for example, regarding the impossibility of attributing authorship to AI systems, the requirement for transparency when generative tools substantially interfere with the drafting or preparation of materials, and the prohibition against submitting manuscripts under review to external platforms. This convergence is significant because it shows that the issue is no longer limited to the instrumental use of technology but extends to the foundational principles of scientific output: intellectual responsibility, methodological traceability, editorial confidentiality, and copyright protection. The concern, therefore, is not limited to the accuracy of the content produced but extends to the very institutional conditions that ensure the legitimacy of published knowledge.

At the same time, significant areas of ambiguity remain. While the use of AI for linguistic review tends to be more readily accepted, its application to image generation, data processing, analytical support, or the preparation of expert opinions remains fraught with restrictions, caveats, and formulations that are still far from settled. This indicates that we are dealing with a regulatory field in the making, in which certain ethical boundaries have already been recognized, but others remain in dispute. For Urban Studies, this lack of definition is particularly problematic because it opens the door to the normalized use of systems that can distort evidence, oversimplify complex contexts, reinforce biases, and compromise the interpretive integrity of research grounded in profoundly unequal territorial realities. Hence the importance of clear, situated, and publicly justified editorial policies capable of addressing not only the technical risks of AI but also its epistemological and political implications.

There is also a third ethical axis, less debated but decisive for open science: that of geopolitical asymmetry in knowledge production. Generative models are trained on unequal bases, with strong linguistic, editorial, and geographic concentrations. Consequently, they tend to favor argumentative styles, bibliographic repertoires, and regimes of evidence closer to the hegemonic centers of scientific circulation. In a field such as Urban Studies, this can reinforce a double marginalization: that of peripheral urban subjects and that of non-hegemonic ways of narrating, interpreting, and theorizing the city. Rather than expanding epistemic plurality, AI may contribute to the standardization of scientific language and to the reproduction of hierarchies already established in the geopolitics of knowledge.

It is at this point that the connection with IDEIA becomes more substantive. Impact should not be confused with a focus on increasing productivity; diversity is not merely the formal multiplication of voices if the systems of mediation tend to homogenize them; equity requires recognizing material inequalities in access to computational infrastructure and technical literacy; inclusion demands attention to the ways in which certain modes of writing and knowing are silently discredited; and accessibility must include transparency regarding how texts, opinions, and decisions were produced. Thinking critically about AI based on these principles means shifting the debate from technological enthusiasm to the realm of the concrete conditions of production, circulation, and validation of scientific knowledge.

Therefore, rather than generically permitting or prohibiting AI, scientific journals must formulate specific, public, and verifiable policies for authors, reviewers, and editorial teams. The editorial policies of Revista urbe, recently updated, guide its editorial community to distinguish technical support from substantive intervention; require a statement of use when there is an impact on writing, analysis, or image production; prohibit the submission of manuscripts to external systems during review; and reaffirm, in all cases, full human responsibility for published content. The initiative not only responds to a technological trend but also reaffirms an editorial stance consistent with the promotion of open science committed to the production of urban knowledge rooted in Latin America.

The challenge, therefore, does not lie in abstractly embracing or resisting AI, but in building forms of editorial and scientific governance capable of subjecting these tools to public principles, explicit accountability, and transparent criteria. For Urban Studies, this means insisting that innovation without reflection deepens inequalities, that openness without regulation can compromise rights, and that scientific integrity today also depends on the ability to make visible the limits, risks, and conditions of use of these technologies.

External links

urbe. Revista Brasileira de Gestão Urbana – SciELO

urbe. Revista Brasileira de Gestão Urbana – Social media: Facebook | X | Instagram

Pontifícia Universidade Católica do Paraná (PUCPR) – Social media: Facebook | X | Instagram

 

How to cite this post [ISO 690/2010]:

FIRMINO, R. The Ethical and Editorial Challenges of Artificial Intelligence in Knowledge Production for Urban Studies [online]. SciELO in Perspective: Humanities, 2026 [viewed ]. Available from: https://humanas.blog.scielo.org/en/2026/05/06/the-ethical-and-editorial-challenges-of-artificial-intelligence-in-knowledge-production-for-urban-studies/

 
