p-ISSN 2980-4868 | e-ISSN 2980-4841
https://ajesh.ph/index.php/gp
Legal Uncertainty in Criminal Law Enforcement through the
Utilization of Artificial Intelligence Technology in Indonesia
Agus Nawawi1*, Azis Budianto2, Rineke
Sara3
1,2,3Universitas Borobudur, East
Jakarta, DKI Jakarta, Indonesia
ABSTRACT
The integration of Artificial Intelligence
(AI) technology in law enforcement has become a significant development in
Indonesia's information technology landscape. The use of AI in law enforcement
presents substantial challenges, including issues of accountability, privacy
concerns, and ethical implications. This study aims to evaluate the
effectiveness of existing regulations in addressing the use of AI technology in
criminal law enforcement in Indonesia and to identify the need for
comprehensive legal reforms. The findings indicate that although regulatory
frameworks exist, their effectiveness in managing AI applications in criminal
law enforcement remains inadequate. There is an urgent need to update the laws
to accommodate the rapid advancements in AI and to address emerging legal
uncertainties. Comprehensive legal reforms are essential to ensure that
AI-enabled law enforcement can be conducted effectively and in accordance with
fundamental legal principles.
Keywords: Artificial Intelligence; Law Enforcement; Legal
Uncertainty
INTRODUCTION
Amid the globalization and digital transformation sweeping the world, Indonesia has not escaped the current wave of technological and informational development. As internet penetration and smartphone adoption increase across the country, Indonesians are increasingly connected to the digital world.
One technology that is increasingly
dominating and frequently used in various fields is Artificial
Intelligence (AI) technology.
AI refers to the development of computer systems capable of performing tasks that typically require human intelligence, such as natural language processing, pattern recognition, and decision-making. In essence, AI takes the form of a machine that can imitate human actions, can be developed using knowledge of human thinking, and can carry out human reasoning procedures. Because AI can perform activities in the same way humans do, it often causes unrest in society. The emerging legal uncertainty over AI's ability to perform the same legal acts as humans is a source of significant concern, because AI's sophistication has already surpassed human capabilities in several respects.
For example, in the case of traffic
accidents involving autonomous vehicles controlled by AI, the question of who is responsible for the accident
becomes complicated. Should the vehicle
owner, the manufacturer of the AI device, or
even the AI itself be held
responsible? Ambiguity about the legal
status of AI may result in difficulties in determining liability and paying compensation in such cases. Additionally, in law enforcement, if AI is used
to monitor and analyze criminal
activity, questions about privacy and fairness may arise
because AI does not understand the social or ethical
context that might influence decision-making. Therefore, a clear and comprehensive legal framework is necessary to address these uncertainties and ensure that the use of AI in legal contexts occurs fairly and safely.
RESEARCH METHODS
The research employs a normative juridical method, combining a legislative approach with qualitative descriptive analysis. Through the legislative
approach, this research will examine
the legal framework governing the enforcement of criminal acts through
the utilization of Artificial Intelligence (AI) technology in Indonesia. The qualitative descriptive analysis approach is used
to depict and analyze the legal uncertainties
arising from the application of AI technology in law enforcement, including related challenges, issues, and implications. Thus, this research
aims to provide an in-depth understanding
of how law and AI technology interact in the context of criminal law enforcement
in Indonesia, as well as to
identify steps that can be
taken to address emerging legal uncertainties.
RESULTS AND DISCUSSION
AI was created to embody intelligence and cleverness in performing tasks like those carried out by humans involving reasoning, thinking, knowledge, decision-making, and problem-solving. AI can use its knowledge and think like a human to solve existing problems. Thus, AI, which thinks and acts like humans,
can also engage in legal acts. In the ITE Law, there are no specific rules regarding the definition
or use of AI. If we look at Article 1 Paragraph (1) of the ITE Law, legal subjects
consist of senders, receivers, individuals, corporations, and the government. Thus, AI is not classified
as a legal subject.
Salmond states that in legal theory,
an individual is someone whom
the law considers
to have the ability to have rights and obligations. Any individual with such ability
is considered a legal subject, even if they
are not human. Salmond explains that during the era of slavery, humans held as slaves were not considered legal subjects or individuals by the law. Conversely, entities that are not human but are designated by the law as legal subjects are regarded as individuals with rights and obligations equal to those of humans.
Therefore, there is legal uncertainty
regarding the position of AI, which brings various impacts on the process of criminal law enforcement in Indonesia. The main impact is
the difficulty in identifying responsibility for the use
of AI for criminal law enforcement in Indonesia. AI producers may be one
of the parties responsible for AI actions that violate
the law. However, in many cases, AI producers may be difficult
to determine, especially if the AI is
a product of joint development or open-source. Additionally, AI users can also be
held responsible, especially if the
user does not use the AI properly
or disregards proper protocols in its implementation. However, assigning responsibility to users can also
be challenging, especially if users
do not fully understand or have full control over AI behavior. In addition to producers and users, there is also
the question of whether AI itself should be responsible
for its actions.
However, attributing responsibility to AI as a non-human
entity also has complicated
legal and ethical implications.
Another impact of using AI in law enforcement is data misuse. AI requires extensive access to individual data to train its
algorithms, which can threaten individual
privacy if the data is
not processed correctly or is accessed unlawfully. Data misuse may include the use of data without authorization or for impermissible purposes, such as unwanted or discriminatory surveillance. Additionally, the use of AI in law enforcement also has the potential
to reinforce biases in algorithms. AI algorithms tend to make decisions
based on existing training data, which may reflect
biases inherent in that data. It
can result in discrimination or injustice in legal decisions, such as unfair racial or social profiling or disproportionate treatment of individuals.
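The mechanism behind such bias can be shown with a deliberately simplified sketch. All data here are hypothetical (the groups, counts, and threshold are invented for illustration; no real policing dataset or production model is involved): a "model" that simply learns per-group arrest frequencies from historical records reproduces whatever over-policing is encoded in those records.

```python
# Minimal illustration (hypothetical data): a model that learns
# "risk" from historical arrest records inherits any bias in them.
from collections import defaultdict

# Each record: (group, was_arrested). Group "A" was historically
# over-policed, so it has more recorded arrests than group "B"
# even at the same underlying offence rate.
history = [("A", True)] * 60 + [("A", False)] * 40 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train_group_rates(records):
    """'Training' here is just the per-group arrest frequency."""
    totals, arrests = defaultdict(int), defaultdict(int)
    for group, arrested in records:
        totals[group] += 1
        arrests[group] += arrested
    return {g: arrests[g] / totals[g] for g in totals}

model = train_group_rates(history)

# A threshold-based decision now flags every member of group "A"
# as "high risk" and no member of group "B", purely because of
# the skew in the training labels.
THRESHOLD = 0.5
decision = {g: ("high risk" if r >= THRESHOLD else "low risk")
            for g, r in model.items()}
print(model)     # {'A': 0.6, 'B': 0.3}
print(decision)  # {'A': 'high risk', 'B': 'low risk'}
```

The point of the sketch is that no one programmed discrimination into the system: the disparate outcome follows mechanically from skewed training data, which is why oversight of data sources matters as much as oversight of algorithms.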
For example, in Indonesian law enforcement practice, AI use in driving, such as autopilot features, can give rise to criminal acts and losses when errors occur. Although autopilot is designed to improve driving safety and comfort, errors or failures in the technology can cause such harm. In cases of vehicle accidents involving autopilot, questions about who is responsible
for the accident
become complicated. Is it the
driver using the autopilot feature, the vehicle manufacturer developing the technology, or even the regulatory system allowing the use of autopilot? Additionally, AI technology failure in detecting emergencies or changes in road conditions can lead to accidents
potentially resulting in loss of life or property, which may be treated
as criminal acts.
Furthermore, facial recognition technology in AI-based crime monitoring and perpetrator identification systems that draw on police data raises concerns regarding individual privacy and potential data misuse, as well as the tendency of algorithms to introduce racial or social biases in identifying criminals. A similar concern applies to AI-based facial recognition in public security systems, such as those at train stations or airports. Although this technology is intended to enhance security by detecting individuals involved in criminal activities, there is a potential for errors in facial identification: AI systems may misidentify individuals not involved in crimes as suspects, or conversely, fail to identify the actual perpetrators.
Such errors can result in criminals
evading surveillance, while innocent individuals may become victims of injustice or discrimination.
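The misidentification risk described above stems from how face matchers generally operate: each comparison yields a similarity score that is checked against a decision threshold. The sketch below uses invented scores and an invented threshold (no real biometric system or vendor API is modelled) to show how a single threshold can produce a false match and a missed match at the same time.

```python
# Hypothetical similarity scores returned by a face matcher for one
# probe image (1.0 = identical, 0.0 = no resemblance). All names
# and values are invented for illustration.
scores = {
    "actual_perpetrator": 0.58,  # degraded by a poor CCTV frame
    "innocent_lookalike": 0.74,  # happens to resemble the probe
    "unrelated_person":   0.21,
}

def matches(scores, threshold):
    """Return everyone whose score clears the decision threshold."""
    return {name for name, s in scores.items() if s >= threshold}

# At a threshold of 0.7 the system names the wrong person and misses
# the real one: a false positive AND a false negative from one query.
hits = matches(scores, threshold=0.7)
print(hits)  # {'innocent_lookalike'}
```

Lowering the threshold recovers the true perpetrator but keeps the innocent look-alike flagged as well, which is exactly the trade-off that makes threshold choice a legal and policy question, not merely a technical one.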
These cases raise legal uncertainty
and concerns in determining
accountability in criminal law enforcement in Indonesia. Legal uncertainty arises due to the lack
of clear regulations regarding the use
of AI technology in law enforcement, leading to controversial decisions or actions taken by law enforcement authorities that may result in different
interpretations. Additionally,
concerns about accountability arise due to the difficulty
in determining who is responsible for the errors
or failures of AI technology
in the law enforcement process, whether it be
the manufacturer, user, or the AI technology itself.
The existing regulations in Indonesia currently do not include specific legislative rules governing the use
of AI in law enforcement. Although the ITE Law currently regulates the use of technology, it focuses more on technical aspects and electronic transactions than on the use of AI technology
in the context of law enforcement. Thus, there is
legal uncertainty and a legal vacuum regarding
how AI should be regulated and supervised in law enforcement. Existing regulations do not provide sufficiently clear guidance on the responsibilities, authorities, or limitations of AI
usage by law enforcement agencies. This leaves room for misuse, uncertainty, and potential violations of human rights and individual privacy in the use of AI technology for law enforcement.
This contrasts with other jurisdictions. The United States has issued the Criminal Justice Information Services (CJIS) Security Policy, which regulates data security and privacy standards relevant to AI usage. Within the European Union, the General Data Protection Regulation (GDPR) governs privacy and personal data protection, including data used in AI. Additionally, China's Cybersecurity Law of the People's Republic of China regulates data and information security, including data used in AI.
Therefore, it can be concluded
that there is a lack of specific
and comprehensive regulations
regarding the use of AI technology in the context of law enforcement in Indonesia, where the ITE Law has not been able
to fully address the challenges and issues arising from the development
of AI technology. There is ambiguity regarding
the responsibility and accountability in the use of AI technology by law enforcement agencies. It raises
the potential for abuse of power
and human rights violations, as well as increasing the risk of injustice in the law enforcement process.
Regulations in Indonesia remain inadequate in the face of emerging challenges: existing rules cannot yet fully address issues such as data privacy, security,
and justice related to the use of AI in law enforcement. AI technology usage in law enforcement often involves the collection, storage, and analysis of individuals' data. However, existing regulations have not provided adequate
protection for the privacy of individuals involved in these processes. It can lead
to the potential misuse of personal data by law enforcement
agencies or other parties, as well as concerns about privacy and human rights violations. AI technology utilization in law enforcement is vulnerable to cyber security risks, such as hacker attacks or data manipulation. Existing regulations have not been able
to ensure that adequate security measures are implemented at all stages of AI technology usage by law enforcement agencies, from data collection to analysis. The use
of AI technology in law enforcement can introduce biases in decision-making or suspect identification. Existing regulations have not effectively addressed this issue and need to strengthen oversight and assessment mechanisms to ensure that the
use of AI technology is conducted fairly
and without discrimination.
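On the data protection side, one concrete safeguard such strengthened regulations could mandate is pseudonymization of direct identifiers before personal data enters an analysis pipeline. The sketch below is a minimal illustration under assumed conditions (the record layout, field names, and key are all hypothetical); a real deployment would additionally require key management, access control, and audit logging.

```python
# Sketch of pseudonymization before analysis: direct identifiers are
# replaced with keyed hashes, so analysts can still link records that
# belong to the same person without learning who that person is.
# Record layout and key are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"held-only-by-the-data-controller"  # placeholder key

def pseudonymize(record, fields=("name", "national_id")):
    """Return a copy of `record` with identifier fields replaced by
    deterministic keyed hashes (same input -> same pseudonym)."""
    out = dict(record)
    for field in fields:
        if field in out:
            digest = hmac.new(SECRET_KEY, str(out[field]).encode(),
                              hashlib.sha256).hexdigest()
            out[field] = digest[:16]  # shortened pseudonym
    return out

record = {"name": "Budi", "national_id": "3171234567890001",
          "location": "Jakarta"}
safe = pseudonymize(record)
```

Because the hash is keyed and deterministic, records for the same individual remain linkable for analysis, while re-identification requires the key held by the data controller. This is the kind of verifiable technical obligation a regulation can impose and an oversight body can audit.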
Several considerations support updating the existing regulations on the use of AI technology in crime prevention in Indonesia. AI technology has developed rapidly in recent years, resulting
in various new applications
that can be used in law
enforcement, such as big data analysis, facial recognition, and automatic text analysis. At the same time, the current use of AI technology poses risks of privacy violations if not
properly regulated. Additionally, there are cyber security risks that need
to be considered, such as hacker attacks or data manipulation that can threaten
the integrity of the system. Another consideration is that the use
of AI algorithms can lead to unintended bias or discrimination in decision-making, which can threaten principles
of justice and human rights.
These conditions demand a swift and appropriate response from the government
to address new challenges arising from technological
advancements. The need for more comprehensive
and detailed regulations becomes increasingly urgent to ensure that the use
of AI technology in law enforcement is not only effective
but also fair, transparent,
and in line with fundamental legal principles. Without timely updates, the risks of legal
uncertainty and potential misuse of AI technology in criminal law may
increase, threatening the integrity of the judicial system and individual rights.
In the development of a new regulatory framework to accommodate the use of AI technology
in crime prevention in Indonesia, several aspects need to be considered. Firstly, there is a need for
a comprehensive review of existing regulations to identify deficiencies and gaps in regulating the use of AI technology
in the realm of criminal law. Subsequently,
clear and detailed rules and guidelines need to be formulated
to govern various aspects of AI technology usage, including but not limited
to data collection, storage, and analysis, as well as algorithm implementation and cybersecurity.
Additionally, effective oversight mechanisms are required to monitor and evaluate the implementation of these regulations. This may involve
the establishment of a specialized body or institution responsible for overseeing the use of AI technology in law enforcement, as well as the development
of reporting systems and enforcement mechanisms to ensure that the
established rules are properly followed by law enforcement agencies.
Therefore, there is a need for
an update to the ITE Law, which currently
does not specifically regulate the use of AI technology
in the context of law enforcement, leading to gaps and legal uncertainties in its utilization. Provisions are needed to regulate the use
of AI technology in its various aspects, ranging from data
collection and processing
to the use of algorithms for analysis and law enforcement purposes. Thus, existing regulations can accommodate the latest technological advancements and anticipate challenges and risks that may arise
in its utilization. There is a need
for provisions that strengthen the protection of individual data privacy in the context of AI technology usage. The use
of AI technology often involves the collection
and analysis of personal data, so existing regulations must provide adequate protection for individual personal data and prevent its misuse.
There is also a need for
regulations governing ethical aspects and principles of justice in its use to avoid
the potential for bias or discrimination
in AI-based decision-making
and to ensure that the use of AI technology
in law enforcement remains in line with fundamental legal values and human rights. Thus,
the revision of the ITE Law can
address legal uncertainties, protect data privacy, and ensure that the
use of AI technology in law enforcement remains consistent with fundamental legal principles and humanitarian values.
In addition to revising the ITE Law, the government
can also establish government regulations or other implementing regulations to regulate the use
of AI technology in law enforcement in more detail, aligning the rules with technological developments and increasingly complex law enforcement
needs. These regulations are expected to specifically address procedures for data collection and processing in the context of AI technology usage by law enforcement
agencies, data security standards to be complied with,
as well as oversight mechanisms and accountability to be applied by relevant
agencies. Additionally, these regulations can address ethical
standards in the use of AI algorithms for decision-making, transparency in AI technology usage, and mechanisms for dispute resolution
related to the use of AI technology in law enforcement. These regulations can also provide
greater flexibility for the government to adapt regulations to ongoing technological
advancements.
With legal updates related to the use of AI technology
in crime prevention in Indonesia, it is
hoped that a more comprehensive and responsive regulatory framework can be created
to address technological advancements and maintain a balance between technological innovation and the protection of individual rights. These legal updates
are expected to help reduce legal uncertainties
that may arise in the use
of AI technology in law enforcement. With clearer and more detailed rules, law enforcement
agencies will have stronger guidelines
to regulate and oversee the use of AI technology
in various aspects of law enforcement. Additionally, legal updates are expected to strengthen privacy protection and human rights in the use
of AI technology so that people can feel
safer and more protected in
the increasingly complex digital environment. Thus, legal updates related
to the use of AI in crime prevention in Indonesia can provide
a solid foundation for effective, transparent, and
fair law enforcement in this digital era.
CONCLUSION
The impact
of the utilization of Artificial Intelligence (AI) technology in the criminal law enforcement
process in Indonesia has significant implications for the effectiveness
and transparency of the legal system. The
use of AI in crime detection, analysis, and prediction can help improve law
enforcement efficiency but also poses
various challenges related to responsibility identification, privacy, and ethics. Legal uncertainties
arising from the use of AI technology
also demand a swift and appropriate response in developing regulations that are suitable for the
context and ongoing technological developments.
The effectiveness
of regulations in Indonesia
in addressing the use of AI technology for criminal law
enforcement still has room for improvement.
Although some regulations exist, they do not specifically
govern the use of AI technology in the context of law enforcement. It creates the
need for more comprehensive and detailed legal updates to accommodate the development of AI technology and anticipate potential risks and challenges.
Copyright holder: Agus Nawawi, Azis Budianto, Rineke Sara (2024)
First publication right: Asian Journal of Engineering, Social and Health (AJESH)
This article is licensed under: