The uncontrolled use of artificial intelligence can undermine peace, says Nikos Dendias

“Artificial intelligence is gradually becoming a key factor in defense planning here in our country. It shapes not only a technological reality, but also the architecture of international security,” said the Minister of National Defense, Nikos Dendias, declaring open today the “ATHENS – 25” international crisis management conference on “Armed Forces and Crisis Management in the Age of Artificial Intelligence”.

“Defense policies cannot be implemented unless the adequacy of algorithmic systems and interpretive tools, Big Data processing capacity and automated risk assessment is ensured. The integration of artificial intelligence into the military field has also influenced the ways and means of conducting operations,” Mr. Dendias said, adding that “of particular concern and reflection is the development of military systems with machine learning capabilities.”

In the same vein, “these systems may function as ‘black boxes’ – to use a familiar expression – with non-transparent, evolving and, I think for the wider public, incomprehensible decision-making mechanisms,” the Minister of National Defense noted, adding that “the uncontrolled use of artificial intelligence can equally undermine peace, security and stability worldwide.”

It should be noted that the conference is organized under the auspices of the Ministry of National Defense by the Hellenic National Defence General Staff (GEETHA), in collaboration with the Directorate-General for National Defence Policy & International Relations (GSPAD), at the Amphitheater of the Evelpidon Military Academy (SSE).

The conference was attended by the Chief of the Hellenic National Defence General Staff, General Dimitrios Houpis; ambassadors of foreign countries to Greece; the Chief of the Hellenic Army General Staff, Lieutenant General George Kostidis; Vice Admiral Spyridon Tsiafoutis HN, representing the Chief of the Hellenic Navy General Staff; the Chief of the Hellenic Coast Guard, Vice Admiral HCG Tryfon Kontizas; the Secretary General of National Security, Thanos Dokos; the Director-General of GSPAD, Ambassador Michael Spinellis; a representative of the Chief of the Hellenic Police; senior Armed Forces officers; defence attachés; academics; and military personnel.

In his address, Mr. Dendias pointed out:

“It is a special honor for me to welcome you to the Evelpidon Military Academy to take part in the two-day international conference ‘ATHENS 2025’, entitled ‘Armed Forces and Crisis Management in the Age of Artificial Intelligence’.

As the rapid evolution of artificial intelligence technologies continues to shape the global security environment, it is imperative that all stakeholders engage in a constructive, future-oriented dialogue.

Together we must not only face the complex challenges and ethical dimensions that arise from the integration of artificial intelligence into the field of defense, but also explore the transformative opportunities it presents – from strategic advantages and operational efficiency to increased protection of citizens.

The ATHENA international crisis management conference is a forum for the exchange of views on a timely and highly topical subject that, of course, concerns our Armed Forces: the convergence of artificial intelligence with crisis management and the conduct of operations on the ground.

Artificial intelligence is gradually becoming a key factor in defense planning here in our homeland.

It shapes not only a technological reality, but also the architecture of international security.

Defense policies cannot be implemented unless the adequacy of algorithmic systems and interpretive tools, Big Data processing capacity and automated risk assessment is ensured.

The integration of artificial intelligence into the military sector has also influenced the ways and means of operations.

The military applications of artificial intelligence have offered significant, not to say absolutely decisive, advantages to operations:

· Faster decision-making thanks to the accelerated functioning of Command and Control systems.

· Greater targeting accuracy and reduced losses.

· Detection, identification, evaluation and prioritization of threats.

· Better real-time situational awareness of operations.

· Supply chain support.

· Strengthening human abilities in complex conditions.

· Effective processing of enormous volumes of data in minimal time, as well as the training of personnel on modern platforms through the generation of realistic, complex scenarios that adjust automatically to the trainees' actions and decisions, together with the extraction of analyses, conclusions and evaluations.

All of this seems positive; however, we cannot ignore the fact that technological progress comes with a number of complex and multidimensional legal, ethical, technological and political challenges, which require deep analysis and careful evaluation.

The uncontrolled use of artificial intelligence can equally undermine peace, security and stability worldwide.

Technological capabilities are spreading at enormous speed – you all know this. The possibility, and the risk, of their use by non-state actors, even terrorist organizations, but also by countries and state bodies that openly revise and ignore, not to say trample on, international law in order to serve their own geopolitical aims, is also increasing. They thus undermine regional and international stability and security.

Of particular concern and reflection is the development of military systems with machine learning capabilities.

These systems may function as “black boxes” – to use a familiar expression – with non-transparent, evolving and, I think for the wider public, incomprehensible decision-making mechanisms.

Moreover, the possible use of generative artificial intelligence (Generative AI) in military equipment adds yet another level of complexity, not to say uncertainty.

This is because such systems will probably be able to act autonomously, producing new solutions and analyzing new data on their own.

They will thus be able to adapt to rapidly changing conditions, but without human supervision.

These are astonishing capabilities which, precisely for this reason, require close and rigorous control.

A fundamental question arises before us: whether it is technically possible for an artificial intelligence algorithm, even the most sophisticated, to incorporate complex legal and value-based concepts of international humanitarian law, such as the principles of distinction and proportionality.

And to apply these absolutely crucial principles in a rapidly evolving operational environment.

The design must ensure that any use of new technology is governed by a clear institutional framework.

Artificial intelligence is not called upon to replace or substitute for the human factor; it is called upon to strengthen and facilitate it.

We cannot, therefore, ignore the increasing risk of complete automation in life-and-death decisions.

There is the well-known incident of the Soviet officer Stanislav Petrov, who in 1983 recognized in time that a reported missile attack by the United States on the then Soviet Union was a false alarm.

His reaction, disregarding the data before him, I believe saved humanity, but it will always remind us how close we can come to catastrophe.

Those of you, by the way, who have not devoted an hour and a half of your life to watching Dr. Strangelove with Peter Sellers, I would urge you to do so. Apart from the fact that you will laugh a great deal, it is an excellent half-century-old film that gives us the opportunity to perceive, caustically, the environment created by subjugation to data without the human factor as the final criterion.

For this reason, it is absolutely necessary to establish clear operational boundaries and clear legal restrictions, to ensure that the human being always remains at the heart of critical decisions, especially when lethal force is used.

However familiar we may have become with the exercise of lethal violence, it remains an enormous moral leap for humanity, and in particular – allow me to say – for Christian humanity.

The acceptance of lethal violence requires the consideration and weighing of a series of moral parameters that cannot be detached from human conscience and individual judgment.

The person who decides to take the life of a fellow human being must be able to justify that decision to his Creator.

We must, as a final observation, remain a human society.

With these thoughts, I wish everyone a productive and constructive discussion – of which I am certain – so that we can form a common, responsible stance towards the rapid developments.

I hope the conclusions of this meeting will help us work out how we should proceed from here.

I wish you every success, and I would like, Mr. Ambassador, Mr. Spinellis, to congratulate you and your colleagues on this event.

Thank you very much”.

