Technology and Law | Future Landscape with AI

As the European Union (EU) prepares to enact wide-ranging legislation dedicated to regulating artificial intelligence (AI) systems, 2023 is shaping up to be a pivotal year for AI and legal developments. The EU Artificial Intelligence Act (European AI Act) is intended to apply directly in all twenty-seven member states, without needing to be transposed into national law.

In early 2023, a key committee of the European Parliament advanced the enactment process by approving the proposed AI regulation, the first of its kind anywhere in the world.


The regulation adopts a risk-based approach to regulating artificial intelligence. The European AI Act, still at the proposal stage, sets out requirements for developers of “foundation models” such as ChatGPT, including provisions ensuring that training data does not infringe copyright law.

This committee approval marks a landmark development in the race to control artificial intelligence, which is advancing at an incredible pace. Once adopted, the European AI Act will be the first comprehensive AI law in the West. China, which only began taking serious steps on data protection last year, has drafted rules to govern how companies develop generative AI products like ChatGPT, driven by the same fear of losing control over the technology.

The new regulation takes a risk-based approach to controlling AI threats: the obligations imposed on a system are proportional to the level of risk it poses.

The rules also seek to specify requirements for providers of so-called “foundation models” such as ChatGPT, which have become a major concern for regulators amid fears that even skilled workers could be displaced.

What do the rules in the regulation say?

Once the regulation comes into force, AI systems placed on the EU market will be subject to the European AI Act on a sliding scale based on the risk posed by their intended use. For example:

Social scoring systems and artificial intelligence that remotely monitor people in real time in public places will be banned in most cases.

AI applications used in medical devices or for recruitment purposes will be classified as high-risk and subject to strict and wide-ranging conformity assessment procedures.

The use of deepfakes will be subject to transparency rules to ensure the public knows the nature of the technology they are dealing with.

The European AI Act divides AI applications into four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk.

Applications in the unacceptable-risk category are banned outright.

What falls into the banned category?

-Artificial intelligence systems that use manipulative or deceptive techniques to target the subconscious and distort behavior

-Artificial intelligence systems that exploit the vulnerabilities of individuals or specific groups

-Biometric categorization systems based on sensitive characteristics

-Artificial intelligence systems used for social scoring or to assess trustworthiness

-Artificial intelligence systems used for risk assessments that predict criminal or administrative offenses

-Artificial intelligence systems that build or extend facial recognition databases through untargeted scraping

-Artificial intelligence systems that infer emotions in law enforcement, border management, the workplace, and education
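The four-tier structure described above can be sketched as a simple lookup, purely as an illustration. The tier names come from the article; the mapping of example use cases to tiers and the function name `obligations` are assumptions for illustration, not legal advice.

```python
# Illustrative sketch of the AI Act's four risk tiers. The assignment
# of use cases to tiers below is a hypothetical mapping for the sake
# of the example, not a statement of what the Act actually mandates.

USE_CASE_TIER = {
    "social scoring": "unacceptable",
    "untargeted facial-recognition scraping": "unacceptable",
    "medical device AI": "high",
    "recruitment screening": "high",
    "deepfake generation": "limited",  # transparency duties apply
    "spam filtering": "minimal",
}

TIER_OBLIGATIONS = {
    "unacceptable": "prohibited on the EU market",
    "high": "conformity assessment before deployment",
    "limited": "transparency requirements",
    "minimal": "no additional obligations",
}

def obligations(use_case: str) -> str:
    """Return the broad obligation bracket for a (hypothetical) use case."""
    tier = USE_CASE_TIER.get(use_case, "minimal")
    return TIER_OBLIGATIONS[tier]

print(obligations("recruitment screening"))
# prints: conformity assessment before deployment
```

The key design point the Act encodes is exactly this proportionality: the same technology can land in different tiers depending on its intended use.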

Some lawmakers have even called for the measures to be made more expansive to ensure they cover ChatGPT.

To this end, requirements are being placed on “foundation models” such as large language models and generative artificial intelligence.

ChatGPT and Developers

Developers of foundation models will need to implement security controls, data governance measures, and risk mitigation measures before releasing their models to the public. They will also be required to ensure that the training data used to build their systems does not violate copyright law. Providers of such AI models must take measures to assess and mitigate risks to fundamental rights, health, safety, the environment, democracy, and the rule of law. They will also be subject to data governance requirements, such as examining the suitability of data sources and possible biases.

It is important to stress that although the regulation has been adopted by members of the European Parliament, it is far from being law in its current form.

So why is this happening now?

AI models such as OpenAI’s ChatGPT, backed by Microsoft, and Google’s Bard have attracted enormous attention in the European press and pushed technology giants to develop AI at full speed.

Google recently announced a number of new AI updates, including an advanced language model called PaLM 2, which the company says outperforms other leading systems on some tasks.

New AI chatbots

New AI chatbots like ChatGPT have already impressed many technologists and academics with their ability to generate human-like responses to user prompts. Powered by large language models trained on vast amounts of data, they show great promise for use in the workplace.

But AI technology has been around for years, and it is integrated into more apps and systems than you might think: for example, it determines which viral videos or food pictures you see on your TikTok or Instagram feed. At its core, the business model of tech companies amounts to observing your personal data and online activity and building a pattern from it.

The recommendations in the European AI Act are intended to provide guiding rules for companies and organizations using artificial intelligence.

How has the tech industry reacted to this regulation?

Clearly, the new rules have sparked concerns in the tech industry.

The Computer and Communications Industry Association immediately issued a statement expressing concern that the scope of the European AI Act had been broadened too far and could capture harmless forms of artificial intelligence. It also warned that subjecting broad categories of useful, low-risk AI applications to stringent requirements, or even banning them in Europe, could hinder innovation.

What Technologists Say

The belief that the European AI Act will set a “global standard” reflects broader confidence in European law. However, observing the reactions of other jurisdictions, including China, the United States, and the United Kingdom, will certainly be instrumental in finalizing the law.

The broad reach of the proposed AI rules means that AI players in every corner of the world will have to contend with them.

The real question is whether the European AI Act will become the de facto standard for regulating artificial intelligence. China, the U.S., and the U.K. in particular are defining their own AI policies and regulatory approaches, and all of them will undoubtedly be watching the European AI Act negotiations closely as they adapt their own.

ChatGPT to pass testing!

It is also worth listening to the key Brussels-based digital rights campaign groups. European Digital Rights, for example, notes that the law will require chatbot models like ChatGPT to “pass testing, certification and transparency requirements.”

While these transparency requirements won’t eliminate the infrastructure and economic concerns associated with developing large AI systems, they do require tech companies to disclose the amounts of computing power needed to develop them.

Beyond Europe, only a few initiatives, such as those in China and the U.S., seek to regulate generative AI. In other words, Europe is once again trying to rein in the U.S. and China through legal regulation.

Threats of AI to fundamental rights and democracy

The results that AI produces depend on how it is designed and what data it uses, and both the design and the data can be intentionally or unintentionally biased. For example, some important aspects of a problem may not be captured by the algorithm, or the algorithm may be built to reflect and replicate structural biases. In addition, reducing complex social reality to numbers can make AI appear factual and precise when it is not. We call this “mathwashing.”

If not designed properly, AI can produce decisions influenced by attributes such as ethnicity, gender, or age when hiring or firing, offering loans, or even in criminal prosecutions.
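How bias can creep in even when a protected attribute is excluded can be shown with a tiny synthetic sketch. All names and data below are fabricated for illustration: a correlated proxy feature (postcode) reproduces a historical disparity that the decision rule never sees directly.

```python
# Synthetic illustration of proxy bias: the decision rule never sees the
# protected attribute ("group"), but a correlated feature (postcode)
# reproduces the historical disparity anyway. All data is fabricated.

historical = [
    {"postcode": "A", "group": "x", "hired": 1},
    {"postcode": "A", "group": "x", "hired": 1},
    {"postcode": "B", "group": "y", "hired": 0},
    {"postcode": "B", "group": "y", "hired": 0},
]

# "Train" a trivial rule from the biased historical outcomes,
# using only the postcode feature.
outcomes_by_postcode: dict[str, list[int]] = {}
for record in historical:
    outcomes_by_postcode.setdefault(record["postcode"], []).append(record["hired"])

def predict_hire(postcode: str) -> bool:
    """Hire if at least half of past applicants from this postcode were hired."""
    history = outcomes_by_postcode[postcode]
    return sum(history) / len(history) >= 0.5

# Group membership was never used, yet predictions split along group lines.
print(predict_hire("A"), predict_hire("B"))
# prints: True False
```

This is the structural-bias pattern the text describes: removing the sensitive attribute from the inputs does not remove the bias, because the training data already encodes it.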

Artificial intelligence can seriously affect the right to privacy and data protection. For example, it can be used in facial recognition equipment or for online tracking and profiling of individuals. In addition, artificial intelligence can combine the pieces of information a person gives with new data, leading to results that the person might not expect.

An equally accessible and inclusive public sphere

Artificial intelligence could also pose a threat to democracy. Instead of fostering a pluralistic, equally accessible, and inclusive public debate, AI has been accused of creating online echo chambers based on a person’s previous online behavior, showing only the content that person wants to see. It can even be used to create highly realistic fake video, audio, and images, known as deepfakes, which can pose financial risks, damage reputations, and distort decision-making.

All of this can deepen division and polarization in the public sphere and be used to manipulate elections. And because artificial intelligence can track and profile individuals based on their beliefs or actions, it could also undermine freedom of assembly and protest.

The Impact of Artificial Intelligence on Working Life

The use of artificial intelligence in the workplace is expected to eliminate a large number of jobs. While AI is also expected to create new and better jobs, education and training will arguably play a crucial role in preventing long-term unemployment and ensuring a skilled workforce.

According to a 2020 forecast by the European Parliament’s Think Tank, 14% of jobs in OECD countries could be highly automated, while another 32% could face significant changes.

However, the European AI Act is likely to play a crucial role in shaping similar legislative initiatives around the world and to reestablish the EU as a standard-setter on the international stage, much as happened with data protection.

Artificial Intelligence and Cybersecurity

As with much other EU legislation, compliance with the draft European AI Act will be underpinned by international standards. When it comes to the cybersecurity requirements set out in the draft, additional considerations arise. For example, conformity assessment standards, particularly in relation to tools and qualifications, may need further development. In addition, the interaction between different legal initiatives needs to be better reflected in standardization activities.

An example of this is a proposed regulation on horizontal cybersecurity requirements for products with digital elements, referred to as the Cyber Resilience Act.

ENISA is currently examining the need for, and feasibility of, an EU cybersecurity certification scheme for artificial intelligence. To that end, ENISA collaborates with a wide range of stakeholders, including industry and member states, to collect data on AI cybersecurity requirements, AI-related data security, AI risk management, and conformity assessment.

The tension between technology companies and regulators will continue for some time, and the world of innovation will keep using artificial intelligence more and more. In a world where the future is being shaped now, AI will remain at the forefront of innovation for a long time, with or without legal governmental control.
