News: Our Feedback to the EC, 10 November 2022
Have your say - Product Liability Directive
You can also get involved in shaping EU laws. The European Commission would like to hear your views on laws and policies currently in development. It offers the platform "Have your say", which lists all new EU initiatives open for public consultation. You need to register to submit your feedback. https://ec.europa.eu/info/law/better-regulation/have-your-say
Every EU initiative goes through 5 stages, and each stage is open for public consultation for a specific time frame:
1. In preparation
2. Call for evidence
3. Public consultation
4. Draft act
5. Commission adoption
Product Liability Directive - Adapting liability rules to the digital age, circular economy and global value chains
About this initiative: Investment in and societal acceptance of emerging technologies require legal certainty and trust.
Feedback period for stage 5 (Commission adoption): 03 October 2022 to 11 December 2022
Feedback from Europeans for Safe Connections
Europeans for Safe Connections welcomes this initiative. However, we note that legal liability rules are being addressed with delay, at a time when many services and products already use some form of AI algorithm. We expect that, where AI is implemented in services and products, the protection of the life, health and privacy of consumers and of those harmed by the negative impacts of AI will be prioritised over the interests of manufacturers and the supply chain when it comes to AI failures and potential legal liability.
Research and real-world cases have shown that AI reproduces bias and aggravates discrimination. The press has continuously documented incidents involving failures in AI-based services and products. Impacts have included loss of income, psychological distress and discrimination against the most vulnerable individuals or groups due to incorrect evaluation of data or faulty decision logic. The EU Charter of Fundamental Rights includes Article 21 on the right to non-discrimination. There have also been fatal accidents involving AI-enabled autonomous vehicles. Persons harmed by AI systems must be able to take legal action, and the courts must be properly prepared for such cases. In cases involving self-driving cars, actions against manufacturers are particularly important.
To avoid situations in which liability law must be applied because people have come to harm, it is advisable to minimize any foreseeable negative effects of AI failure. Unfortunately, the functionality of AI-based solutions is increasingly threatened by cyber attacks on infrastructure and on consumer devices, including connected vehicles, and by theft of personal data, including medical data. Only services and products that have been thoroughly proven not only to do no harm to customers through their own functionality, but also to withstand a complex cyber environment, should be deployed into live operation. We discuss this issue in Proposal 19: (https://signstop5g.eu/en/solutions/protection-of-our-data/proposal-19).
Another problem with AI applications is that they require too much data to learn and to refine their results. AI should be designed to be non-invasive of people's privacy, and data acquisition should take place only if people consent to it. People should have the right to opt out of having their personal data evaluated by an AI, or to opt out of AI-based services or products altogether. Manufacturers should be obliged to offer an alternative that does not have AI features. Article 22 of the GDPR, often described as "the right to human intervention in automated decisions", gives individuals the right to refuse to have a decision concerning them made by an algorithm alone. See Proposal 22: (https://signstop5g.eu/en/solutions/protection-of-our-data/proposal-22).
In addition to the right to privacy, minimizing human exposure to RF-EMF is also important to our initiative. We expect that, with the deployment of AI in everyday life, there will be growing interest from industry and governments in expanding sensors, surveillance systems and other data-gathering points, including the Internet of Things (or Internet of Bodies), which are very likely to use wireless transmission. We therefore request that wired connectivity be preferred for such devices when deploying AI-related projects, so that unnecessary exposure of humans and animals to RF-EMF, and the resulting harm to health, is avoided.
The digital transformation, including AI, should not become a targeted means of profit that primarily benefits the technology industry and governance groups to the detriment of users and consumers. In times of financial and resource crisis, a reduction in the financial and energy costs of projects that are not critical to the population should be considered.
Kamil Bartošák
on behalf of Europeans for Safe Connections
This feedback was sent from a wired internet connection
- No use of harmful radiation
- Less electricity consumption
- Increased data security