Dragoș Tudorache

EU MEP; AI Act co-rapporteur
  • Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
    Then on the second floor, you have the high-risk applications. The Commission is proposing, in an annex in fact – and I'll explain why in an annex – to identify several sectors where, if you develop AI in those sectors, the applications could qualify as high risk, again because the likelihood of touching upon the interests of individuals is very high. As a result, you'll have to go through certain compliance requirements: you'll need certain documentation, you'll have certain transparency obligations, you'll have to register the system in a European-wide database, and you'll have to explain the underlying elements of the algorithm to the user. So these are requirements that keep high-risk AI inside the law, not outside it. It's not bad AI. But again, because of its impact or potential impact on the rights of individuals, it will have to be better explained; it will have to be more transparent in the way it works, in the way the algorithm actually plays out its effects. Then you have a third, smaller floor in the pyramid: applications that are not high risk but that do require a certain level of transparency. Deep fakes, for example – that's the use case the Commission actually gives in its proposal. The requirements here are lower than for high risk, but they still involve certain explainability and transparency obligations. And then, as I said, you have the vast majority of applications in the lower part of the pyramid, which will not be regulated at all.