The practice of discrimination, that is, treating human beings differently on the basis of personal characteristics (e.g., gender or race) that are unrelated to the business at hand (e.g., hiring for a job), is as old as mankind. The impact of data mining on discriminatory practices comes in two flavors, a negative and a positive one. On the negative side, data mining can be used to construct biased decision support systems from past examples. For example, one may learn a decision support system for granting loans from records of past decisions. If the past decisions hide discriminatory practices, then such practices will be automatically embedded in the resulting tool. On the other hand, the same kind of method can be employed to provide at least prima facie evidence of the presence of discrimination in past decisions. How to construct a decision support system that, given data possibly flawed by discriminatory practices, still yields discrimination-free results is then an interesting research problem.
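The idea of extracting prima facie evidence of discrimination from past decision records can be illustrated with a small sketch. It computes the statistical parity difference, a standard measure from the discrimination-aware data mining literature (not necessarily the measure used in the papers discussed here); the record layout, group names, and data are hypothetical.

```python
# Sketch: prima facie evidence of discrimination in past loan decisions,
# measured as the statistical parity difference between two groups.
# All records below are hypothetical illustration data.

def statistical_parity_difference(records):
    """P(granted | unprotected) - P(granted | protected)."""
    def grant_rate(group):
        subset = [r for r in records if r["group"] == group]
        return sum(r["granted"] for r in subset) / len(subset)
    return grant_rate("unprotected") - grant_rate("protected")

past_decisions = [
    {"group": "protected", "granted": False},
    {"group": "protected", "granted": False},
    {"group": "protected", "granted": True},
    {"group": "protected", "granted": False},
    {"group": "unprotected", "granted": True},
    {"group": "unprotected", "granted": True},
    {"group": "unprotected", "granted": False},
    {"group": "unprotected", "granted": True},
]

# A value far from 0 suggests the past decisions merit closer scrutiny.
print(statistical_parity_difference(past_decisions))  # 0.5
```

A nonzero value is only prima facie evidence: the difference may be explained by legitimate, business-related attributes, which is exactly why the legal and technical analysis of the papers discussed here is needed.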
The paper by Custers discusses the problem of discrimination especially from the point of view of indirect discrimination, with reference to the practice of redlining as the paradigmatic example. Custers also introduces the somewhat surprising “privacy paradox”. The paradox says that the use of data mining techniques for extracting data that have been hidden to protect privacy or to avoid discrimination may provide imprecise results that are even more harmful to people, up to the point of making them inclined to provide the (more correct) data themselves. Custers nonetheless also offers some hope in the possibility of solving both the problem of privacy protection and that of discrimination avoidance through the “solution in code” approach, a method also known as privacy protection and discrimination avoidance “by design”, as it is referred to in the recent proposals for revising the Data Protection Directive of the European Union.

Domingo-Ferrer and Hajian, after a very precise and readable introduction to the notion of discrimination in international law and its impact on decision support methods, discuss different approaches for avoiding it, both in the preprocessing phase and in the post-processing phase.
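A preprocessing approach can be sketched in miniature. One well-known preprocessing technique in the discrimination-aware data mining literature is “massaging” (relabeling a minimal number of training records before a model is learned from them); the sketch below is a simplified illustration of that idea under strong assumptions, not the method of the papers discussed here, and all record layouts, group names, and data are hypothetical.

```python
# Sketch of a "massaging" preprocessing step: before training, relabel
# records pairwise until both groups share the same rate of positive
# decisions, keeping the total number of positive decisions unchanged.
# All data below are hypothetical illustration data.

def massage(records):
    """Return a relabeled copy of records with equalized grant rates."""
    records = [dict(r) for r in records]  # work on a copy
    def grant_rate(group):
        subset = [r for r in records if r["group"] == group]
        return sum(r["granted"] for r in subset) / len(subset)
    while grant_rate("protected") < grant_rate("unprotected"):
        # Promote one denied record of the protected group...
        next(r for r in records
             if r["group"] == "protected" and not r["granted"])["granted"] = True
        # ...and demote one granted record of the unprotected group, so
        # the overall number of positive decisions stays the same.
        next(r for r in records
             if r["group"] == "unprotected" and r["granted"])["granted"] = False
    return records

past_decisions = [
    {"group": "protected", "granted": False},
    {"group": "protected", "granted": False},
    {"group": "protected", "granted": False},
    {"group": "protected", "granted": True},
    {"group": "unprotected", "granted": True},
    {"group": "unprotected", "granted": True},
    {"group": "unprotected", "granted": True},
    {"group": "unprotected", "granted": False},
]

fixed = massage(past_decisions)
for g in ("protected", "unprotected"):
    subset = [r for r in fixed if r["group"] == g]
    print(g, sum(r["granted"] for r in subset) / len(subset))
```

In practice, published massaging methods choose which records to relabel carefully (e.g., those closest to the decision boundary of a ranker) to minimize the loss of predictive accuracy; the sketch simply takes the first candidates it finds.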
Privacy Observatory is an electronic journal that combines scientific and legal expertise with a focus on issues related to data privacy and data protection. It seeks to cover the entire spectrum of privacy topics: technology, law, business, and so on. The main aim of this magazine is to provide a multidisciplinary viewpoint on various aspects of privacy in different contexts.
Privacy Observatory Magazine is an invaluable source of articles, news, and legal opinion reports for all those working or interested in the fields of data protection and privacy.
Privacy Observatory Magazine was born within the European Coordination Action project MODAP (Mobility, Data Mining, and Privacy).
Privacy Observatory editors welcome manuscripts on a variety of privacy issues. Contributors who wish to propose a manuscript can send their proposal to the editors, who will review it for approval.