Quirine Eijkman

Security & the Rule of Law

Institute of Security and Global Affairs (ISGA), Leiden University
Quirine's current research focuses on access to justice, legal self-reliance, and the (side) effects on the rule of law of measures taken to strengthen security. She also teaches several courses in the realm of human rights and security. Quirine sits on the Advisory Council of the Dutch Section of the International Commission of Jurists (NJCM). She also has a seat on the Advisory Council of Delitelabs, a pre-startup school for refugees and migrants, and is a member of the Dutch Helsinki Committee.
Advocating 4 Ethics or Human Rights?: Access to Justice for Communications Surveillance in Practice
Drawing on an analysis of intelligence-gathering reform bills, this talk discusses access to justice for communications surveillance by secret services from a civil society perspective. In the aftermath of the WikiLeaks and Snowden revelations, sophisticated oversight systems for bulk and targeted interception are being developed across Europe. In the case study of the Netherlands, prior judicial consent and a binding complaint procedure have been proposed. However, although checks and balances have been created for, among other things, communications interception and hacking, Dutch oversight mechanisms are less well equipped to effectively remedy bulk data intrusions or artificial intelligence practices. It therefore remains an open question whether politicians and lawmakers intend to meet human rights standards. Furthermore, civil society focuses on human rights compliance, but what about advocating for an ethical approach?
Joanna works on the structure and utility of natural and artificial intelligence. She is best known for her work on systems AI and AI ethics. Her current research focuses on human sociality and technological interventions, as well as understanding the causal links between wealth inequality and political polarisation, transparency in AI systems, and machine prejudice deriving from human semantics. At Bath, she founded the Intelligent Systems research group and heads Artificial Models of Natural Intelligence. Joanna holds degrees in Psychology and Artificial Intelligence from Chicago, Edinburgh, and MIT.
Joanna Bryson 
Associate Professor (tenured)
University of Bath
There Is No AI Ethics: The Human Origins of Machine Prejudice


The immense progress of artificial intelligence in recent decades rests on our improved capacity to mine human culture for intelligence our culture has already discovered. Unfortunately, this process brings with it the bad as well as the good of being human. On the other hand, we now have tools that allow us to better understand what it means to be human, yet that knowledge and those tools by their nature change what they examine. In this talk I will clarify AI, demonstrate machine prejudice, and then discuss the impact of ICT in general and AI in particular on society, with a focus on governance and the economy. The work on machine prejudice was conducted with Aylin Caliskan-Islam and Arvind Narayanan.

Until last year, Jason was an Associate Professor of Computational Linguistics at the University of Texas at Austin. As a professor, Jason worked on probabilistic models for categorization and syntax, with a particular emphasis on low-resource languages. He also focused on methods and applications for connecting linguistic objects to geography and time. He has been active in the creation and promotion of open source software for natural language processing: he is one of the co-creators of the Apache OpenNLP Toolkit, and he has contributed to many others, including ScalaNLP, Junto, TextGrounder, and OpenCCG. Jason received his Ph.D. from the University of Edinburgh in 2002; his doctoral dissertation was awarded the 2003 Beth Dissertation Prize from the European Association for Logic, Language and Information. His main academic research interests include categorial grammars, parsing, semi-supervised learning, coreference resolution, and text geolocation.
Jason Baldridge

Co-founder, Chief Scientist
People Pattern

Practical and Ethical Considerations in Demographic and Psychographic Analysis

Understanding people, including how they implicitly and explicitly group, their linguistic patterns, and what motivates them, raises deeply interesting and long-standing questions. Industry and academic developers and researchers today have access to extensive information on people, but the data often lacks many of the core demographic and psychographic variables that pertain to many research questions and that drive some business functions (e.g. marketing). This is certainly true of social media profiles, which typically lack structured demographic information beyond names and locations, and even these are often incomplete or fabricated. As such, there has been a surge of academic and commercial interest in predicting values for gender, age, race, location, interests, personality, and more, given some portion of the information available in data about individuals, including social profiles and customer records.
In the past, efforts to study people were primarily localized to the researcher and the individuals they interacted with or surveyed. Today, however, these questions can be explored at massive scale, using the public and private digital exhaust we all create. Findings are no longer simply interpretive; they can additionally be translated into automated programs that analyze gender, personality, and more. Such programs are informed by research in natural language processing, computer vision, psychology, and related fields, and they can be used for positive, negative, and mixed ends. As researchers, we are arguably still waking up to this reality, and we cannot take a neutral stance regarding the potential benefits and harms of our work. We must grapple with hard questions around privacy rights and think actively and creatively about the wider societal implications and impacts of our work. In my talk, I'll discuss specific practical and ethical aspects of such work in the context of text, graph, and image analysis for understanding demographics and psychographics, with an eye toward the potential for positive impact that reduces or minimizes risk to individuals.