
Hear a developer’s view on algorithmic decision-making! Justice Programme community meet-up on 14th October

- October 4, 2021 in Open Knowledge Justice Programme

Last month, the Open Knowledge Justice Programme launched a series of free, monthly community meet-ups to talk about Public Impact Algorithms. We believe that by working together and making connections between activists, lawyers, campaigners, academics and developers, we can better achieve our mission of ensuring algorithms do no harm.

For the second meet-up, we're delighted to be joined for an informal talk by Patricio del Boca, a senior developer at Open Knowledge Foundation. He is an Information Systems Engineer and an enthusiast of open data and civic technologies. He enjoys building and collaborating with different communities to disseminate technical knowledge, and speaks at events to spread the importance of civic technologies.

Patricio will share a developer's perspective on AI and algorithms in decision-making, the potential harms they can cause and the ethical aspects of a developer's work. We will then open up the discussion to everyone. Whether you're new to tech or a seasoned pro, join us on 14th October 2021 between 13:00 and 14:00 GMT to share your experiences, ask questions, or just listen.

Register your interest here
More info: www.thejusticeprogramme.org/community

Open Knowledge Justice Programme takes a new step in its mission to ensure algorithms cause no harm

- January 27, 2021 in Open Knowledge Foundation, Open Knowledge Justice Programme

Today we are proud to announce a new project for the Open Knowledge Justice Programme – strategic litigation. This might mean going to court to make sure public impact algorithms are used fairly and cause no harm, but it will also include advocacy in the form of letters and negotiation.

The story so far

Last year, Open Knowledge Foundation made a commitment to apply our skills and network to the increasingly important topics of artificial intelligence (AI) and algorithms. As a result, we launched the Open Knowledge Justice Programme in April 2020. Our mission is to ensure that public impact algorithms cause no harm. Public impact algorithms have four key features:
  • they involve automated decision-making
  • using AI and algorithms
  • by governments and corporate entities and
  • have the potential to cause serious negative impacts on individuals and communities.
We aim to make public impact algorithms more accountable by equipping legal professionals, including campaigners and activists, with the know-how and skills they need to challenge the effects of these technologies in their practice. We also work with those deploying public impact algorithms to raise awareness of the potential risks and to build strategies for mitigating them. We've had some great feedback from our first trainees!

Why are we doing this?

Strategic litigation is more than just winning an individual case. It is 'strategic' because it plays a part in a larger movement for change: raising awareness of the issue, changing public debate, collaborating with others fighting for the same cause and, when we win (hopefully!), making the law fairer for everyone.

Our strategic litigation activities will be grounded in the principle of openness, because public impact algorithms are overwhelmingly deployed opaquely. As a result, the experts who could unpick why and how AI and algorithms are causing harm are unable to do so, and the technology escapes scrutiny. Vendors say they can't release the software code they use because it's a trade secret. This proprietary knowledge, although used to justify decisions that can significantly impact people's lives, remains out of our reach.

We're not expecting all algorithms to be open, nor do we think that would necessarily be useful. But we do think it's wrong that governments can purchase software without being transparent about key points of accountability, such as its objectives, its accuracy and an assessment of the risk that it will cause harm.

Openness is also one of our guiding principles in how we'll work. As far as we are able, we'll share our cases for others to use, re-use and modify for their own legal actions, wherever they are in the world. We'll share what works and what doesn't, and create learning resources to make achieving algorithmic justice through legal action more readily achievable.

We're excited to announce our first case soon, so stay tuned! Sign up to our mailing list or follow the Open Knowledge Justice Programme on Twitter to receive updates.

Launching the Open Knowledge Justice Programme

- April 14, 2020 in Featured, Open Knowledge Foundation, Open Knowledge Justice Programme

Supporting legal professionals in the fight for algorithmic accountability

Last month, Open Knowledge Foundation made a commitment to apply our unique skills and network to the emerging issues of AI and algorithms. We can now provide more details about the work we are planning to support legal professionals (barristers, solicitors, judges, legal activists and campaigners) in the fight for algorithmic accountability.

Algorithmic accountability has become a key issue of concern over the past decade, following the emergence and spread of technologies that embed mass surveillance, biased processes or racist outcomes into public policies, public service delivery or commercial products. Despite a growing and diverse community of researchers and activists discussing and publishing on the topic, legal professionals across the world have access to very few resources to equip themselves to understand algorithms and artificial intelligence, let alone enforce accountability. To fill this gap, we are pleased to announce today the launch of the Open Knowledge Justice Programme.

The exact shape of the programme will evolve in response to feedback from the legal community as well as contributions from domain experts, but our initial roadmap includes a mix of interventions across our open algorithm action framework, as seen below:
The framework combines three types of intervention (shared definitions, standard resources and literacy) across three areas of action:

Accountability
  • Contribution to the public debate through participation in conferences and seminars, and outreach to experts
  • Building a global community of legal professionals and civic organisations to build a common understanding of the issues and needs for action raised by algorithms and AI from a legal perspective
  • Participation in the elaboration of the European Union's AI policy
  • Contribution to current UK working groups around algorithms, AI and data governance
  • Participation in other national and international public policy debates to embed accountability in upcoming regulations, in collaboration with our partners
  • Developing open learning content and guides on existing and potential legal analysis of algorithms and AI in the context of judicial review or other legal challenges

Monitoring
  • Mapping of relevant legislation, case law and ethics guidelines with the help of the community of experts
  • Delivering trainings for legal professionals on algorithm impact investigation and monitoring

Improvement
  • Curation, diffusion and improvement of existing algorithm assessment checklists, such as the EU checklist
  • Training and supporting public administration lawyers on algorithmic risk
How these plans came about

These actions build on our past experience developing the open data movement, but we've also spent the last six months consulting with legal professionals across the UK. Our key finding is that algorithms are becoming part of legal practice, yet few resources exist for legal professionals to grapple with the issues they raise. This is due in part to the lack of a clear legal framework, but mainly because the spread of algorithm-driven services, whether public or private, has accelerated much faster than public debate and public policy have matured.

What is an algorithm? What is the difference between algorithms and artificial intelligence? Which laws govern their use in the police force, in public benefit allocation, in banking? Which algorithms should legal professionals be on the lookout for? What kind of experts can help legal professionals investigate algorithms, and what kind of questions should be asked of them? All these questions, although some are seemingly basic, are what lawyers, including judges, are currently grappling with. The Open Knowledge Justice Programme will answer them.

Stay tuned for more on the topic! For comments, contributions or if you want to collaborate with us, you can email us at contact@okfn.org