
Hear a developer’s view on algorithmic decision-making! Justice Programme community meet-up on 14th October

- October 4, 2021 in Open Knowledge Justice Programme

Last month, the Open Knowledge Justice Programme launched a series of free, monthly community meet-ups to talk about Public Impact Algorithms. We believe that by working together and making connections between activists, lawyers, campaigners, academics and developers, we can better achieve our mission of ensuring algorithms do no harm.

For the second meet-up, we’re delighted to be joined for an informal talk by Patricio del Boca, a senior developer at the Open Knowledge Foundation. He is an Information Systems Engineer and an enthusiast of open data and civic technologies. He likes to build and collaborate with different communities to disseminate technical knowledge, and speaks at events to spread the importance of civic technologies. Patricio will share a developer’s perspective on AI and algorithms in decision-making, the potential harms they can cause and the ethical aspects of a developer’s work. We will then open up the discussion for all. Whether you’re new to tech or a seasoned pro, join us on 14th October 2021 between 13:00 and 14:00 GMT to share your experiences, ask questions, or just listen.

Register your interest here
More info: www.thejusticeprogramme.org/community

Applications for the CoAct Open Calls on Gender Equality (July 1st, 2021 – September 30th, 2021) are open!

- July 28, 2021 in Open Knowledge

CoAct is launching a call for proposals, inviting civil society initiatives to apply for our cascading grants of up to 20,000 euros to conduct Citizen Social Science research on the topic of Gender Equality. A maximum of four (4) applicants will be selected across three (3) different open calls. Applications from a broad range of backgrounds are welcome, including feminist, LGBTQ+, non-binary and critical masculinity perspectives. Eligible organisations can apply until September 30th, 2021, 11:59 PM GMT. All information for submitting applications is available here: https://coactproject.eu/opencalls/

If selected, CoAct will support your work by:
  • providing funding for your project (10 months max), alongside dedicated activities, resources and tools
  • providing a research mentoring program for your team. In collaborative workshops you will be supported to co-design and explore available tools, working together with the CoAct team to achieve your goals.
  • connecting you to a community of people and initiatives tackling similar challenges and contributing to common aims. You will have the opportunity to discuss your projects with the other grantees and, moreover, will be invited to join CoAct’s broader Citizen Social Science network.
You should apply if you:
  • are an ongoing Citizen Social Science project looking for support, financial and otherwise, to grow and become sustainable;
  • are a community interested in co-designing research to generate new knowledge about gender equality topics, broadly defined;
  • are a not-for-profit organization focused on community building, increasing the visibility of specific communities or increasing civic participation, and are interested in exploring the use of Citizen Social Science in your work.
Read more about the Open Calls here: https://coactproject.eu/opencalls/

OK Justice Programme secures definitive guidance on the use of algorithms in online exams. Our first win in the fight to ensure that Public Impact Algorithms do no harm!

- July 6, 2021 in Open Knowledge

An independent inquiry has adopted nearly all of our recommendations in our first challenge to the misuse of Public Impact Algorithms. Strong guidance has been given to the UK’s Bar Standards Board on the use of “remote proctoring software”, which should now shape others’ use of this technology.

Photo by Jon Tyson on Unsplash

About The Justice Programme

The Justice Programme is a project of the Open Knowledge Foundation which works to ensure that public impact algorithms do no harm. Find out more about The Justice Programme here, and Public Impact Algorithms here.

The story so far

During the Covid pandemic, many educational institutions started using remote proctoring software to monitor students during their exams, i.e. monitoring students through their webcams in combination with facial recognition and behavioural recognition technology. Remote proctoring software invades students’ privacy and runs a serious risk of replicating discrimination through the use of opaque algorithmic systems. Read more about it here.

In the UK, the Bar Standards Board (BSB) contracted with Pearson VUE to provide such software for the vocational exams for barristers in 2020. The BSB justified the use of remote proctoring software on the grounds that it was necessary to ensure the ‘integrity’ of the exams. With funding from the Digital Freedom Fund, we notified the BSB that we intended to bring legal action, due to concerns that the procedural protections against the use of opaque systems, namely a data protection impact assessment (and an equality impact assessment), had not been properly conducted, if at all.

The Independent Inquiry

In response, the BSB announced that use of remote proctoring software would be paused whilst an independent expert inquiry took place. The inquiry was run by Professor Huxley-Binns, an expert in the topic, working alongside Dr Sarabajaya Kumar, an expert in diversity and disability. The focus of the inquiry was to find out what happened, why it happened, who was responsible and what can be done to prevent it from happening again.

Our submissions to the inquiry

The Justice Programme litigation team gave lengthy evidence to the inquiry, culminating in a set of recommendations. When the inquiry’s findings were published, seven out of our eight recommendations had been adopted by Professor Huxley-Binns. This is a big achievement! The BSB has agreed to adopt these recommendations in the form of an action plan. The report and action plan should act as a safeguard, in that future students experiencing problems with remote proctoring software can use the recommendations to hold the BSB to account. At a time when so many new algorithmic decision-making systems are still unregulated, safeguards such as guidelines and recommendations are extremely important. Our recommendations included:
  • putting the voices, needs and experiences of students at the centre of any future procurement and/or deployment of exam solutions based on emerging technologies.
  • consolidating and simplifying the data protection framework, with clear data protection standards for all course and exam providers and the timely use of data protection impact assessments to identify and mitigate risks before contracts are put in place.
  • ensuring open access to the type of technology being used and, in all cases, ensuring it is transparent and explained to the end user.
What’s Next?

Use of remote proctoring technology is expanding fast, and this case is only a drop in the ocean. We need to raise awareness of the potential harms of these opaque technologies and challenge further misuse. Here at The Justice Programme we are already researching the use of remote proctoring in migrant language testing in the UK, as well as in student doctors’ vocational exams, where harms have been widely reported. Stay tuned for further news, here and on the OKFJP Twitter channel.

Open Knowledge Justice Programme challenges the use of algorithmic proctoring apps

- February 26, 2021 in Open Knowledge

Today we’re pleased to share more details of the Justice Programme’s new strategic litigation project: challenging the (mis)use of remote proctoring software.

What is remote proctoring?

Proctoring software uses a variety of techniques to ‘watch’ students as they take exams. These exam-invigilating software products claim to detect, and therefore prevent, cheating. Whether or not the software can actually do what it claims, there is concern that it breaches privacy, data and equality rights, and that the negative impacts of its use on students are significant and serious.

Case study: Bar Exams in the UK

In the UK, barristers are lawyers who specialise in courtroom advocacy. The Bar Professional Training Course (BPTC) is run by the professional regulatory body, the Bar Standards Board (BSB). In August 2020, because of COVID-19, the BPTC exams took place remotely and used a proctoring app from the US company Pearson VUE. Students taking exams had to allow their room to be scanned and an unknown, unseen exam invigilator to surveil them. Students had to submit a scan of their face to verify their identity, and were prohibited from leaving their seat for the duration of the exam. That meant up to 5 hours (!) without a toilet break. Some students had to relieve themselves in bottles and buckets under their desks whilst maintaining ‘eye contact’ with their faceless invigilator. Muslim women were forced to remove their hijabs, and at least one individual withdrew from sitting the exam rather than, as they felt it, compromise their faith. The software had numerous errors in functionality, including suddenly freezing without warning and deleting text. One third of students were unable to complete their exam due to technical errors.

Our response

The student reports, alongside our insight into the potential harms caused by public impact algorithms, prompted us to take action. We were of the opinion that what students were subjected to breached data, privacy and other legal rights, as follows:

Data Protection and Privacy Rights
  • Unfair and opaque algorithms. The software used algorithmic decision-making in relation to the facial recognition and/or matching identification of students and behavioural analysis during the exams. The working of these algorithms was unknown and undisclosed.
  • The app’s privacy notices were inadequate. There was insufficient protection of the students’ personal data. For example, students were expressly required to confirm that they had ‘no right to privacy at your current location during the exam testing session’ and to ‘explicitly waive any and all claims asserting a right to individual privacy or other similar claims’. Students were asked to consent to these questions just moments before starting an extremely important exam and without being warned ahead of time.
  • The intrusion involved was disproportionate. The software required all students to carry out a ‘room scan’ (showing the remote proctor around their room). They were then surveilled by an unseen human proctor for the duration of the exam. Many students felt this was unsettling and intrusive.
  • Excessive data collection. The Pearson VUE privacy notice reserved a power of data collection of very broad classes of personal data, including biometric information, internet activity information (gleaned through cookies or otherwise), “inferences about preferences, characteristics, psychological trends, preferences, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes” and protected characteristics.
  • Inadequately limited purposes. Students were required to consent to Pearson VUE disclosing their personal data to third parties “in order to manage day to day business needs”, and to consent to the future use of “images of your IDs for the purpose of further developing, upgrading, and improving our applications and systems”.
  • Unlawful data retention. Pearson VUE’s privacy notice states in relation to data retention that “We will retain your Personal Data for as long as needed to provide our services and for such period of time as instructed by the test sponsor.”
  • Data security risks. Given the sensitivity of the data that was required from students in order to take the exam, high standards of data security are required. Pearson VUE gave no assurances regarding the use of encryption. Instead there was a disclaimer that “Information and Personal Data transmissions to this Site and emails sent to us may not be secure. Given the inherent operation and nature of the Internet, all Internet transmissions are done at the user’s own risk.”
  • Mandatory ‘opt-ins’. The consent sought from students was illusory, as it did not enable students to exert any control over the use of their personal data. If they did not tick all the boxes, they could not sit the exam. Students could not give valid consent to the invasion of privacy occasioned by online proctoring when their professional qualification depended on it. They were in effect coerced into surrendering their privacy rights. Under the GDPR, consent must be “freely given”, and consent imposed as a condition of sitting the exam cannot meet that standard.
Equality Rights

Public bodies in the UK have a legal duty to carefully consider the equalities impacts of the decisions they make. This means that a policy, project or scheme must not unlawfully discriminate against individuals on the basis of a ‘protected characteristic’: their race, religion or belief, disability, sex, gender reassignment, sexual orientation, age, marriage or civil partnership and/or pregnancy and maternity.

In our letter to the BSB, we said that the BSB had breached its equality duties by using software that featured facial recognition and/or matching processes, which are widely shown to discriminate against people with dark skin. The facial recognition process also required female students to remove their religious dress, breaching the protections afforded to people to observe their religion. Female Muslim students were unable to ask to be observed by female proctors, despite the negative cultural significance of unknown male proctors viewing them in their homes. We also raised the fact that some people with disabilities, and women who were pregnant, were unfairly and excessively impacted by the absence of toilet breaks for the duration of the assessment. The use of novel and untested software, we said, also had the potential to discriminate against older students with fewer IT skills.

The BSB’s Reply

After we wrote to express these concerns, the BSB:
  • paused the use of remote proctoring apps that had been scheduled for the next round of bar exams
  • announced an inquiry into its use of remote proctoring apps in August 2020, to produce an independent account of the facts, circumstances and reasons as to why things went wrong. The BSB invited us to make submissions to this inquiry, which we have done. You can read them here.

Next steps

Here at the Open Knowledge Justice Programme, we’re delighted that the BSB has paused the use of remote proctoring, and we keenly await the publication of the findings of the independent inquiry. However, we have recently been concerned to discover that the BSB has delegated decision-making authority for the use of remote proctoring apps to individual educational providers, e.g. universities and law schools, and that many of these providers are scheduling exams using remote proctoring apps. We hope that the independent inquiry’s findings will conclusively determine that this must not continue.

Sign up to our mailing list or follow the Open Knowledge Justice Programme on Twitter to receive updates.

What is a public impact algorithm?

- February 4, 2021 in Open Knowledge

Meg Foulkes discusses public impact algorithms and why they matter.

When I look at the picture of the guy, I just see a big Black guy. I don’t see a resemblance. I don’t think he looks like me at all.

This is what Robert Williams said to police when he was presented with the evidence upon which he had been arrested for stealing watches in June 2020. Williams had been identified by an algorithm when Detroit Police ran grainy security footage from the theft through a facial recognition system. Police arrested Williams before questioning him or checking for any alibi. It was not until the matter came to trial that Detroit Police admitted that he had been charged falsely, and solely, on the output of an algorithm.

It’s correct to say that in many cases, when AI and algorithms go wrong, the impact is pretty innocuous, like when a music streaming service recommends music you don’t like. But often, AI and algorithms go wrong in ways that cause serious harm, as in the case of Robert Williams. Although he had done absolutely nothing wrong, he was deprived of a fundamental right, his liberty, on the basis of a computer output.

It’s not just on an individual scale that these harms are felt. Algorithms are written by humans, so they can reflect human biases. What algorithms can do is amplify this prejudice on a massive scale by automatically entrenching the bias. The bias isn’t exclusively racialised: last year, an algorithm used to determine exam grades disproportionately downgraded disadvantaged students. Throughout the pandemic, universities have been turning to remote proctoring software that falsely identifies students with disabilities as cheats. For example, those who practice self-stimulatory behaviour, or ‘stimming’, may get algorithmically flagged again and again for suspicious behaviour, or have to disclose sensitive medical information to avoid this.

We identify these types of algorithms as ‘public impact algorithms’ to clearly name the intended target of our concern. There is a big difference between the harm caused by inaccurate music suggestions and algorithms that have the potential to deprive us of our fundamental rights. To call out these harms, we have to precisely define the problem. Only then can we hold the deployers of public impact algorithms to account, and ultimately achieve our mission of ensuring public impact algorithms do no harm.

Sign up to our mailing list or follow the Open Knowledge Justice Programme on Twitter to receive updates.

Open Knowledge Justice Programme takes new step on its mission to ensure algorithms cause no harm

- January 27, 2021 in Open Knowledge Foundation, Open Knowledge Justice Programme

Today we are proud to announce a new project for the Open Knowledge Justice Programme: strategic litigation. This might mean we will go to court to make sure public impact algorithms are used fairly and cause no harm. But it will also include advocacy in the form of letters and negotiation.

The story so far

Last year, Open Knowledge Foundation made a commitment to apply our skills and network to the increasingly important topics of artificial intelligence (AI) and algorithms. As a result, we launched the Open Knowledge Justice Programme in April 2020. Our mission is to ensure that public impact algorithms cause no harm. Public impact algorithms have four key features:
  • they involve automated decision-making
  • using AI and algorithms
  • by governments and corporate entities and
  • have the potential to cause serious negative impacts on individuals and communities.
We aim to make public impact algorithms more accountable by equipping legal professionals, including campaigners and activists, with the know-how and skills they need to challenge the effects of these technologies in their practice. We also work with those deploying public impact algorithms to raise awareness of the potential risks and build strategies for mitigating them. We’ve had some great feedback from our first trainees!

Why are we doing this?

Strategic litigation is more than just winning an individual case. Strategic litigation is ‘strategic’ because it plays a part in a larger movement for change. It does this by raising awareness of the issue, changing public debate, collaborating with others fighting for the same cause and, when we win (hopefully!), making the law fairer for everyone.

Our strategic litigation activities will be grounded in the principle of openness, because public impact algorithms are overwhelmingly deployed opaquely. This means that the experts who could unpick why and how AI and algorithms are causing harm are unable to do so, and the technology escapes scrutiny. Vendors of the software say they can’t release the software code they use because it’s a trade secret. This proprietary knowledge, although used to justify decisions that potentially significantly impact people’s lives, remains out of our reach.

We’re not expecting all algorithms to be open. Nor do we think that would necessarily be useful. But we do think it’s wrong that governments can purchase software and not be transparent about key points of accountability, such as its objectives, an assessment of the risk it will cause harm, and its accuracy.

Openness is one of our guiding principles in how we’ll work, too. As far as we are able, we’ll share our cases for others to use, re-use and modify for their own legal actions, wherever they are in the world. We’ll share what works, and what doesn’t, and make learning resources to make achieving algorithmic justice through legal action more readily achievable.

We’re excited to announce our first case soon, so stay tuned! Sign up to our mailing list or follow the Open Knowledge Justice Programme on Twitter to receive updates.

Launching the Open Knowledge Justice Programme

- April 14, 2020 in Featured, Open Knowledge Foundation, Open Knowledge Justice Programme

Supporting legal professionals in the fight for algorithmic accountability

Last month, Open Knowledge Foundation made a commitment to apply our unique skills and network to the emerging issues of AI and algorithms. We can now provide you with more details about the work we are planning to support legal professionals (barristers, solicitors, judges, legal activists and campaigners) in the fight for algorithmic accountability.

Algorithmic accountability has become a key issue of concern over the past decade, following the emergence and spread of technologies embedding mass surveillance, biased processes or racist outcomes into public policies, public service delivery or commercial products. Despite a growing and diverse community of researchers and activists discussing and publishing on the topic, legal professionals across the world have access to very few resources to equip themselves to understand algorithms and artificial intelligence, let alone enforce accountability. In order to fill this gap, we are pleased to announce today the launch of the Open Knowledge Justice Programme.

The exact shape of the programme will evolve in response to feedback from the legal community as well as contributions from domain experts, but our initial roadmap includes a mix of interventions across our open algorithm action framework, as seen below:
The framework pairs three areas of action (accountability, monitoring, improvement) with three kinds of output (shared definitions, standard resources, literacy):

Accountability
  • Contribution to the public debate through participation in conferences and seminars and outreach to experts
  • Building a global community of legal professionals and civic organisations to build a common understanding of the issues and needs for action raised by algorithms and AI from a legal perspective
  • Participation in the elaboration of the European Union’s AI policy
  • Contribution to current UK working groups around algorithms, AI and data governance
  • Participation in other national and international public policy debates to embed accountability in upcoming regulations, in collaboration with our partners
  • Developing open learning content and guides on existing and potential legal avenues for the analysis of algorithms and AI in the context of judicial review or other legal challenges

Monitoring
  • Mapping of relevant legislation, case law and ethics guidelines with the help of the community of experts
  • Delivering trainings for legal professionals on algorithm impact investigation and monitoring

Improvement
  • Curation, diffusion and improvement of existing algorithm assessment checklists, such as the EU checklist
  • Training and supporting public administration lawyers on algorithmic risk
How these plans came about

These actions build on our past experience developing the open data movement. But we’ve also spent the last six months consulting with legal professionals across the UK. Our key finding is that algorithms are becoming part of legal practice, yet few resources exist for legal professionals to grapple with the issues that they raise. This is due in part to the lack of a clear legal framework, but mainly because the spread of algorithm-driven services, either public or private, has accelerated much faster than the public debate and public policies have matured.

What is an algorithm? What is the difference between algorithms and artificial intelligence? Which laws govern their use in the police force, in public benefit allocation, in banking? Which algorithms should legal professionals be on the lookout for? What kind of experts can help legal professionals investigate algorithms, and what kind of questions should be asked of them? All these questions, although some are seemingly basic, are what lawyers, including judges, are currently grappling with. The Open Knowledge Justice Programme will answer them.

Stay tuned for more on the topic! For comments, contributions or if you want to collaborate with us, you can email us at contact@okfn.org

Public Procurement Data in the Philippines and Where to Find It

- March 6, 2019 in fellowship, Reflections from the field

Ben Hur Pintor, our Fellow from the class of 2018, here shares his thoughts and research on public procurement data in the Philippines. During the selection process for the 2018 School of Data Fellowship here in the Philippines, I was informed that the selected Fellow would be working with data related to public procurement. As I wasn’t a public procurement expert, I did a little research on the topic. Here, I’d like to share some of the interesting things I observed:

Public Procurement Data in the Philippines

In theory, we expect public procurement in the Philippines to produce a lot of data, considering how the process is defined by RA 9184, the Government Procurement Reform Act. Under the law, public procurement includes all “acquisition of Goods, Consulting Services, and the contracting for Infrastructure Projects by any branch, department, office, agency, or instrumentality of the government”, including procurement for projects that are wholly or partly funded by Foreign Loans or Grants pursuant to a Treaty or International or Executive Agreement, unless different procurement procedures and guidelines are expressly stated or the foreign loan or grant is classified as Official Development Assistance (ODA) under RA 8182, the Official Development Assistance Act. From this definition alone, we can see that almost all government spending falls under public procurement, and thus it is logical to assume that whenever the government spends, public procurement data should be produced.

Aside from defining public procurement, the law also provides, as a general rule, that all procurement shall undergo Competitive Bidding, except for specific cases where Alternative Methods of Procurement such as Limited Source Bidding, Direct Contracting, Repeat Order, Shopping, and Negotiated Procurement are allowed. These specific cases are subject to the prior approval of the Head of the Procuring Entity (HOPE) and should be justified by the conditions provided by the Act. Most of the time, Competitive Bidding is followed, which consists of the following steps: advertisement, pre-bid conference, eligibility screening of prospective bidders, receipt and opening of bids, evaluation of bids, post-qualification, and award of contract.

Steps in Public Procurement by Competitive Bidding

Each step in the public procurement process produces its own data: bid posts, pre-procurement and pre-bid conference proceedings, submitted bids, winning bids, information on the bidders, and the awarded contracts, to name a few. There are also monitoring and evaluation documents and reports that are regularly created during the implementation of a government project, and even after its completion. So with all this public procurement data supposedly being produced, where can it be found?
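Before turning to where this data lives, the step-to-data relationship can be sketched in a few lines of Python. This is illustrative only: the step names come from RA 9184’s Competitive Bidding process described above, but pairing each step with example records is my own rough mapping, not an official classification.

```python
# Illustrative only: Competitive Bidding steps (per RA 9184) paired with
# examples of the records each step can produce. The pairing is a rough
# sketch, not an official classification.
PROCUREMENT_RECORDS = {
    "advertisement": ["bid posts"],
    "pre-bid conference": ["pre-procurement and pre-bid conference proceedings"],
    "eligibility screening of prospective bidders": ["information on the bidders"],
    "receipt and opening of bids": ["submitted bids"],
    "evaluation of bids": ["bid evaluation results"],  # assumed record type
    "post-qualification": ["post-qualification reports"],  # assumed record type
    "award of contract": ["winning bids", "awarded contracts"],
}

# A quick inventory of the data a single bidding process should generate.
for step, records in PROCUREMENT_RECORDS.items():
    print(f"{step}: {', '.join(records)}")
```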

Where to Find It

The Government Procurement Reform Act or RA 9184 enacted in 2003 is the comprehensive law governing public procurement in the Philippines that put together all procurement rules and procedures covering all forms of government purchases from goods, to consulting, to infrastructure services. It sought to address the complexity and vagueness of public procurement and its susceptibility to abuse and corruption due to multiple procurement laws by simplifying and standardizing the procedures with a focus on transparency and accountability. The law added two interesting features to ensure transparency and accountability:
  1. the creation of an electronic portal which shall serve as the primary and definitive source of information on government procurement (PhilGEPS); and
  2. the establishment of the Government Procurement Policy Board (GPPB).
The PhilGEPS (Philippine Government Electronic Procurement System) is the country’s single, centralized electronic portal that serves as the primary and definitive source of information on government procurement. Government agencies, as well as suppliers, contractors, manufacturers, distributors and consultants, are mandated to register and use the system in the conduct of procurement of goods, civil works, and consulting services. On the website, the government can publish what goods, consulting services, and civil works projects it needs, while suppliers, private contractors, and companies can search and view these procurement opportunities. It features an Electronic Bulletin Board where all procurement opportunities, results of bidding, and related information are posted; a Registry of Manufacturers, Suppliers, Distributors, Contractors and Consultants; and an Electronic Catalogue of common and non-common use goods, supplies, materials and equipment. When fully implemented, the system is also intended to have a Virtual Store, an Electronic Payment System, and Electronic Bid Submission. The system is managed by the Procurement Service of the Department of Budget and Management.

The PhilGEPS website (version 1.5)

PhilGEPS also releases public procurement data published by different government agencies as mandated by the Government Procurement Reform Act together with other infographics and reports.

Some datasets available in PhilGEPS

Standard Reports and Datasets

Sample data (Number of Registered Organizations per Year)

Reports, Notices, and Infographics

The GPPB, as established by the Government Procurement Reform Act, is an independent inter-agency body with private sector representation, envisioned as the policy-making entity and the governing body overseeing the implementation of procurement reform in the country. Its objectives include the preparation of a generic procurement manual and standard bidding forms for procurement; establishing a sustainable training program to develop the capacity of government procurement officers and employees; and ensuring the conduct of regular procurement training programs by the procuring entities. It also stores and displays public procurement data submitted to it by procuring entities and regulatory bodies. These include information on Annual Procurement Plans, Procurement Monitoring Reports, the List of Blacklisted Suppliers and Constructors, Constructors Performance Evaluation Summaries, Pre-Selected Suppliers and Consultants, the List of Observers, and the Status of Protests.

GPPB Website and Monitoring Data

Sample data (PDF format)

Aside from the PhilGEPS and GPPB, the different government agencies also publish procurement records on their respective websites in compliance with National Budget Circular No. 542. This Circular is more commonly known in the Philippines as the Transparency Seal Circular because it directs government agencies to have a Transparency Seal visible on their websites where the public can access information related to their agency. Some of the data that the circular requires to be released are: annual reports, approved budget and corresponding targets, major programs and projects, program and project beneficiaries, status of implementation and program/project evaluation and/or assessment reports, annual procurement plans, contracts awarded, and the name of contractors/suppliers/consultants. For example, the Department of Public Works and Highways has a Civil Works page on their website that shows key documents related to the public procurement of civil works projects.

DPWH Civil Works page

Is it Enough?

As highlighted by the Philippine Center for Investigative Journalism (PCIJ) in their report “Public Contracting in the Philippines: Breakthroughs and Barriers” about the infrastructure projects of the Department of Public Works and Highways (DPWH), there are challenges in terms of the completeness and accessibility of public procurement data in the country. Tracking the process from planning to implementation is difficult because not all the documents related to the procurement of infrastructure projects are published. This is compounded by the weak organization of files on agency websites, which can confuse those unfamiliar with the procurement process. For example, even though the different documents related to one infrastructure project are available on the DPWH site, they are located on different web pages and are not linked to one another, preventing users from easily understanding how the documents connect to each other. Aside from this, even though PhilGEPS and the GPPB are good sources of public procurement data, they are only repositories and are dependent on the data submitted to them by procuring entities. This becomes problematic when the procuring entities themselves fail, or even refuse, to submit their data.

Another important thing I noticed about public procurement data in the Philippines is this: publishing public procurement data in machine-readable formats is not (yet) the norm here. If you look at the Government Procurement Reform Act, there is no mention of releasing or publishing procurement data and documents in machine-readable formats. The training programs by the GPPB designed to develop the capacity of procurement officers and employees, for both the private sector and national government agencies, government-owned and controlled corporations, etc., do not include modules on working with or publishing machine-readable data. As a result, procuring entities and agencies release data without considering the implications of the format they are releasing it in. In fact, aside from those found in PhilGEPS, most of the public procurement data in the country is in non-machine-readable formats: PDFs, documents, or even scanned images.

Now, the procuring entities releasing the data might not consider this a problem, since compliance with the law only requires them to release the data. But from the point of view of a data practitioner analysing public procurement data, a civil society organization creating visualizations in support of its advocacy, a journalist investigating government infrastructure projects, or even just a citizen trying to look for possible evidence of corruption in the procurement process, this adds a lot of extra steps to convert and standardize the data before any meaningful work can be done on it. These steps could have been skipped had the data been released in a machine-readable format such as a spreadsheet, a comma-separated values (CSV) file, or a JavaScript Object Notation (JSON) file.

One of the positive things pointed out by the PCIJ report was the opportunity to standardize, link, and publish more contracting data afforded by the current trend of government agencies creating or upgrading their information-management systems. This should be supported by efforts to raise awareness and convince the procuring entities, journalists, CSOs, and citizens of the benefits of releasing machine-readable data.

Public procurement data should not be released just for the sake of releasing it. It should be released for the purpose of ensuring transparency, accountability, and equitability in the procurement process. To do this, it is imperative that the documents and information for each step in the procurement process, from planning to implementation, are released in an open, transparent, and timely manner. Public procurement data should also serve the purpose of encouraging citizens, individuals, and organizations to keep themselves informed and engaged in how public money is spent. Towards this end, it is important to release data in formats such as spreadsheets, CSV, or JSON that make it easier for stakeholders to analyse, share, and re-use the data.

One of the ways to ensure that data is easily shareable, analysable, and reusable is by following a standard like the Open Contracting Data Standard (OCDS). Of course, simply following a standard is not enough and could even be counterproductive when done without the right preparation. It is equally important to study how a standard complements the process and how it can be integrated with the current system.
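To make the OCDS suggestion concrete, here is a minimal sketch of what a single machine-readable contracting record could look like. The top-level field names (ocid, id, date, tag, initiationType, buyer, tender) follow the OCDS release schema, but the ocid prefix and every value below are invented placeholders, not real PhilGEPS data.

```python
import json

# A minimal, illustrative OCDS-style release. Field names follow the OCDS
# release schema; every value is an invented placeholder.
release = {
    "ocid": "ocds-example-PH-2019-0001",  # globally unique contracting-process id
    "id": "PH-2019-0001-tender-01",       # id of this particular release
    "date": "2019-03-06T00:00:00Z",
    "tag": ["tender"],                    # the stage this release describes
    "initiationType": "tender",
    "buyer": {"name": "Example Procuring Entity"},
    "tender": {
        "id": "PH-2019-0001",
        "title": "Example civil works project",
        "procurementMethod": "open",      # the OCDS code for competitive bidding
        "value": {"amount": 1000000, "currency": "PHP"},
    },
}

# Serialised as JSON, the record is immediately analysable, shareable and
# re-usable, unlike the same information locked in a scanned PDF.
print(json.dumps(release, indent=2))
```

Published consistently, records like this can be validated against the OCDS schema and joined across agencies without manual transcription.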

Sources

Civil Works – Department of Public Works and Highways. http://www.dpwh.gov.ph/dpwh/business/procurement/civil_works/awarded_contracts
Open Contracting Data Standard. Open Contracting Partnership. http://standard.open-contracting.org/latest/en/
Philippine Transparency Seal – Department of Budget and Management. https://www.dbm.gov.ph/index.php/about-us/philippine-transparency-seal
Public Contracting in the Philippines: Breakthroughs and Barriers. Philippine Center for Investigative Journalism (PCIJ) with support from Hivos and Article 19. http://pcij.org/wp-content/uploads/2018/01/PCIJ.-Open-Contracting-in-Philippines-Report_01102018_b.pdf
RA 9184 (Government Procurement Reform Act). https://www.gppb.gov.ph/laws/laws/RA_9184.pdf
The 2016 Revised Implementing Rules and Regulations of RA 9184. https://www.gppb.gov.ph/laws/laws/RevisedIRR.RA9184.pdf
The Government Procurement Policy Board. https://www.gppb.gov.ph/
The Philippine Government Electronic Procurement System. https://www.philgeps.gov.ph/
The Procurement Service. http://main.ps-philgeps.gov.ph/

Using the procurement process as a lens for assessing audit reports: what to watch out for

- March 4, 2019 in fellowship, Reflections from the field

Odanga Madung, our 2018 Fellow, was fortunate to collaborate with the Institute of Economic Affairs in Kenya on their recent study into public procurement. In this article, Odanga reflects on his experiences and offers some tips for those tackling similar work.

The Institute of Economic Affairs (IEA) in Kenya recently carried out a study entitled ‘Public Procurement in Kenya: An Analysis of the Auditor General’s Reports’. I was fortunate enough to contribute as part of my fellowship with the School of Data. The Auditor General’s Office (OAG) was established in Kenya in 2004 under an Act of Parliament. Its aim is to provide independent oversight over how the Kenyan Government and its agencies spend taxpayers’ money. The audit process involves obtaining the accounts of a government entity, scrutinising them against proposed budget plans and contractual obligations, then providing a professional opinion on the state of the accounts. The OAG’s opinions are of four types:
  • Unqualified: represents a clean bill of health. This means that the Auditor did not find any problem with the documentation and the entity has managed its funds properly.
  • Qualified: occurs when the Auditor General has found some problems, but they are not pervasive. The auditor received all the information required for the audit, but it revealed gaps in adherence to procedures and budgets.
  • Adverse: occurs when the Auditor General is able to review the ministry’s documentation but finds pervasive problems, and considerable changes will be necessary to rectify them. This kind of finding should be of concern to oversight bodies.
  • Disclaimer: occurs when the auditor is unable to fully review the ministry’s documentation because there is a substantial amount of information that the ministry has not made available. The record keeping is so bad that the auditor cannot give an opinion.
The IEA’s study looked at the Auditor General’s reports through the lens of public procurement. They analysed the OAG’s reports using the Open Contracting framework for the tender process, i.e. placing each violation in either the Pre-Tender, Tender or Post-Award stage. As a result, the study highlighted which steps are most often breached when the OAG does not give an unqualified opinion on a state entity’s accounts.

This was a much-needed breath of fresh air in the corruption conversation that Kenyans are currently having, mainly because it focused on the how (the methods) rather than the what (numbers, figures and personalities) of corruption. I say this because corruption is not something that just happens; it is engineered.

The main finding of the study was that the majority of procurement breaches tend to happen in the post-award stage, a process that the IEA states is subject to the least public scrutiny and transparency in comparison to the other parts of the tender process in Kenya. This is very important in the Kenyan context, because at the heart of the corruption problem in Kenya is the tender process. However, very few Kenyans understand what it looks like, and few understand how the tender process is used in the plunder of public funds. The reasons these problems exist are twofold:
  • Firstly, there is the way the Kenyan media covers stories about corruption. They tend to focus on the figures lost and the personalities involved rather than how the money was stolen. This may be because media practitioners feel that is what will sell newspapers, as opposed to producing reporting that might drive significant action both publicly and legislatively. It’s no surprise, then, that Kenya’s corruption coverage ends up echoing tabloid reporting. The fundamentals of understanding how corruption happens are largely missing.
  • Secondly, a lack of public awareness of the intricacies of the tender process leads to a lack of accountability demands from the public. This is in part due to a lack of government outreach and the current coverage afforded by the media.
The IEA’s report sought to address the problems above. The points below are some key lessons learned from my collaboration.

It is important to define the professional opinions the Auditor gives and provide examples of what may lead to specific outcomes. The Auditor General’s report is a very technical document. The majority of ordinary citizens either tend to misunderstand it or have no knowledge of its content at all. Given that they are a target audience for these reports, a key task when doing the research was to define the opinions that the auditor gives in a simple manner. Providing examples of what each opinion meant was also important. The lack of a clear definition also lends itself to misinterpretation by the press, something that may lead to unintended consequences down the line.

Descriptions provided in the IEA report of the auditor opinions.

For relatability, try to show how much expenditure each opinion represents. This gives audiences a clearer picture of how much public spending comes under threat due to procurement violations in specific cases.

Multiple levels of procurement breaches may occur, and it may be worthwhile to highlight serial offenders. Corruption is engineered to escape the prevailing systems of accountability in a country. The IEA found that many procurement violations occurred at multiple stages of the tender process. In some cases they found unsupported expenditure leading to exaggerated prices for products, or single sourcing leading to incomplete projects that had been fully paid for. It is therefore important to highlight how many violations occur at multiple levels when carrying out such a study.

Try your best to advocate for reports being released in machine-readable formats, to reduce errors that could be caused in transcribing. One of the biggest hurdles in working with government reports in Kenya at the moment (and this would probably be the case in a lot of other African countries) is that the reports are produced in the form of scanned PDFs. This makes the process time-consuming and error-ridden due to the transcription of the documents, a problem we see being widespread across government institutions. As we press for better systems of accountability, making sure that information is more accessible should be part of it. If you encounter such a problem, I would recommend using Sandwich PDF (https://www.sandwichpdf.com/) to try to make the majority of the text machine-recognisable.

An example of one of the outputs from the Auditor General’s Office.
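As a rough illustration of that OCR step, the sketch below converts a scanned PDF into plain text with Python rather than an online service. It assumes the third-party pdf2image and pytesseract packages are installed, along with the poppler and tesseract-ocr system tools they wrap; the file name is a placeholder, not a real OAG document.

```python
# A minimal OCR sketch, assuming pdf2image and pytesseract are installed
# (they wrap the poppler and tesseract-ocr system tools respectively).
from pdf2image import convert_from_path
import pytesseract

def pdf_to_text(path: str) -> str:
    """Render each page of a scanned PDF to an image, then OCR it."""
    pages = convert_from_path(path, dpi=300)  # higher dpi generally improves recognition
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

if __name__ == "__main__":
    # "audit_report.pdf" is a placeholder file name.
    print(pdf_to_text("audit_report.pdf"))
```

Even with a tool like this, OCR output still needs proofreading: it reduces transcription effort rather than eliminating it.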

Media houses have a habit of misinterpreting or exaggerating the findings in such reports. Training them and holding them accountable for their reporting is important. Journalists and CSOs are a key conduit of this kind of information to the public. However, we have found that many of them do not understand the terminology and reasoning contained in audit reports. What this means is that, in an attempt to simplify the information for the public, a lot of it gets lost in translation. The IEA held an open forum with journalists explaining how to go about reading both the report they wrote and the Auditor General’s.

To conclude, the IEA did an amazing study that used the Open Contracting framework on the tender process to analyse the Auditor General’s report. Corruption is a problem plaguing the developing world. However, audit and oversight organisations are gaining more powers and prominence in these countries, and looking at the information provided by them could reveal a lot about how corruption happens around the world. If you do decide to undertake a study like the one the IEA did, the points mentioned above should help you come up with a study that becomes an effective advocacy tool against corruption.

“Not a scary concept”: Reflections from the Standard Group Data Conference

- October 10, 2018 in Event report, fellowship

In his first piece for the School of Data blog, our 2018 Fellow, Kelvin Wellington, reflects on his experiences at the Standard Group Conference in Accra in July 2018.

To date, the conversation around open data has been firmly centred on its importance and the implications of championing the cause. Is it a cause worth fighting for, and are policymakers doing the right thing by opening up data to the public eye? As citizens, is it important to know the finer details of how our country is run? These are questions that were lingering in my mind during an open data presentation that was part of a data conference held by the Standard Group in Accra, Ghana a few days ago. I will attempt to dissect some findings from this session.

What can open data do?

Open data should not be a scary concept, and should be embraced. It should not be seen as a means of taking off the ‘protective shields’ on data. When we talk about open data, we should be looking at the following:
  • Empowerment: open data can give citizens of a country a stronger voice on public services they use and create a channel of dialogue between the citizen and local authorities or government.
  • Transparency: open data should be the next frontier in citizens’ quest for transparency. Freedom of information enables citizens to make informed decisions regarding their government, and allows us to better understand our world.
  • Participation: open data should bring about inclusiveness, from data providers to users. Everyone has a part to play in innovating with data and making a difference through building data-driven solutions.

Who should be driving open data?

Ideally, policymakers should be the driving force for open data in any setting. Policymakers in Ghana are, however, driving at a turtle’s pace. The public should be weighing in on the conversation as well, but at the moment thoughts are too scattered to produce a collective force. The private sector should also be heavily involved in the open data push, since it has access to huge amounts of data. In addition, the policies and procedures should be open as well, not just the data. Ultimately, it is up to governments, public bodies, community groups, citizens and businesses to facilitate the growth of open data, propagate its benefits and see that it achieves its full potential. The Open Data Initiative in Ghana has stalled, with the online platform lacking up-to-date data and data unavailable for a good number of industries. Financial constraints have been pointed out as a major issue, and as citizens, we owe it to our country to challenge the authorities to resolve this.

Why is it important?

We spend our time talking about making decisions without focusing on making data-driven decisions. If data is not being processed into knowledge, and that knowledge does not become wisdom, then the purpose of data in itself is dead. Opening data gives us all a chance to contribute to creating more knowledge and making wiser decisions. Data has become a gold standard, and keeping an ‘open culture’ makes for a healthier ecosystem for policymakers and citizens alike.
  Who should be driving open data? Ideally, policy makers should be the driving force for open data in any setting. Policy makers in Ghana are, however, driving at a turtle’s pace. The public should be weighing in on the conversation as well, but at the moment thoughts are too scattered to produce a collective force. The private sector should also be heavily involved in the Open Data push since they have access to huge amounts of data. In addition, the policies and procedures should be open as well, not just data. Ultimately, it is up to governments, public bodies, community groups, citizens and businesses to facilitate the growth of open data, propagate its benefits and see that it achieves its full potential. The Open Data Initiative in Ghana has stalled with the online platform lacking in up-to-date data, and data unavailable for a good number of industries. Financial constraints have been pointed out as a major issue, and as citizens, we owe it to our country to challenge authorities to resolve this. Why is it important? We spend our time talking about making decisions without focusing on making data-driven decisions. If data is not being processed into knowledge and that knowledge does not become wisdom, then the purpose of data in itself is dead. Opening data gives us all a chance to contribute to creating more knowledge and making wiser decisions. Data has become a gold standard, and keeping an ‘open culture’ makes for a healthier ecosystem for policymakers and citizens alike.     Flattr this!