
Justicia: A Stochastic SAT Approach to Formally Verify Fairness
As a technology, ML is oblivious to societal good or bad, and thus, the f...

Verifying Fairness Properties via Concentration
As machine learning systems are increasingly used to make real world leg...

Generating Correctness Proofs with Neural Networks
Foundational verification allows programmers to build software which has...

Developing Bug-Free Machine Learning Systems With Formal Mathematics
Noisy data, nonconvex objectives, model misspecification, and numerical...

Grammar Based Directed Testing of Machine Learning Systems
The massive progress of machine learning has seen its application over a...

ML + FV = ? A Survey on the Application of Machine Learning to Formal Verification
Formal Verification (FV) and Machine Learning (ML) can seem incompatible...

Fixes That Fail: Self-Defeating Improvements in Machine-Learning Systems
Machine-learning systems such as self-driving cars or virtual assistants...
Verification of ML Systems via Reparameterization
As machine learning is increasingly used in essential systems, it is important to reduce or eliminate the incidence of serious bugs. A growing body of research has developed machine learning algorithms with formal guarantees about performance, robustness, or fairness. Yet, the analysis of these algorithms is often complex, and implementing such systems in practice introduces room for error. Proof assistants can be used to formally verify machine learning systems by constructing machine-checked proofs of correctness that rule out such bugs. However, reasoning about probabilistic claims inside of a proof assistant remains challenging. We show how a probabilistic program can be automatically represented in a theorem prover using the concept of reparameterization, and how some of the tedious proofs of measurability can be generated automatically from the probabilistic program. To demonstrate that this approach is broad enough to handle rather different types of machine learning systems, we verify both a classic result from statistical learning theory (PAC-learnability of decision stumps) and prove that the null model used in a Bayesian hypothesis test satisfies a fairness criterion called demographic parity.
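To make the core idea concrete: reparameterization expresses a random draw as a deterministic function of a fixed base source of randomness, which is what lets a probabilistic program be represented as an ordinary (provable-about) function inside a theorem prover. The sketch below is illustrative only, not the paper's formalization; the helper names are invented for this example.

```python
import random

def bernoulli_reparam(p, u):
    """Reparameterized Bernoulli(p): a deterministic function of a
    base sample u drawn from Uniform(0, 1)."""
    return 1 if u < p else 0

def normal_reparam(mu, sigma, z):
    """Reparameterized Normal(mu, sigma^2): a deterministic (affine)
    function of a base sample z drawn from a standard normal."""
    return mu + sigma * z

# All randomness is isolated in the base source; the transforms above
# are pure functions, which is what makes them amenable to formal proof.
rng = random.Random(0)
samples = [bernoulli_reparam(0.3, rng.random()) for _ in range(100_000)]
print(sum(samples) / len(samples))  # empirical mean, close to p = 0.3
```

Separating the randomness (the uniform source) from the logic (the pure transform) means properties such as demographic parity, P(Y_hat = 1 | A = 0) = P(Y_hat = 1 | A = 1), can be stated and proved about the deterministic transform rather than about sampling code.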