Where technologies work in ways that are ‘opaque’ – such that their functioning cannot be effectively understood or explained – this raises challenges for predicting specific outcomes and for ensuring adequate accountability. Such challenges are particularly acute in the context of autonomy in weapons because the outcomes involved include severe harms. In the civilian sphere, policy and legal responses to new technologies have recognised these challenges and have imposed obligations for ‘explicability’, both as a system requirement and as part of any response to people who experience harm from automated data processing. In the context of autonomy in weapons systems, establishing a legal requirement for ‘explicability’ (as one component of a legal response) would prohibit certain forms of system functioning. It would also provide a basis for scrutiny of technologies under development (such as in national weapon reviews) and would facilitate legal judgements and accountability around the use of systems that are not prohibited.