With reference to the UNIDIR paper, the rationale for the draft wording is as follows:
Definitions should capture non-lethal as well as lethal systems; hence AWS rather than LAWS.
Definitions should also capture existing systems, not just future ones. Speaking of AWS in the future tense is, in my view, a mistake that just kicks the can down the road. Crude and simple AWS existed in the American Civil War.
Regarding definitional approaches, I favour the human-centred approach and the task/functions approach. The technology falls into place once we are clear on those matters. I accept the sequencing the UNIDIR paper suggests.
I make no distinction between autonomous and automated. Machines are machines and humans are humans. You are either delegating a critical function of targeting to a machine or you are not.
My definition of autonomy is based on that in George Bekey's book Autonomous Robots (p. 1). Unlike the British definition of autonomy, this definition captures existing weapons. The British definition is a future-tense definition of autonomy.
I favour the Dutch working definition of an AWS and their concept of the "wider" loop.
I favour the ICRC "critical functions" approach. I define three critical functions of targeting: define, select and engage.
I regard the French working definition of a LAWS as a minimal guide to what to ban in Draft B.
Draft A draws the line of tolerance at a human in the narrower loop of select and engage.
Draft B draws the line of tolerance at a human in the wider loop of define, select and engage.
The only way to find out if Draft A is achievable is to talk to the diplomats.
If that cannot win consensus, Draft B might.
I would say getting something is better than getting nothing. Treaties have a habit of growing in scope once enacted.
Tweet feedback to @sean_welsh77.