Synthesis from Satisficing and Temporal Goals

S. Bansal, L. E. Kavraki, M. V. Vardi, and A. Wells, “Synthesis from Satisficing and Temporal Goals,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2022, vol. 36, no. 9, pp. 9679–9686.

Abstract

Reactive synthesis from high-level specifications that combine hard constraints expressed in Linear Temporal Logic (LTL) with soft constraints expressed by discounted-sum (DS) rewards has applications in planning and reinforcement learning. An existing approach combines techniques from LTL synthesis with optimization for the DS rewards but has failed to yield a sound algorithm. An alternative approach combining LTL synthesis with satisficing DS rewards (rewards that achieve a threshold) is sound and complete for integer discount factors, but, in practice, a fractional discount factor is desired. This work extends the existing satisficing approach, presenting the first sound algorithm for synthesis from LTL and DS rewards with fractional discount factors. The utility of our algorithm is demonstrated on robotic planning domains.
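For context, a brief sketch of the quantitative notions the abstract refers to, using standard definitions from this line of work (the exact notation in the paper may differ; the symbols d, v, and r_i below are illustrative). The discounted sum of an infinite reward sequence r_0, r_1, r_2, … under a discount factor d > 1 is

    DS(r, d) = \sum_{i \geq 0} \frac{r_i}{d^i}

A strategy is satisficing with respect to a threshold v if the reward sequence it induces satisfies DS(r, d) \geq v, as opposed to maximizing DS(r, d). An integer discount factor corresponds to d \in \{2, 3, \dots\}, whereas a fractional discount factor allows rational values such as d = 3/2, which is the setting this work addresses.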

Publisher: http://dx.doi.org/10.1609/aaai.v36i9.21202

PDF preprint: http://kavrakilab.org/publications/bansal2022-synthesis.pdf