They don't do anything to suppress errors; they're just searching for a gate sequence (chosen from a predetermined small set of primitives) that performs a specific operation in minimal time, which also minimizes the impact of the noise.

Unfortunately, given current error rates, this alone isn't enough. They entirely omit dynamically corrected gates, which don't "consume" (require) even a single additional qubit and which will be employed in any useful implementation alongside a surface code variant.
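To make the point concrete: this is not anything from the paper, just a toy NumPy sketch of the simplest relative of a dynamically corrected gate, a spin-echo sequence. A static Z-phase error that accumulates during an idle period is refocused by inserting X pulses, using no extra qubits at all:

```python
import numpy as np

# Toy illustration (not the paper's method): a static Z-phase error of
# strength eps acting during two idle periods, and how interleaved X
# pulses refocus it (a spin echo, the simplest dynamical correction).
X = np.array([[0, 1], [1, 0]], dtype=complex)

def z_error(eps):
    # Unwanted Z rotation picked up during one idle period.
    return np.diag([np.exp(-1j * eps), np.exp(1j * eps)])

eps = 0.1
naive = z_error(eps) @ z_error(eps)          # error accumulates to 2*eps
echo = z_error(eps) @ X @ z_error(eps) @ X   # X pulses flip the error's sign

print(np.allclose(echo, np.eye(2)))   # True: the error cancels exactly
print(np.allclose(naive, np.eye(2)))  # False: without the pulses it doesn't
```

The key step is that conjugation by X maps the error `z_error(eps)` to `z_error(-eps)`, so the two idle periods cancel each other; real dynamically corrected gates generalize this cancellation to arbitrary target gates and broader noise models.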

Another issue: why restrict yourself to a finite set of primitive gates? In any implementation, quantum gates are parametrized by continuous variables, giving infinitely many choices. This is why their results are almost certainly not optimal: there will be faster solutions if they drop this arbitrary discretization (which they most likely impose to simplify the problem).
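A small NumPy sketch of the discretization cost (my own illustration, not taken from the paper): a continuously parametrized Z-rotation reaches an arbitrary angle in a single gate, while a discrete primitive like the T gate only reaches multiples of pi/4 exactly, so everything else must be approximated by longer sequences:

```python
import numpy as np

# Continuous-parameter gate: a Z-rotation by an arbitrary angle theta.
def rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# Discrete primitive: the T gate is the fixed Z-rotation by pi/4.
T = rz(np.pi / 4)

# A generic target angle like 0.3 rad is one parametrized gate, but no
# power of T hits it exactly -- a discrete set can only approximate it
# with longer compiled sequences.
target = rz(0.3)
best = min(np.linalg.norm(target - np.linalg.matrix_power(T, k))
           for k in range(8))
print(best > 1e-6)  # True: no power of T matches the arbitrary rotation
```

This is exactly the trade-off in the comment above: the discrete set makes the search space finite and the optimization tractable, at the price of longer (hence noisier and slower) sequences than the continuous optimum.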

Their reference is probably this: https://arxiv.org/pdf/1807.00800.pdf
