We revisit the smooth and strongly convex separable optimization problem with linear coupling constraints, where the objective functions and coupling matrices are stored across the nodes of a decentralized communication network. This problem has attracted significant interest due to applications in power systems, distributed control, federated learning, and other areas, and a number of optimization methods with various convergence guarantees have been proposed for solving it. In this work, we develop theoretical lower bounds on the communication, matrix-vector multiplication, and gradient computation complexities of solving the problem. Our findings suggest that current algorithms may be inefficient, as their theoretical complexities exceed these lower bounds by a substantial margin. Consequently, we close this gap by developing the first distributed gradient method whose theoretical complexities match the lower bounds. The proposed algorithm substantially outperforms existing algorithms in practice, as corroborated by our preliminary experiments on distributed linear regression and vertical federated learning problems.
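For reference, a standard way to write the problem class described above, with the notation $n$, $f_i$, $A_i$, and $b$ assumed here for illustration rather than taken from the paper, is
\[
\min_{x_1,\dots,x_n} \; \sum_{i=1}^{n} f_i(x_i) \quad \text{subject to} \quad \sum_{i=1}^{n} A_i x_i = b,
\]
where node $i$ of the decentralized network privately stores the smooth and strongly convex function $f_i$ and the coupling matrix $A_i$.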