When using dynamic SQL, you have two choices: EXEC() / EXECUTE(), or sp_executesql. Early in my career, I used EXEC() a lot, because it was much easier. I've since learned that sp_executesql is a lot better, for two main reasons:
DECLARE @foo NVARCHAR(32) = N'string''; DROP TABLE dbo.table; --''';
DECLARE @sql NVARCHAR(MAX) = N'SELECT foo FROM dbo.table WHERE foo = ''' + @foo + '''';
EXEC(@sql);
… or …
DECLARE @foo NVARCHAR(32) = N'string''; DROP TABLE dbo.table; --''';
DECLARE @sql NVARCHAR(MAX) = N'SELECT foo FROM dbo.table WHERE foo = @foo;';
EXEC sp_executesql @sql, N'@foo NVARCHAR(32)', @foo;
In the latter case, @foo is a parameter all the way through, making it quite difficult to expose yourself to SQL injection. The query will literally search for the following value:
string'; DROP TABLE dbo.table; --'
This returns an empty result instead of trying to execute two commands on the server.
Now, can you still get bitten by parameter sniffing in either case? Of course. But you can't solve that by switching between EXEC and sp_executesql; that is something you'll need to address with OPTIMIZE FOR or RECOMPILE query hints, or maybe even the optimize for ad hoc workloads server setting. This behavior is quite similar to how bind variables in Oracle work.
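As a minimal sketch of the RECOMPILE route, you can put the hint inside the dynamic statement itself so each execution compiles a plan for the actual @foo value (reusing the illustrative dbo.table example from above):

```sql
DECLARE @foo NVARCHAR(32) = N'string';
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT foo FROM dbo.table WHERE foo = @foo OPTION (RECOMPILE);';
-- Still fully parameterized; the hint only changes plan caching, not safety.
EXEC sp_executesql @sql, N'@foo NVARCHAR(32)', @foo;
```

You trade plan reuse for plan quality here, so reserve it for queries whose data distribution really does vary wildly between parameter values.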
There are going to be cases where your parameters can't simply be passed in directly, such as variable table names. The threat of SQL injection still exists there, but the performance concerns are not really relevant – if the query may pull from a different table each time, you should definitely be using RECOMPILE.
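Since an identifier can't be passed as a parameter, one hedged sketch for that case is to wrap the name in QUOTENAME so a malicious value can't break out of the identifier, and still parameterize the predicate (the @table variable and dbo schema here are illustrative):

```sql
DECLARE @table SYSNAME = N'table';   -- hypothetical, would come from elsewhere
DECLARE @foo   NVARCHAR(32) = N'string';
DECLARE @sql   NVARCHAR(MAX) =
    N'SELECT foo FROM dbo.' + QUOTENAME(@table)
  + N' WHERE foo = @foo OPTION (RECOMPILE);';
-- QUOTENAME brackets the name ([table]), neutralizing injection in the identifier;
-- the WHERE clause remains a true parameter.
EXEC sp_executesql @sql, N'@foo NVARCHAR(32)', @foo;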
The bottom line is this: query plan reuse is one of your very best friends; concatenation is the long-lost acquaintance from high school that just might steal your lunch.