Since I am still afraid of SSIS, and because I am dealing mostly with CSV files and table structures that are relatively simple and require only one of the three letters in the acronym "ETL," I find myself using BULK INSERT a lot. I have been meaning to switch to using CLR, since I am doing a lot of file system querying using xp_cmdshell, but I haven't had the chance to really explore it yet. I know, a lot of you are probably thinking, wow, look at all those bad habits. But for every person thinking that way, I am sure there is someone thinking, yes, I am still using BULK INSERT as well. So, let me tell you about two little problems I came across recently, and how I worked around them.
The Phantom Carriage Return
In one system, we get log files from a bunch of application servers running Linux. The other day we built a new log file output mechanism using the same code (we are removing, as much as we can, the applications' dependency on the database). However, the same code outputting the same type of log file (just containing different data) was generating the following errors when I tried to BULK INSERT any file with more than one row:
Bulk load data conversion error (truncation) for row 1, column <last column number> (<last column name>).
I asked the developer if anything had changed with the constant he was using for carriage returns, and he said no, "\n" was still in place. No matter how I changed the ROWTERMINATOR, I was still getting the error:
... WITH (ROWTERMINATOR = '\r\n', ... );
... WITH (ROWTERMINATOR = '\n\r', ... );
... WITH (ROWTERMINATOR = '\n\n', ... );
... WITH (ROWTERMINATOR = '\r', ... );
Nothing would work. I had half a mind to go and look at the Perl code on the Linux box, but before I got to that point, and on a total lark, I tried this version of the code instead:
DECLARE @b nvarchar(2048);

SET @b = 'BULK INSERT ... WITH (ROWTERMINATOR = ''' + CHAR(10) + ''', ... );'
Lo and behold: if I used the CHAR() code from SQL embedded in the string, it worked! For some of you, the addition of dynamic SQL will make you wag your finger at me even more. But for anyone who is polling for new files to load, you are probably already using dynamic SQL to construct these statements anyway. I haven't really dug into why this worked but "\n" didn't; I just knew it was working and could move on to the next most important task.
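In case it is useful, here is a minimal, self-contained sketch of that approach; the table name, file path, and pipe field terminator are placeholders I've made up, not the actual objects from the system described above:

DECLARE @b nvarchar(2048);

-- build the statement with CHAR(10) embedded directly, instead of
-- relying on the '\n' escape sequence, then run it as dynamic SQL
SET @b = N'BULK INSERT [dbo].[AppServerLog] '
       + N'FROM ''\\FileServer\Logs\app01.log'' '
       + N'WITH (FIELDTERMINATOR = ''|'', ROWTERMINATOR = ''' + CHAR(10) + ''');';

EXEC sys.sp_executesql @b;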
Using BULK INSERT Against a Linked Server
When you try to use BULK INSERT against a four-part name, e.g.:
BULK INSERT [OtherServer].[MyDatabase].[dbo].[foo] ... ;
you get the following error:
Invalid object name 'foo'.
I complained about the misleading error message in Connect #525287: BULK INSERT to linked server gives misleading error message. In that Connect item, I mentioned that I had an easy workaround, and had a follow-up from "Pbaggett" asking me about my workaround (though I just noticed the comment today), so I thought it was a perfect opportunity to make this post a little meatier.
In the system I am working in, there is a central table that describes the various tables that exist on various systems in the environment. Most of the tables that are a target for BULK INSERT are local, but a few are remote, so this central table also stores the linked server name when the target table is remote; for local tables that column is NULL. As described above, all BULK INSERT statements are constructed dynamically anyway, since the target tables aren't always inserted to, and the file names and number of files are volatile. Anyway, the trick here is to execute a SQL statement *on* the remote server when necessary (using a dynamically built, and then nested, sp_executesql call). I'll illustrate what I mean here with some sample code (for brevity, I've left placeholders for error handling, loops, etc.):
DECLARE @LinkedServer nvarchar(255),
        @Database     nvarchar(128),
        @Table        nvarchar(255),
        @FilePath     nvarchar(2048),
        @sql          nvarchar(MAX);

-- this would actually be in two loops:
-- one to get all of the target tables,
-- and the other to get all of the relevant
-- files for each target table.

SELECT @LinkedServer = N'[OtherServer]',
       @Database     = N'[MyDB]',
       @Table        = N'[dbo].[foo]',
       @FilePath     = N'\\FileServer\file.csv';

SET @sql = N'BULK INSERT ' + @Database + '.' + @Table
    + ' FROM ''' + @FilePath + ''' '
    + 'WITH (FIELDTERMINATOR = ''|'', ROWTERMINATOR = ''\n'');';

SET @sql = N'sys.sp_executesql N''' + REPLACE(@sql, '''', '''''') + ''';';

BEGIN TRY
    -- sp_testlinkedserver goes here

    SET @sql = N'EXEC ' + COALESCE(@LinkedServer + '...', '') + @sql;

    PRINT @sql;

    -- I've commented out the EXEC here; PRINT should
    -- give you the gist of what's going on.
    --EXEC master.sys.sp_executesql @sql;
END TRY
BEGIN CATCH
    -- error handling
END CATCH
Of course there are some complications here… doubling up the apostrophes and trying to debug the final SQL statement can be a pain; you also need to be sure that all of the remote servers are able to see the same network shares using the context that the linked server uses. But so far this workaround has allowed me to keep my log polling / loading process central, even while the distribution of data has scaled out to other servers.
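If it helps to visualize the nesting, the PRINT in the sample above should produce something roughly like the following for the remote case, which you can paste into a query window to inspect before letting the EXEC loose:

EXEC [OtherServer]...sys.sp_executesql N'BULK INSERT [MyDB].[dbo].[foo] FROM ''\\FileServer\file.csv'' WITH (FIELDTERMINATOR = ''|'', ROWTERMINATOR = ''\n'');';

For a local table (where @LinkedServer is NULL), the same statement is produced without the [OtherServer]... prefix.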
@Peso ROCKS Thanks for the 0x0A tip.
Thank you very much @Peso
Setting '0x0A' as ROWTERMINATOR worked like a charm!!! This issue was driving me crazy.
Cheers!
Peso. Thanx! The 0x0A worked for me!
@Mauricio, stop using EXEC @Cmd; either use:
EXEC(@Cmd); — those parentheses are important
Or better yet, use
EXEC sp_executesql @Cmd;
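EXEC @Cmd treats the entire string in @Cmd as the name of a stored procedure to execute, which is why the error quoted below says the whole BULK INSERT statement "is not a valid identifier". With the rest of the script unchanged, that last line just becomes:

-- execute the statement held in @Cmd as dynamic SQL
EXEC sys.sp_executesql @Cmd;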
Hi Aaron,
Could you give me a hand with an error related to this topic?
-- I want to execute a stored procedure from my client machine (just one server)
DECLARE @Tabla nvarchar(50)
DECLARE @RutaArchivo nvarchar(200)
DECLARE @RutaFormato nvarchar(200)
DECLARE @Cmd nvarchar(500)
-- @Ruta = '\\DESARROLLO4\Users\MAURICIO\Downloads\Publico\'
SET @Tabla = '[Predial].[IGACTmp]'
SET @RutaArchivo = @Ruta + 'IGAC.dat'
SET @RutaFormato = @Ruta + 'IGAC_Format.XML'
SET @Cmd = N'BULK INSERT ' + @Tabla + ' FROM ''' + @RutaArchivo +
''' WITH (FORMATFILE = ''' + @RutaFormato + ''', ROWTERMINATOR = ''\n'')'
EXEC @Cmd
----------------------------
Error message:
The Name 'BULK INSERT [Predial].[IGACTmp] FROM '\\DESARROLLO4\Users\MAURICIO\Downloads\Publico\IGAC.dat' WITH (FORMATFILE = '\\DESARROLLO4\Users\MAURICIO\Downloads\Publico\IGAC_Format.XML', ROWTERMINATOR = '\n')' is not a valid identifier
----------------------------
I'd appreciate your help. Thanks in advance.
Usually when I find a tech article that helps me with a coding problem, I find that everyone else hit that problem 8+ years ago.
So it's nice to see I'm only a day behind this time!
This blog post saved my morning! I was having the 'phantom carriage return' problem. What a PITA! Spent about 45 minutes fussing with hex-mode in Emacs trying to figure this out, changing from DOS to Unix terminators, etc. etc. Thanks!
BTW, it's a total mystery why in over 15 years of SQL development the SSMS developers haven't just swiped working code from the Excel codebase to read CSVs properly …
Hi Aaron,
I am probably missing something, but my first reaction is this:
Why not just do it in C#? This is a very simple task; should you have any problems, you can just debug; you have full control, and you don't have to guess your way around other people's bugs and issues – you can have your own unique ones ;).
I am half serious 😉
Happened to us too, some years ago.
We ended up using 0x0A as the ROWTERMINATOR, since it doesn't need dynamic SQL.
It seems hex always works, especially with ASCII values below 32.
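For anyone landing here with the same phantom row terminator problem, this is a minimal sketch of the hex approach the commenters describe; the table name, file path, and field terminator are placeholders:

BULK INSERT [dbo].[AppServerLog]
FROM '\\FileServer\Logs\app01.log'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '0x0a');

Because the terminator is spelled out as the hex code 0x0a, there is no escape-sequence translation to go wrong, and no dynamic SQL is required.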