Upload Large Files to MVC / WebAPI Using Partitioning
Introduction
Sending large files to an MVC/Web-API server can be problematic. This article is about an alternative. The approach used is to break a large file up into small chunks, upload them, then merge them back together on the server via file transfer by partitioning. The article shows how to send files to an MVC server from both a webpage using JavaScript and a Windows Forms HttpClient, and can be implemented using either MVC or Web API.
In my experience, the larger the file you need to upload to a website/API, the bigger the potential problems you encounter. Even when you put the correct settings in place, adjust your web.config, make sure you use the right multiplier for maxRequestLength and maxAllowedContentLength, and of course don't forget about executionTimeout (eek!), things can still go wrong. Connections can fail when the file is *almost* transferred, servers unexpectedly (Murphy's law) run out of space, etc., the list goes on. The diagram below demonstrates the basic concept discussed in this article.
Background
The concept for this solution is very simple. The attached code works (I have it running in production), and can be improved by you in many ways. For instance, for the purposes of this article the original large file is broken into approx. 1 MB chunks, and uploaded to the server sequentially, one chunk at a time. This could, for example, be made more efficient by threading, and sending chunks in parallel. It could also be made more robust by adding fault tolerance and auto-resume into a rest-api architecture, etc. I leave you to implement these features yourself if you need them.
The code consists of two parts - the initial file-split/partitioning into chunks, and the final merge of the chunks back into the original file. I will demonstrate the file-split using both C# in a Windows form, and JavaScript, and the file-merge using C# server-side.
File split
The concept of splitting a file is very basic. We traverse the file in a binary stream, from position zero up to the last byte in the file, copying out chunks of binary data along the way and transferring these. Generally we set an arbitrary (or carefully thought out!) chunk size to extract, and use this as the amount of data to take at a time. Anything left over at the end is the final chunk.
In the example below, a chunk size of 128b is set. For the file shown, this gives us 3 x 128b chunks, and 1 x 32b. In this example there are four file chunks resulting from the split to transfer to the server.
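As a quick sanity check, the chunk count follows directly from the ceiling of the file size divided by the chunk size. A minimal sketch (the function names are mine, and the 416-byte file with 128-byte chunks is just the example above):

```javascript
// Number of chunks needed to cover a file of `size` bytes
// when each chunk holds at most `chunkSize` bytes.
function chunkCount(size, chunkSize) {
    return Math.ceil(size / chunkSize);
}

// Size of the final (possibly short) chunk.
function lastChunkSize(size, chunkSize) {
    var remainder = size % chunkSize;
    return remainder === 0 ? chunkSize : remainder;
}

console.log(chunkCount(416, 128));    // 4 chunks: 3 x 128b + 1 x 32b
console.log(lastChunkSize(416, 128)); // 32
```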
C# File Split
The accompanying demo "WinFileUpload" is a simple Windows Forms application. Its sole function is to demonstrate splitting a sample large file (50 MB) in C#, and using an HttpClient to post the file to a web server (in this case, an MVC server).
For this C# example, I have a class called Utils that takes some input variables such as maximum file chunk size, temporary folder location, and the name of the file to split. To split the file into chunks, we call the method "SplitFile". SplitFile works its way through the input file and breaks it into separate file chunks. We then upload each file chunk using "UploadFile".
Utils ut = new Utils();
ut.FileName = "hs-2004-15-b-full_tif.bmp";
ut.TempFolder = Path.Combine(CurrentFolder, "Temp");
ut.MaxFileSizeMB = 1;
ut.SplitFile();
foreach (string File in ut.FileParts)
{
    UploadFile(File);
}
MessageBox.Show("Upload complete!");
The file upload method takes an input file-name, and uses an HttpClient to upload the file. Note the fact that we are sending MultipartFormData to carry the payload.
public bool UploadFile(string FileName)
{
    bool rslt = false;
    using (var client = new HttpClient())
    {
        using (var content = new MultipartFormDataContent())
        {
            var fileContent = new ByteArrayContent(System.IO.File.ReadAllBytes(FileName));
            fileContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
            {
                FileName = Path.GetFileName(FileName)
            };
            content.Add(fileContent);
            var requestUri = "http://localhost:8170/Home/UploadFile/";
            try
            {
                var result = client.PostAsync(requestUri, content).Result;
                rslt = true;
            }
            catch (Exception ex)
            {
                rslt = false;
            }
        }
    }
    return rslt;
}
So, that's the supporting code out of the way. One of the critical things to be aware of next is the file naming convention that is being used. It consists of the original file-name, plus a code-parsable tail ".part_" that will be used server-side to merge the different file chunks back into a single contiguous file again. This is simply the convention I put together - you can change it to your own requirements, but be sure you are consistent with it.
The convention for this example is:
Name = original name + ".part_N.X" (N = file part number, X = total files).
Here is an example of a picture file split into three parts.
MyPictureFile.jpg.part_1.3
MyPictureFile.jpg.part_2.3
MyPictureFile.jpg.part_3.3
It doesn't matter what order the file chunks are sent to the server. The important thing is that some convention, like the above, is used, so that the server knows (a) what file part it is dealing with and (b) when all parts have been received and can be merged back into one large original file again.
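To make the convention concrete, here is a small helper pair that builds and parses part names in the "name.part_N.X" format. This is a sketch for illustration only; the function names are my own, not part of the demo code:

```javascript
// Build e.g. "MyPictureFile.jpg.part_2.3" from its components.
function makePartName(baseName, partNumber, totalParts) {
    return baseName + ".part_" + partNumber + "." + totalParts;
}

// Parse a part name back into { baseName, partNumber, totalParts },
// or return null if the name does not follow the convention.
function parsePartName(partName) {
    var m = /^(.+)\.part_(\d+)\.(\d+)$/.exec(partName);
    if (!m) return null;
    return {
        baseName: m[1],
        partNumber: parseInt(m[2], 10),
        totalParts: parseInt(m[3], 10)
    };
}

console.log(makePartName("MyPictureFile.jpg", 2, 3)); // "MyPictureFile.jpg.part_2.3"
console.log(parsePartName("MyPictureFile.jpg.part_2.3"));
```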
Next, here is the meat of the C# code that scans the file, creating multiple chunk files ready to transfer.
public bool SplitFile()
{
    bool rslt = false;
    string BaseFileName = Path.GetFileName(FileName);
    int BufferChunkSize = MaxFileSizeMB * (1024 * 1024);
    const int READBUFFER_SIZE = 1024;
    byte[] FSBuffer = new byte[READBUFFER_SIZE];
    using (FileStream FS = new FileStream(FileName, FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        int TotalFileParts = 0;
        if (FS.Length < BufferChunkSize)
        {
            TotalFileParts = 1;
        }
        else
        {
            float PreciseFileParts = ((float)FS.Length / (float)BufferChunkSize);
            TotalFileParts = (int)Math.Ceiling(PreciseFileParts);
        }
        int FilePartCount = 0;
        while (FS.Position < FS.Length)
        {
            string FilePartName = String.Format("{0}.part_{1}.{2}",
                BaseFileName, (FilePartCount + 1).ToString(), TotalFileParts.ToString());
            FilePartName = Path.Combine(TempFolder, FilePartName);
            FileParts.Add(FilePartName);
            using (FileStream FilePart = new FileStream(FilePartName, FileMode.Create))
            {
                int bytesRemaining = BufferChunkSize;
                int bytesRead = 0;
                while (bytesRemaining > 0 && (bytesRead = FS.Read(FSBuffer, 0,
                    Math.Min(bytesRemaining, READBUFFER_SIZE))) > 0)
                {
                    FilePart.Write(FSBuffer, 0, bytesRead);
                    bytesRemaining -= bytesRead;
                }
            }
            FilePartCount++;
        }
        rslt = true; // all chunks written
    }
    return rslt;
}
That's it for the C# client-side - we will see the result and how to handle things server-side later in the article. Next, let's look at how to do the same thing in JavaScript, from a web browser.
JavaScript File Split
NB - The JavaScript code and the C# merge code are contained in the attached demo file "MVCServer".
In our browser, we have an input control of type "file", and a button to call a method that initiates the file-split and data transfer.
<input type="file" id="uploadFile" name="file" /> <a class="btn btn-primary" href="#" id="btnUpload">Upload file</a>
On document ready, we bind to the click event of the button to call the main method.
$(document).ready(function () {
    $('#btnUpload').click(function () {
        UploadFile($('#uploadFile')[0].files);
    });
});
Our UploadFile method does the work of splitting the file into chunks and, as in our C# example, passing the chunks off to another method for transfer. The main difference here is that in C# we created individual files; in our JavaScript example, we are taking the chunks from an array instead.
function UploadFile(TargetFile)
{
    var FileChunk = [];
    var file = TargetFile[0];
    var MaxFileSizeMB = 1;
    var BufferChunkSize = MaxFileSizeMB * (1024 * 1024);
    var ReadBuffer_Size = 1024;
    var FileStreamPos = 0;
    var EndPos = BufferChunkSize;
    var Size = file.size;
    while (FileStreamPos < Size)
    {
        FileChunk.push(file.slice(FileStreamPos, EndPos));
        FileStreamPos = EndPos;
        EndPos = FileStreamPos + BufferChunkSize;
    }
    var TotalParts = FileChunk.length;
    var PartCount = 0;
    var chunk;
    while (chunk = FileChunk.shift())
    {
        PartCount++;
        var FilePartName = file.name + ".part_" + PartCount + "." + TotalParts;
        UploadFileChunk(chunk, FilePartName);
    }
}
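The slicing loop above can be pulled out into a small, testable helper. This sketch assumes only the standard Blob API (`slice` and `size`); the function name is mine, not from the demo:

```javascript
// Split a Blob (or File) into an array of chunk Blobs of at most
// chunkSize bytes each; the final chunk holds whatever remains.
function splitBlob(blob, chunkSize) {
    var chunks = [];
    var pos = 0;
    while (pos < blob.size) {
        // slice() clamps the end offset to the blob's size for us.
        chunks.push(blob.slice(pos, pos + chunkSize));
        pos += chunkSize;
    }
    return chunks;
}

// A 416-byte dummy payload, as in the 128-byte-chunk example earlier.
var blob = new Blob([new Uint8Array(416)]);
var parts = splitBlob(blob, 128);
console.log(parts.length);  // 4
console.log(parts[3].size); // 32
```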
The UploadFileChunk method takes the part of the file handed off by the previous method, and posts it to the server in a similar manner to the C# example.
function UploadFileChunk(Chunk, FileName)
{
    var FD = new FormData();
    FD.append('file', Chunk, FileName);
    $.ajax({
        type: "POST",
        url: 'http://localhost:8170/Home/UploadFile/',
        contentType: false,
        processData: false,
        data: FD
    });
}
File merge
NB - The JavaScript code and the C# merge code are contained in the attached demo file "MVCServer".
Over on the server, be that MVC or Web API, we receive the individual file chunks and need to merge them back together again into the original file.
The first thing we do is put a standard POST handler in place to receive the file chunks being posted up to the server. This code takes the input stream, and saves it to a temp folder using the file-name created by the client (C# or JavaScript). Once the file is saved, the code then calls the "MergeFile" method, which checks if it has enough file chunks available yet to merge the file together. Note that this is simply the method I have used for this article. You may decide to handle the merge trigger differently, for example, running a task on a timer every few minutes, passing off to another process, etc. It should be changed depending on your own required implementation.
[HttpPost]
public HttpResponseMessage UploadFile()
{
    foreach (string file in Request.Files)
    {
        var FileDataContent = Request.Files[file];
        if (FileDataContent != null && FileDataContent.ContentLength > 0)
        {
            var stream = FileDataContent.InputStream;
            var fileName = Path.GetFileName(FileDataContent.FileName);
            var UploadPath = Server.MapPath("~/App_Data/uploads");
            Directory.CreateDirectory(UploadPath);
            string path = Path.Combine(UploadPath, fileName);
            try
            {
                if (System.IO.File.Exists(path))
                    System.IO.File.Delete(path);
                using (var fileStream = System.IO.File.Create(path))
                {
                    stream.CopyTo(fileStream);
                }
                Shared.Utils UT = new Shared.Utils();
                UT.MergeFile(path);
            }
            catch (IOException ex)
            {
            }
        }
    }
    return new HttpResponseMessage()
    {
        StatusCode = System.Net.HttpStatusCode.OK,
        Content = new StringContent("File uploaded.")
    };
}
Each time we call the MergeFile method, it first checks to see if we have all of the file chunk parts required to merge the original file back together again. It determines this by parsing the file-names. If all files are present, the method sorts them into the correct order, and then appends one to another until the original file that was split is back together again.
public bool MergeFile(string FileName)
{
    bool rslt = false;
    string partToken = ".part_";
    string baseFileName = FileName.Substring(0, FileName.IndexOf(partToken));
    string trailingTokens = FileName.Substring(FileName.IndexOf(partToken) + partToken.Length);
    int FileIndex = 0;
    int FileCount = 0;
    int.TryParse(trailingTokens.Substring(0, trailingTokens.IndexOf(".")), out FileIndex);
    int.TryParse(trailingTokens.Substring(trailingTokens.IndexOf(".") + 1), out FileCount);
    string Searchpattern = Path.GetFileName(baseFileName) + partToken + "*";
    string[] FilesList = Directory.GetFiles(Path.GetDirectoryName(FileName), Searchpattern);
    if (FilesList.Count() == FileCount)
    {
        if (!MergeFileManager.Instance.InUse(baseFileName))
        {
            MergeFileManager.Instance.AddFile(baseFileName);
            if (File.Exists(baseFileName))
                File.Delete(baseFileName);
            List<SortedFile> MergeList = new List<SortedFile>();
            foreach (string File in FilesList)
            {
                SortedFile sFile = new SortedFile();
                sFile.FileName = File;
                baseFileName = File.Substring(0, File.IndexOf(partToken));
                trailingTokens = File.Substring(File.IndexOf(partToken) + partToken.Length);
                int.TryParse(trailingTokens.
                    Substring(0, trailingTokens.IndexOf(".")), out FileIndex);
                sFile.FileOrder = FileIndex;
                MergeList.Add(sFile);
            }
            var MergeOrder = MergeList.OrderBy(s => s.FileOrder).ToList();
            using (FileStream FS = new FileStream(baseFileName, FileMode.Create))
            {
                foreach (var chunk in MergeOrder)
                {
                    try
                    {
                        using (FileStream fileChunk =
                            new FileStream(chunk.FileName, FileMode.Open))
                        {
                            fileChunk.CopyTo(FS);
                        }
                    }
                    catch (IOException ex)
                    {
                    }
                }
            }
            rslt = true;
            MergeFileManager.Instance.RemoveFile(baseFileName);
        }
    }
    return rslt;
}
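The merge-readiness logic above (parse each received name, check the count, sort by part index) can also be sketched independently of the C# implementation. A hypothetical JavaScript version, operating on a plain array of received chunk file names (the function name is my own):

```javascript
// Given the names of the chunks received so far, return the names sorted
// into merge order if every part is present, or null if parts are missing.
function mergeOrder(receivedNames) {
    var parsed = receivedNames.map(function (name) {
        var m = /^(.+)\.part_(\d+)\.(\d+)$/.exec(name);
        return m && { name: name, part: parseInt(m[2], 10), total: parseInt(m[3], 10) };
    }).filter(Boolean);
    // Ready only when the count of valid parts matches the declared total.
    if (parsed.length === 0 || parsed.length !== parsed[0].total) return null;
    return parsed.sort(function (a, b) { return a.part - b.part; })
                 .map(function (p) { return p.name; });
}

// Chunks may arrive in any order; merge order is recovered from the names.
console.log(mergeOrder(["a.jpg.part_3.3", "a.jpg.part_1.3", "a.jpg.part_2.3"]));
// ["a.jpg.part_1.3", "a.jpg.part_2.3", "a.jpg.part_3.3"]
console.log(mergeOrder(["a.jpg.part_1.3"])); // null (still waiting for parts)
```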
Using the file split on the client-side, and the file merge on the server-side, we now have a very workable solution for uploading large files in a more secure manner than simply sending them up in one large block of data. For testing, I used some large image files converted to BMP from a Hubble picture here.
That's it - happy uploading!
Source: https://www.c-sharpcorner.com/article/upload-large-files-to-mvc-webapi-using-partitioning/