Search Results

Search found 34382 results on 1376 pages for 'default browser'.

  • How do I remove nitrogen and go back to Ubuntu's default wallpaper manager?

    - by jonalmeida
    I was using nitrogen when I was using Openbox, but now I'm trying to stop it from setting the wallpaper so I can use the Ubuntu default (i.e. via the System Settings panel). What I've done so far: ran sudo apt-get remove nitrogen; removed nitrogen's configs at ~/.config/nitrogen/; checked to make sure that draw_background in /desktop/gnome/background was enabled. So far it still doesn't work. I can't right-click on the desktop, so I'm guessing that I've missed something to get Nautilus back to drawing icons on the screen. Any help is much appreciated. Thanks!

    Read the article

  • How can I make Google show unit conversions by default?

    - by bUbUKid
    When I search for "4 inches in g" in Firefox on Windows I immediately get a unit conversion done by Google that shows up before the actual search results. On my Ubuntu 12.04 system this does not work, though. I tried Firefox and Chromium and have no script blockers installed. I also switched off Adblock Plus for testing, but to no avail. I realize that this is not really Ubuntu doing something wrong, but are there any settings I can modify to make Google show these results? I use them quite frequently, and I believe (though I cannot test it anymore) that this used to work on my last Ubuntu system. Maybe there are some script sources that Ubuntu has disabled by default, or something like that?

    Read the article

  • How to enter the Default Keyring password via the command line?

    - by Jerkofalltrades
    Is there a way to enter the default keyring password using the command line? For instance: you have a remote setup of Ubuntu 10.10 that's set to auto-login, and you don't want to remove the keyring password. The system boots up and logs in automatically, then asks for the keyring password; at this point you can create SSH connections, but you can't use remote desktop. What can you do to enter the keyring password at this point? Also, to clarify, this is from a remote connection using the command line.

    Read the article

  • How do I configure the default applications on the launcher on a LiveCD?

    - by stlsaint
    How can one edit the Unity bar's default apps within a live CD? In other words, if you boot an Ubuntu 12.04 live CD you will see Firefox, LibreOffice, Ubuntu Software Center, etc. in the Unity bar. I need to customize a 12.04 live CD so that upon boot you will see my own selected apps instead, e.g. Chromium, Ubuntu Tweak, etc. Please don't link me to Remastersys or MyUnity or Ubuntu Tweak or CCSM; no graphical applications are to be used. The ISO is being built via chroot, meaning I need the actual file(s) location: /usr/share/unity-2d.....something along those lines.

    Read the article

  • Assigning a colour to imported .obj files that are being used as the default material

    - by Salino
    I am having a problem with assigning a colour to the different meshes that I have on one object. The technique that I have used is the first approach described in this question: Is it possible to export a simulation (animation) from Blender to Unity? What I would like to do is the following. I have about 107 meshes that are different frames from my shape key animation of my Blender model. What I would like is for the first mesh to be bright green, with the colour turning white/greyish by the 40th mesh. The best would be if I could assign every mesh a colour by hand; however, they all use the default material, and if I assign the object a colour, the whole "animation" ends up in that colour.
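    One possible way to attack this in Unity, sketched below in C#, assumes the 107 frame meshes are children of one parent object and that their order under that parent matches the frame order (the class and field names are made up for illustration). Accessing renderer.material clones the shared default material for that renderer only, which is what allows each mesh to carry its own colour.

        using UnityEngine;

        // Hedged sketch: attach to the parent of the imported frame meshes and let it
        // fade the per-frame colour from green to white/grey across the children.
        public class FrameColourGradient : MonoBehaviour
        {
            public Color startColour = Color.green;
            public Color endColour = new Color(0.9f, 0.9f, 0.9f); // near-white grey

            void Start()
            {
                MeshRenderer[] renderers = GetComponentsInChildren<MeshRenderer>();
                for (int i = 0; i < renderers.Length; i++)
                {
                    float t = renderers.Length > 1 ? (float)i / (renderers.Length - 1) : 0f;
                    // .material instantiates a per-renderer copy of the shared default
                    // material, so assigning a colour here does not recolour the other meshes.
                    renderers[i].material.color = Color.Lerp(startColour, endColour, t);
                }
            }
        }

    If the child order does not match the frame order, sorting the renderers by name (or keeping an explicit list) before the loop would be the obvious adjustment.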

    Read the article

  • Is there any way to get Spotlight or the media browser in OSX (Snow Leopard) to index and recognize metadata?

    - by jaydles
    It seems silly to go to all the trouble of assigning "Faces" data to thousands of photos but not make it possible to use that data to locate them outside of that application. I know that that metadata is stored in the "library" database for Aperture/iPhoto, rather than on the actual files (which is too bad). And I can even potentially see why it might create challenges for Spotlight to use it, since Spotlight is presumably a file index system, not a media organizer, but surely the media browser used across the other OSX apps is intended to use it? The media browser's whole purpose seems to be to let you easily locate and reference the items you organize in one of the iLife apps (iPhoto or Aperture, in this case) from the others (say, iMovie, or Mail). It's particularly vexing since the photo app on the iPhone sorts by faces by default. Additionally, the Mac-based media browser does access smart albums and folders, so you could establish a workaround by creating a smart album for each face, place or tag and access them that way, but it seems like there must be an easier way. Am I missing something?

    Read the article

  • SharePoint 2010 BDC Model Deployment Issue: The default web application could not be determined.

    Yesterday I tried to deploy a Business Data Connectivity Model project created in Visual Studio 2010 to my SharePoint 2010 test server (all RTM versions), but during the deployment of the solution, SharePoint threw the following error: Add Solution:  Adding solution 'BCSDemo2.wsp'...  Deploying solution 'BCSDemo2.wsp'...Error occurred in deployment step 'Add Solution': The default web application could not be determined. Set the SiteUrl property in feature BCSDemo2_Feature1 to the URL of...

    Read the article

  • Redirecting to a default (or last visited) subdirectory. Does Google like this?

    - by andufo
    I have a site that has 3 web applications; let's use this example: example.com/nicy example.com/mash example.com/zoken The main application is nicy, so if the user comes for the first time (or if Google indexes my site) that will be the default choice. This is the code placed inside example.com/index.php <? header('HTTP/1.1 301 Moved Permanently'); header('Location: http://example.com/nicy'); die(); ?> Is this solution SEO-friendly, so that Google will index the nicy subdirectory as the main entrance page for the domain (because of the 301 redirect)? Thanks!

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which covers the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock, and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK.  But the problem is: how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (end user's machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I have done to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitations The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • What browser is sending a user agent beginning mozilla/5.0+ and translates & into &amp;?

    - by Patrick
    We've got a website which has been running for a few years now. One of our customers has just started having an intermittent problem. Looking at our IIS 6.0 logs, the service works correctly when they have a user agent beginning "mozilla/4.0+" but fails when the user agent begins "mozilla/5.0+". The particular customer only started having this problem on Wednesday. Does anyone know the browser/upgrade which changes the 4.0 to 5.0? The actual problem caused is that an "&" in a URL parameter list is being encoded as "&amp;". Has anyone seen anything similar? We have other users sending from browsers with the 5.0+ user agent without trouble. Sorry about the tags, but I don't have the rep to create new ones. Thanks in advance, Patrick Edit: hi Viper_sb, it is most probably a custom script (I'm primarily a C++ developer so don't really understand). Our site services requests from other customer-developed sites; this one was done in JavaScript as far as I know. We're actually getting a variety of user agents (presumably depending on which of our customers' customers is accessing the service); here are a few: Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+fr;+rv:1.9.1.11)+Gecko/20100701+Firefox/3.5.11 Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.126+Safari/533.4 302 0 0 Mozilla/5.0+(Macintosh;+U;+PPC+Mac+OS+X;+fr)+AppleWebKit/523.12+(KHTML,+like+Gecko)+Version/3.0.4+Safari/523.12 Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US;+rv:1.9.2.8)+Gecko/20100722+Firefox/3.6.8 Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+fr;+rv:1.9.2.8)+Gecko/20100722+Firefox/3.6.8+(.NET+CLR+3.5.30729)
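    If the affected endpoint happens to be ASP.NET (the question does not say which stack handles the request, so treat this purely as an illustration), one defensive server-side mitigation for the symptom is to HTML-decode the raw query string before parsing it, so that "&amp;" is tolerated as a separator. This does not fix the client script that is double-encoding the URL; it only makes the service robust against it. A hedged sketch with a hypothetical helper:

        using System;
        using System.Collections.Specialized;
        using System.Web;

        // Hypothetical tolerant parser: normalize "&amp;" back to "&" in the raw
        // query string before splitting parameters. A workaround, not a fix.
        public static class QueryHelper
        {
            public static NameValueCollection ParseQueryTolerantly(HttpRequest request)
            {
                string rawQuery = request.Url.Query.TrimStart('?');
                // HtmlDecode turns "&amp;" into "&" and leaves ordinary text untouched.
                string normalized = HttpUtility.HtmlDecode(rawQuery);
                return HttpUtility.ParseQueryString(normalized);
            }
        }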

    Read the article

  • How can I make my Firefox browser add-on backwards compatible?

    - by bwheeler96
    I'm making a Firefox browser add-on, and I just got all of the code working fine, but it will only install in Firefox 16, and I want it to be compatible at least from 10+. Has anyone dealt with this issue? I have my package.json pointing to my install.rdf, and my install.rdf clearly states target applications. Is there any additional setup I need? Here is my package.json { "name": "firefox-ext", "license": "MPL 2.0", "author": "", "version": "0.1", "fullName": "firefox-ext", "id": "jid1-AMCw25iQJof53w", "description": "a basic add-on" } and here is my install.rdf: <?xml version="1.0"?> <RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:em="http://www.mozilla.org/2004/em-rdf#"> <Description about="urn:mozilla:install-manifest"> <em:id>jid1-AMCw25iQJof53w</em:id> <em:name>Generic App</em:name> <em:version>1.0</em:version> <em:type>2</em:type> <em:creator>Brian Wheeler</em:creator> <em:description>Good Stuff</em:description> <em:targetApplication> <Description> <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id> <em:minVersion>1.0</em:minVersion> <em:maxVersion>19.0</em:maxVersion> </Description> </em:targetApplication> </Description> </RDF> I'm using the CFX CLI tools to make this, so everything has been built and tested with cfx init, cfx run, and cfx xpi. I just can't figure out compatibility with anything other than 16. Also, major bonus points if someone could explain the advantages of a rapid release cycle, because it really seems to have shot third-party Mozilla developers in the foot in terms of software compatibility. Thanks, -Brian

    Read the article

  • Why does my browser take me to Scour.com? (redirect virus)

    - by Paula DiTallo
    The "scour" or Rootkit.Win32.TDSS virus has a long history which can be found here: http://en.wikipedia.org/wiki/Scour Here is the primary symptom: after searching for something in your web browser using google, one of the results that you click on redirects you to scour.com. If you've executed ClamWin, Malwarebytes, McAfee, Norton, etc. to find and isolate the virus without any luck--this isn't really a surprise, since this virus attaches to existing system drivers. I only know of one reliable package that will remove this without ill effects--like adding new spyware. This package is called TDSSKiller. I have seen multiple websites that claim to have this software available, but the one that I know is reliable is located here: http://support.kaspersky.com/viruses/solutions?qid=208280684 Once you go to Kaspersky's tech support site, the TDSSKiller zip file is available for downloading. When you execute this software, you will be able to "cure" or repair the infected driver. Remember to jot down the name of the driver for future reference--should you need to reinstall the driver from a "same-as" working computer, or your install disk if the repair is ineffective. The driver that happened to get infected on my computer was the tcpip.sys driver. This caused my win sockets to loose their ip addresses. In most other instances, less critical drivers such as HDAudBus.sys are infected. In my case, I was not through correcting my computer problems until I corrected the broken WinSock issue and loaded an earlier version of the tcpip.sys driver from: C:\WINDOWS\ServicePackFiles\i386 which I placed in: C:\WINDOWS\system32\drivers Don't forget to reboot your computer after your repair! Once you download TDSSKiller and cure/repair your infected driver(s), the redirect on google searches should disappear .

    Read the article

  • Browser language detection & content ranking for a new language on the same site

    - by Arnaud
    I've been reading a lot about this, but it's still really hard to make up my mind. My understanding is that if your website provides a link to the other language, this should not be an issue for Google: as long as your links are clear and clean, Google will be able to make its way through it. The website was originally in French and I added the English version; I'm just worried that English speakers will leave if the site is not in the correct language. For the home page I just wanted to get the value from the browser and redirect to /fr/ or /en/ for the first page (using PHP this will be very easy). Could you have a look at it and tell me what you think about it? http://tinyurl.com/bpc5bn9 I don't want to get it wrong and lose my ranking with Google. Also, the website has a good rank on the French side, while the English side has been online for 2 weeks and only gets a few visits a day. Is that because all the backlinks refer to /fr/ and Google is clever enough to decide that they are 2 different websites, so backlinks will have to point to /en/ to increase the ranking value? Or will it take a few more weeks for the website to grow? Thanks for your help.

    Read the article

  • How to install Tor (Web Browser) in Ubuntu 12.10?

    - by Zignd
    I would like to install Tor, but I'm having some problems. I know that someone will say "This question is an exact duplicate of How to install tor?", but it's not, because the other question cannot be applied to Ubuntu 12.10, as the deb command is not available anymore. I did some research and even the resources available at Tor's official website cannot be applied to Ubuntu 12.10. I tried to use the deb command (as the above question says: deb http://deb.torproject.org/torproject.org <DISTRIBUTION> main) and the Terminal says deb: command not found, and when I try to install it, it says E: Unable to locate package deb. I've also tried to use the PPA ubun-tor, but it's not compatible with Quantal Quetzal, because it's too old. I've also tried sudo apt-get install tor, but the browser icon doesn't show up after installation, and if I try to use the command tor in the Terminal I get the following error message: Nov 26 10:59:25.731 [notice] Tor v0.2.3.22-rc (git-4a0c70a817797420) running on Linux. Nov 26 10:59:25.731 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning Nov 26 10:59:25.731 [notice] Read configuration file "/etc/tor/torrc". Nov 26 10:59:25.737 [notice] Initialized libevent version 2.0.19-stable using method epoll (with changelist). Good. Nov 26 10:59:25.737 [notice] Opening Socks listener on 127.0.0.1:9050 Nov 26 10:59:25.737 [warn] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running? Nov 26 10:59:25.737 [warn] Failed to parse/validate config: Failed to bind one of the listener ports. Nov 26 10:59:25.737 [err] Reading config failed--see warnings above. Thanks in advance.

    Read the article

  • When writing tests for a WordPress plugin, should I run them inside WordPress or in a normal browser?

    - by Nicola Peluchetti
    I have started using BDD for a WordPress plugin I'm working on, and I'm rewriting the JS codebase to do tests. I've encountered a few problems but I'm going steady now. I was wondering if I have the right approach, because I'm writing tests that should pass in a normal browser environment and not inside WordPress. I chose to do this because I want my plugin to be totally independent from the WordPress environment: I'm using RequireJS in a way that I don't expose any globals, and I'm loading my own version of jQuery that doesn't override the one that ships with WordPress. In this way my plugin would work the same on every WordPress version, and my code would not break if they change the jQuery version or someone uses my plugin on an old WordPress version. I wonder if this is the right approach or if I should always test inside the environment I'm working in. Since WordPress implies some globals, I had to write some functions purely for testing purposes, like "get_ajax_url": function() { if( typeof window.ajaxurl === "undefined" ) { return "http://localhost/wordpress/wp-admin/admin-ajax.php"; } else { return window.ajaxurl; } }, but apart from that I got everything working right. What do you think?

    Read the article

  • Disallow robots.txt from being accessed in a browser while keeping it accessible to spiders?

    - by Michael Irigoyen
    We make use of the robots.txt file to prevent Google (and other search spiders) from crawling certain pages/directories in our domain. Some of these directories/files are secret, meaning they aren't linked (except perhaps on other pages encompassed by the robots.txt file). Some of these directories/files aren't secret; we just don't want them indexed. If somebody browses directly to www.mydomain.com/robots.txt, they can see the contents of the robots.txt file. From a security standpoint, this is not something we want publicly available to anybody. Any directories that contain secure information are set behind authentication, but we still don't want them to be discoverable unless the user specifically knows about them. Is there a way to provide a robots.txt file but to have its presence masked from John Doe accessing it from his browser? Perhaps by using PHP to generate the document based on certain criteria? Perhaps something I'm not thinking of? We'd prefer a way to do it centrally (meaning a <meta> tag solution is less than ideal).
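    The question already floats the key idea: generate robots.txt dynamically and decide what to return based on the caller. As an illustration only (the asker mentions PHP; this sketch is an ASP.NET handler with hypothetical bot names, and user-agent sniffing is trivially spoofed, so this is obscurity rather than real security), the criteria-based approach might look like the following, with the handler mapped to the robots.txt URL in the server configuration:

        using System;
        using System.Web;

        // Hedged sketch: serve the real rules only to user agents that look like
        // known crawlers, and a bland, empty ruleset to ordinary browsers.
        public class RobotsHandler : IHttpHandler
        {
            private static readonly string[] KnownBots =
                { "Googlebot", "Bingbot", "Slurp", "DuckDuckBot", "Baiduspider" };

            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/plain";
                string ua = context.Request.UserAgent ?? string.Empty;

                bool looksLikeBot = false;
                foreach (string bot in KnownBots)
                {
                    if (ua.IndexOf(bot, StringComparison.OrdinalIgnoreCase) >= 0)
                    {
                        looksLikeBot = true;
                        break;
                    }
                }

                if (looksLikeBot)
                {
                    // The directory names here are placeholders.
                    context.Response.Write("User-agent: *\nDisallow: /secret-dir/\nDisallow: /internal/\n");
                }
                else
                {
                    // Browsers just see an unremarkable, permissive file.
                    context.Response.Write("User-agent: *\nDisallow:\n");
                }
            }
        }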

    Read the article

  • Spark View Engine: How to set default master page name?

    - by Dave
    I use the Spark View Engine with nested master pages. I have Application.spark, which defines the basic layout of the website. Then there are several other masters which themselves use Application.spark as their master page (Default.spark, SinlgeColumn.spark, Gallery.spark, ...). If no master page is specified in a view file, then Application.spark is automatically chosen by the Spark View Engine. Since almost all my pages use "Default.spark" as their master, is there a way to configure this globally? The other possibilities would be: set the master in each spark file individually with <use master="Default" />, but that's really annoying; or rename my master files (Default.spark <- Application.spark), but that really doesn't make any sense naming-wise.

    Read the article

  • How to set the default page for a .CHM file in HTML Help Workshop?

    - by mengchew0113
    When I open the help (.chm) application, I can see the table of contents. By default, the first entry in the file is selected; however, I can't see the corresponding page data. Instead, I see "This program cannot display the web page" (the default error message in IE7). The page is displayed only when I click on any of the contents on the left side. Is there a way of showing the page by default without clicking on the entry? The following is the .hhp file. [OPTIONS] Compatibility = 1.1 or later Compiled file=Config.chm Contents file=Config.hhc Default topic=D:\apps\bin\Debug\html\Databases.htm Language=0x409 English (United States) Display compile progress=No Title=ETL_Config Documentation [FILES] D:\apps\bin\Debug\html\Databases.htm D:\apps\bin\Debug\html\InstanceInformation.htm

    Read the article

  • WordPress: How to override all default theme CSS so your custom stylesheet is loaded last?

    - by mickael
    I have a problem: I've been able to include a custom CSS file in the head section of my WordPress theme with the following code: function load_my_style_wp_enqueue_scripts() { wp_register_style('my_styles_css', includes_url("/css/my_styles.css")); wp_enqueue_style('my_styles_css'); } add_action('wp_enqueue_scripts','load_my_style_wp_enqueue_scripts'); But the order in the source code is as follows: <link rel='stylesheet' id='my_styles_css-css' href='http://...folderA.../my_styles.css?ver=3.1' type='text/css' media='all' /> <link rel="stylesheet" id="default-css" href="http://...folderB.../default.css" type="text/css" media="screen,projection" /> <link rel="stylesheet" id="user-css" href="http://...folderC.../user.css" type="text/css" media="screen,projection" /> I want my_styles_css to be the last file to load, overriding the default and user files. I've tried to achieve this by modifying wp_enqueue_style in different ways, but without any success. I've tried: wp_enqueue_style('my_styles_css', array('default','user')); or wp_enqueue_style('my_styles_css', false, array('default','user'), '1.0', 'all'); I've seen some related questions without answers, or with these last 2 methods that are still failing for me. The function above is part of a plugin that I've got enabled in my WordPress installation.

    Read the article

  • How can I set the default value for a custom "Number" field in SharePoint?

    - by UnhipGlint
    I created a custom field for a content type I am creating using the XML below. <field ID="{GUID}" Required="False" DisplayName="Likes" Name="Likes" Type="Number" SourceID="http://schemas.microsoft.com/sharepoint/v3"><default>0</default></field> The field is meant to be used as a counter of sorts, and will be incremented programmatically. But, I can't get the value to default to "0" when a new item is created. However, for some reason, when I create a new column manually using the Site Collection settings page and configure it to default to "0" it works as it should. So far, I've tried the following tactics: I removed the "default" element from the field definition, and set the "DefaultValue" attribute on the content type definition. I exported a definition for the manually-created, working column (using an Imtech STSADM tool). Then, I added it to my field definitions XML and modified the IDs so that I could add it to my content type. When I did this, it still didn't work, even though it was exported from a working column! Any idea why this isn't working for me?
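    One thing worth double-checking in the XML above is element casing: the SharePoint field schema normally expects <Default>0</Default> rather than <default>0</default>, and CAML is case-sensitive. Failing that, a fallback is to set the default programmatically after the field is provisioned. The sketch below is hedged: the field name "Likes" is taken from the XML, while the feature scope and receiver wiring are illustrative assumptions.

        using Microsoft.SharePoint;

        // Hedged sketch: when the site-scoped feature that provisions the field is
        // activated, set its default value and push the change to lists using it.
        public class LikesFieldReceiver : SPFeatureReceiver
        {
            public override void FeatureActivated(SPFeatureReceiverProperties properties)
            {
                SPSite site = properties.Feature.Parent as SPSite;
                if (site == null) return;

                SPField field = site.RootWeb.Fields.TryGetFieldByStaticName("Likes");
                if (field != null)
                {
                    field.DefaultValue = "0";  // defaults are stored as strings, even for Number fields
                    field.Update(true);        // push the change down to lists already using the field
                }
            }
        }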

    Read the article

  • Add not null DateTime column to SQLite without default value?

    - by acidzombie24
    It looks like I can't add a NOT NULL constraint or remove a default constraint. I would like to add a datetime column to a table and have all the existing rows set to some fixed value (perhaps 1970 or the year 2000), but it seems like I can't use NOT NULL without a default, and I can't remove a default once it's added. So how can I add this column? (Once again: just a plain datetime, NOT NULL.)
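    For what it's worth, SQLite's ALTER TABLE ADD COLUMN refuses a NOT NULL column unless it carries a non-null default, and a column's default cannot be dropped in place; the usual workaround is either to accept a throwaway constant default or to rebuild the table. Below is a hedged C# sketch using System.Data.SQLite with a hypothetical table named Items; the column list in the rebuild would need to match the real schema.

        using System.Data.SQLite;

        public static class SqliteMigration
        {
            // Sketch only: adds a NOT NULL datetime column, backfilling existing rows.
            public static void AddCreatedAtColumn(string connectionString)
            {
                using (var conn = new SQLiteConnection(connectionString))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    using (var cmd = conn.CreateCommand())
                    {
                        cmd.Transaction = tx;

                        // Option 1: NOT NULL with a throwaway default (simplest).
                        // cmd.CommandText = "ALTER TABLE Items ADD COLUMN CreatedAt DATETIME NOT NULL DEFAULT '1970-01-01 00:00:00'";
                        // cmd.ExecuteNonQuery();

                        // Option 2: rebuild the table to get NOT NULL without any default clause.
                        cmd.CommandText =
                            "CREATE TABLE Items_new (Id INTEGER PRIMARY KEY, Name TEXT, CreatedAt DATETIME NOT NULL);" +
                            "INSERT INTO Items_new (Id, Name, CreatedAt) SELECT Id, Name, '1970-01-01 00:00:00' FROM Items;" +
                            "DROP TABLE Items;" +
                            "ALTER TABLE Items_new RENAME TO Items;";
                        cmd.ExecuteNonQuery();

                        tx.Commit();
                    }
                }
            }
        }

    Option 1 is enough if a placeholder timestamp as the permanent default is acceptable; the rebuild in option 2 is the only way to end up with NOT NULL and no default at all.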

    Read the article
