Daily Archives

Articles indexed Monday July 1 2013


  • copy C'tor with operator= | C++

    - by user2266935
    I've got this code here: class DerivedClass : public BaseClass { SomeClass* a1; SomeClass* a2; public: // constructors go here ~DerivedClass() { delete a1; delete a2; } // other functions go here .... }; My first question is: can I write an "operator=" for "DerivedClass"? (If your answer is yes, could you show me how?) My second question is: if the answer to the above is yes, could you show me how to write a "copy c'tor" using the "operator=" you wrote beforehand (if that is even possible)? Your help would be much appreciated!
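
    For illustration only (not part of the question), a minimal sketch of one common pattern: write operator= as a self-assignment-safe deep copy, then let the copy constructor put the pointers into a safe empty state and delegate to it. BaseClass and SomeClass are stubbed here and all names are assumptions.

        #include <cstddef>

        class BaseClass { public: virtual ~BaseClass() {} };              // assumed copyable
        class SomeClass { int v; public: SomeClass(int x = 0) : v(x) {} };

        class DerivedClass : public BaseClass {
            SomeClass* a1;
            SomeClass* a2;
        public:
            DerivedClass() : a1(new SomeClass()), a2(new SomeClass()) {}

            // copy assignment: deep-copy the owned objects
            DerivedClass& operator=(const DerivedClass& other) {
                if (this != &other) {                          // self-assignment guard
                    BaseClass::operator=(other);               // copy the base part
                    SomeClass* n1 = new SomeClass(*other.a1);  // allocate first, so the
                    SomeClass* n2 = new SomeClass(*other.a2);  // object stays valid if new throws
                    delete a1; delete a2;
                    a1 = n1; a2 = n2;
                }
                return *this;
            }

            // copy constructor written in terms of operator=: start from a safe
            // empty state, then delegate the deep copy
            DerivedClass(const DerivedClass& other) : BaseClass(), a1(NULL), a2(NULL) {
                *this = other;
            }

            ~DerivedClass() { delete a1; delete a2; }
        };

    Implementing the copy constructor through operator= works as sketched, though many people prefer the opposite direction (copy constructor plus swap, with operator= built on top).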

    Read the article

  • Ways to make (relatively) safe assumptions about the type of concrete subclasses?

    - by Kylotan
    I have an interface (defined as an abstract base class) that looks like this: class AbstractInterface { public: virtual bool IsRelatedTo(const AbstractInterface& other) const = 0; }; And I have an implementation of this (constructors etc. omitted): class ConcreteThing : public AbstractInterface { public: bool IsRelatedTo(const AbstractInterface& other) const { return m_ImplObject.has_relationship_to(other.m_ImplObject); } private: ImplementationObject m_ImplObject; }; The AbstractInterface forms an interface in Project A, and the ConcreteThing lives in Project B as an implementation of that interface. This is so that code in Project A can access data from Project B without having a direct dependency on it - Project B just has to implement the correct interface. Obviously the line in the body of the IsRelatedTo function cannot compile - that instance of ConcreteThing has an m_ImplObject member, but it can't assume that all AbstractInterfaces do, including the other argument. In my system, I can actually assume that all implementations of AbstractInterface are instances of ConcreteThing (or subclasses thereof), but I'd prefer not to be casting the object to the concrete type in order to get at the private member, or encoding that assumption in a way that will crash without a diagnostic later if this assumption ceases to hold true. I cannot modify ImplementationObject, but I can modify AbstractInterface and ConcreteThing. I also cannot use the standard RTTI mechanism for checking a type prior to casting, or use dynamic_cast for a similar purpose. I have a feeling that I might be able to overload IsRelatedTo with a ConcreteThing argument, but I'm not sure how to call it via the base IsRelatedTo(AbstractInterface) method. It wouldn't get called automatically as it's not a strict reimplementation of that method. Is there a pattern for doing what I want here, allowing me to implement the IsRelatedTo function via ImplementationObject::has_relationship_to(ImplementationObject), without risky casts? (Also, I couldn't think of a good question title - please change it if you have a better one.)
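
    For illustration (one possible answer, not from the question): a small double-dispatch sketch that recovers the concrete type of the other object without dynamic_cast. ImplementationObject is stubbed out and the extra method name is an assumption.

        class ConcreteThing;                      // lives in Project B

        class AbstractInterface {                 // lives in Project A
        public:
            virtual ~AbstractInterface() {}
            virtual bool IsRelatedTo(const AbstractInterface& other) const = 0;
            // second leg of the double dispatch: only the concrete type implements it
            virtual bool IsRelatedToConcrete(const ConcreteThing& caller) const = 0;
        };

        class ImplementationObject {              // stand-in for the real Project B class
        public:
            bool has_relationship_to(const ImplementationObject&) const { return true; }
        };

        class ConcreteThing : public AbstractInterface {
        public:
            bool IsRelatedTo(const AbstractInterface& other) const {
                // first dispatch: ask 'other' to call us back with our concrete type
                return other.IsRelatedToConcrete(*this);
            }
            bool IsRelatedToConcrete(const ConcreteThing& caller) const {
                // both objects are now known to be ConcreteThing; no cast needed
                return caller.m_ImplObject.has_relationship_to(m_ImplObject);
            }
        private:
            ImplementationObject m_ImplObject;
        };

    The trade-off is that Project A must at least forward-declare the concrete type in the interface header.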

    Read the article

  • Disable MOD_PHP in vhosts and activate suphp

    - by mezgani
    I need to deactivate mod_php on one vhost while keeping it working for the other vhosts; I need to disable it in order to activate suphp. Here is the vhost config: Options +Indexes ServerName www.native.org ServerAlias native.org DocumentRoot /home/user/www/native/current ServerAdmin [email protected] UseCanonicalName Off CustomLog /var/log/apache2/native_access.log combined ErrorLog /var/log/apache2/native_error.log <Directory /home/user/www/native/current> RemoveHandler .php AllowOverride All Options FollowSymLinks Order allow,deny allow from all </Directory> suPHP_Engine on SuexecUserGroup user native <IfModule mod_suphp.c> suPHP_UserGroup user native AddHandler x-httpd-php .php .php3 .php4 .php5 suPHP_AddHandler x-httpd-php </IfModule> NB: mod_php is activated by default for all vhosts.
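
    For reference, an untested sketch assembled from the directives already in the question: keep mod_php's engine off for this vhost only and let suPHP pick up the handler (module names such as mod_php5.c may differ on your build).

        <VirtualHost *:80>
            ServerName www.native.org
            DocumentRoot /home/user/www/native/current

            # stop mod_php from executing PHP in this vhost only
            <IfModule mod_php5.c>
                php_admin_flag engine off
            </IfModule>

            <Directory /home/user/www/native/current>
                # drop any handler/type mod_php registered for .php files
                RemoveHandler .php .php3 .php4 .php5
                RemoveType    .php .php3 .php4 .php5
                AllowOverride All
                Options FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>

            # hand .php over to suPHP instead
            <IfModule mod_suphp.c>
                suPHP_Engine on
                suPHP_UserGroup user native
                AddHandler x-httpd-php .php .php3 .php4 .php5
                suPHP_AddHandler x-httpd-php
            </IfModule>
        </VirtualHost>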

    Read the article

  • Floating point equality and tolerances

    - by doron
    Comparing two floating point numbers with something like a_float == b_float is asking for trouble, since a_float / 3.0 * 3.0 might not be equal to a_float due to round-off error. What one normally does is something like fabs(a_float - b_float) < tol. How does one calculate tol? Ideally the tolerance should be just larger than the value of one or two of the least significant figures. So for single precision floating point numbers something like tol = 1e-6 should be about right. However this does not work well for the general case, where a_float might be very small or very large. How does one calculate tol correctly for all general cases? I am interested in C or C++ specifically.
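
    A small C sketch of the usual answer (combine an absolute tolerance for operands near zero with a relative tolerance scaled by the operands' magnitude); the specific tolerance values below are illustrative only.

        #include <math.h>
        #include <float.h>
        #include <stdio.h>

        /* Combine an absolute tolerance (for operands near zero) with a relative
           tolerance scaled by the larger operand's magnitude. */
        int nearly_equal(double a, double b, double abs_tol, double rel_tol)
        {
            double diff = fabs(a - b);
            if (diff <= abs_tol)
                return 1;                              /* handles a, b close to zero */
            double largest = fmax(fabs(a), fabs(b));
            return diff <= largest * rel_tol;          /* scale-aware comparison */
        }

        int main(void)
        {
            double a = 0.1 / 3.0 * 3.0;
            /* DBL_EPSILON is the gap between 1.0 and the next representable double,
               so a small multiple of it covers "one or two least significant figures" */
            printf("%d\n", nearly_equal(a, 0.1, 1e-12, 4 * DBL_EPSILON));
            return 0;
        }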

    Read the article

  • Microsoft MVP 2013 - ASP.NET/IIS

    - by hajan
    Microsoft MVP 2013 I AM VERY PLEASED TO ANNOUNCE THAT I'VE BEEN AWARDED MICROSOFT MVP 2013 - ASP.NET/IIS. I'm honored, and it feels great to see this kind of appreciation for what we do in the community. This is my third year in a row being a Microsoft MVP, and getting the email from Microsoft feels exactly the same as the very first one... I'm pleased and really happy to be awarded again. And here is part of the email message I got: Dear Hajan Selmani, Congratulations! We are pleased to present you with the 2013 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in ASP.NET/IIS technical communities during the past year. I would like to say a great THANK YOU to everyone who supports me in the quest of sharing and caring about others in the community. A special THANK YOU to Microsoft, who give us this opportunity to encourage our work and increase our enthusiasm to build a better community and make a greater impact through the products and technologies they innovate. Thanks to Yulia Belyanina & Alessandro Teglia for their leadership! Thanks to my family, friends, colleagues, students, acquaintances and everyone directly or indirectly involved in my network who shares in my success in being awarded the most prestigious community award, the Microsoft MVP, once again. THANK YOU! Hajan

    Read the article

  • 2013 Microsoft ASP.NET/IIS MVP

    - by Vincent Maverick Durano
    Originally posted on: http://geekswithblogs.net/dotNETvinz/archive/2013/07/01/2013-microsoft-asp.netiis-mvp.aspx I am very honored to have received this award again. This is my fifth year in a row now and it feels really great! ;) The past year was really a blast: I had a great time at the MVP Global Summit, created and published new versions of my open-source controls on CodePlex, and kept up technical forum contributions, blogging, writing articles and speaking. I'm glad and very happy that I made it again this year; despite all the busy stuff at work and in life, I still managed to contribute to the ASP.NET community. BIG thanks to God, Microsoft, my MVP lead Lilian Quek, Clarisse Ng our SEA MVP Program Specialist, my family, my great boss, readers and friends who have supported me. Technorati Tags: MVP, ASP.NET, Community

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Is there a command line two-factor authentication verification code generator?

    - by dan
    I manage a server with two-factor authentication. I have to use the Google Authenticator iPhone app to get the 6-digit verification code to enter after entering the normal server password. The setup is described here: http://www.mnxsolutions.com/security/two-factor-ssh-with-google-authenticator.html I would like a way to get the verification code using just my laptop, not my iPhone. There must be a way to seed a command line app that generates these verification codes and gives you the code for the current 30-second window. Is there a program that can do this?
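
    One commonly suggested option (assuming your distribution's repositories carry it) is oathtool from the oath-toolkit package, seeded with the same base32 secret that google-authenticator printed during setup; the secret below is a placeholder.

        # Debian/Ubuntu
        sudo apt-get install oathtool

        # print the 6-digit code for the current 30-second window
        # (-b means the secret is base32, as shown by google-authenticator)
        oathtool --totp -b JBSWY3DPEHPK3PXP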

    Read the article

  • httpd high cpu usage slowing down server response

    - by max
    my client has a image sharing website with about 100.000 visitor per day it has been slowed down considerably since this morning when i checked processes i've notice high cpu usage from http .... some has suggested ddos attack ... i'm not a webmaster and i've no idea whts going on top top - 20:13:30 up 5:04, 4 users, load average: 4.56, 4.69, 4.59 Tasks: 284 total, 3 running, 281 sleeping, 0 stopped, 0 zombie Cpu(s): 12.1%us, 0.9%sy, 1.7%ni, 69.0%id, 16.4%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 16037152k total, 15875096k used, 162056k free, 360468k buffers Swap: 4194288k total, 888k used, 4193400k free, 14050008k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 4151 apache 20 0 277m 84m 3784 R 50.2 0.5 0:01.98 httpd 4115 apache 20 0 210m 16m 4480 S 18.3 0.1 0:00.60 httpd 12885 root 39 19 4296 692 308 S 13.0 0.0 11:09.53 gzip 4177 apache 20 0 214m 20m 3700 R 12.3 0.1 0:00.37 httpd 2219 mysql 20 0 4257m 198m 5668 S 11.0 1.3 42:49.70 mysqld 3691 apache 20 0 206m 14m 6416 S 1.7 0.1 0:03.38 httpd 3934 apache 20 0 211m 17m 4836 S 1.0 0.1 0:03.61 httpd 4098 apache 20 0 209m 17m 3912 S 1.0 0.1 0:04.17 httpd 4116 apache 20 0 211m 17m 4476 S 1.0 0.1 0:00.43 httpd 3867 apache 20 0 217m 23m 4672 S 0.7 0.1 1:03.87 httpd 4146 apache 20 0 209m 15m 3628 S 0.7 0.1 0:00.02 httpd 4149 apache 20 0 209m 15m 3616 S 0.7 0.1 0:00.02 httpd 12884 root 39 19 22336 2356 944 D 0.7 0.0 0:19.21 tar 4054 apache 20 0 206m 12m 4576 S 0.3 0.1 0:00.32 httpd another top top - 15:46:45 up 5:08, 4 users, load average: 5.02, 4.81, 4.64 Tasks: 288 total, 6 running, 281 sleeping, 0 stopped, 1 zombie Cpu(s): 18.4%us, 0.9%sy, 2.3%ni, 56.5%id, 21.8%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 16037152k total, 15792196k used, 244956k free, 360924k buffers Swap: 4194288k total, 888k used, 4193400k free, 13983368k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 4622 apache 20 0 209m 16m 3868 S 54.2 0.1 0:03.99 httpd 4514 apache 20 0 213m 20m 3924 R 50.8 0.1 0:04.93 httpd 4627 apache 20 0 221m 27m 4560 R 18.9 0.2 0:01.20 httpd 12885 root 39 19 4296 692 308 S 18.9 0.0 11:51.79 gzip 2219 mysql 20 0 4257m 199m 5668 S 18.3 1.3 43:19.04 mysqld 4512 apache 20 0 227m 33m 4736 R 5.6 0.2 0:01.93 httpd 4520 apache 20 0 213m 19m 4640 S 1.3 0.1 0:01.48 httpd 4590 apache 20 0 212m 19m 3932 S 1.3 0.1 0:00.06 httpd 4573 apache 20 0 210m 16m 3556 R 1.0 0.1 0:00.03 httpd 4562 root 20 0 15164 1388 952 R 0.7 0.0 0:00.08 top 98 root 20 0 0 0 0 S 0.3 0.0 0:04.89 kswapd0 100 root 39 19 0 0 0 S 0.3 0.0 0:02.85 khugepaged 4579 apache 20 0 209m 16m 3900 S 0.3 0.1 0:00.83 httpd 4637 apache 20 0 209m 15m 3668 S 0.3 0.1 0:00.03 httpd ps aux [root@server ~]# ps aux | grep httpd root 2236 0.0 0.0 207524 10124 ? Ss 15:09 0:03 /usr/sbin/http d -k start -DSSL apache 3087 2.7 0.1 226968 28232 ? S 20:04 0:06 /usr/sbin/http d -k start -DSSL apache 3170 2.6 0.1 221296 22292 ? R 20:05 0:05 /usr/sbin/http d -k start -DSSL apache 3171 9.0 0.1 225044 26768 ? R 20:05 0:17 /usr/sbin/http d -k start -DSSL apache 3188 1.5 0.1 223644 24724 ? S 20:05 0:03 /usr/sbin/http d -k start -DSSL apache 3197 2.3 0.1 215908 17520 ? S 20:05 0:04 /usr/sbin/http d -k start -DSSL apache 3198 1.1 0.0 211700 13000 ? S 20:05 0:02 /usr/sbin/http d -k start -DSSL apache 3272 2.4 0.1 219960 21540 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL apache 3273 2.0 0.0 211600 12804 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL apache 3279 3.7 0.1 229024 29900 ? S 20:06 0:05 /usr/sbin/http d -k start -DSSL apache 3280 1.2 0.0 0 0 ? Z 20:06 0:01 [httpd] <defun ct> apache 3285 2.9 0.1 218532 21604 ? 
S 20:06 0:04 /usr/sbin/http d -k start -DSSL apache 3287 30.5 0.4 265084 65948 ? R 20:06 0:43 /usr/sbin/http d -k start -DSSL apache 3297 1.9 0.1 216068 17332 ? S 20:06 0:02 /usr/sbin/http d -k start -DSSL apache 3342 2.7 0.1 216716 17828 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL apache 3356 1.6 0.1 217244 18296 ? S 20:07 0:01 /usr/sbin/http d -k start -DSSL apache 3365 6.4 0.1 226044 27428 ? S 20:07 0:06 /usr/sbin/http d -k start -DSSL apache 3396 0.0 0.1 213844 16120 ? S 20:07 0:00 /usr/sbin/http d -k start -DSSL apache 3399 5.8 0.1 215664 16772 ? S 20:07 0:05 /usr/sbin/http d -k start -DSSL apache 3422 0.7 0.1 214860 17380 ? S 20:07 0:00 /usr/sbin/http d -k start -DSSL apache 3435 3.3 0.1 216220 17460 ? S 20:07 0:02 /usr/sbin/http d -k start -DSSL apache 3463 0.1 0.0 212732 15076 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3492 0.0 0.0 207660 7552 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3493 1.4 0.1 218092 19188 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3500 1.9 0.1 224204 26100 ? R 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3501 1.7 0.1 216916 17916 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3502 0.0 0.0 207796 7732 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3505 0.0 0.0 207660 7548 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3529 0.0 0.0 207660 7524 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3531 4.0 0.1 216180 17280 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3532 0.0 0.0 207656 7464 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3543 1.4 0.1 217088 18648 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3544 0.0 0.0 207656 7548 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3545 0.0 0.0 207656 7560 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3546 0.0 0.0 207660 7540 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3547 0.0 0.0 207660 7544 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3548 2.3 0.1 216904 17888 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3550 0.0 0.0 207660 7540 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3551 0.0 0.0 207660 7536 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3552 0.2 0.0 214104 15972 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3553 6.5 0.1 216740 17712 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3554 6.3 0.1 216156 17260 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3555 0.0 0.0 207796 7716 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3556 1.8 0.0 211588 12580 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3557 0.0 0.0 207660 7544 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3565 0.0 0.0 207660 7520 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3570 0.0 0.0 207660 7516 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3571 0.0 0.0 207660 7504 ? 
S 20:08 0:00 /usr/sbin/http d -k start -DSSL root 3577 0.0 0.0 103316 860 pts/2 S+ 20:08 0:00 grep httpd httpd error log [Mon Jul 01 18:53:38 2013] [error] [client 2.178.12.67] request failed: error reading the headers, referer: http://akstube.com/image/show/27023/%D9%86%DB%8C%D9%88%D8%B4%D8%A7-%D8%B6%DB%8C%D8%BA%D9%85%DB%8C-%D9%88-%D8%AE%D9%88%D8%A7%D9%87%D8%B1-%D9%88-%D9%87%D9%85%D8%B3%D8%B1%D8%B4 [Mon Jul 01 18:55:33 2013] [error] [client 91.229.215.240] request failed: error reading the headers, referer: http://akstube.com/image/show/44924 [Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] Invalid method in request [Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] File does not exist: /var/www/html/501.shtml [Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] client denied by server configuration: /var/www/html/server-status [Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] File does not exist: /var/www/html/403.shtml [Mon Jul 01 19:23:57 2013] [error] [client 151.242.14.31] request failed: error reading the headers [Mon Jul 01 19:37:16 2013] [error] [client 2.190.16.65] request failed: error reading the headers [Mon Jul 01 19:56:00 2013] [error] [client 151.242.14.31] request failed: error reading the headers Not a JPEG file: starts with 0x89 0x50 also there is lots of these in the messages log Jul 1 20:15:47 server named[2426]: client 203.88.6.9#11926: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:47 server named[2426]: client 203.88.6.9#26255: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:48 server named[2426]: client 203.88.6.9#20093: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:48 server named[2426]: client 203.88.6.9#8672: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:07 server named[2426]: client 203.88.6.9#39352: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:08 server named[2426]: client 203.88.6.9#25382: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:08 server named[2426]: client 203.88.6.9#9064: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.23.9#35375: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.6.9#61932: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.23.9#4423: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.6.9#40229: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#46128: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#62128: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#35240: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#36774: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#28361: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#14970: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#20216: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.10#31794: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#23042: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#11333: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 
203.88.23.10#41807: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#20092: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#43526: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.9#17173: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.9#62412: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#63961: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#64345: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#31030: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#17098: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#17197: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#18114: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#59138: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:17 server named[2426]: client 203.88.6.9#28715: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:48:33 server named[2426]: client 203.88.23.9#26355: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#34473: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#62658: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#51631: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:35 server named[2426]: client 203.88.23.9#54701: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:36 server named[2426]: client 203.88.6.10#63694: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:36 server named[2426]: client 203.88.6.10#18203: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:37 server named[2426]: client 203.88.6.10#9029: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:38 server named[2426]: client 203.88.6.10#58981: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:38 server named[2426]: client 203.88.6.10#29321: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:47 server named[2426]: client 119.160.127.42#42355: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:49 server named[2426]: client 119.160.120.42#46285: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:53 server named[2426]: client 119.160.120.42#30696: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:54 server named[2426]: client 119.160.127.42#14038: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:55 server named[2426]: client 119.160.120.42#33586: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:56 server named[2426]: client 119.160.127.42#55114: query (cache) 'xxxmaza.com/A/IN' denied
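
    A hedged first step for diagnosing this kind of load (the error log above already references /server-status, so mod_status appears to be loaded): turn on ExtendedStatus and look at which requests the busy children are actually serving. The paths and access rules below are illustrative.

        # in httpd.conf (restrict access before enabling on a public server)
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

        # then, on the server itself
        apachectl fullstatus
        # or: curl -s "http://127.0.0.1/server-status?auto"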

    Read the article

  • Determining where a file is being cached

    - by RoadieRich
    I've got a Debian webserver running Apache, from which I regularly download an executable using IE 8 on an XP virtual machine (over the LAN). However, I realised that somewhere along the line I've been repeatedly running the same version (and wondering why my changes weren't being displayed). A Ctrl+F5 in IE will let me download the new version (although the page itself is always updated with a simple F5). This makes me suspect that the caching is happening in Windows/IE, but I'm not certain. Wherever it's happening, is there an easy way to prevent it at the server? Eventually, we're hoping to offer the software to the entire company, and we'd like to avoid having to tell everyone to do a Ctrl+F5 every time there's an updated version.
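
    If the goal is to stop caching at the server regardless of where it happens, a sketch using mod_headers (assumes the module is enabled and that the download is an .exe; adjust the pattern to the real filename):

        # a2enmod headers && service apache2 restart
        <FilesMatch "\.exe$">
            Header set Cache-Control "no-cache, no-store, must-revalidate"
            Header set Pragma "no-cache"
            Header set Expires "0"
        </FilesMatch>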

    Read the article

  • Virtual host doesn't read .htaccess

    - by Charlie
    I just created this virtual host: <VirtualHost myvirtualhost:80> ServerAdmin webmaster@myvirtualhost ServerName myvirtualhost DocumentRoot /home/myname/sites/public_html <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /home/myname/sites/public_html/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> It works, but it can't read the .htaccess file in public_html, which contains: DirectoryIndex otherindex.php I tried changing all the AllowOverride directives to All, but then I get a 500 error. How can I fix this? Thanks.
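
    A likely fix, sketched against the config above: with AllowOverride None Apache never opens the .htaccess file at all, and DirectoryIndex belongs to the Indexes override class, so enabling just that class in the public_html block is usually enough. (A 500 after switching everything to All normally means some directive in the .htaccess needs an override class or module that isn't enabled; the exact cause will be in error.log.)

        <Directory /home/myname/sites/public_html/>
            Options Indexes FollowSymLinks MultiViews
            # enable only what the .htaccess actually needs
            AllowOverride Indexes
            Order allow,deny
            Allow from all
        </Directory>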

    Read the article

  • Server setup scripts, patches and migrations

    - by Ben Swinburne
    I have written some scripts which I use to configure various servers in a uniform way. Each time I deploy a server I run the relevant scripts so that I know they're all configured the same. I then have some patch scripts, which are changes to the originals which I can then run to ensure that modifications to the original set up can be run on each server. E.g. disable.sh - Disable SELinux etc to ensure other scripts all run correctly general.sh - Jailkit, AV, Repos, RKHunter, security tweaks, uninstall unused bits etc web.sh - Installs and configures Apache2 001_update_nr_licence_key.sh - Update a licence key for a piece of software which has changed since its install in general.sh I can run the first 3 without a problem, but when it comes to running patches I am a bit stuck. Is there a sensible way of doing these with some software? My current thought is write to a log file the role of the server be it web or db for example and then note the name of the patch which has run. It could then iterate through a folder to find all patches for that role which it has not yet run and execute them. This seems a bit long winded however. Could someone advise me as to the best way I can keep my servers uniform?
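
    The home-grown approach described above is workable; below is a minimal shell sketch of it (the role file, directory layout and names are assumptions). Configuration-management tools such as Puppet, Chef or Ansible solve the same problem declaratively and may be worth evaluating instead.

        #!/bin/sh
        # apply every patch script for this server's role exactly once
        ROLE=$(cat /etc/server-role)          # e.g. "web" or "db", written at deploy time
        PATCH_DIR=/root/patches/$ROLE
        APPLIED=/var/log/patches.applied

        touch "$APPLIED"
        for patch in "$PATCH_DIR"/*.sh; do
            [ -e "$patch" ] || continue       # no patches for this role yet
            name=$(basename "$patch")
            if ! grep -qx "$name" "$APPLIED"; then
                sh "$patch" && echo "$name" >> "$APPLIED"
            fi
        done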

    Read the article

  • Seperate external and intranet portals using the same functions .htaccess

    - by jezzipin
    We are currently struggling with setting up rules in a .htaccess file for a website built on our company product. The product is built using PL/SQL, and its procedures can be accessed via URLs. We use this functionality to present different options to our users. These options can be injected into HTML pages using replacement tags. So the tag [user_menu] is always replaced with /wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2} for external sites and /intranet/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2} for internal sites. The issue we are having is twofold. First, we need to write our .htaccess rules so that users can access the functionality whether they are internal or external, i.e. the links should work as follows: http://www.example.com/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2} or http://www.example.com/internal/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2} Second, as you can see from the internal link above, the procedure needs to be prefixed with internal instead of intranet. We cannot change this in our standard tags, as that would affect other sites, so we need to achieve this with .htaccess as well. Could anyone assist with this issue? I apologise if this is brief or confusing, but it's something I've never done before and have been given the task of doing. I also apologise for the lack of code; I am a front-end developer who has been left to make these changes with no prior experience of .htaccess, so please bear with me.
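
    For what it's worth, a hedged mod_rewrite sketch of the second half (mapping the /internal/ prefix onto the /intranet/ procedure path); the exact pattern depends on how the product URLs are mounted, so treat this as a starting point only.

        # .htaccess in the site root (assumes mod_rewrite is enabled)
        RewriteEngine On
        # serve /internal/wd_portal_...* from the /intranet/ procedure path,
        # keeping the query string intact
        RewriteRule ^internal/(wd_portal_.*)$ /intranet/$1 [QSA,L]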

    Read the article

  • php.ini settings change not taking effect for large file uploads

    - by user51347
    My server was just reprovisioned, and my application which uploads large (100MB+) files now breaks after re-installation. The symptom is quite consistent: smaller files (8MB in my tests) upload just fine. Larger files cut off very close to the same point every time on a particular computer: a file that fails at 26% will fail at just about the same point every time, and one that takes 1:40 to fail will take within 2 seconds of that every time before failing. I have set my php.ini settings extravagantly: post_max_size = 512M upload_max_filesize = 512M max_input_time = 3600 max_execution_time = 3600 Is there possibly a setting at the Apache level which would override PHP?
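
    Yes, Apache can cap uploads independently of PHP. Two things worth checking, sketched below with example values: the core LimitRequestBody directive in the vhost or .htaccess, and, only if mod_security happens to be installed on the reprovisioned box, its request body limit.

        # Apache core: 0 = unlimited, otherwise a byte count (512 MB here)
        LimitRequestBody 536870912

        # only relevant if mod_security is loaded
        <IfModule mod_security2.c>
            SecRequestBodyLimit 536870912
        </IfModule>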

    Read the article

  • vsftpd allow anonymous log-in

    - by user1817081
    I'm setting up an FTP server that will allow anonymous users to READ/WRITE to the server. Here is my configuration: anonymous_enable=YES local_enable=YES write_enable=YES anon_upload_enable=YES anon_mkdir_write_enable=YES xferlog_enable=YES connect_from_port_20=YES xferlog_file=/var/log/xferlog xferlog_std_format=YES ftpd_banner=Welcome to blah FTP service. listen=YES pam_service_name=vsftpd userlist_enable=NO tcp_wrappers=YES no_anon_password=YES On /var/ftp/ I set the permissions to 755. When I tried to set them to 777 I got the following error when I tried to log in: 500 OOPS: vsftpd: refusing to run with writeable anonymous root login failed. Do I need to set up anything else to allow READ/WRITE for anonymous?
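
    The 500 OOPS error is vsftpd refusing a writable anonymous root by design. The usual arrangement (paths below are the common defaults; adjust to your layout) is to keep the root at 755 and give anonymous users a writable sub-directory:

        chmod 755 /var/ftp                 # anonymous root itself stays non-writable
        mkdir -p /var/ftp/incoming
        chown ftp:ftp /var/ftp/incoming
        chmod 733 /var/ftp/incoming        # uploads allowed, directory listing hidden
                                           # (use 777 if anonymous users should also read back)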

    Read the article

  • connections on port 80 suddenly refused / server not responding

    - by user1394013
    My dedicated server stopped responding to requests on port 80 today all of a sudden; I haven't touched anything in more than a month. It's Ubuntu 10, Varnish + nginx + PHP-FPM, only 1 website, and load is at 0. I messaged my ISP asking whether they changed something, but no reply yet. I tried to access the site via http://web-sniffer.net/ and it times out on port 80, but if I connect directly to nginx on port 8080 it loads just fine. For normal users it doesn't load on either port in a normal browser. Any tips on what to check or what could be causing this?

    Read the article

  • Best way to provide redundant switching/links to server

    - by Myles Gray
    We have 3x ESX hosts and 2x SANs that we wish to move to a redundant 10G networking infrastructure. We have 4x Dell PowerConnect 8024F's to provide our backbone, configured as follows (only the core switches are relevant to this question): So the questions are: 1) Do the interconnects between the 4x 8024F's need to be LAG'd, or just left to STP? 2) As the NICs on the servers are split across 2 switches, does any special configuration need to be done on the servers or on the switches? 3) If a link or switch fails, will the switches automatically find a new path to the server/SAN?

    Read the article

  • Issues related to storing photos in Active Directory?

    - by Joe_Jones_442Hemi
    Are there any known issues with storing employee photos in AD, provided you store them in compliant sizes and formats? Is there a critical mass at which you could break or corrupt the AD database? I'm trying to understand some of the server team's deep concerns about our intent to store employee photos in AD; they fear it will corrupt the database, cause global replication issues, etc. We're about a 3,000-employee company.
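    For reference, the photo normally goes into the thumbnailPhoto attribute, which is limited to roughly 100 KB per user, and the usual guidance is to resize well below that (e.g. a 96x96 JPEG of around 10 KB) precisely so the extra replication traffic stays negligible even across thousands of users. A hedged sketch of loading one photo with the RSAT ActiveDirectory module (user name and file are illustrative):

        Import-Module ActiveDirectory
        # Read the resized JPEG as raw bytes and write it to the user's thumbnailPhoto attribute
        $photo = [byte[]](Get-Content -Path .\jdoe.jpg -Encoding Byte)
        Set-ADUser -Identity jdoe -Replace @{thumbnailPhoto = $photo}

    At roughly 10 KB per photo, 3,000 employees adds on the order of 30 MB to the directory, which is small compared with a typical NTDS.dit.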

    Read the article

  • Include page url in php error_log

    - by scatteredbomb
    I'm working on sorting out all my PHP errors. I've just set things up so that all errors go into one log that I can review. The problem is that I can't really see what page someone was trying to access when they got an error. Of course I can see the file where the error was generated, but these are often includes used on other pages, so I can't tell what URL a person was attempting to view when the error was triggered, which means I can't replicate the problem. Any suggestions? Is it possible to include the actual page URL?
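    One straightforward approach is a small custom error handler, loaded on every page (for example via auto_prepend_file), that prefixes the requested URL to whatever gets written to the log. A sketch assuming PHP 5.3+:

        <?php
        set_error_handler(function ($severity, $message, $file, $line) {
            // Prefix each log entry with the URL the visitor actually requested
            $url = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '(cli)';
            error_log("[$url] $message in $file on line $line");
            return false; // fall through to PHP's normal error handling
        });

    Fatal errors bypass set_error_handler, so if those also need the URL, the usual companion is a register_shutdown_function() callback that checks error_get_last().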

    Read the article

  • How to install ia32-libs on Wheezy?

    - by javano
    I have seen a couple of questions on ServerFault relating to installing ia32-libs on a 64bit machine but the solutions aren't working for me (I don't think any of these questions where for Wheezy specifically I'm not sure how to proceed); root@server:/home/# apt-get install -f ia32-libs Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-i386 php5 : Depends: libapache2-mod-php5 (>= 5.4.4-14+deb7u2) but it is not going to be installed or libapache2-mod-php5filter (>= 5.4.4-14+deb7u2) but it is not going to be installed or php5-cgi (>= 5.4.4-14+deb7u2) but it is not going to be installed or php5-fpm (>= 5.4.4-14+deb7u2) but it is not going to be installed php5-mysql : Depends: phpapi-20100525 E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. root@server:/home/# sudo apt-get install ia32-libs-i386 Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs-i386:i386 : Depends: freeglut3:i386 (>= 2.6.0-1) but it is not going to be installed Depends: lesstif2:i386 (>= 1:0.95.2-1) but it is not going to be installed Depends: libacl1:i386 (>= 2.2.49-4) but it is not going to be installed Depends: libasyncns0:i386 (>= 0.3-1.1) but it is not going to be installed Depends: libattr1:i386 (>= 1:2.4.44-2) but it is not going to be installed Depends: libaudio2:i386 (>= 1.9.2-4) but it is not going to be installed Depends: libaudiofile1:i386 (>= 0.2.6-8) but it is not going to be installed Depends: libavahi-client3:i386 (>= 0.6.27-2+squeeze1) but it is not going to be installed Depends: libavahi-common3:i386 (>= 0.6.27-2+squeeze1) but it is not going to be installed Depends: libbsd0:i386 (>= 0.2.0-1) but it is not going to be installed Depends: libcap2:i386 (>= 1:2.19-3) but it is not going to be installed Depends: libcomerr2:i386 (>= 1.41.12-4stable1) but it is not going to be installed Depends: libcups2:i386 (>= 1.4.4-7+squeeze1) but it is not going to be installed Depends: libcurl3:i386 (>= 7.21.0-2) but it is not going to be installed Depends: libdbus-1-3:i386 (>= 1.2.24-4+squeeze1) but it is not going to be installed Depends: libdirectfb-1.2-9:i386 (>= 1.2.10.0-4) but it is not going to be installed Depends: libdrm-intel1:i386 (>= 2.4.21-1~squeeze3) but it is not going to be installed Depends: libdrm-radeon1:i386 (>= 2.4.21-1~squeeze3) but it is not going to be installed Depends: libdrm2:i386 (>= 2.4.21-1~squeeze3) but it is not going to be installed Depends: libedit2:i386 (>= 2.11-20080614-2) but it is not going to be installed Depends: libesd0:i386 (>= 0.2.41-8) but it is not going to be installed Depends: libexif12:i386 (>= 0.6.19-1) but it is not going to be installed Depends: libexpat1:i386 (>= 2.0.1-7) but it is not going to be installed Depends: libflac8:i386 (>= 1.2.1-2+b1) 
but it is not going to be installed Depends: libfltk1.1:i386 (>= 1.1.10-2+b1) but it is not going to be installed Depends: libfontconfig1:i386 (>= 2.8.0-2.1) but it is not going to be installed Depends: libfreetype6:i386 (>= 2.4.2-2.1+squeeze3) but it is not going to be installed Depends: libgcrypt11:i386 (>= 1.4.5-2) but it is not going to be installed Depends: libgdbm3:i386 (>= 1.8.3-9) but it is not going to be installed Depends: libgl1-mesa-dri:i386 (>= 7.7.1-5) but it is not going to be installed Depends: libgl1-mesa-glx:i386 (>= 7.7.1-5) but it is not going to be installed Depends: libglu1-mesa:i386 (>= 7.7.1-5) but it is not going to be installed Depends: libgnutls26:i386 (>= 2.8.6-1) but it is not going to be installed Depends: libgpg-error0:i386 (>= 1.6-1) but it is not going to be installed Depends: libgphoto2-2:i386 (>= 2.4.6-3) but it is not going to be installed Depends: libgphoto2-port0:i386 (>= 2.4.6-3) but it is not going to be installed Depends: libgssapi-krb5-2:i386 (>= 1.8.3+dfsg-4squeeze2) but it is not going to be installed Depends: libice6:i386 (>= 2:1.0.6-2) but it is not going to be installed Depends: libidn11:i386 (>= 1.15-2) but it is not going to be installed Depends: libieee1284-3:i386 (>= 0.2.11-6) but it is not going to be installed Depends: libjack-jackd2-0:i386 (>= 1.9.5~dfsg-14) but it is not going to be installed or libjack0:i386 (>= 1:0.118+svn3796-7) but it is not going to be installed Depends: libjpeg62:i386 (>= 6b1-1) but it is not going to be installed Depends: libjpeg8:i386 (>= 8b-1) but it is not going to be installed Depends: libk5crypto3:i386 (>= 1.8.3+dfsg-4squeeze2) but it is not going to be installed Depends: libkeyutils1:i386 (>= 1.4-1) but it is not going to be installed Depends: libkrb5-3:i386 (>= 1.8.3+dfsg-4squeeze2) but it is not going to be installed Depends: libkrb5support0:i386 (>= 1.8.3+dfsg-4squeeze2) but it is not going to be installed Depends: liblcms1:i386 (>= 1.18.dfsg-1.2+b3) but it is not going to be installed Depends: libltdl7:i386 (>= 2.2.6b-2) but it is not going to be installed Depends: liblzo2-2:i386 (>= 2.03-2) but it is not going to be installed Depends: libmpg123-0:i386 (>= 1.12.1-3) but it is not going to be installed Depends: libnspr4-0d:i386 (>= 4.8.6-1) but it is not going to be installed Depends: libnss3-1d:i386 (>= 3.12.8-1+squeeze4) but it is not going to be installed Depends: libogg0:i386 (>= 1.2.0~dfsg-1) but it is not going to be installed Depends: libopenal1:i386 (>= 1:1.12.854-2) but it is not going to be installed Depends: libpam0g:i386 (>= 1.1.1-6.1+squeeze1) but it is not going to be installed Depends: libpng12-0:i386 (>= 1.2.44-1+squeeze1) but it is not going to be installed Depends: libpopt0:i386 (>= 1.16-1) but it is not going to be installed Depends: libpulse0:i386 (>= 0.9.21-3+squeeze1) but it is not going to be installed Depends: libsamplerate0:i386 (>= 0.1.7-3) but it is not going to be installed Depends: libsane:i386 (>= 1.0.21-9) but it is not going to be installed Depends: libsasl2-2:i386 (>= 2.1.23.dfsg1-7) but it is not going to be installed Depends: libsdl1.2debian:i386 (>= 1.2.15) but it is not going to be installed Depends: libselinux1:i386 (>= 2.0.96-1) but it is not going to be installed Depends: libsigc++-2.0-0c2a:i386 (>= 2.2.4.2-1) but it is not going to be installed Depends: libsm6:i386 (>= 2:1.1.1-1) but it is not going to be installed Depends: libsndfile1:i386 (>= 1.0.21-3+squeeze1) but it is not going to be installed Depends: libsqlite3-0:i386 (>= 3.7.3-1) but it is not going to be 
installed Depends: libssh2-1:i386 (>= 1.2.6-1) but it is not going to be installed Depends: libssl1.0.0:i386 (>= 1) but it is not going to be installed Depends: libstdc++5:i386 (>= 1:3.3.6-20) but it is not going to be installed Depends: libsvga1:i386 (>= 1:1.4.3-29) but it is not going to be installed Depends: libsysfs2:i386 (>= 2.1.0+repack-1) but it is not going to be installed Depends: libtasn1-3:i386 (>= 2.7-1) but it is not going to be installed Depends: libtdb1:i386 (>= 1.2.1-2+b1) but it is not going to be installed Depends: libtiff4:i386 (>= 3.9.4-5+squeeze3) but it is not going to be installed Depends: libts-0.0-0:i386 (>= 1.0-7) but it is not going to be installed Depends: libusb-0.1-4:i386 (>= 2:0.1.12-16) but it is not going to be installed Depends: libuuid1:i386 (>= 2.17.2-9) but it is not going to be installed Depends: libvorbis0a:i386 (>= 1.3.1-1) but it is not going to be installed Depends: libvorbisenc2:i386 (>= 1.3.1-1) but it is not going to be installed Depends: libvorbisfile3:i386 (>= 1.3.1-1) but it is not going to be installed Depends: libwrap0:i386 (>= 7.6.q-19) but it is not going to be installed Depends: libx11-6:i386 (>= 2:1.3.3-4) but it is not going to be installed Depends: libx86-1:i386 (>= 1.1+ds1-6) but it is not going to be installed Depends: libxau6:i386 (>= 1:1.0.6-1) but it is not going to be installed Depends: libxaw7:i386 (>= 2:1.0.7-1) but it is not going to be installed Depends: libxcb-render-util0:i386 (>= 0.3.6-1) but it is not going to be installed Depends: libxcb-render0:i386 (>= 1.6-1) but it is not going to be installed Depends: libxcb1:i386 (>= 1.6-1) but it is not going to be installed Depends: libxcomposite1:i386 (>= 1:0.4.2-1) but it is not going to be installed Depends: libxcursor1:i386 (>= 1:1.1.10-2) but it is not going to be installed Depends: libxdamage1:i386 (>= 1:1.1.3-1) but it is not going to be installed Depends: libxdmcp6:i386 (>= 1:1.0.3-2) but it is not going to be installed Depends: libxext6:i386 (>= 2:1.1.2-1) but it is not going to be installed Depends: libxfixes3:i386 (>= 1:4.0.5-1) but it is not going to be installed Depends: libxft2:i386 (>= 2.1.14-2) but it is not going to be installed Depends: libxi6:i386 (>= 2:1.3-6) but it is not going to be installed Depends: libxinerama1:i386 (>= 2:1.1-3) but it is not going to be installed Depends: libxml2:i386 (>= 2.7.8.dfsg-2+squeeze1) but it is not going to be installed Depends: libxmu6:i386 (>= 2:1.0.5-2) but it is not going to be installed Depends: libxmuu1:i386 (>= 2:1.0.5-2) but it is not going to be installed Depends: libxp6:i386 (>= 1:1.0.0.xsf1-2) but it is not going to be installed Depends: libxpm4:i386 (>= 1:3.5.8-1) but it is not going to be installed Depends: libxrandr2:i386 (>= 2:1.3.0-3) but it is not going to be installed Depends: libxrender1:i386 (>= 1:0.9.6-1) but it is not going to be installed Depends: libxslt1.1:i386 (>= 1.1.26-6) but it is not going to be installed Depends: libxss1:i386 (>= 1:1.2.0-2) but it is not going to be installed Depends: libxt6:i386 (>= 1:1.0.7-1) but it is not going to be installed Depends: libxtst6:i386 (>= 2:1.1.0-3) but it is not going to be installed Depends: libxv1:i386 (>= 2:1.0.5-1) but it is not going to be installed Depends: libxxf86vm1:i386 (>= 1:1.1.0-2) but it is not going to be installed Depends: odbcinst1debian2:i386 (>= 2.2.14p2-1) but it is not going to be installed Depends: libodbc1:i386 but it is not going to be installed Depends: xaw3dg:i386 (>= 1.5+E-18) but it is not going to be installed php5 : Depends: 
libapache2-mod-php5 (>= 5.4.4-14+deb7u2) but it is not going to be installed or libapache2-mod-php5filter (>= 5.4.4-14+deb7u2) but it is not going to be installed or php5-cgi (>= 5.4.4-14+deb7u2) but it is not going to be installed or php5-fpm (>= 5.4.4-14+deb7u2) but it is not going to be installed php5-mysql : Depends: phpapi-20100525 E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. root@server:/home/# dpkg --print-architecture amd64 root@server:/home/# dpkg --print-foreign-architectures i386 root@server:/home/# lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 7.1 (wheezy) Release: 7.1 Codename: wheezy root@server:/home/# uname -a Linux servername 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux root@server:/home/# cat /etc/apt/sources.list deb http://ftp.us.debian.org/debian stable main contrib non-free
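    For what it's worth, on Wheezy ia32-libs is a multiarch transition package, so the usual sequence is the one sketched below; the php5/phpapi breakage in the output above suggests the box has packages from more than one release, so cleaning up sources.list (it currently has only a single "stable" line, with no security or updates entries) and re-running apt-get update may be the real prerequisite:

        dpkg --add-architecture i386   # already done here, per dpkg --print-foreign-architectures
        apt-get update
        apt-get install ia32-libs ia32-libs-i386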

    Read the article

  • How can I find which logon script is being run?

    - by user2517266
    I'm having an issue with network drives. Suddenly some computers and users aren't getting their mapped network drives from the logon script. I am NOT a domain admin, I don't have permission to log in to the domain controller, and I know very little about Active Directory. The issue seems random: some users one day, different users the next. Some computers run fine and some won't map no matter who logs in. They are mixed OSes: XP (SP3), Vista, and 7. Looking at the domain in Windows Explorer, I have found the batch file(s) that map the drives in several locations; how do I know which one is actually being run? The .bat file is located in \DOMAIN\NETLOGON\script.bat and \DOMAIN\SYSVOL\DOMAIN\scripts\script.bat and \DOMAIN\SYSVOL\DOMAIN\policies\GUID (right? it's a crazy string)\User\Scripts\Logon\script.bat So, how can I figure out which one is actually being run per computer or user? They are all slightly different from each other and one of them doesn't map properly. Do all the files in NETLOGON get run? There are 15+ files in there. Or is it specified in Group Policy which one(s) get run? EDIT: I am able to access a program called Active Directory Users and Computers, but the logon script field on the properties tab is blank for every user.
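    Without domain-admin rights, the quickest way to see what actually ran is from an affected workstation, logged in as an affected user; a sketch (gpresult /r and /h need Vista/7 — on XP, plain gpresult prints the same information verbosely):

        rem Which GPOs applied to this user, and which logon scripts they delivered
        gpresult /r
        gpresult /h "%TEMP%\gp-report.html"
        rem The classic per-account "Logon script" field (this is the NETLOGON one)
        net user %USERNAME% /domain

    Scripts under \DOMAIN\SYSVOL\...\policies\{GUID}\User\Scripts\Logon are delivered by Group Policy, while the NETLOGON copy only runs if it is named in the user account's logon script field, so the empty field you see in AD Users and Computers plus the gpresult output should narrow down which script each machine is really getting.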

    Read the article

  • Encryption over gigabit carrier ethernet

    - by Roy
    I would like to encrypt traffic between two data centres. Communication between the sites is provided as a standard provider bridge (S-VLAN/802.1ad), so our local VLAN tags (C-VLAN/802.1q) are preserved on the trunk. The traffic traverses several layer 2 hops in the provider network. The border switches on both sides are Catalyst 3750-X with the MACsec service module, but I assume MACsec is out of the question, as I don't see any way to ensure direct L2 adjacency between the switches over a trunk, although it may be possible over a provider bridge. MPLS (using EoMPLS) would certainly allow this option, but is not available in this case. Either way, equipment can always be replaced to accommodate technology and topology choices. How do I go about finding viable technology options that can provide layer 2 point-to-point encryption over carrier Ethernet networks?

    Read the article

  • fastcgi-mono-server2 vs fastcgi-mono-server4

    - by Phill
    Not sure if this is a silly question or not. Basically I'm figuring out how to run Mono on Linux, and I'm a Linux noob. I've got everything up and running, but I'm confused about fastcgi-mono-server. A lot of sites reference fastcgi-mono-server2 while other sites reference fastcgi-mono-server4. When I run: fastcgi-mono-server /version fastcgi-mono-server2.exe 2.10.0.0 I get the same version number for both. If I look at the Mono version: Mono JIT compiler version 2.10.8.1 I'm wondering whether the version reported by the mono server corresponds to the Mono version rather than to the server's own version. Is fastcgi-mono-server4 just a newer version?
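    As far as I understand it, the two are not old and new releases of the same tool: fastcgi-mono-server2 hosts ASP.NET 2.0-profile applications and fastcgi-mono-server4 hosts ASP.NET 4.0 ones, and both report the version of the XSP package they ship with, which is why the number (2.10.0.0) matches across the pair rather than tracking the Mono JIT version. So for a .NET 4.0 web application the 4 variant is the one to run; a minimal invocation sketch (application path and port are illustrative):

        fastcgi-mono-server4 /applications=/:/var/www/myapp /socket=tcp:127.0.0.1:9000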

    Read the article

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine: 3 hard drives using RAID 5 Stripe element size: 1M Read policy: Adaptive read ahead Write policy: Write Through /boot 200 MB ext2 / 15 GB ext3 SWAP 10GB LVM rest (~500GB) I installed postgresql and created a big database (over 1GB). I have an SQL request that takes a lot of time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds to run. Then I installed Xen and created a domU with another Debian install. On this OS I also installed postgresql, with the same database. The same SQL request takes only 2.5 seconds to run. I checked the kernel on both dom0 and domU. uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same. For those that are relevant, I changed their values to make them match on both systems using sysctl. I saw no changes (the requests still take the same amount of time). After this, I checked the file systems. I used ext3 on domU. Still no changes. I installed hdparm and ran hdparm -Tt on all my partitions on both systems, and I get similar results. Now I am stuck: I don't know what is different, and what could be the cause of such a big difference. Additional info: Debian runs on a Dell PowerEdge 2950 server; postgresql: 9.1.9 (both dom0 and domU); xen-linux-system: 3.2.0; xen-hypervisor: 4.1. Thanks. EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and write speed. Here is domU: root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB ^C2020+0 records in 2020+0 records out 2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB 2020+0 records in 2020+0 records out 2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s And here is dom0: root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB ^C1693+0 records in 1693+0 records out 1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB 1693+0 records in 1693+0 records out 1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s What can be the cause of this caching behaviour? And how can we "fix" it? Can we apply it to dom0? EDIT 2: I switched my virtual disk type. To do so I followed this article. I did dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M and then, in /etc/xen/test1.cfg, I changed the disk parameter to use file: instead of phy:. It should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres).
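    The dom0 read figure (3.4 GB/s) is almost certainly the Linux page cache rather than the RAID array, so the two domains are not being compared fairly here. A sketch of the same test with caching taken out of the picture (sizes match the original run):

        # Write test that only completes once the data is actually flushed to disk
        dd if=/dev/zero of=/root/dd bs=1M count=2000 conv=fdatasync
        # Drop the page cache so the read test hits the disk, not RAM
        sync && echo 3 > /proc/sys/vm/drop_caches
        dd if=/root/dd of=/dev/null bs=1M

    Run the same pair in both dom0 and domU; if the Postgres gap survives with caching neutralised, the difference is more likely in the storage path itself (phy: vs file:, write barriers, I/O scheduler) than in a caching layer that could simply be copied over to dom0.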

    Read the article

  • Exchange over HTTP

    - by Rob
    I have recently set up a brand new install of SBS 2011 and it is working well. Exchange is running as advertised and all users are happy. Now, two users would like to work outside the office and need email set up in Outlook. No problem - Exchange over HTTP (Outlook Anywhere). However, for some reason it's not working. They can access Outlook Web Access okay, but Exchange over HTTP/HTTPS isn't working. The error message I receive in Outlook is: "The name cannot be resolved. The connection to Microsoft Exchange is unavailable. Outlook must be online or connected to complete this action." I've tried temporarily turning off the firewall on both the server and the client, but this doesn't help at all. Is there something I'm missing, or is there a permission that needs enabling to allow Exchange over HTTP to work? Many thanks
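    That error often means Outlook is not reaching the RPC-over-HTTP endpoint at all, so a reasonable first step is to check what Outlook Anywhere is actually advertising. A sketch for the Exchange Management Shell on SBS 2011 / Exchange 2010 (the hostname below is only an example):

        Get-OutlookAnywhere | Format-List ServerName,ExternalHostname,ClientAuthenticationMethod,IISAuthenticationMethods
        # If it has never been enabled (the SBS "Set up your Internet address" wizard normally does this):
        Enable-OutlookAnywhere -ExternalHostname "remote.example.com" -ClientAuthenticationMethod Basic -SSLOffloading $false

    The ExternalHostname has to match both the certificate and the proxy server name configured in the Outlook profile; on the client side, Ctrl+right-clicking the Outlook tray icon and choosing "Test E-mail AutoConfiguration" is a quick way to see what settings Autodiscover is handing out.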

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >