Search Results


  • What's the fastest lookup algorithm for a pair data structure (i.e., a map)?

    - by truncheon
    In the following example a std::map is filled with 26 entries: the keys a – z and the values 0 – 25. The time taken (on my system) to look up the last entry 10,000,000 times is roughly 250 ms for the vector and 125 ms for the map. (I compiled in release mode with the O3 option turned on, using g++ 4.4.) But if for some odd reason I wanted better performance than std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance-critical aspects of C++ programming.

    UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real-world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps a map) to store the variable names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names.

        #include <ctime>
        #include <map>
        #include <vector>
        #include <iostream>

        struct mystruct
        {
            char key;
            int value;
            mystruct(char k = 0, int v = 0) : key(k), value(v) { }
        };

        int find(const std::vector<mystruct>& ref, char key)
        {
            for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i)
                if (i->key == key)
                    return i->value;
            return -1;
        }

        int main()
        {
            std::map<char, int> mymap;
            std::vector<mystruct> myvec;

            for (int i = 'a'; i < 'a' + 26; ++i)
            {
                mymap[i] = i - 'a';
                myvec.push_back(mystruct(i, i - 'a'));
            }

            int pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                find(myvec, 'z');
            }
            std::cout << "linear scan: milli " << clock() - pre << "\n";

            pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                mymap['z'];
            }
            std::cout << "map scan: milli " << clock() - pre << "\n";

            return 0;
        }
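    For randomly named string keys, a hash-based container is the usual next step beyond std::map, since lookups are amortized O(1) instead of O(log n). Below is a minimal sketch, assuming std::unordered_map is available (on g++ 4.4 that means compiling with -std=c++0x; std::tr1::unordered_map or a sorted std::vector searched with std::lower_bound are common alternatives). The timing harness simply mirrors the one in the question.

        #include <ctime>
        #include <iostream>
        #include <string>
        #include <unordered_map>   // g++ 4.4: compile with -std=c++0x, or fall back to <tr1/unordered_map>

        int main()
        {
            // Hashed container: average O(1) lookup versus O(log n) for std::map.
            std::unordered_map<std::string, int> vars;
            for (int i = 0; i < 26; ++i)
                vars[std::string(1, char('a' + i))] = i;

            int sum = 0;
            int pre = clock();
            for (int i = 0; i < 10000000; ++i)
                sum += vars.find("z")->second;   // keep the result so the loop isn't optimized away
            std::cout << "hash scan: milli " << clock() - pre
                      << " (sum " << sum << ")\n";
            return 0;
        }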


  • I would like to run my Java program on System Startup on Mac OS/Windows. How can I do this?

    - by Misha Koshelev
    Here is what I came up with. It works, but I was wondering if there is something more elegant. Thank you! Misha

        package com.mksoft.fbbday;

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.IOException;
        import java.net.URI;
        import java.net.URISyntaxException;
        import java.io.File;
        import java.io.FileWriter;
        import java.io.PrintWriter;

        public class RunOnStartup {
            public static void install(String className, boolean doInstall) throws IOException, URISyntaxException {
                String osName = System.getProperty("os.name");
                String fileSeparator = System.getProperty("file.separator");
                String javaHome = System.getProperty("java.home");
                String userHome = System.getProperty("user.home");
                File jarFile = new File(RunOnStartup.class.getProtectionDomain().getCodeSource().getLocation().toURI());
                if (osName.startsWith("Windows")) {
                    Process process = Runtime.getRuntime().exec("reg query \"HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders\" /v Startup");
                    BufferedReader in = new BufferedReader(new InputStreamReader(process.getInputStream()));
                    String result = "", line;
                    while ((line = in.readLine()) != null) {
                        result += line;
                    }
                    in.close();
                    result = result.replaceAll(".*REG_SZ[ ]*", "");
                    File startupFile = new File(result + fileSeparator + jarFile.getName().replaceFirst(".jar", ".bat"));
                    if (doInstall) {
                        PrintWriter out = new PrintWriter(new FileWriter(startupFile));
                        out.println("@echo off");
                        out.println("start /min \"\" \"" + javaHome + fileSeparator + "bin" + fileSeparator + "java.exe -cp " + jarFile + " " + className + "\"");
                        out.close();
                    } else {
                        if (startupFile.exists()) {
                            startupFile.delete();
                        }
                    }
                } else if (osName.startsWith("Mac OS")) {
                    File startupFile = new File(userHome + "/Library/LaunchAgents/com.mksoft." + jarFile.getName().replaceFirst(".jar", ".plist"));
                    if (doInstall) {
                        PrintWriter out = new PrintWriter(new FileWriter(startupFile));
                        out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
                        out.println("<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">");
                        out.println("<plist version=\"1.0\">");
                        out.println("<dict>");
                        out.println("  <key>Label</key>");
                        out.println("  <string>com.mksoft." + jarFile.getName().replaceFirst(".jar", "") + "</string>");
                        out.println("  <key>ProgramArguments</key>");
                        out.println("  <array>");
                        out.println("    <string>" + javaHome + fileSeparator + "bin" + fileSeparator + "java</string>");
                        out.println("    <string>-cp</string>");
                        out.println("    <string>" + jarFile + "</string>");
                        out.println("    <string>" + className + "</string>");
                        out.println("  </array>");
                        out.println("  <key>RunAtLoad</key>");
                        out.println("  <true/>");
                        out.println("</dict>");
                        out.println("</plist>");
                        out.close();
                    } else {
                        if (startupFile.exists()) {
                            startupFile.delete();
                        }
                    }
                }
            }
        }


  • Need to display my code in stack trace. I see only System... lines.

    - by Tony_Henrich
    Sometimes when I get an exception while debugging in Visual Studio, all the code lines in the stack trace belong to the System namespace. It doesn't display the line of my code that was responsible for the exception. Is there a way to make the stack trace show more lines, or to ignore frames from the System namespace? Something like Just My Code for the stack trace?


  • C++ function will not return

    - by Mike
    I have a function that I am calling that runs all the way up to where it should return, but doesn't return. If I cout something for debugging at the very end of the function, it gets displayed, but the function does not return. fetchData is the function I am referring to; it gets called by outputFile. cout displays "done here" but not "data fetched". I know this code is messy, but can anyone help me figure this out? Thanks.

        // Given an inode, return all data of its i_blocks
        char* fetchData(iNode tempInode) {
            char* data;
            data = new char[tempInode.i_size];
            this->currentInodeSize = tempInode.i_size;

            // Loop through blocks to retrieve data
            vector<unsigned int> i_blocks;
            i_blocks.reserve(tempInode.i_blocks);
            this->currentDataPosition = 0;
            cout << "currentDataPosition set to 0" << std::endl;
            cout << "i_blocks:" << tempInode.i_blocks << std::endl;

            int i = 0;
            for (i = 0; i < 12; i++) {
                if (tempInode.i_block[i] == 0)
                    break;
                i_blocks.push_back(tempInode.i_block[i]);
            }
            appendIndirectData(tempInode.i_block[12], &i_blocks);
            appendDoubleIndirectData(tempInode.i_block[13], &i_blocks);
            appendTripleIndirectData(tempInode.i_block[14], &i_blocks);

            // Loop through all the block addresses to get the actual data
            for (i = 0; i < i_blocks.size(); i++) {
                appendData(i_blocks[i], data);
            }

            cout << "done here" << std::endl;
            return data;
        }

        void appendData(int block, char* data) {
            char* tempBuffer;
            tempBuffer = new char[this->blockSize];

            ifstream file(this->filename, std::ios::binary);
            int entryLocation = block * this->blockSize;
            file.seekg(entryLocation, ios::beg);
            file.read(tempBuffer, this->blockSize);

            // Append this block to data
            for (int i = 0; i < this->blockSize; i++) {
                data[this->currentDataPosition] = tempBuffer[i];
                this->currentDataPosition++;
            }
            data[this->currentDataPosition] = '\0';
        }

        void outputFile(iNode file, string filename) {
            char* data;
            cout << "File Transfer Started" << std::endl;
            data = this->fetchData(file);
            cout << "data fetched" << std::endl;

            char* outputFile = (char*)filename.c_str();
            ofstream myfile;
            myfile.open(outputFile, ios::out | ios::binary);
            int i = 0;
            for (i = 0; i < file.i_size; i++) {
                myfile << data[i];
            }
            myfile.close();
            cout << "File Transfer Completed" << std::endl;
            return;
        }
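    One thing worth checking, offered as a hedged sketch rather than a diagnosis: appendData copies a full blockSize bytes for every block and then writes a terminating '\0', so the total written can exceed the tempInode.i_size bytes that fetchData allocates with new char[...]; overrunning a heap buffer that way can make a function appear to finish its body yet never return cleanly. Below is a minimal illustration of sizing the destination to whole blocks; i_size and blockSize are taken from the question, while fetchDataSafe and its parameters are hypothetical.

        #include <string>
        #include <vector>

        // Hypothetical helper, sketch only: reserve room for whole blocks plus the
        // trailing '\0' that appendData writes, or sidestep raw buffers entirely by
        // appending into a std::string and trimming to the real file size afterwards.
        std::string fetchDataSafe(std::size_t i_size, std::size_t blockSize,
                                  const std::vector<std::vector<char> >& blocks)
        {
            std::string data;
            data.reserve(((i_size + blockSize - 1) / blockSize) * blockSize + 1);
            for (std::size_t b = 0; b < blocks.size(); ++b)
                data.append(blocks[b].begin(), blocks[b].end());
            data.resize(i_size);   // drop the padding from the final block
            return data;
        }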


  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: the page blob and the block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very well suited to uploading large files: we can upload blocks in parallel and provide a pause-and-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binary content, how to upload it into blob storage in blocks through the .NET SDK. But the problem is: how can we upload these large files from the client side, for example from a browser? This question came to me when I was working with a Chinese customer to help them build a network-disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer a file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
    The test cases were:
    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequential, parallel (unlimited), parallel with a limit (4 threads, 8 threads).
    - Chunk format: base64 string, binaries.
    - Target file: 100MB.
    - Each case was tested 3 times.
    Below is the test result chart. Some thoughts, but not guidance or best practice:
    - Parallel gives better performance than sequential upload.
    - There is no significant performance difference between 4 and 8 parallel threads.
    - Transferring binaries gives better performance than base64 strings.
    - In all cases, a chunk size of 1MB - 2MB gives better performance.
    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • error about ACPI _OSC request failed (AE_NOT_FOUND)

    - by Yavuz Maslak
    I have ubuntu server 11.10 64 bit I see an error in kernel.log. This error comes out when the server reboot. some port of grep APCI in kernel.log; Dec 5 09:08:51 www kernel: [ 0.588605] pci0000:00: Requesting ACPI _OSC control (0x1d) Dec 5 09:08:51 www kernel: [ 0.588667] pci0000:00: ACPI _OSC request failed (AE_NOT_FOUND), returned control mask: 0x1d Dec 5 09:08:51 www kernel: [ 0.588746] ACPI _OSC control for PCIe not granted, disabling ASPM Which hardware may be cause this error ? root@www:# grep -r ACPI /var/log/kern.log Dec 5 09:08:51 www kernel: [ 0.000000] BIOS-e820: 00000000bf780000 - 00000000bf798000 (ACPI data) Dec 5 09:08:51 www kernel: [ 0.000000] BIOS-e820: 00000000bf798000 - 00000000bf7dc000 (ACPI NVS) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: RSDP 00000000000fb1a0 00014 (v00 ACPIAM) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: RSDT 00000000bf780000 00040 (v01 022410 RSDT1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: FACP 00000000bf780200 00084 (v01 022410 FACP1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: DSDT 00000000bf7804b0 0C359 (v01 A1279 A1279001 00000001 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: FACS 00000000bf798000 00040 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: APIC 00000000bf780390 000D8 (v01 022410 APIC1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: MCFG 00000000bf780470 0003C (v01 022410 OEMMCFG 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: OEMB 00000000bf798040 00072 (v01 022410 OEMB1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: HPET 00000000bf78f4b0 00038 (v01 022410 OEMHPET 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: OSFR 00000000bf78f4f0 000B0 (v01 022410 OEMOSFR 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: SSDT 00000000bf798fe0 00363 (v01 DpgPmm CpuPm 00000012 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: PM-Timer IO Port: 0x808 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x84] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x85] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x86] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x87] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x88] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x89] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x8a] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x8b] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x8c] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x8d] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x8e] disabled) Dec 5 09:08:51 www kernel: [ 
0.000000] ACPI: LAPIC (acpi_id[0x10] lapic_id[0x8f] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0]) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IOAPIC (id[0x03] address[0xfec8a000] gsi_base[24]) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ0 used by override. Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ2 used by override. Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ9 used by override. Dec 5 09:08:51 www kernel: [ 0.000000] Using ACPI (MADT) for SMP configuration information Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: HPET id: 0x8086a301 base: 0xfed00000 Dec 5 09:08:51 www kernel: [ 0.009507] ACPI: Core revision 20110413 Dec 5 09:08:51 www kernel: [ 0.499129] PM: Registering ACPI NVS region at bf798000 (278528 bytes) Dec 5 09:08:51 www kernel: [ 0.500749] ACPI: bus type pci registered Dec 5 09:08:51 www kernel: [ 0.502747] ACPI: EC: Look up EC in DSDT Dec 5 09:08:51 www kernel: [ 0.503788] ACPI: Executed 1 blocks of module-level executable AML code Dec 5 09:08:51 www kernel: [ 0.520435] ACPI: SSDT 00000000bf7980c0 00F20 (v01 DpgPmm P001Ist 00000011 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.520863] ACPI: Dynamic OEM Table Load: Dec 5 09:08:51 www kernel: [ 0.520990] ACPI: SSDT (null) 00F20 (v01 DpgPmm P001Ist 00000011 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.521308] ACPI: Interpreter enabled Dec 5 09:08:51 www kernel: [ 0.521366] ACPI: (supports S0 S1 S3 S4 S5) Dec 5 09:08:51 www kernel: [ 0.521611] ACPI: Using IOAPIC for interrupt routing Dec 5 09:08:51 www kernel: [ 0.522622] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in ACPI motherboard resources Dec 5 09:08:51 www kernel: [ 0.554150] ACPI: No dock devices found. 
Dec 5 09:08:51 www kernel: [ 0.554267] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 5 09:08:51 www kernel: [ 0.555231] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 5 09:08:51 www kernel: [ 0.588224] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT] Dec 5 09:08:51 www kernel: [ 0.588398] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P1._PRT] Dec 5 09:08:51 www kernel: [ 0.588451] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P4._PRT] Dec 5 09:08:51 www kernel: [ 0.588473] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P6._PRT] Dec 5 09:08:51 www kernel: [ 0.588492] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P7._PRT] Dec 5 09:08:51 www kernel: [ 0.588512] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P8._PRT] Dec 5 09:08:51 www kernel: [ 0.588540] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE1._PRT] Dec 5 09:08:51 www kernel: [ 0.588559] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE3._PRT] Dec 5 09:08:51 www kernel: [ 0.588579] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE7._PRT] Dec 5 09:08:51 www kernel: [ 0.588605] pci0000:00: Requesting ACPI _OSC control (0x1d) Dec 5 09:08:51 www kernel: [ 0.588667] pci0000:00: ACPI _OSC request failed (AE_NOT_FOUND), returned control mask: 0x1d Dec 5 09:08:51 www kernel: [ 0.588746] ACPI _OSC control for PCIe not granted, disabling ASPM Dec 5 09:08:51 www kernel: [ 0.597666] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 6 7 10 11 12 14 *15) Dec 5 09:08:51 www kernel: [ 0.598142] ACPI: PCI Interrupt Link [LNKB] (IRQs *5) Dec 5 09:08:51 www kernel: [ 0.598336] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 6 7 10 *11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.598810] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 6 7 *10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.599284] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 6 7 10 11 12 *14 15) Dec 5 09:08:51 www kernel: [ 0.599762] ACPI: PCI Interrupt Link [LNKF] (IRQs *3 4 6 7 10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.600236] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 6 *7 10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.600709] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 *4 6 7 10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.601931] PCI: Using ACPI for IRQ routing Dec 5 09:08:51 www kernel: [ 0.628146] pnp: PnP ACPI init Dec 5 09:08:51 www kernel: [ 0.628211] ACPI: bus type pnp registered Dec 5 09:08:51 www kernel: [ 0.628417] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active) Dec 5 09:08:51 www kernel: [ 0.628859] system 00:01: Plug and Play ACPI device, IDs PNP0c01 (active) Dec 5 09:08:51 www kernel: [ 0.628915] pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active) Dec 5 09:08:51 www kernel: [ 0.628951] pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active) Dec 5 09:08:51 www kernel: [ 0.628975] pnp 00:04: Plug and Play ACPI device, IDs PNP0800 (active) Dec 5 09:08:51 www kernel: [ 0.629004] pnp 00:05: Plug and Play ACPI device, IDs PNP0c04 (active) Dec 5 09:08:51 www kernel: [ 0.629229] system 00:06: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.629779] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.629849] pnp 00:08: Plug and Play ACPI device, IDs PNP0103 (active) Dec 5 09:08:51 www kernel: [ 0.629901] pnp 00:09: Plug and Play ACPI device, IDs INT0800 (active) Dec 5 09:08:51 www kernel: [ 0.630030] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.630254] system 00:0b: Plug and Play ACPI device, 
IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.630304] pnp 00:0c: Plug and Play ACPI device, IDs PNP0303 PNP030b (active) Dec 5 09:08:51 www kernel: [ 0.630359] pnp 00:0d: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active) Dec 5 09:08:51 www kernel: [ 0.630492] system 00:0e: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.630986] system 00:0f: Plug and Play ACPI device, IDs PNP0c01 (active) Dec 5 09:08:51 www kernel: [ 0.631078] pnp: PnP ACPI: found 16 devices Dec 5 09:08:51 www kernel: [ 0.631135] ACPI: ACPI bus type pnp unregistered Dec 5 09:08:51 www kernel: [ 0.726291] ACPI: Power Button [PWRB] Dec 5 09:08:51 www kernel: [ 0.726452] ACPI: Power Button [PWRF] Dec 5 09:08:51 www kernel: [ 0.726527] ACPI: acpi_idle yielding to intel_idle


  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature to address this for dimensional objects entitled Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example will be provided to show you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need. Then we take some time to examine all the objects. After that, we prepare the source data, deploy the target table and dimension/cube loading map. Finally, we run the loading maps, and check the data in target dimension/cube tables. OK, let’s start… 1. Import MDL file and examine sample project First, download zip file from here, which includes a MDL file and three source data files. Then we open OWB design center, import orphan_management.mdl by using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in BI_DEMO project as below: Mapping LOAD_CHANNELS_OM: The mapping for dimension loading. Mapping LOAD_SALES_OM: The mapping for cube loading. Dimension CHANNELS_OM: The dimension that contains channels data. Cube SALES_OM: The cube that contains sales data. Table CHANNELS_OM: The star implementation table of dimension CHANNELS_OM. Table SALES_OM: The star implementation table of cube SALES_OM. Table SRC_CHANNELS: The source table of channels data, that will be loaded into dimension CHANNELS_OM. Table SRC_ORDERS and SRC_ORDER_ITEMS: The source tables of sales data that will be loaded into cube SALES_OM. Sequence CLASS_OM_DIM_SEQ: The sequence used for loading dimension CHANNELS_OM. Dimension CHANNELS_OM This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots: The orphan management policy options that you can set for loading are: Reject Orphan: The record is not inserted. Default Parent: You can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates the default parent record. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates this record. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. While removing data from a dimension, you can select one of the following orphan management policies: Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1) Cube SALES_OM This cube is references to dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST. 
The orphan management policy setting are shown as following screenshot: The orphan management policy options that you can set for loading are: No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows. Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row. Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG) Mapping LOAD_CHANNELS_OM This mapping loads source data from table SRC_CHANNELS to dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used for generating a constant value for the top level in the dimension. The CLASS_FILTER operator is used to filter out the “invalid” class name, so then we can see what will happen when those channel records with an “invalid” parent are loading into dimension. Some properties of the dimension operator in this mapping are important to orphan management. See the screenshot below: Create Default Level Records: If YES, then default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is “Default Parent” or “Default Dimension Record”. This property is set to NO by default, so the user may need to set this to YES manually. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the dimension editor. The values are set to the same as the dimension value when user drops the dimension into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into error table when loading the dimension. REMOVE Orphan Policy: This property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled. Mapping LOAD_SALES_OM This mapping loads source data from table SRC_ORDERS and SRC_ORDER_ITEMS to cube SALES_OM. This mapping seems a little bit complicated, but operators in the red rectangle are used to filter out and generate the records with “invalid” or “null” dimension keys. Some properties of the cube operator in a mapping are important to orphan management. See the screenshot below: Enable Source Aggregation: Should be checked in this example. If the default dimension record orphan policy is set for the cube operator, then it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an “unstable rowset” execution error in the database, since the dimension refs are used as update match attributes for updating the fact table. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the cube editor. The values are set to the same as in the cube editor when the user drops the cube into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into error table when loading the cube. 2. Deploy objects and mappings We now can deploy the objects. First, make sure location SALES_WH_LOCAL has been correctly configured. 
Then open Control Center Manager by using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL, click SALES_WH node on the project tree. We can see the following objects: Deploy all the objects in the following order: Sequence CLASS_OM_DIM_SEQ Table CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS Dimension CHANNELS_OM Cube SALES_OM Mapping LOAD_CHANNELS_OM, LOAD_SALES_OM Note that we deployed source tables as well. Normally, we import source table from database instead of deploying them to target schema. However, in this example, we designed the source tables in OWB and deployed them to database for the purpose of this demonstration. 3. Prepare and examine source data Before running the mappings, we need to populate and examine the source data first. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as target user. Then we check the data in these three tables. Table SRC_CHANNELS SQL> select rownum, id, class, name from src_channels; Records 1~5 are correct; they should be loaded into dimension without error. Records 6,7 and 8 have null parents; they should be loaded into dimension with a default parent value, and should be inserted into error table at the same time. Records 9, 10 and 11 have “invalid” parents; they should be rejected by dimension, and inserted into error table. Table SRC_ORDERS and SRC_ORDER_ITEMS SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id; Record 178 has null dimension reference; it should be loaded into cube with a default dimension reference, and should be inserted into error table at the same time. Record 179 has “invalid” dimension reference; it should be rejected by cube, and inserted into error table. Other records should be aggregated and loaded into cube correctly. 4. Run the mappings and examine the target data In the Control Center Manager, expand BI_DEMO-> SALES_WH_LOCAL-> SALES_WH-> Mappings, right click on LOAD_CHANNELS_OM node, click Start. Use the same way to run mapping LOAD_SALES_OM. When they successfully finished, we can check the data in target tables. Table CHANNELS_OM SQL> select rownum, total_id, total_name, total_source_id, class_id,class_name, class_source_id, channel_id, channel_name,channel_source_id from channels_om order by abs(dimension_key); Records 1,2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally have null parents. We see their parents name (class_name) is set to DEF_CLASS_NAME. Those records whose CHANNEL_NAME are Special_4, Special_5 and Special_6 are not loaded to this table because of the invalid parent. Error Table CHANNELS_OM_ERR SQL> select rownum, class_source_id, channel_id, channel_name,channel_source_id, err$$$_error_reason from channels_om_err order by channel_name; We can see all the record with null parent or invalid parent are inserted into this error table. Error reason is “Default parent used for record” for the first three records, and “No parent found for record” for the last three. Table SALES_OM SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id; We can see the order record with null channel_name has been loaded into target table with a default channel_name. The one with “invalid” channel_name are not loaded. 
Error Table SALES_OM_ERR SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+); We can see the order records with null or invalid channel_name are inserted into error table. If the dimension reference column is null, the error reason is “Default dimension record used for fact”. If it is invalid, the error reason is “Dimension record not found for fact”. Summary In summary, this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are use or unused, complete or missing some columns is irrelevant, this is simply the physical stats of all indexes; disabled indexes are ignored however. In it’s simplest form this dmv can be executed as:   The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLS, or more accurately, in order to choose when to have the NULLS you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, ‘SAMPLED’, ‘LIMITED’ or ‘DETAILED’. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets; DEFAULT will have 12 columns full of data and then NULLS in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and the NULLS in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: Running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in so be careful running this on big databases unless you have tried it on a test server first. Let’s look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID (‘AnyDatabaseName’). The first columns we get back are database_id and object_id. 
These are pretty explanatory and we can wrap those in some code to make things a little easier to read: SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] … FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips  gives us   SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ….. FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id]     These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a  function in here to make it easier to work with a specific table. eg. SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID(‘AdventureWorks2008.Person.Address’) , 1, NULL, NULL) AS ddips   Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and once they have been validated, use them in the function: DECLARE @db_id SMALLINT; DECLARE @object_id INT; SET @db_id = DB_ID(N’AdventureWorks_2008′); SET @object_id = OBJECT_ID(N’AdventureWorks_2008.Person.Address’); IF @db_id IS NULL BEGINPRINT N’Invalid database’; ENDELSE IF @object_id IS NULL BEGINPRINT N’Invalid object’; ENDELSE BEGINSELECT * FROM sys.dm_db_index_physical_stats (@db_id, @object_id, NULL, NULL , ‘LIMITED’); END; GO In cases where the results of querying this dmv don’t have any effect on other processes (i.e. simply viewing the results in the SSMS results area)  then it will be noticed when the results are not consistent with the expected results and in the case of this blog this is the method I have used. So, now we can relate the values in these columns to something that we recognise in the database lets see what those other values in the dmv are all about. The next columns are: We’ll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level  as this is a quick look at the dmv and they are pretty self explanatory. The final columns revealed by querying this view in the DEFAULT mode are avg_fragmentation_in_percent. This is the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode. fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode. avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode. page_count. Total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size-in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). 
This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having it’s data in 3 pieces might not be too big an issue as each piece would have a significant piece of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index taking into account the multiple files, type of allocation unit and the previously mentioned characteristics if index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep it in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind DBAs need to use an ‘intelligent’ process that assesses key characteristics of an index and decides on the best, if any, defragmentation method to apply should be used. There is a simple example of this in the sample code found in the Books OnLine content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let’s get back on track then. Querying the dmv with the fifth parameter value as ‘DETAILED’ takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look a we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (Page_Count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be. A detail of the smallest and largest records in the index. 
Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently then potentially the DBA should consider; the fill_factor of the index in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly. the columns used in the index should be analysed to avoid new records needing to be inserted in the middle of the index but rather always be added to the end. * – it’s approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly.  Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred, run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
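    If you want to track these figures from application code rather than from SSMS, a minimal C# sketch along the following lines can work. The connection string, the page_count filter of 100 and the choice of LIMITED mode are my own assumptions here, so adjust them to your environment; the DMV call itself is exactly the one discussed above.

        using System;
        using System.Data.SqlClient;

        class IndexFragmentationCheck
        {
            static void Main()
            {
                // Assumed connection string - point this at your own server and database.
                string connectionString = "Data Source=.;Initial Catalog=AdventureWorks2008;Integrated Security=True";

                // LIMITED keeps the scan cheap; swap in 'SAMPLED' or 'DETAILED' if you need the extra columns.
                string sql = @"SELECT DB_NAME(ddips.database_id) AS DatabaseName,
                                      OBJECT_NAME(ddips.object_id) AS TableName,
                                      ddips.index_id,
                                      ddips.avg_fragmentation_in_percent,
                                      ddips.page_count
                               FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
                               WHERE ddips.page_count > 100
                               ORDER BY ddips.avg_fragmentation_in_percent DESC;";

                using (SqlConnection conn = new SqlConnection(connectionString))
                using (SqlCommand cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // One line per index, worst fragmentation first.
                            Console.WriteLine("{0}.{1} (index {2}): {3:F1}% fragmented over {4} pages",
                                reader["DatabaseName"], reader["TableName"], reader["index_id"],
                                reader["avg_fragmentation_in_percent"], reader["page_count"]);
                        }
                    }
                }
            }
        }

    The page_count filter simply keeps trivial indexes out of the report, in line with the earlier point that a small index split into a handful of fragments is rarely worth acting on.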

    Read the article

  • UnauthorizedAccessException on MemoryMappedFile in C# 4.

    - by Kevin Nisbet
    Hey, I wanted to play around with using a MemoryMappedFile to access an existing binary file. Is this even at all possible, or am I a crazy person? The idea would be to map the existing binary file directly to memory for some preferably higher-speed operations, or at least to see how these things work. using System.IO.MemoryMappedFiles; System.IO.FileInfo fi = new System.IO.FileInfo(@"C:\testparsercap.pcap"); MemoryMappedFileSecurity sec = new MemoryMappedFileSecurity(); System.IO.FileStream file = fi.Open(System.IO.FileMode.Open, System.IO.FileAccess.ReadWrite, System.IO.FileShare.ReadWrite); MemoryMappedFile mf = MemoryMappedFile.CreateFromFile(file, "testpcap", fi.Length, MemoryMappedFileAccess.Read, sec, System.IO.HandleInheritability.Inheritable, true); MemoryMappedViewAccessor FileMapView = mf.CreateViewAccessor(); PcapHeader head = new PcapHeader(); FileMapView.Read<PcapHeader>(0, out head); I get System.UnauthorizedAccessException was unhandled (Message=Access to the path is denied.) on the mf.CreateViewAccessor() line. I don't think it's file permissions, since I'm running as a nice insecure administrator user, and there aren't any other programs open that might have a read-lock on the file. This is on Vista with UAC disabled. If it's simply not possible and I missed something in the documentation, please let me know. I could barely find anything at all referencing this feature of .NET 4.0. Thanks!
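    One likely culprit in the snippet above is an access mismatch rather than file permissions: the mapping is created with MemoryMappedFileAccess.Read, but the parameterless CreateViewAccessor() requests a read/write view, which a read-only mapping cannot grant. Below is a minimal read-only sketch; the PcapHeader layout and the file path are assumptions taken from the question, not a definitive pcap parser.

        using System;
        using System.IO;
        using System.IO.MemoryMappedFiles;

        class PcapMapExample
        {
            // Field layout assumed purely for illustration.
            struct PcapHeader
            {
                public uint MagicNumber;
                public ushort VersionMajor;
                public ushort VersionMinor;
                public int ThisZone;
                public uint SigFigs;
                public uint SnapLen;
                public uint Network;
            }

            static void Main()
            {
                string path = @"C:\testparsercap.pcap"; // assumed path from the question

                using (FileStream file = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
                using (MemoryMappedFile mf = MemoryMappedFile.CreateFromFile(
                    file, null, 0, MemoryMappedFileAccess.Read, null, HandleInheritability.None, false))
                // Explicitly ask for a read-only view; the parameterless CreateViewAccessor()
                // asks for ReadWrite, which a Read-only mapping refuses.
                using (MemoryMappedViewAccessor view = mf.CreateViewAccessor(0, 0, MemoryMappedFileAccess.Read))
                {
                    PcapHeader head;
                    view.Read<PcapHeader>(0, out head);
                    Console.WriteLine("magic: 0x{0:X8}, snaplen: {1}", head.MagicNumber, head.SnapLen);
                }
            }
        }

    If you genuinely need a writable view, create the mapping with MemoryMappedFileAccess.ReadWrite instead and keep the FileStream opened for ReadWrite as in the original code.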

    Read the article

  • ASP.NET MVC - dropdown list post handling problem

    - by ile
    I've had troubles for a few days already with handling form that contains dropdown list. I tried all that I've learned so far but nothing helps. This is my code: using System; using System.Collections.Generic; using System.Linq; using System.Web; using CMS; using CMS.Model; using System.ComponentModel.DataAnnotations; namespace Portal.Models { public class ArticleDisplay { public ArticleDisplay() { } public int CategoryID { set; get; } public string CategoryTitle { set; get; } public int ArticleID { set; get; } public string ArticleTitle { set; get; } public DateTime ArticleDate; public string ArticleContent { set; get; } } public class HomePageViewModel { public HomePageViewModel(IEnumerable<ArticleDisplay> summaries, Article article) { this.ArticleSummaries = summaries; this.NewArticle = article; } public IEnumerable<ArticleDisplay> ArticleSummaries { get; private set; } public Article NewArticle { get; private set; } } public class ArticleRepository { private DB db = new DB(); // // Query Methods public IQueryable<ArticleDisplay> FindAllArticles() { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID orderby article.Date descending select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } public IQueryable<ArticleDisplay> FindTodayArticles() { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID where article.Date == DateTime.Today select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } public Article GetArticle(int id) { return db.Articles.SingleOrDefault(d => d.ArticleID == id); } public IQueryable<ArticleDisplay> DetailsArticle(int id) { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID where id == article.ArticleID select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } // // Insert/Delete Methods public void Add(Article article) { db.Articles.InsertOnSubmit(article); } public void Delete(Article article) { db.Articles.DeleteOnSubmit(article); } // // Persistence public void Save() { db.SubmitChanges(); } } } using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using Portal.Models; using CMS.Model; namespace Portal.Areas.CMS.Controllers { public class ArticleController : Controller { ArticleRepository articleRepository = new ArticleRepository(); ArticleCategoryRepository articleCategoryRepository = new ArticleCategoryRepository(); // // GET: /Article/ public ActionResult Index() { ViewData["categories"] = new SelectList ( articleCategoryRepository.FindAllCategories().ToList(), "CategoryId", "Title" ); Article article = new Article() { Date = DateTime.Now, CategoryID = 1 }; HomePageViewModel homeData = new HomePageViewModel(articleRepository.FindAllArticles().ToList(), article); return View(homeData); } // // GET: /Article/Details/5 public ActionResult Details(int id) { var 
article = articleRepository.DetailsArticle(id).Single(); if (article == null) return View("NotFound"); return View(article); } // // GET: /Article/Create //public ActionResult Create() //{ // ViewData["categories"] = new SelectList // ( // articleCategoryRepository.FindAllCategories().ToList(), "CategoryId", "Title" // ); // Article article = new Article() // { // Date = DateTime.Now, // CategoryID = 1 // }; // return View(article); //} // // POST: /Article/Create [ValidateInput(false)] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(Article article) { if (ModelState.IsValid) { try { // TODO: Add insert logic here articleRepository.Add(article); articleRepository.Save(); return RedirectToAction("Index"); } catch { return View(article); } } else { return View(article); } } // // GET: /Article/Edit/5 public ActionResult Edit(int id) { ViewData["categories"] = new SelectList ( articleCategoryRepository.FindAllCategories().ToList(), "CategoryId", "Title" ); var article = articleRepository.GetArticle(id); return View(article); } // // POST: /Article/Edit/5 [ValidateInput(false)] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(int id, FormCollection collection) { Article article = articleRepository.GetArticle(id); try { // TODO: Add update logic here UpdateModel(article, collection.ToValueProvider()); articleRepository.Save(); return RedirectToAction("Details", new { id = article.ArticleID }); } catch { return View(article); } } // // HTTP GET: /Article/Delete/1 public ActionResult Delete(int id) { Article article = articleRepository.GetArticle(id); if (article == null) return View("NotFound"); else return View(article); } // // HTTP POST: /Article/Delete/1 [AcceptVerbs(HttpVerbs.Post)] public ActionResult Delete(int id, string confirmButton) { Article article = articleRepository.GetArticle(id); if (article == null) return View("NotFound"); articleRepository.Delete(article); articleRepository.Save(); return View("Deleted"); } [ValidateInput(false)] public ActionResult UpdateSettings(int id, string value, string field) { // This highly-specific example is from the original coder's blog system, // but you can substitute your own code here. I assume you can pick out // which text field it is from the id. Article article = articleRepository.GetArticle(id); if (article == null) return Content("Error"); if (field == "Title") { article.Title = value; UpdateModel(article, new[] { "Title" }); articleRepository.Save(); } if (field == "Content") { article.Content = value; UpdateModel(article, new[] { "Content" }); articleRepository.Save(); } if (field == "Date") { article.Date = Convert.ToDateTime(value); UpdateModel(article, new[] { "Date" }); articleRepository.Save(); } return Content(value); } } } and view: <%@ Page Title="" Language="C#" MasterPageFile="~/Areas/CMS/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Portal.Models.HomePageViewModel>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Index </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <div class="naslov_poglavlja_main">Articles Administration</div> <%= Html.ValidationSummary("Create was unsuccessful. 
Please correct the errors and try again.") %> <% using (Html.BeginForm("Create","Article")) {%> <div class="news_forma"> <label for="Title" class="news">Title:</label> <%= Html.TextBox("Title", "", new { @class = "news" })%> <%= Html.ValidationMessage("Title", "*") %> <label for="Content" class="news">Content:</label> <div class="textarea_okvir"> <%= Html.TextArea("Content", "", new { @class = "news" })%> <%= Html.ValidationMessage("Content", "*")%> </div> <label for="CategoryID" class="news">Category:</label> <%= Html.DropDownList("CategoryId", (IEnumerable<SelectListItem>)ViewData["categories"], new { @class = "news" })%> <p> <input type="submit" value="Publish" class="form_submit" /> </p> </div> <% } %> <div class="naslov_poglavlja_main"><%= Html.ActionLink("Write new article...", "Create") %></div> <div id="articles"> <% foreach (var item in Model.ArticleSummaries) { %> <div> <div class="naslov_vijesti" id="<%= item.ArticleID %>"><%= Html.Encode(item.ArticleTitle) %></div> <div class="okvir_vijesti"> <div class="sadrzaj_vijesti" id="<%= item.ArticleID %>"><%= item.ArticleContent %></div> <div class="datum_vijesti" id="<%= item.ArticleID %>"><%= Html.Encode(String.Format("{0:g}", item.ArticleDate)) %></div> <a class="news_delete" href="#" id="<%= item.ArticleID %>">Delete</a> </div> <div class="dno"></div> </div> <% } %> </div> </asp:Content> When trying to post new article I get following error: System.InvalidOperationException: The ViewData item that has the key 'CategoryId' is of type 'System.Int32' but must be of type 'IEnumerable'. I really don't know what to do cause I'm pretty new to .net and mvc Any help appreciated! Ile EDIT: I found where I made mistake. I didn't include date. If in view form I add this line I'm able to add article: <%=Html.Hidden("Date", String.Format("{0:g}", Model.NewArticle.Date)) %> But, if I enter wrong datetype or leave title and content empty then I get the same error. In this eample there is no need for date edit, but I will need it for some other forms and validation will be necessary. EDIT 2: Error happens when posting! Call stack: App_Web_of9beco9.dll!ASP.areas_cms_views_article_create_aspx.__RenderContent2(System.Web.UI.HtmlTextWriter __w = {System.Web.UI.HtmlTextWriter}, System.Web.UI.Control parameterContainer = {System.Web.UI.WebControls.ContentPlaceHolder}) Line 31 + 0x9f bytes C#
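    For what it's worth, that exact exception usually appears when the POST action redisplays the form without repopulating ViewData["categories"]: the DropDownList helper then falls back to the "CategoryId" value it can find (the posted integer) and complains that it is not an IEnumerable of SelectListItem. Below is a hedged sketch of the Create POST action, meant to sit in the existing ArticleController, with the SelectList rebuilt before every return View(article); the repository calls come from the question, the rest is assumption.

        [ValidateInput(false)]
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create(Article article)
        {
            if (ModelState.IsValid)
            {
                try
                {
                    articleRepository.Add(article);
                    articleRepository.Save();
                    return RedirectToAction("Index");
                }
                catch
                {
                    // fall through and redisplay the form
                }
            }

            // Rebuild the SelectList before redisplaying the form. Without it the view's
            // Html.DropDownList("CategoryId", ...) receives null, falls back to the model's
            // CategoryID (an int) and throws the "must be of type 'IEnumerable<SelectListItem>'" error.
            ViewData["categories"] = new SelectList(
                articleCategoryRepository.FindAllCategories().ToList(),
                "CategoryId", "Title", article.CategoryID);

            return View(article);
        }

    The same rule applies to the Edit POST action and to any other action that can return the form view after a failed bind: every return View(...) needs the SelectList in place.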

    Read the article

  • ASP.NET Web Site Administration Tool unkown Error ASP.NET 4 VS 2010

    - by Gabriel Guimarães
    I was following the MVCMusic tutorial with an machine with full sql server 2008 r2 and full visual studio professional and when I got to the page where it sets up membership (near page 66) the Web administration tool wont work, i got the following error: An error was encountered. Please return to the previous page and try again. my web config is like this: <connectionStrings> <clear /> <add name="MvcMusicStoreCN" connectionString="Data Source=.;Initial Catalog=MvcMusicStore;Integrated Security=True" providerName="System.Data.SqlClient" /> <add name="MvcMusicStoreEntities" connectionString="metadata=res://*/Models.Store.csdl|res://*/Models.Store.ssdl|res://*/Models.Store.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=MvcMusicStore;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> <system.web> <membership defaultProvider="AspNetSqlMembershipProvider"> <providers> <clear /> <add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="MvcMusicStoreCN" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" passwordFormat="Hashed" /> </providers> </membership> <profile> <providers> <clear /> <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="MvcMusicStoreCN" applicationName="/" /> </providers> </profile> <roleManager enabled="true" defaultProvider="MvcMusicStoreCN"> <providers> <clear /> <add connectionStringName="MvcMusicStoreCN" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider" /> <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider" /> </providers> </roleManager> </system.web>

    Read the article

  • Unable to regress web application from AJAX Control Toolkit 3.0 back to 1.0

    - by David Neale
    I was recently asked to stop using the Ajax Control Toolkit 3.0 in my application and need to go back to 1.0. Luckily I only have one calendar control which I don't believe will be affected by this. I have removed the reference to the 3.0 .dll and added a reference to the 1.0 .dll. These are the assemblies in web.config: <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions.Design, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/> <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/></assemblies> and this also also there: <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> </assemblyBinding> </runtime> I get a compile error of: Could not load file or assembly 'AjaxControlToolkit, Version=3.0.30930.28736, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

    Read the article

  • WCF - The maximum nametable character count quota (16384) has been exceeded while reading XML data.

    - by Jankhana
    I have a WCF service that uses wsHttpBinding. The server configuration is as follows: <bindings> <wsHttpBinding> <binding name="wsHttpBinding" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <security mode="None"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" /> </security> </binding> </wsHttpBinding> </bindings> On the client side I add a service reference to the WCF service. It works great with a limited number of operations (say 90 OperationContracts in my IService), but if I add one more OperationContract then I'm unable to update the service reference, nor am I able to add the service reference again. In this article it's mentioned that changing those config files (i.e. devenv.exe.config, WcfTestClient.exe.config and SvcUtil.exe.config) should make it work, but even after including those bindings in those config files the error still pops up saying: There was an error downloading 'http://10.0.3.112/MyService/Service1.svc/mex'. The request failed with HTTP status 400: Bad Request. Metadata contains a reference that cannot be resolved: 'http://10.0.3.112/MyService/Service1.svc/mex'. There is an error in XML document (1, 89549). The maximum nametable character count quota (16384) has been exceeded while reading XML data. The nametable is a data structure used to store strings encountered during XML processing - long XML documents with non-repeating element names, attribute names and attribute values may trigger this quota. This quota may be increased by changing the MaxNameTableCharCount property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 89549. If the service is defined in the current solution, try building the solution and adding the service reference again. Any idea how to solve this?
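    The 16384 default is a client-side reader quota, so raising it in the service config above cannot help by itself. For proxies you construct in code, a sketch like the following raises the quota programmatically; the endpoint address and the IService1 contract are placeholders, and the Add Service Reference / svcutil failure itself still needs the devenv.exe.config and SvcUtil.exe.config changes mentioned in the linked article.

        using System.ServiceModel;

        class MetadataQuotaExample
        {
            static void Main()
            {
                // Client-side wsHttpBinding built in code, with the reader quotas raised.
                WSHttpBinding binding = new WSHttpBinding(SecurityMode.None);
                binding.MaxReceivedMessageSize = int.MaxValue;
                binding.ReaderQuotas.MaxNameTableCharCount = int.MaxValue;
                binding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
                binding.ReaderQuotas.MaxArrayLength = int.MaxValue;
                binding.ReaderQuotas.MaxDepth = 64;

                // Placeholder address taken from the error message in the question.
                EndpointAddress address = new EndpointAddress("http://10.0.3.112/MyService/Service1.svc");

                ChannelFactory<IService1> factory = new ChannelFactory<IService1>(binding, address);
                IService1 proxy = factory.CreateChannel();
                // ... call operations on proxy, then:
                ((IClientChannel)proxy).Close();
                factory.Close();
            }
        }

        // Placeholder contract so the sketch stands on its own; use the real service contract instead.
        [ServiceContract]
        interface IService1
        {
            [OperationContract]
            string Ping(string value);
        }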

    Read the article

  • Custom sectionGroup and Section App.config

    - by fampinheiro
    <configSections> <section name="castle" type="Castle.Windsor.Configuration.AppDomain.CastleSectionhandler, Castle.Windsor" /> <sectionGroup name="codegarten"> <section name="configuration" type="Tmp.StartupCodegartenConfigSection, Tmp" /> <section name="apache" type="Tmp.StartupApacheConfigSection, Tmp" /> </sectionGroup> </configSections> When i use msdn main to see all the sections i get this error, Unhandled Exception: System.Configuration.ConfigurationErrorsException: An error occurred creating the configuration section handler for codegarten/apache: Coul d not load type 'Tmp.StartupApacheConfigSection' from assembly 'Tmp'. (D:\Codega rten\trunk\Codegarten\Tmp\bin\Debug\Tmp.exe.Config line 8) ---> System.TypeLoadE xception: Could not load type 'Tmp.StartupApacheConfigSection' from assembly 'Tm p'. at System.Configuration.TypeUtil.GetTypeWithReflectionPermission(IInternalCon figHost host, String typeString, Boolean throwOnError) at System.Configuration.MgmtConfigurationRecord.CreateSectionFactory(FactoryR ecord factoryRecord) at System.Configuration.BaseConfigurationRecord.FindAndEnsureFactoryRecord(St ring configKey, Boolean& isRootDeclaredHere) --- End of inner exception stack trace --- at System.Configuration.BaseConfigurationRecord.FindAndEnsureFactoryRecord(St ring configKey, Boolean& isRootDeclaredHere) at System.Configuration.BaseConfigurationRecord.GetSectionRecursive(String co nfigKey, Boolean getLkg, Boolean checkPermission, Boolean getRuntimeObject, Bool ean requestIsHere, Object& result, Object& resultRuntimeObject) at System.Configuration.ConfigurationSectionCollection.Get(String name) at System.Configuration.ConfigurationSectionCollection.<GetEnumerator>d__0.Mo veNext() at Tmp.Program.ShowSectionGroupInfo(ConfigurationSectionGroup sectionGroup) i n D:\Codegarten\trunk\Codegarten\Tmp\Program.cs:line 53 at Tmp.Program.ShowSectionGroupCollectionInfo(ConfigurationSectionGroupCollec tion sectionGroups) in D:\Codegarten\trunk\Codegarten\Tmp\Program.cs:line 30 at Tmp.Program.Main(String[] args) in D:\Codegarten\trunk\Codegarten\Tmp\Prog ram.cs:line 22 Thanks
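    The inner TypeLoadException means the CLR cannot find a public class named exactly Tmp.StartupApacheConfigSection inside the Tmp assembly; a wrong namespace, a misspelled class name, a non-public class or the class living in a different assembly are the usual suspects. A minimal sketch of a section class that would satisfy that declaration is below; the "path" attribute is invented purely for illustration.

        using System.Configuration;

        namespace Tmp
        {
            // Sketch only: the class must be public, live in namespace Tmp, and be compiled into the
            // assembly named in the type attribute ("Tmp"), or the section handler cannot be created.
            public class StartupApacheConfigSection : ConfigurationSection
            {
                [ConfigurationProperty("path", IsRequired = true)]
                public string Path
                {
                    get { return (string)this["path"]; }
                    set { this["path"] = value; }
                }
            }
        }

        // Reading it later, e.g. from Main():
        // var apache = (Tmp.StartupApacheConfigSection)ConfigurationManager.GetSection("codegarten/apache");

    The same check applies to Tmp.StartupCodegartenConfigSection, since both sections are declared the same way.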

    Read the article

  • "Message":"Invalid JSON primitive: RecordId."

    - by Radhi
    getting error in ajax call from jquery. here is my jquery function function AddAlbumToMyProfile(AlbumId, AlbumName, AlbumTypeName) { var obj = { AlbumId: AlbumId, AlbumName: AlbumName, AlbumTypeName: AlbumTypeName }; //following is ASP.NET AJAX serialize function to convert //object into jSON. var json = Sys.Serialization.JavaScriptSerializer.serialize(obj); $.ajax({ type: "POST", url: "Gallary.aspx/AddAlbumToMyProfile", data: json, contentType: "application/json; charset=utf-8", dataType: "json", async: true, cache: false, success: function(msg) { if (msg.d == '') { alert("Album Added to your profile"); } else { alert(msg.d); } }, error: function(XMLHttpRequest, textStatus, errorThrown) { } }); } and this is my webmethod [WebMethod] public static string DeleteRecord(Int64 RecordId, Int64 UserId, Int64 UserProfileId, string ItemType, string FileName) { try { string FilePath = HttpContext.Current.Server.MapPath(FileName); XDocument xmldoc = XDocument.Load(FilePath); XElement Xelm = xmldoc.Element("UserProfile"); XElement parentElement = Xelm.XPathSelectElement(ItemType + "/Fields"); (from BO in parentElement.Descendants("Record") where BO.Element("Id").Attribute("value").Value == RecordId.ToString() select BO).Remove(); XDocument xdoc = XDocument.Parse(Xelm.ToString(), LoadOptions.PreserveWhitespace); xdoc.Save(FilePath); UserInfoHandler obj = new UserInfoHandler(); return obj.GetHTML(UserId, UserProfileId, FileName, ItemType, RecordId, Xelm).ToString(); } catch (Exception ex) { HandleException.LogError(ex, "EditUserProfile.aspx", "DeleteRecord"); } return "success"; } can anybody please tell me whats wrong in my code?? i am getting error: {"Message":"Invalid JSON primitive: RecordId.","StackTrace":" at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializePrimitiveObject()\r\n at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)\r\n at System.Web.Script.Serialization.JavaScriptObjectDeserializer.BasicDeserialize(String input, Int32 depthLimit, JavaScriptSerializer serializer)\r\n at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize(JavaScriptSerializer serializer, String input, Type type, Int32 depthLimit)\r\n at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize[T](String input)\r\n at System.Web.Script.Services.RestHandler.GetRawParamsFromPostRequest(HttpContext context, JavaScriptSerializer serializer)\r\n at System.Web.Script.Services.RestHandler.GetRawParams(WebServiceMethodData methodData, HttpContext context)\r\n at System.Web.Script.Services.RestHandler.ExecuteWebServiceCall(HttpContext context, WebServiceMethodData methodData)","ExceptionType":"System.ArgumentException"}

    Read the article

  • Sharing an assembly between ASP.NET and Silverlight

    - by vtortola
    Hi, I've created an assembly to share it between my main app and the silverlight app. At the beginning it looked like it was going to work but now I get this exception: "System.IO.FileNotFoundException was caught, Message="Could not load file or assembly 'System.Xml.Linq". I'm using .NET 3.5 Sp1 and Silverlight 3. That shared assembly uses System.Xml.Linq, and it cannot find it... I think because it is trying to find that version in the .NET framework instead looking in the silverlight one. How can I fix this? Cheers. PS: this is the full exception output: System.IO.FileNotFoundException was caught Message="Could not load file or assembly 'System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified." Source="MyApp.Metadata" FileName="System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" FusionLog="=== Pre-bind state information ===\r\nLOG: User = IIS APPPOOL\DefaultAppPool\r\nLOG: DisplayName = System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35\n (Fully-specified)\r\nLOG: Appbase = file:///C:/Users/vtortola.MyApp/Documents/MyApp/MyAppSAS/WebApplication1/WebApplication1/\r\nLOG: Initial PrivatePath = C:\Users\vtortola.MyApp\Documents\MyApp\MyAppSAS\WebApplication1\WebApplication1\bin\r\nCalling assembly : MyApp.Metadata, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null.\r\n===\r\nLOG: This bind starts in default load context.\r\nLOG: Using application configuration file: C:\Users\vtortola.MyApp\Documents\MyApp\MyAppSAS\WebApplication1\WebApplication1\web.config\r\nLOG: Using host configuration file: C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Aspnet.config\r\nLOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework64\v2.0.50727\config\machine.config.\r\nLOG: Post-policy reference: System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35\r\nLOG: The same bind was seen before, and was failed with hr = 0x80070002.\r\n" StackTrace: at MyApp.Metadata.MyAppEntity.Deserialize(String message)

    Read the article

  • UpdateModel() fails after migration from MVC 1.0 to MVC 2.0

    - by Alastair Pitts
    We are in the process of migrating our ASP.NET MVC 1.0 web app to MVC 2.0 but we have run into a small snag. In our report creation wizard, it is possible leave the Title text box empty and have it be populated with a generic title (in the post action). The code that does the update on the model of the Title is: if (TryUpdateModel(reportToEdit, new[] { "Title" })) { //all ok here try to create (custom validation and attach to graph to follow) //if title is empty get config subject if (reportToEdit.Title.Trim().Length <= 0) reportToEdit.Title = reportConfiguration.Subject; if (!_service.CreateReport(1, reportToEdit, SelectedUser.ID, reportConfigID, reportCategoryID, reportTypeID, deviceUnitID)) return RedirectToAction("Index"); } In MVC 1.0, this works correctly,the reportToEdit has an empty title if the textbox is empty, which is then populated with the Subject property. In MVC 2.0 this fails/returns false. If I add the line above: UpdateModel(reportToEdit, new[] { "Title" }); it throws System.InvalidOperationException was unhandled by user code Message="The model of type 'Footprint.Web.Models.Reports' could not be updated." Source="System.Web.Mvc" StackTrace: at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider) at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, String[] includeProperties) at Footprint.Web.Controllers.ReportsController.Step1(FormCollection form) in C:\TFS Workspace\ExtBusiness_Footprint\Branches\apitts_uioverhaul\Footprint\Footprint.Web\Controllers\ReportsController.cs:line 398 at lambda_method(ExecutionScope , ControllerBase , Object[] ) at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters) at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassd.<InvokeActionMethodWithFilters>b__a() at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) InnerException: Reading the MVC2 Release notes I see this breaking change: Every property for model objects that use IDataErrorInfo to perform validation is validated, regardless of whether a new value was set. In ASP.NET MVC 1.0, only properties that had new values set would be validated. In ASP.NET MVC 2, the Error property of IDataErrorInfo is called only if all the property validators were successful. but I'm confused how this is affecting me. I'm using the entity framework generated classes. Can anyone pinpoint why this is failing?
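    One way to see which property the MVC 2 binder is now rejecting is to dump ModelState when TryUpdateModel returns false; given the breaking change quoted above, a required or non-nullable property on the entity that used to be skipped is the likely candidate. A small diagnostic sketch, assumed to run inside the same controller action, is shown here.

        if (!TryUpdateModel(reportToEdit, new[] { "Title" }))
        {
            // Dump the binder's complaints so you can see which property (and which
            // validator) is rejecting the model now that MVC 2 validates every property.
            foreach (var kvp in ModelState)
            {
                foreach (var error in kvp.Value.Errors)
                {
                    System.Diagnostics.Debug.WriteLine(
                        kvp.Key + ": " + error.ErrorMessage + " " + error.Exception);
                }
            }
        }

    Once the offending property is identified, you can either populate it before calling TryUpdateModel (for example, assign the default Subject to Title up front) or adjust the validation metadata on the entity.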

    Read the article

  • Why doesn't Data Driven Subscription in SSRS 2005 like my Stored Procedure?

    - by bert
    I'm trying to define a Data Driven Subscription for a report in SSRS 2005. In Step 3 of the setup you're asked for: "a command or query that returns a list of recipients and optionally returns fields used to vary delivery settings and report parameter values for each recipient". This I have written, and it returns the data without a hitch. I press Next and it rolls onto the next screen in the setup, which has all the variables to set for the DDS; in each case it has an option to "Select Value From Database". I select this radio button and press the drop-down, but no fields are available to me. Now, the only way I could vary the number of parameters returned by the SP was to have the SP write the SQL to an nvarchar variable and then, at the end, execute the variable as SQL. I have tested this in Management Studio and it returns the expected fields. I even named them after the fields in SSRS, but the wizard won't put the field names into the dropdowns. I've even taken the query body out of the stored proc, verified it in SSRS and then tried that. It doesn't work either. Can anyone shed any light on what I'm doing wrong?
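    (none)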

    Read the article

  • c# SmtpClient class not able to send email using gmail

    - by Sir Psycho
    Hi, I'm having trouble with this code sending email using my gmail account. Im pulling my hair out. The same settings work fine in Thunderbird. Heres the code. I've also tried port 465 with no luck. SmtpClient ss = new SmtpClient("smtp.gmail.com", 587); ss.Credentials = new NetworkCredential("username", "pass"); ss.EnableSsl = true; ss.Timeout = 10000; ss.DeliveryMethod = SmtpDeliveryMethod.Network; ss.UseDefaultCredentials = false; MailMessage mm = new MailMessage("[email protected]", "[email protected]", "subject here", "my body"); mm.BodyEncoding = UTF8Encoding.UTF8; mm.DeliveryNotificationOptions = DeliveryNotificationOptions.OnFailure; ss.Send(mm); Heres the error "The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.5.1 Authentication Required. Learn more at " Heres the stack trace at System.Net.Mail.MailCommand.CheckResponse(SmtpStatusCode statusCode, String response) at System.Net.Mail.MailCommand.Send(SmtpConnection conn, Byte[] command, String from) at System.Net.Mail.SmtpTransport.SendMail(MailAddress sender, MailAddressCollection recipients, String deliveryNotify, SmtpFailedRecipientException& exception) at System.Net.Mail.SmtpClient.Send(MailMessage message) at email_example.Program.Main(String[] args) in C:\Users\Vince\Documents\Visual Studio 2008\Projects\email example\email example\Program.cs:line 23 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart()
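    One well-known gotcha with SmtpClient is that assigning UseDefaultCredentials (even to false) resets the Credentials property, so setting it after the NetworkCredential silently wipes the login and produces exactly this 5.5.1 "Authentication Required" response. A sketch with the assignments reordered is below; the addresses are placeholders, and any Gmail account-side restriction (for example an application-specific password requirement) is a separate concern.

        using System.Net;
        using System.Net.Mail;
        using System.Text;

        class GmailSendExample
        {
            static void Main()
            {
                SmtpClient ss = new SmtpClient("smtp.gmail.com", 587);
                ss.DeliveryMethod = SmtpDeliveryMethod.Network;
                ss.EnableSsl = true;
                ss.Timeout = 10000;

                // Order matters: set UseDefaultCredentials first, because its setter
                // resets Credentials, then assign the NetworkCredential.
                ss.UseDefaultCredentials = false;
                ss.Credentials = new NetworkCredential("username", "pass");

                // Placeholder addresses - substitute real sender and recipient.
                MailMessage mm = new MailMessage("sender@example.com", "recipient@example.com",
                    "subject here", "my body");
                mm.BodyEncoding = Encoding.UTF8;
                mm.DeliveryNotificationOptions = DeliveryNotificationOptions.OnFailure;

                ss.Send(mm);
            }
        }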

    Read the article

  • A catalogue of Cassandra log messages: What is the correct interpretation?

    - by knorv
    The following is a complete catalogue of all log messages generated by Cassandra 0.6 when stress-testing a Cassandra installation over an extended period of time: AntiEntropyService: Sending AEService tree for (,) to: [] CassandraDaemon: Binding thrift service to localhost/N.N.N.N:N CassandraDaemon: Cassandra starting up... ColumnFamilyStore: has reached its threshold; switching in a fresh Memtable at CommitLogContext(file='.../cassandra/commitlog/CommitLog-N.log', position=N) ColumnFamilyStore: Enqueuing flush of Memtable()@N CommitLog: Discarding obsolete commit log:CommitLogSegment(.../cassandra/commitlog/CommitLog-N.log) CommitLog: Log replay complete CommitLog: Replaying .../cassandra/commitlog/CommitLog-N.log, ... CommitLogSegment: Creating new commitlog segment .../cassandra/commitlog/CommitLog-N.log CompactionManager: Compacted to .../cassandra/data//-N-Data.db. N/N bytes for N keys. Time: Nms. CompactionManager: Compacting [org.apache.cassandra.io.SSTableReader(path='.../cassandra/data//-N-Data.db'), ...] DatabaseDescriptor: Auto DiskAccessMode determined to be mmap GCInspector: GC for ConcurrentMarkSweep: N ms, N reclaimed leaving N used; max is N GCInspector: GC for ParNew: N ms, N reclaimed leaving N used; max is N Memtable: Completed flushing .../cassandra/data//-N-Data.db Memtable: Writing Memtable()@N SSTable: Deleted .../cassandra/data//-N-Data.db SSTableDeletingReference: Deleted .../cassandra/data//-N-Data.db SSTableReader: Sampling index for .../cassandra/data//-N-Data.db StorageService: Starting up server gossip SystemTable: Saved ClusterName found: Test Cluster SystemTable: Saved ClusterName not found. Using Test Cluster SystemTable: Saved Token found: N SystemTable: Saved Token not found. Using N For each of the log messages listed - what is the correct interpretation of the log message?

    Read the article

  • Ninject woes... 404 error problems

    - by jbarker7
    We are using the beloved Ninject+Ninject.Web.Mvc with MVC 2 and are running into some problems. Specifically dealing with 404 errors. We have a logging service that logs 500 errors and records them. Everything is chugging along just perfectly except for when we attempt to enter a non-existent controller. Instead of getting the desired 404 we end up with a 500 error: Cannot be null Parameter name: service [ArgumentNullException: Cannot be null Parameter name: service] Ninject.ResolutionExtensions.GetResolutionIterator(IResolutionRoot root, Type service, Func`2 constraint, IEnumerable`1 parameters, Boolean isOptional) +188 Ninject.ResolutionExtensions.TryGet(IResolutionRoot root, Type service, IParameter[] parameters) +15 Ninject.Web.Mvc.NinjectControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +36 System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) +68 System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) +118 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +46 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContext httpContext, AsyncCallback callback, Object state) +63 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) +13 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8679426 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 I did some searching and found some similar issues, but those 404 issues seem to be unrelated. Any help here would be great. Thanks! Josh
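    The ArgumentNullException comes from MVC handing the factory a null controllerType for the unknown controller, which Ninject then passes straight to TryGet. One common workaround is a small factory subclass that raises the ordinary 404 in that case; this sketch assumes the MVC 2-era NinjectControllerFactory with a constructor taking IKernel, registered via ControllerBuilder.Current.SetControllerFactory in Application_Start.

        using System;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;
        using Ninject;
        using Ninject.Web.Mvc;

        // Raise the usual 404 for an unresolvable controller instead of asking Ninject
        // to resolve a null service, which is what triggers the 500.
        public class NinjectControllerFactoryWith404 : NinjectControllerFactory
        {
            public NinjectControllerFactoryWith404(IKernel kernel) : base(kernel) { }

            protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
            {
                if (controllerType == null)
                {
                    // Same behaviour DefaultControllerFactory gives for an unknown controller.
                    throw new HttpException(404,
                        "The controller for path '" + requestContext.HttpContext.Request.Path + "' was not found.");
                }

                return base.GetControllerInstance(requestContext, controllerType);
            }
        }

        // e.g. in Application_Start / OnApplicationStarted:
        // ControllerBuilder.Current.SetControllerFactory(new NinjectControllerFactoryWith404(kernel));

    With the 404 surfaced properly, the logging service should stop recording these as 500s.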

    Read the article

  • "Unrecognized configuration section connectionStrings" in app.exe.config

    - by Richard Bysouth
    Hi On a Terminal Server install of my WinForms app, one of my clients gets the following exception on startup: "Unrecognized configuration section connectionStrings" This is occurring in myapp.exe.config but I can't figure out why. Runs perfectly everywhere else, only difference between this install and any other is the connection string. I've searched around, but can only find this issue relating to ASP.NET apps and issues in web.config. Any ideas what could be broken in the config of this WinForms app though? Is it indicating a problem further up in machine.config? FYI the top part of myapp.exe.config is: <?xml version="1.0" encoding="utf-8"?> <configuration> <configSections> <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=jjjjjjjjjj"> <section name="MyApp.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=jjjjjjjj" requirePermission="false" /> </sectionGroup> <sectionGroup name="userSettings" type="System.Configuration.UserSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=jjjjjjjjj"> <section name="MyApp.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=jjjjjjjjjj" allowExeDefinition="MachineToLocalUser" requirePermission="false" /> </sectionGroup> </configSections> <connectionStrings> <add name="MyApp.DataAccessLayer.Settings.MyConnectionString" connectionString="$$$$$$" providerName="System.Data.SqlClient" /> </connectionStrings> ... thanks Richard
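    Since the connectionStrings section only exists from .NET 2.0 onwards, one thing worth ruling out on that Terminal Server is the exe being loaded by an older runtime, or a damaged machine.config that no longer declares the section. A trivial diagnostic sketch you could drop into the app's startup, purely as an assumption-checking aid:

        using System;
        using System.Runtime.InteropServices;

        static class RuntimeCheck
        {
            public static void Report()
            {
                // <connectionStrings> is only understood by the 2.0+ configuration system,
                // so confirm which CLR and runtime directory the exe is really using.
                Console.WriteLine("CLR version: " + Environment.Version);
                Console.WriteLine("Runtime dir: " + RuntimeEnvironment.GetRuntimeDirectory());
            }
        }

    If the runtime checks out, compare that server's machine.config connectionStrings declaration against a machine where the app works.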

    Read the article

  • Winforms: Attempted to read or write protected memory. This is often an indication that other memory is corrupt

    - by mamcx
    I have a bunch of background events. All of them call a log: private void log(string text, params object[] values) { if (editLog.InvokeRequired) { editLog.BeginInvoke( (MethodInvoker)delegate { this.log(text, values); }); } else { text = string.Format(text, values) + Environment.NewLine; lock (editLog) { editLog.AppendText(text); editLog.SelectionStart = editLog.TextLength; editLog.ScrollToCaret(); } } } Sometimes I get this, but other times not: System.AccessViolationException was unhandled Message=Attempted to read or write protected memory. This is often an indication that other memory is corrupt. Source=System.Windows.Forms StackTrace: at System.Windows.Forms.UnsafeNativeMethods.CallWindowProc(IntPtr wndProc, IntPtr hWnd, Int32 msg, IntPtr wParam, IntPtr lParam) at System.Windows.Forms.NativeWindow.DefWndProc(Message& m) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.RichTextBox.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.SendMessage(HandleRef hWnd, Int32 msg, Int32 wParam, Object& editOle) at System.Windows.Forms.TextBoxBase.ScrollToCaret() at Program1.frmMain.log(String text, Object[] values) in ... ... ... P.S.: It doesn't always stop at this line; it randomly stops at one of the three places where editLog methods/properties are used. An exception isn't always thrown, either. Sometimes the whole thing seems to freeze, but not the main UI, just the flow of messages (i.e. log looks like it is never called again). The app is a single form with a background process. I can't see what I'm doing wrong with this...
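    For reference, one way to simplify the marshalling is to do all RichTextBox work synchronously on the UI thread and drop the lock: a lock on the control object does not protect its native handle, and mixing BeginInvoke re-entry with direct UI calls is a frequent contributor to this kind of crash. A sketch, assuming editLog is the RichTextBox from the code above (note that Invoke is synchronous, so keep BeginInvoke if your background threads must never block on the UI thread):

        // Formats off the UI thread, but touches the control only on the UI thread.
        private void log(string text, params object[] values)
        {
            string line = string.Format(text, values) + Environment.NewLine;

            if (editLog.InvokeRequired)
            {
                editLog.Invoke((MethodInvoker)delegate { AppendLogLine(line); });
            }
            else
            {
                AppendLogLine(line);
            }
        }

        private void AppendLogLine(string line)
        {
            if (editLog.IsDisposed) return;   // guard against logging during shutdown

            editLog.AppendText(line);
            editLog.SelectionStart = editLog.TextLength;
            editLog.ScrollToCaret();
        }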

  • iPhone: Memory leak when using NSOperationQueue...

    - by MacTouch
    Hi, I've been sitting here for at least half an hour trying to find a memory leak in my code. I just replaced a synchronous call to a (touch) method with an asynchronous one using NSOperationQueue, and the leak inspector reports a memory leak after I made this change. What's wrong with the version using NSOperationQueue?

    Version without a memory leak:

    -(NSData *)dataForKey:(NSString *)ressourceId_
    {
        NSString *path = [self cachePathForKey:ressourceId_];          // returns an autoreleased NSString*
        NSString *cacheKey = [self cacheKeyForRessource:ressourceId_]; // returns an autoreleased NSString*
        NSData *data = [[self memoryCache] valueForKey:cacheKey];
        if (!data) {
            data = [self loadData:path];                               // returns an autoreleased NSData*
            if (data) {
                [[self memoryCache] setObject:data forKey:cacheKey];
            }
        }
        [self touch:path];
        return data;
    }

    Version with a memory leak (I don't see one):

    -(NSData *)dataForKey:(NSString *)ressourceId_
    {
        NSString *path = [self cachePathForKey:ressourceId_];          // returns an autoreleased NSString*
        NSString *cacheKey = [self cacheKeyForRessource:ressourceId_]; // returns an autoreleased NSString*
        NSData *data = [[self memoryCache] valueForKey:cacheKey];
        if (!data) {
            data = [self loadData:path];                               // returns an autoreleased NSData*
            if (data) {
                [[self memoryCache] setObject:data forKey:cacheKey];
            }
        }
        NSInvocationOperation *touchOp = [[NSInvocationOperation alloc] initWithTarget:self
                                                                              selector:@selector(touch:)
                                                                                object:path];
        [[self operationQueue] addOperation:touchOp];
        [touchOp release];
        return data;
    }

    And of course, the touch method does nothing special either; it just changes the modification date of the file:

    -(void)touch:(id)path_
    {
        NSString *path = (NSString *)path_;
        NSFileManager *fm = [NSFileManager defaultManager];
        if ([fm fileExistsAtPath:path]) {
            NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:[NSDate date], NSFileModificationDate, nil];
            [fm setAttributes:attributes ofItemAtPath:path error:nil];
        }
    }

  • Updating textfield in doctrine produces an exception

    - by james-murphy
    I have a text field that contains, for example, the following text: "A traditional English dish comprising sausages in Yorkshire pudding batter, usually served with vegetables and gravy." The text field is in a form that simply updates an item record using its ID. If I edit part of the text field and replace "and gravy." with "humous.", so that it now contains "A traditional English dish comprising sausages in Yorkshire pudding batter, usually served with vegetables and humous.", I get the following exception:

    Fatal error: Uncaught exception 'Doctrine_Query_Exception' with message 'Unknown component alias humous' in C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php:780
    Stack trace:
      C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php(767): Doctrine_Query_Abstract->getQueryComponent('humous')
      C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Set.php(58): Doctrine_Query_Abstract->getAliasDeclaration('humous')
      C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php(2092): Doctrine_Query_Set->parse('i.details = 'A ...')
      C:\Projects\nitrous\lightweight\system\database\Doctrine\Query.php(1058): Doctrine_Query_Abstract->_processDqlQueryPart('set', Array)
      C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php(971): Doctrine_Query->getSqlQuery(Array)
      C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php(1030): Doctrine_Query_Abstract->_execute(Array)
      C:\Projects\nitrous\lightweight\system\appl in C:\Projects\nitrous\lightweight\system\database\Doctrine\Query\Abstract.php on line 780

    I'm using Doctrine 1.0.6 hooked into CodeIgniter 1.7.0, if anyone is interested. The Doctrine query that actually performs the update looks like this:

    public function updateItems($id, $arrayItem)
    {
        $query = new Doctrine_Query();
        $query->update('Item i');
        foreach ($arrayItem as $key => $value) {
            $query->set('i.'.$key, "'".$value."'");
        }
        $query->where('i.id = ?', $id);
        return $query->execute();
    }

    The bizarre thing is that if I replace the entire string with something completely different, like just "test", no exception is thrown and it works just fine. This baffles me... is it a bug in Doctrine, or have I missed something?
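    The stack trace shows the literal value being fed back through the DQL parser (Doctrine_Query_Set->parse('i.details = 'A ...')), so a value containing characters the parser treats as syntax can end up being read as a component alias. The usual remedy is to bind the value as a parameter instead of quoting and concatenating it; if memory serves, Doctrine 1's set() accepts a placeholder plus a bound value for exactly this reason. For consistency with the other examples on this page, the sketch below shows the same bind-don't-concatenate pattern in C#/ADO.NET rather than PHP; it is only an illustration, not a drop-in Doctrine fix, and the table and column names are made up.

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Text;

    static class ItemUpdater
    {
        // Build an UPDATE from a key/value dictionary using parameters, so the
        // values never travel through the query text and cannot confuse the parser.
        public static int UpdateItem(string connectionString, int id, IDictionary<string, object> fields)
        {
            var setParts = new List<string>();
            foreach (string key in fields.Keys)
            {
                // Column names still come from code, not user input; whitelist them if in doubt.
                setParts.Add(key + " = @" + key);
            }

            var sql = new StringBuilder("UPDATE Item SET ");
            sql.Append(string.Join(", ", setParts.ToArray()));
            sql.Append(" WHERE id = @id");

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql.ToString(), conn))
            {
                foreach (KeyValuePair<string, object> kv in fields)
                {
                    cmd.Parameters.AddWithValue("@" + kv.Key, kv.Value);
                }
                cmd.Parameters.AddWithValue("@id", id);
                conn.Open();
                return cmd.ExecuteNonQuery();
            }
        }
    }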
