Search Results

Search found 35976 results on 1440 pages for 'js test driver'.

  • Why won't the following PDO transaction work in PHP?

    - by jfizz
    I am using PHP version 5.4.4 and a MySQL database using InnoDB. I had been using PDO for a while without utilizing transactions, and everything was working flawlessly. Then I decided to try to implement transactions, and I keep getting Internal Server Error 500. The following code worked for me (doesn't contain transactions):

        try {
            $DB = new PDO('mysql:host=localhost;dbname=database', 'root', 'root');
            $DB->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $dbh = $DB->prepare("SELECT * FROM user WHERE username = :test");
            $dbh->bindValue(':test', $test, PDO::PARAM_STR);
            $dbh->execute();
        } catch(Exception $e){
            $dbh->rollback();
            echo "an error has occured";
        }

    Then I attempted to utilize transactions with the following code (which doesn't work):

        try {
            $DB = new PDO('mysql:host=localhost;dbname=database', 'root', 'root');
            $DB->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $dbh = $DB->beginTransaction();
            $dbh->prepare("SELECT * FROM user WHERE username = :test");
            $dbh->bindValue(':test', $test, PDO::PARAM_STR);
            $dbh->execute();
            $dbh->commit();
        } catch(Exception $e){
            $dbh->rollback();
            echo "an error has occured";
        }

    When I run the previous code, I get an Internal Server Error 500. Any help would be greatly appreciated! Thanks!

  • Changes to data inside a class not being shown when accessed from outside the class.

    - by Hypatia
    I have two classes, Car and Person. Car has as one of its members an instance of Person, driver. I want to move a car, while keeping track of its location, and also move the driver inside the car and get its location. However, while this works from inside the class (I have printed out the values as they are calculated), when I try to access the data from main, there's nothing there. I.e. the array position[] ends up empty. I am wondering if there is something wrong with the way I have set up the classes -- could it be a problem of the scope of the object? I have tried simplifying the code so that I only give what is necessary. Hopefully that covers everything that you would need to see. The constructor Car() fills the offset array of driver with nonzero values.

        class Car{
        public:
            Container(float=0,float=0,float=0);
            ~Container();
            void move(float);
            void getPosition(float[]);
            void getDriverPosition(float[]);
        private:
            float position[3];
            Person driver;
            float heading;
            float velocity;
        };

        class Person{
        public:
            Person(float=0,float=0,float=0);
            ~Person();
            void setOffset(float=0,float=0,float=0);
            void setPosition(float=0,float=0,float=0);
            void getOffset(float[]);
            void getPosition(float[]);
        private:
            float position[3];
            float offset[3];
        };

    Some of the functions:

        void Car::move(float time){
            float distance = velocity*time;
            location[0] += distance*cos(PI/2 - heading);
            location[1] += distance*sin(PI/2 - heading);
            float driverLocation [3];
            float offset[3];
            driver->getOffset(offset);
            for (int i = 0; i < 3; i++){
                driverLocation[i] = offset[i] + location[i];
            }
        }

        void Car::getDriverPosition(float p[]){
            driver.getPosition(p);
        }

        void Person::getPosition(float p[]){
            for (int i = 0; i < 3; i++){
                p[i] = position[i];
            }
        }

        void Person::getOffset(float o[]){
            for (int i = 0; i < 3; i++){
                o[i] = offset[i];
            }
        }

    In Main:

        Car * car = new Car();
        car->move();
        float p[3];
        car->getDriverPosition(p);

    When I print driverLocation[] inside the move() function, I have actual nonzero values. When I print p[] inside main, all I get are zeros.

  • Load script with parameters

    - by Doseke
    Before, I used .jsp pages for JSF, and the following worked fine:

        <script language="javascript" src='<%= renderResponse.encodeURL(renderRequest.getContextPath() +"/resources/jsCropperUI/scriptaculous.js?load=effects,builder,dragdrop") %>' >
        </script>

    Now I'm using .xhtml with RichFaces, and the following does not work:

        <a4j:loadScript src="/resources/jsCropperUI/scriptaculous.js?load=effects,builder,dragdrop"/>

    The exception is: Static resource not found for path /resources/jsCropperUI/scriptaculous.js?load=effects,builder,dragdrop

    How can I fix this?
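
    If the JSF resource handler simply refuses to serve that path with a query string, one workaround (an assumption, not something from the question) is to fall back to injecting the script tag from plain JavaScript after the page loads; the path below is the one from the question, and the context path prefix may still need to be added:

        // Minimal sketch: append scriptaculous.js with its load parameters at runtime.
        var s = document.createElement('script');
        s.type = 'text/javascript';
        s.src = '/resources/jsCropperUI/scriptaculous.js?load=effects,builder,dragdrop';
        document.getElementsByTagName('head')[0].appendChild(s);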

  • Ways to support manually executed tests? (that can be used on a Mac)

    - by Rinzwind
    Are there any tools that can be used on a Mac to support manually executed tests? I have a number of tests that I'm executing manually and which I'm currently documenting using merely a plain text file. "Tools" can be interpreted rather loosely here; anything that's a step up from the plain text file would be useful: a template for some suitable application, supporting AppleScript scripts, a web-based system, a full-blown application ...

    Some things that would be great to have better support for (see also the example below):

    - Checking off each step while you're manually executing the test.
    - Showing the next step(s) in a small window that is always kept in front of all other windows.
    - Automatically updating the 'last tested' and 'using svn revision' info.
    - Keeping a record of all previous testing rounds (not just the last one).
    - ...

    Any suggestions for any such "tools" that can be used on a Mac? An example (faked) entry from the plain text file to give you a better idea of what I'm looking for:

        - Check that exported web pages render properly in Safari.
          Last tested: 2010-03-24
          Using SVN revision: 1000
          Steps:
          - Open a new document.
          - Add some items to the document.
          - Export the document to a web page "Test.html" in a new folder "Export Test" on the Desktop.
          - Open the web page in Safari, script:
              tell application "Finder"
                  open file "Test.html" of folder "Export Test" of desktop
              end tell
          Expected results:
          - The web page should appear properly with all items shown.
          Clean up steps:
          - Remove the folder "Export Test" from the Desktop.

    (Note: for those unaware, the snippet of AppleScript in the above can be executed from most text editing applications through the Services menu by selecting the snippet and using the application menu > Services > Script Editor > Run as AppleScript. This is quite useful to automate some steps for tests that are difficult to automate as a whole.)

  • Zend Framework - How do I make a hierarchy without it being a module?

    - by Josh
    Here is my specific issue. I want to make an api level and then, under that, you can select which method you will use. For example:

        test.com/api/rest
        test.com/api/xmlrpc

    Currently I have api mapping to a module directory. I then set up a route to make it a rest route. test.com/api is a rest route, but I would rather have it be test.com/api/rest. This would allow me to add others later.

    In Bootstrap.php:

        $front = Zend_Controller_Front::getInstance();
        $router = $front->getRouter();
        $route = new Zend_Controller_Router_Route(
            'api/:module/:controller/:id/*',
            array('module' => 'default')
        );
        $router->addRoute('api', $route);

        $restRoute = new Zend_Rest_Route($front, array(), array(
            'rest'
        ));
        $router->addRoute('rest', $restRoute);

        return $router;

    In application.ini:

        resources.frontController.moduleDirectory = APPLICATION_PATH "/modules"

    I know it will involve routes, but sometimes I find the Zend Framework documentation to be a little hard to follow. When I go to test.com/rest/controller/ it works how it should, but if I go to test.com/api/rest/ it tells me that my module is api and my controller is rest.

  • How can code in a JavaScript file get the file's URL?

    - by dalbaeb
    I need to dynamically load a CSS stylesheet into a page that's on a different domain. How can I get the complete URL of the JS file to use in the href attribute of the stylesheet? For instance, here is the structure: http://bla.com/js/script.js http://bla.com/css/style.css I want to dynamically load the stylesheet into a page http://boo.net/index.html. The problem is, I don't know the bla.com bit in advance, just the fact that the stylesheet is in ../css/ relative to the JS file. The script is, of course, included on index.html. jQuery's fine too.
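
    A common approach (a sketch, not taken from the question) is to read the script's own src while script.js is being parsed; document.currentScript gives it directly where available, and the last script element in the page is a widely used fallback at parse time. The ../css/ path can then be derived from it:

        // Minimal sketch: resolve style.css relative to the running script's own URL.
        // Assumes this code runs at parse time inside script.js, not from a later callback.
        var scriptSrc;
        if (document.currentScript) {
            scriptSrc = document.currentScript.src;
        } else {
            var scripts = document.getElementsByTagName('script');
            scriptSrc = scripts[scripts.length - 1].src;   // the script currently being parsed
        }
        var cssHref = scriptSrc.replace(/\/js\/[^\/]*$/, '/css/style.css');
        var link = document.createElement('link');
        link.rel = 'stylesheet';
        link.href = cssHref;
        document.getElementsByTagName('head')[0].appendChild(link);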

  • Scan file contents into an array of a structure.

    - by ZaZu
    Hello, I have a structure in my program that contains a particular array. I want to scan a random file with numbers and put the contents into that array. This is my code (NOTE: this is a sample from a bigger program, so I need the structure and arrays as declared). The contents of the file are basically: 5 4 3 2 5 3 4 2

        #include<stdio.h>

        #define first 500
        #define sec 500

        struct trial{
            int f;
            int r;
            float what[first][sec];
        };

        int trialtest(trial *test);

        main(){
            trial test;
            trialtest(&test);
        }

        int trialtest(trial *test){
            int z,x,i;
            FILE *fin;
            fin=fopen("randomfile.txt","r");
            for(i=0;i<5;i++){
                fscanf(fin,"%5.2f\t",(*test).what[z][x]);
            }
            fclose(fin);
            return 0;
        }

    But the problem is, whenever I run this code, I get this error:

        (25) : warning 508 - Data of type 'double' supplied where a pointer is required

    I tried adding

        do{
            for(i=0;i<5;i++){
                q=fscanf(fin,"%5.2f\t",(*test).what[z][x]);
            }
        }while(q!=EOF);

    but that didn't work either; it gives the same error. Does anyone have a solution to this problem?

  • using jquery.noConflict()

    - by user548192
    Hi All, I have two files: an HTML file and a .js file. In code.js I have written jQuery code, and in the HTML file I am including code.js as follows:

        jQuery.noConflict();
        var $jcode = jQuery;

    In code.js, I have written the following:

        jcode(document).ready(function() {
            jcode.interval(checkForms, 2000);
        });

    When I run it, it gives me an error: cannot read property of interval undefined. I think there is something wrong with my usage of noConflict. Can you please help? Thanks
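
    For comparison, a minimal sketch of the usual noConflict pattern; the alias returned by noConflict has to be the exact same name that code.js uses, and the interval helper is assumed here to come from a timer plugin (plain jQuery has no .interval), so the built-in setInterval is shown instead:

        // In the HTML file, after jquery.js (and any plugins) have loaded:
        var jcode = jQuery.noConflict();   // one alias, reused everywhere

        // In code.js, refer to the exact same alias:
        jcode(document).ready(function () {
            // setInterval is the built-in way to run checkForms every 2 seconds.
            setInterval(checkForms, 2000);
        });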

  • <script> tag cannot be self-closed?

    - by Joe Hopfgartner
    I had this code in my website:

        <script type="text/javascript" src="http://code.jquery.com/jquery-1.4.4.min.js"/>
        <script type='text/javascript' src='/lib/player/swfobject.js'></script>

    swfobject was not working (not loaded). After altering the code to:

        <script type="text/javascript" src="http://code.jquery.com/jquery-1.4.4.min.js"></script>
        <script type='text/javascript' src='/lib/player/swfobject.js'></script>

    it worked fine. The document was parsed as HTML5. I think it's funny. Okay, granted, a closed tag and a self-closing tag are not the same, although I find that ridiculous. But what I do not understand is that jQuery loads, yet the following, correctly written tag doesn't?
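
    A quick way to see what the parser actually built (a diagnostic sketch, not part of the original question) is to list the script elements and their text content from the browser console. With the self-closed variant, the second tag is typically swallowed as raw text inside the first script element, which still loads jQuery from its src but never creates an element for swfobject.js:

        // Log each script element's src and any literal text the parser left inside it.
        Array.prototype.forEach.call(document.getElementsByTagName('script'), function (s) {
            console.log(s.src || '(inline)', JSON.stringify(s.text));
        });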

  • ElanTech touchpad both keys simultaneously don't work

    - by Wojciech
    I have a huge problem with an ElanTech touchpad. Without the ElanTech driver, both keys can be used at the same time (R+L). This is useful in games like Mafia2 (I can't play without it). When I install their driver I get the gestures, scrolling etc., but I can't use both keys at the same time. It is a common problem.

    Acer Aspire v3-571G, Windows 7 x64. This didn't work at all: Synaptics 15.3.41.5.

    Is there any universal driver which will give me at least scrolling and simultaneous key usage?

  • Adding an element to a multidimensional array

    - by stef
    How can I loop through the array below and add an element to each sub-array, with key "url_slug" and value "foo"? I tried array_push but that gets rid of the key names (it seems?). Doing a foreach($array as $k => $v) doesn't do it either, I think. The new array should be exactly the same, only having 4 elements per sub-array instead of 3, with the key / value above.

        Array
        (
            [0] => Array
                (
                    [name_en] => Test 5
                    [url_name_nl] => test-5
                    [cat_name] => mobile
                )
            [1] => Array
                (
                    [name_en] => Test 10
                    [url_name_nl] => test-10
                    [cat_name] => mobile
                )
            [2] => Array
                (
                    [name_en] => Test 25
                    [url_name_nl] => test-25
                    [cat_name] => mobile
                )
        )

    EDIT: full working solution. A little more complex than originally described.

        foreach ($prods as $key => &$value) {
            if($key == "cat_name") $slug = $value['cat_name'];
            $url_slug = $this->lang->line($slug);
            $value['url_slug'] = $url_slug;
        }

  • HP Presario CQ 61-322ER (VV884EA) Wi-Fi hang up! [closed]

    - by qgrabber
    Possible Duplicate: HP Presario CQ 61-322ER (VV884EA) Wi-Fi hang up!

    I have a new laptop and don't have Windows XP drivers for it. I found that it contains the Broadcom BCM4310 chip, but when I install any Broadcom driver my laptop hangs up while installing the bcm5*.sys driver. Only the power-off button has any effect. After a reboot, the device list (Device Manager) contains the Broadcom WLAN adapter, but it is marked as disabled due to some hardware error! Also, if I disable the device first and then install the driver, all is OK! But when I try to enable it, Windows hangs up anyway (no speaker beep, no mouse input, no keyboard input - nothing). What is the solution?

  • asset_packing tiny_mce files

    - by haries
    I use the inplacericheditor plugin and tiny_mce. Before using asset_packager, this is how I included the files, and they worked well:

        <script src="/javascripts/patch_inplaceeditor_1-8-2.js" type="text/javascript"></script>
        <script src="/javascripts/patch_inplaceeditor_editonblank_1-8-2.js" type="text/javascript"></script>
        <script src="/javascripts/tiny_mce/tiny_mce.js" type="text/javascript"></script>
        <script src="/javascripts/tiny_mce_init.js" type="text/javascript"></script>
        <script src="/javascripts/inplacericheditor.js" type="text/javascript"></script>

    My asset_packager.yml section looks like this for the above files:

        tinyeditor:
          - patch_inplaceeditor_1-8-2
          - patch_inplaceeditor_editonblank_1-8-2
          - tiny_mce/tiny_mce
          - tiny_mce_init
          - tiny_mce/langs/en
          - tiny_mce/themes/advanced/editor_template
          - tiny_mce/themes/advanced/langs/en
          - tiny_mce/plugins/save/editor_plugin
          - tiny_mce/plugins/autoresize/editor_plugin
          - tiny_mce/plugins/paste/editor_plugin
          - tiny_mce/plugins/preview/editor_plugin
          - tiny_mce/plugins/table/editor_plugin
          - tiny_mce/plugins/contextmenu/editor_plugin
          - tiny_mce/plugins/emotions/editor_plugin
          - inplacericheditor

    When I include the asset_packaged file and load the page (in production) I get the following errors:

        "Ajax.InPlaceEditor is undefined"
        "Ajax.InPlaceRichEditor is not a constructor"

    Can anyone shed some light on where I am going wrong or share a better way to asset_package tinymce? Thanks!

  • USB 3.0 port with USB 3.0 device in Ubuntu 12.10

    - by fernando garcía
    When I try to connect a USB 3.0 device in Ubuntu 12.10 (ASUS K55VD, kernel 3.5.0-19-generic #30-Ubuntu SMP), the system says:

        [ 74.747832] hub 3-0:1.0: unable to enumerate USB device on port 1
        [ 74.931957] usb 4-1: new SuperSpeed USB device number 2 using xhci_hcd
        [ 74.949390] usb 4-1: New USB device found, idVendor=05e3, idProduct=0731
        [ 74.949396] usb 4-1: New USB device strings: Mfr=0, Product=1, SerialNumber=2
        [ 74.949400] usb 4-1: Product: USB Storage
        [ 74.949403] usb 4-1: SerialNumber: 0000000000000033
        [ 75.033327] usbcore: registered new interface driver uas
        [ 75.038548] Initializing USB Mass Storage driver...
        [ 75.038651] scsi7 : usb-storage 4-1:1.0
        [ 75.038700] usbcore: registered new interface driver usb-storage
        [ 75.038701] USB Mass Storage support registered.

    but it does not recognize the device, and the disk applications (gparted, nautilus) act as if nothing had been connected. I have checked other questions, but either they have no answers or they are about previous Ubuntu versions with 3.0.x kernels. A USB 2.0 device will work in the USB 3.0 ports. A USB 3.0 device will work (at USB 2.0 speeds) in the USB 2.0 ports. The problem, as I wrote, is between USB 3.0 devices and USB 3.0 ports. I have my USB 3.0 ports configured without legacy support via the BIOS (the way they should be, I suppose). But I have also tried configuring them with XHCI Preboot mode disabled. Has anyone solved a similar problem? Thanks in advance.

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were:

- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.

Below is the test result chart. Some thoughts, but not guidance or best practice:

- Parallel gets better performance than series.
- No significant performance improvement between parallel 4 threads and 8 threads.
- Transform with binaries provides better performance than base64.
- In all cases, chunk size in 1MB - 2MB gets better performance.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
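
One thing the article lists as a benefit of block upload but never sketches is retrying a failed chunk (a building block for pause-resume). Below is a minimal sketch of how the putBlocks functions above could be wrapped with a simple retry; it assumes each chunk-upload function reports a failure through the first (error) argument of its callback, which the article's version does not yet do:

    // Wrap a putBlocks-style task so that, on error, it is retried up to
    // `attempts` additional times before the error is passed on.
    function withRetry(task, attempts) {
        return function (callback) {
            var tryOnce = function (remaining) {
                task(function (error, result) {
                    if (error && remaining > 0) {
                        tryOnce(remaining - 1);   // retry the same chunk
                    } else {
                        callback(error, result);  // success, or out of retries
                    }
                });
            };
            tryOnce(attempts);
        };
    }

    // Usage: wrap each chunk-upload function before handing them to async.js.
    var retryingBlocks = putBlocks.map(function (fn) { return withRetry(fn, 3); });
    async.parallelLimit(retryingBlocks, 4, function (error, result) {
        // commit the block list as in the article
    });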

  • How do I connect my NetXtreme BCM5755M Gigabit Ethernet PCI Express in my Dell Latitude D630?

    - by Stanton.Sculpture
    Trying to get my 09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5755M Gigabit Ethernet PCI Express (rev 02) working with my wifi. I just installed Raring Ringtail on this dell latitude D630 and I can't get it to connect without a wifi dongle. This is what I got when I typed sudo lshw -c network: *-network description: Ethernet interface product: NetXtreme BCM5755M Gigabit Ethernet PCI Express vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:09:00.0 logical name: eth0 version: 02 serial: 00:21:70:98:04:32 capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.128 firmware=5755m-v3.29 latency=0 link=no multicast=yes port=twisted pair resources: irq:44 memory:fe8f0000-fe8fffff *-network description: Wireless interface physical id: 2 bus info: usb@2:1 logical name: wlan0 serial: 7c:dd:90:11:a0:10 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2800usb driverversion=3.8.0-31-generic firmware=0.29 ip=10.0.0.8 link=yes multicast=yes wireless=IEEE 802.11bgn Also, when I go to additional drivers in the software and updates settings, no proprietary drivers show up. I've tried sudo apt-get install b43-fwcutter firmware-b43-installer because it worked on a bunch of old Dell laptops that I converted over before, but it didn't work on this one. Is this driver even compatible with wifi? Please help.

  • Scanner that worked with Ubuntu 10.4 cannot be found by 13.4

    - by stevecoh1
    My computer previously ran Ubuntu 10.4. After upgrading to 13.4, my Epson scanner can no longer be found by the system. Following the documentation, I find the following:

        $ sane-find-scanner
        # sane-find-scanner will now attempt to detect your scanner. If the
        # result is different from what you expected, first make sure your
        # scanner is powered up and properly connected to your computer.

        # No SCSI scanners found. If you expected something different, make sure that
        # you have loaded a kernel SCSI driver for your SCSI adapter.

        could not open USB device 0x046d/0x082b at 001:007: Access denied (insufficient permissions)
        ...
        # No USB scanners found. If you expected something different, make sure that
        # you have loaded a kernel driver for your USB host controller and have setup
        # the USB system correctly. See man sane-usb for details.
        ...

    If I instead run sudo sane-find-scanner, I get:

        $ sudo sane-find-scanner
        # sane-find-scanner will now attempt to detect your scanner. If the
        # result is different from what you expected, first make sure your
        # scanner is powered up and properly connected to your computer.

        # No SCSI scanners found. If you expected something different, make sure that
        # you have loaded a kernel SCSI driver for your SCSI adapter.

        found USB scanner (vendor=0x04b8 [EPSON], product=0x0131 [EPSON Scanner]) at libusb:001:009
        could not fetch string descriptor: Pipe error
        could not fetch string descriptor: Pipe error
        # Your USB scanner was (probably) detected. It may or may not be supported by
        # SANE. Try scanimage -L and read the backend's manpage.

    So what do I do? scanimage -L does nothing for me and I don't know what the "backend's manpage" might be. It seems likely that this is a permissions issue since the scanner can be found as root, but I don't know how to solve it. Can someone help?

  • What is ODBC?

    According to Microsoft, ODBC is a specification for a database API. This API is database and operating system agnostic because the primary goal of the ODBC API is to be language-independent. Additionally, the open functions of the API are created by the manufacturers of DBMS-specific drivers. Developers can use these exposed functions from within their own custom applications to communicate with a DBMS through the language-independent drivers.

    ODBC Advantages

    - Multiple ODBC drivers for each DBMS. Example: Oracle's ODBC Driver, Merant's Oracle Driver, Microsoft's Oracle Driver.
    - ODBC drivers are constantly updated for the latest data types.
    - ODBC allows for more control when querying.
    - ODBC allows for isolation levels.

    ODBC Disadvantages

    - ODBC requires a DSN.
    - ODBC is the proxy between an application and a database.
    - ODBC is dependent on third-party drivers.

    ODBC transaction isolation levels are related to and limited by the transaction management capabilities of the data source. Transaction isolation levels:

    - READ UNCOMMITTED: data is allowed to be read prior to the committing of a transaction.
    - READ COMMITTED: data is only accessible after a transaction has completed.
    - REPEATABLE READ: the same data value is read during the entire transaction.
    - SERIALIZABLE: transactions have no effect on other transactions.

  • Java crashes on lubuntu but not Ubuntu

    - by Echogene
    I have lubuntu and Ubuntu partitions on my drive. I've been having an interesting time with the new lubuntu partition. I've encountered strange things with the game Minecraft, Java and graphics drivers on the lubuntu partition. Firstly, I'll say that Minecraft runs fine at about 60fps on the Ubuntu partition with the latest drivers. (This is lower than it should be as it's a pretty decent graphics card [Radeon HD 5700].) When I first started lubuntu, I tried to see if I could get Minecraft running on Java. Java crashed when loading the main game graphics on both Sun and OpenJDK without proprietary drivers. Java also crashed on both Javas with proprietary drivers after the necessary restart. However, after disabling (with 'remove' button) the proprietary drivers with jockey-gtk in the session after the restart to install the drivers, Minecraft ran very well at ~120fps. This didn't continue after another restart, when it ran at 9fps. After failing thereafter on lubuntu to get it working at 15fps, I tried reinstalling lubuntu and installed the exact same driver (the latest one, not the one appearing on jockey) and Java versions as on Ubuntu. That is, now Ubuntu and lubuntu have the same graphics driver and Java version. Minecraft still crashes in the same way on lubuntu but works fine on Ubuntu. I would appreciate any explanation for any of these events. What differences between lubuntu and Ubuntu could cause this? Edit: After installing the 32bit driver version on lubuntu (seeing as lubuntu is 32bit), I have Java "working" for Minecraft. However, it is at <15fps again and it can't log in to servers as it takes too long.

  • hp pavilion g6 1250 with a BCM 4313 doesn't see any wireless networks

    - by Ahmed Kotb
    I have tried using Ubuntu 10.04 and Ubuntu 11.10 and both have the same problem: the driver is detected by the Additional Drivers wizard, and after installation Ubuntu can't see any network except one wireless network which is not mine (and I can't connect to it as it is secured). There are plenty of wireless networks around me but Ubuntu can't detect them, and if I try to connect to one of them as if it were hidden, the connection times out.

    The command lspci -nvn | grep -i net gives:

        04:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller [14e4:4727] (rev 01)
        05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05)

    iwconfig gives:

        lo        no wireless extensions.
        eth0      no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated  Tx-Power=19 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off

    I guess it is something related to the Broadcom driver, but I don't know; any help will be appreciated.

    UPDATE: OK, I installed a new copy of 11.10 to remove the effect of any trials I have made, and I followed the link (http://askubuntu.com/q/67806) as suggested. All I have done now is try the command lsmod | grep brc, and it gave me the following:

        brcmsmac              631693  0
        brcmutil               17837  1 brcmsmac
        mac80211              310872  1 brcmsmac
        cfg80211              199587  2 brcmsmac,mac80211
        crc_ccitt              12667  1 brcmsmac

    Then I blacklisted all the other drivers as mentioned in the link; the wireless is still disabled. In the last installation, installing the Broadcom STA driver from the additional drivers enabled the menu, but as I have said before it wasn't able to connect or even get a list of available networks. So what should I do now? The output of the command rfkill list all:

        0: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no

  • The ship "shudders" in scrolling Asteroids

    - by Ciaran
    In my Asteroids game the user can scroll through space. When scrolling, the ship is drawn in the centre of the window. I use interpolation. I scroll the window using glOrtho, centering it around the centre of the ship. On my first machine (7 years old, Windows XP, NVIDIA), I am doing 50 updates and 76 frames per second. This is smooth. My other machine, an old Compaq laptop (Pentium III) with Linux and the Radeon OpenGL driver, delivers 50 updates and 30 frames per second. The ship regularly seems to "shudder" back and forth when at maximum thrust. When you position the mouse cursor beside the ship it is obvious that its relative position in the window changes. Also, the stars seem blurred into short "lines". Playing the game in non-scrolling mode, the ship moves within the window, glOrtho is therefore not called repeatedly, and there is no problem. I suspect a bug in my positioning of the ship and the window, but I have dumped out these values and they seem to only go forward, not forward-back-forward. The driver does support double buffering. I guess if it is my bug I need to slow the frame rate down to debug properly. My question: is this an obvious driver bug, or is the slower machine uncovering a bug in my code? If so, some debugging tips would be appreciated. I am drawing in world co-ordinates and letting OpenGL do the scaling and translation, so if I had a quick way of verifying what pixel co-ordinates OpenGL produces for the ship centre, that would help clarify this.

  • cuda install in ubuntu13.10?

    - by hexiangpeng
    The cuda_install_.log shows:

        ERROR: Unable to build the NVIDIA kernel module.
        ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details.
        You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com.
        The driver installation is unable to locate the kernel source. Please make sure that the kernel source packages are installed and set up correctly.

    and the other .log shows:

        /tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel/nv-i2c.c: In function 'nv_i2c_del_adapter':
        /tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel/nv-i2c.c:327:14: error: void value not ignored as it ought to be
            osstatus = i2c_del_adapter(pI2cAdapter);
                       ^
        make[3]: * [/tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel/nv-i2c.o] ?? 1
        make[2]: * [module/tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel] ?? 2
        NVIDIA: left KBUILD.
        nvidia.ko failed to build!
        make[1]: * [module] ?? 1
        make: * [module] ?? 2
        -> Error.
        ERROR: Unable to build the NVIDIA kernel module.
        ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com.

    I don't understand.

  • JDBC Connection Pools in Glassfish

    - by Dana Singleterry
    I've been attempting to configure Glassfish 3.1.2.2 for ADF 11g and the need arose to create a JDBC connection pool to my Oracle XE 11g database. While this is really very trivial, there were no samples of how to do this, and the documentation, while good, rarely provides concrete examples. After fumbling around for a few minutes searching for an example I gave up and figured it out on my own. Here are the steps for any of you that may be in need. This can be done either via the Glassfish command-line tool asadmin or through the admin console. I'm doing this through the admin console.

    - Start Glassfish and connect to the admin console with the credentials you defined at installation: http://localhost:4848
    - Navigate to Resources | JDBC | JDBC Connection Pools and select New. Be sure to enter Resource Type & Datasource Classname under the General Settings tab. You can go with the defaults for Pool Settings etc.
    - Go to the Additional Properties tab and create username, password, and url properties with the respective values.
    - Navigate to Resources | JDBC | JDBC Resources and select New. Be sure to enter the JNDI Name and select the Pool Name for the JDBC connection pool you created previously.
    - Navigate to Configurations | server-config | JVM Settings and select the JVM Options tab. Add the highlighted values: -Doracle.jdbc.J2EE13Compliant=true is used to make sure the driver behaves in a JEE-compliant manner.
    - To integrate the JDBC driver into a GlassFish Server domain, copy the JAR files into the domain-dir/lib directory, then restart the server. The JAR file for the Oracle 11 database driver is ojdbc6dms.jar.

    An upcoming entry will demonstrate configuring Glassfish for Oracle ADF Applications.

  • Syslog/kernlog filling up with "did not claim interface N before use"

    - by Wayne Werner
    As I discovered when asking this question, it appears that demond_nscan is trying to use a device without claiming it. And it does so hundreds of times per second, apparently. This makes kern.log and syslog huge (100GB). In this particular case the problem is directly a result of some Lexmark drivers that were installed (found at /usr/local/lexmark/unix_scan_drivers/bin/demond_nscan). Here are a few things I know:

    - The drivers are for an all-in-one printer/scanner device.
    - There was a previous Lexmark printer-only driver that was installed with CUPS. This driver was the one for CUPS systems, and I think that it automatically added it to the list of printers in CUPS.
    - The issue started spamming kern/syslog only after these drivers were installed, using the Lexmark installers.

    While googling around I found this thread that's not completely related, but it does mention that this might happen when two drivers try to control the same device at the same time. How can I resolve this issue so that I either have only one driver, or get the driver to claim the device before use?
