Search Results

Search found 18209 results on 729 pages for 'loop device'.


  • iptables unresolved dependencies

    - by tertle
    I'm trying to set up OpenVPN Access Server on a VPS running Ubuntu 9.10 for a friend so she can play games from her uni campus. The problem is that I keep running into this error when trying to start OpenVPN:

        Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=1]:
        ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
         'FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
         'iptables-restore: line 46 failed']:
        internet/base:1175,internet/base:752,internet/process:45,internet/process:306,internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,svc/pp:116,svc/svcnotify:26,internet/defer:238,internet/defer:307,internet/defer:323,sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32
        service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
        service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
        service failed to start due to unresolved dependencies: set(['iptables_openvpn'])

    I've already had my provider enable the TUN/TAP device driver, and I checked this using "# cat /dev/net/tun", which returned "File descriptor in bad state", which I believe means it's enabled. After extensive searching, I've been unable to find any solution other than people suggesting to make sure the TUN/TAP device driver is enabled. Any ideas on how to solve my issue? I'm not very experienced with Linux and I feel in over my head here, so any advice is greatly appreciated. --edit-- Just stumbled across this. Not sure how I missed it earlier. I believe I need to get "modprobe ipt_mark" and "modprobe ipt_MARK" run on the host node by my provider. Is this correct, and something I should try to get done?
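
    A quick sanity check, sketched on the assumption that you have shell access inside the container; on a container-based VPS like this (the 2.6.18-028stab kernel is an OpenVZ one) the kernel modules themselves can only be loaded on the host node by the provider:

        # confirm the TUN/TAP device node is present
        ls -l /dev/net/tun

        # see whether the iptables mark modules are visible from inside the container
        lsmod | grep -i 'ipt_mark\|xt_mark\|tun'

        # what the provider would need to run on the host node (not inside the container):
        # modprobe ipt_mark
        # modprobe ipt_MARK

    If the lsmod output is empty, that supports the theory that the host node is missing the ipt_mark/ipt_MARK modules, and asking the provider to load them is the right next step.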

    Read the article

  • Is it OK to use dynamic typing to reduce the amount of variables in scope?

    - by missingno
    Often, when I am initializing something, I have to use a temporary variable, for example:

        file_str = "path/to/file"
        file_file = open(file_str)

    or

        regexp_parts = ['foo', 'bar']
        regexp = new RegExp( regexp_parts.join('|') )

    However, I like to reduce the scope of my variables to the smallest scope possible, so there are fewer places where they can be (mis-)used. For example, I try to use for(var i ...) in C++ so the loop variable is confined to the loop body. In these initialization cases, if I am using a dynamic language, I am often tempted to reuse the same variable in order to prevent the initial (and now useless) value from being used later in the function:

        file = "path/to/file"
        file = open(file)

        regexp = ['...', '...']
        regexp = new RegExp( regexp.join('|') )

    The idea is that by reducing the number of variables in scope I reduce the chances of misusing them. However, this sometimes makes the variable names look a little weird, as in the first example, where "file" refers to a filename. I think perhaps this would be a non-issue if I could use non-nested scopes:

        begin scope1
          filename = ...
          begin scope2
            file = open(filename)
          end scope1
          //use file here
          //can't use filename by accident
        end scope2

    but I can't think of any programming language that supports this. What rules of thumb should I use in this situation? When is it best to reuse the variable? When is it best to create an extra variable? What other ways do we solve this scope problem?

    Read the article

  • Unable to mount an LVM Hard-drive after upgrade

    - by Bruce Staples
    I imagine this is a basic gotcha, but I can't see it. I have a system with two physical hard drives. The boot system (/dev/sda) was running 10.04, and the second drive (/dev/sdb) was just a mounted filesystem. I did a clean load of Ubuntu 12.04 overwriting /dev/sda (not an upgrade) and now cannot mount the second drive, so I do not know what to enter into fstab. I had expected to use:

        /dev/sdb /tera ext4 defaults 0 2

    But even manual mounting fails (I have also tried various "-t" options on the off chance!):

        sudo mount -t ext4 /dev/sdb1 /tera
        mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Output from disk queries indicates that it is a Linux LVM and still a healthy disk:

        sudo lshw -C disk
        *-disk:0
            description: ATA Disk
            product: WDC WD5000AACS-0
            vendor: Western Digital
            physical id: 0
            bus info: scsi@2:0.0.0
            logical name: /dev/sda
            version: 01.0
            serial: WD-WCASU1401098
            size: 465GiB (500GB)
            capabilities: partitioned partitioned:dos
            configuration: ansiversion=5 signature=00015a55
        *-disk:1
            description: ATA Disk
            product: WDC WD10EADS-00L
            vendor: Western Digital
            physical id: 1
            bus info: scsi@3:0.0.0
            logical name: /dev/sdb
            version: 01.0
            serial: WD-WCAU47836304
            size: 931GiB (1TB)
            capabilities: partitioned partitioned:dos
            configuration: ansiversion=5

        sudo fdisk -l
        Disk /dev/sda: 500.1 GB, 500106780160 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976771055 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00015a55

        Device Boot        Start         End      Blocks   Id  System
        /dev/sda1   *       2048   972580863   486289408   83  Linux
        /dev/sda2      972582910   976769023     2093057    5  Extended
        /dev/sda5      972582912   976769023     2093056   82  Linux swap / Solaris

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Device Boot        Start         End      Blocks   Id  System
        /dev/sdb1              1  1953525167   976762583+  8e  Linux LVM

    LVM doesn't appear to be an option for mount or fstab. There is also a SMART data screenshot from Disk Utility (not included here).
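
    Since /dev/sdb1 is typed as Linux LVM (8e), the ext4 data almost certainly lives inside a logical volume rather than directly on the partition, which is why mounting /dev/sdb1 fails. A rough sketch of how it could be activated and mounted; the volume-group and logical-volume names below (vg0, data) are placeholders, so substitute whatever lvs actually reports:

        sudo apt-get install lvm2      # if the LVM tools are not already present
        sudo pvscan                    # should list /dev/sdb1 as a physical volume
        sudo vgscan
        sudo vgchange -ay              # activate any volume groups that were found
        sudo lvs                       # note the VG/LV names, e.g. vg0 / data
        sudo mount /dev/vg0/data /tera

    Once that works, an fstab line along the lines of "/dev/mapper/vg0-data /tera ext4 defaults 0 2" should survive reboots.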

    Read the article

  • Using SPServices &amp; jQuery to Find My Stuff from Multi-Select Person/Group Field

    - by Mark Rackley
    Okay… quick blog post for all you SPServices fans out there. I needed to quickly write a script that would return all the tasks currently assigned to me.  I also wanted it to return any task that was assigned to a group I belong to. This can actually be done with a CAML query, so no big deal, right?  The rub is that the “assigned to” field is a multi-select person or group field. As far as I know (and I actually know so little) you cannot just write a CAML query to return this information. If you can, please leave a comment below and disregard the rest of this blog post… So… what’s a hacker to do? As always, I break things down to their most simple components (I really love the KISS principle and would get it tattooed on my back if people wouldn’t think it meant “Knights In Satan’s Service”. You really gotta be an old far to get that reference).  Here’s what we’re going to do: Get currently logged in user’s name as it is stored in a person field Find all the SharePoint groups the current user belongs to Retrieve a set of assigned tasks from the task list and then find those that are assigned to current  user or group current user belongs to Nothing too hairy… So let’s get started Some Caveats before I continue There are some obvious performance implications with this solution as I make a total of four SPServices calls and there’s a lot of looping going on. Also, the CAML query in this blog has NOT been optimized. If you move forward with this code, tweak it so that it returns a further subset of data or you will see horrible performance if you have a few hundred entries in your task list. Add a date range to the CAML or something. Find some way to limit the results as much as possible. Lastly, if you DO have a better solution, I would like you to share. Iron sharpens iron and all…   Alright, let’s really get started. Get currently logged in user’s name as it is stored in a person field First thing we need to do is understand how a person group looks when you look at the XML returned from a SharePoint Web Service call. It turns out it’s stored like any other multi select item in SharePoint which is <id>;#<value> and when you assign a person to that field the <value> equals the person’s name “Mark Rackley” in my case. This is for Windows Authentication, I would expect this to be different in FBA, but I’m not using FBA. If you want to know what it looks like with FBA you can use the code in this blog and strategically place an alert to see the value.  Anyway… I need to find the name of the user who is currently logged in as it is stored in the person field. This turns out to be one SPServices call: var userName = $().SPServices.SPGetCurrentUser({                     fieldName: "Title",                     debug: false                     }); As you can see, the “Title” field has the information we need. I suspect (although again, I haven’t tried) that the Title field also contains the user’s name as we need it if I was using FBA. Okay… last thing we need to do is store our users name in an array for processing later: myGroups = new Array(); myGroups.push(userName); Find all the SharePoint groups the current user belongs to Now for the groups. How are groups returned in that XML stream?  Same as the person <ID>;#<Group Name>, and if it’s a mutli select it’s all returned in one big long string “<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>”.  So, how do we find all the groups the current user belongs to? 
This is also a simple SPServices call. Using the “GetGroupCollectionFromUser” operation we can find all the groups a user belongs to. So, let’s execute this method and store all our groups. $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });         }     }); So, all we did in the above code was execute the “GetGroupCollectionFromUser” operation and look for the each “Group” node (row) and store the name for each group in our array that we put the user’s name in previously (myGroups). Now we have an array that contains the current user’s name as it will appear in the person field XML and  all the groups the current user belongs to. The Rest Now comes the easy part for all of you familiar with SPServices. We are going to retrieve our tasks from the Task list using “GetListItems” and look at each entry to see if it belongs to this person. If it does belong to this person we are going to store it for later processing. That code looks something like this: // get list of assigned tasks that aren't closed... *modify the CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                        //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                             //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                              
   var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 }); Some notes on why I did certain things and additional caveats. You will notice in my code that I’m doing an AssignedToString.indexOf(GroupName) to see if the task belongs to the person. This could possibly return bad results if you have SharePoint Group names that are named in such a way that the “IndexOf” returns a false positive.  For example if you have a Group called “My Users” and a group called “My Users – SuperUsers” then if a user belonged to “My Users” it would return a false positive on executing “My Users – SuperUsers”.IndexOf(“My Users”). Make sense? Just be aware of this when naming groups, we don’t have this problem. This is where also some fine-tuning can probably be done by those smarter than me. This is a pretty inefficient method to determine if a task belongs to a user, I mean what if a user belongs to 20 groups? That’s a LOT of looping.  See all the opportunities I give you guys to do something fun?? Also, why am I storing my values in an array instead of just writing them out to a Div? Well.. I want to pass my data to a jQuery library to format it all nice and pretty and an Array is a great way to do that. When all is said and done and we put all the code together it looks like:   $(document).ready(function() {         var userName = $().SPServices.SPGetCurrentUser({                     fieldName: "Title",                     debug: false                     });         myGroups = new Array();     myGroups.push(userName );       $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });                      // get list of assigned tasks that aren't closed... 
*modify this CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                         //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                            //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                                 var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 });       }    });  }); Final Thoughts So, there you have it. Take it and run with it. Make it something cool (and tell me how you did it). Another possible way to improve performance in this scenario is to use a DVWP to display the tasks and use jQuery and the “myGroups” array from this blog post to hide all those rows that don’t belong to the current user. I haven’t tried it, but it does move some of the processing off to the server (generating the view) so it may perform better.  As always, thanks for stopping by… hope you have a Merry Christmas…

    Read the article

  • Some info about SD card partitions after the use of the dd statement, and some other doubts

    - by AndreaNobili
    I am not very experienced with Linux, and I have the following situation that causes me some doubts. I have written RaspBian (the Raspberry Pi Linux distribution) to an SD card using Ubuntu's dd command:

        sudo dd if=2014-01-07-wheezy-raspbian.img of=/dev/sdb bs=1024

    So if I now run fdisk -l, I see that I have 2 partitions related to my SD card, which are the following:

        Device Boot     Start       End    Blocks  Id  System
        /dev/sdb1        8192    122879     57344   c  W95 FAT32 (LBA)
        /dev/sdb2      122880   5785599   2831360  83  Linux

    And now the first doubt: the dd command created two partitions on the SD card:
    1) /dev/sdb1, a little FAT32 partition (what does (LBA) mean?)
    2) /dev/sdb2, a larger Linux ext3 partition
    OK... the doubt is: why did it also create a FAT32 partition and not only a Linux ext3 partition? Also, if I go into my file manager I can see a device (related to my SD card) in the devices list that contains some RaspBian files, and its properties (screenshots not included here) suggest that this is the small FAT32 partition. So now I have the following doubts: if it is the small FAT32 partition, what does it contain? The RaspBian boot files, or what? And why, in the devices list, do I see only the FAT32 partition and not also the Linux one (/dev/sdb2)? To see it, do I have to mount it? How? Thanks
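
    To answer the "how do I see the Linux partition" part in practice, a small sketch (the mount point name is just an example):

        sudo mkdir -p /mnt/raspbian-root
        sudo mount /dev/sdb2 /mnt/raspbian-root    # mount the ext partition; mount detects the filesystem type
        ls /mnt/raspbian-root                      # should show the Raspbian root filesystem (/etc, /home, ...)
        sudo umount /mnt/raspbian-root             # unmount before removing the card

    The small FAT32 partition is the one the Raspberry Pi firmware reads at power-on (it holds the boot files), which is also why it is the one most desktops show automatically.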

    Read the article

  • Matrix Multiplication with C++ AMP

    - by Daniel Moth
    As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now. The code for serial and accelerated Consider a naïve (or brute force) serial implementation of matrix multiplication  0: void MatrixMultiplySerial(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W) 1: { 2: for (int row = 0; row < M; row++) 3: { 4: for (int col = 0; col < N; col++) 5: { 6: float sum = 0.0f; 7: for(int i = 0; i < W; i++) 8: sum += vA[row * W + i] * vB[i * N + col]; 9: vC[row * N + col] = sum; 10: } 11: } 12: } We notice that each loop iteration is independent from each other and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you included at the top of your file #include <amp.h> 13: void MatrixMultiplySimple(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W) 14: { 15: concurrency::array_view<const float,2> a(M, W, vA); 16: concurrency::array_view<const float,2> b(W, N, vB); 17: concurrency::array_view<concurrency::writeonly<float>,2> c(M, N, vC); 18: concurrency::parallel_for_each(c.grid, 19: [=](concurrency::index<2> idx) restrict(direct3d) { 20: int row = idx[0]; int col = idx[1]; 21: float sum = 0.0f; 22: for(int i = 0; i < W; i++) 23: sum += a(row, i) * b(i, col); 24: c[idx] = sum; 25: }); 26: } First a visual comparison, just for fun: The beginning and end is the same, i.e. lines 0,1,12 are identical to lines 13,14,26. The double nested loop (lines 2,3,4,5 and 10,11) has been transformed into a parallel_for_each call (18,19,20 and 25). The core algorithm (lines 6,7,8,9) is essentially the same (lines 21,22,23,24). We have extra lines in the C++ AMP version (15,16,17). Now let's dig in deeper. Using array_view and extent When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(direct3d) function. So we had a choice of copying the data to the the concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h – here we used the latter (lines 15,16,17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18) and the data will get copied on demand for us to the accelerator. Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one: extent<2> e(M, W); array_view<const float, 2> a(e, vA); In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two dimensional, i.e a matrix. When we were using a vector object we could not do that, and instead we had to track via additional unrelated variables the dimensions of the matrix (i.e. 
with the integers M and W) – aren't you loving C++ AMP already? Note that the const before the float when creating a and b, will result in the underling data only being copied to the accelerator and not be copied back – a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, only copy it back. The kernel dispatch On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel. The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a 2-dimensional manner we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (a grid) since the array_view object helpfully exposes that as a property. Note that instead of c.grid we could have written grid<2>(c.extent) or grid<2>(extent<2>(M, N)) – the result is the same in that we have specified M*N threads to execute our lambda. The second argument is a restrict(direct3d) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional and as covered in the previous posts it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view. The kernel itself The lambda body (lines 20-24), or as some may say, the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads and we can use those threads to index into the two input array_views (a,b) and write results into the output array_view ( c ). The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a,b,c versus how we index into vA,vB,vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner). I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms to break down the index into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may i,j,k or x,y,z, or M,N or whatever. Also note that we could have written line 24 as: c(idx[0], idx[1])=sum  or  c(row, col)=sum instead of the simpler c[idx]=sum Targeting a specific accelerator Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on. 
So there would be some code like this anywhere before line 18: vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators(); accelerator acc = accs[0]; …and then we would modify line 18 so we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become: concurrency::parallel_for_each(acc.default_view, c.grid, ...and the rest of your code remains the same… how simple is that? Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Advancing my Embedded knowledge.....with a CS degree.

    - by Mercfh
    So I graduated last December with a B.S. in Computer Science from a pretty good, well-known engineering college. However, towards the end I realized that I actually like assembly/lower-level C programming more than I enjoy higher-level, abstracted OO stuff. (For example, I programmed my own device drivers for USB devices in Linux, stuff like that.) But we really didn't concentrate much on that in college; perhaps an EE/CE degree would've been better, but I knew the classes... and things weren't THAT much different. I've messed around with Atmel AVRs/Arduino stuff (mostly robotics) and Linux kernels/device drivers, but I really want to enhance my skills and maybe one day get a job doing embedded stuff. (I have a job now; it's an entry-level software dev/tester job. It's a good job, but not exactly where my passion lies.) (I'm pretty good with C and certain assembly languages for specific microcontrollers.) Is this even possible with a CS degree, or am I screwed? (Since technically my degree usually doesn't involve much embedded stuff.) If I'm NOT screwed, then what should I be studying/learning? How would I even go about it? I guess I could eventually say "Experienced with XXXX microcontrollers/ASM/etc...", but still, it wouldn't be the same as having a CE/EE degree. Also, going back to college isn't an option, just FYI. edit: Any book recommendations for getting used to this stuff? I have ARM System-on-Chip Architecture (2nd edition); it's good... for ARM stuff lol

    Read the article

  • Introducing the New Boot Framework in CE 7

    - by Kate Moss' Open Space
    CE 7 introduces a new boot loader framework, BLDR (platform\common\src\common\bldr\). Some people like its power and flexibility; others may feel it is too complicated as a boot loader framework. Whichever camp you are in, it is already there, so let's take a look at its features. Unlike the previous BL framework (which CE 7 still provides in platform\common\src\common\boot\), a monolithic library, the new framework has more architectural structure. It not only defines the main body but also provides rich components, such as file systems (BinFS/FAT), download transports, display, logging, and block devices: BIOS INT13, FAL, IDE, Flash, etc. Note that in the block device category, FAL is for the legacy FMD/FAL and Flash is for the latest MSFlash. Some of you may have found that an MSFlash MDD/PDD-compatible partition is hard to create in a boot loader, and now there is a clean solution! (Since this is a big topic, I will introduce it in a future post.) Today, I am going to show you some basic helper components: the image loading functions. When the OS image is stored on a block device, it can be in file format (say, your NK.BIN in a FAT volume) or in a RAW format (say, an image programmed to a BinFS partition). For the first one you can use BootFileSystemReadBinFile (platform\common\src\common\bldr\fileSystem\utils\fileSystemReadBinFile.c), and you can use BootBlockLoadBinFsImage (platform\common\src\common\bldr\block\utils\loadBinFs.c) to load from a partition. Need sample code? No problem: BootLoaderLoadOs in platform\cepc\src\boot\bldr\loados.c provides a perfect example.

    Read the article

  • Unable To Get Sound Working to External Speaker on HP TouchSmart 320 on 11.04 or 11.10

    - by Schof
    This is an HP TouchSmart 320, model number 320-1200m. I'm using Ubuntu 11.04. Hardware information:

        root@hp320:/home/mpower# aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: Generic [HD-Audio Generic], device 0: STAC92xx Analog [STAC92xx Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

        root@hp320:~$ cat /proc/asound/card0/codec#0 | grep Codec
        Codec: IDT 92HD91BXX

    Sound to the headphone jack works properly, but sound to the built-in speakers does not work. I have installed Windows, and with Windows 7 installed all audio hardware works properly, so this isn't a hardware fault. I've looked at https://help.ubuntu.com/community/HdaIntelSoundHowto and have been unable to find my card in http://www.kernel.org/doc/Documentation/sound/alsa/HD-Audio-Models.txt. I have tried almost every conceivable model in the "options snd-hda-intel model=MODEL" line I added to /etc/modprobe.d/alsa-base.conf. Update 2011-11-09 2:31 PM PST: I've gone to Control Center - Sound Preferences to try settings that might make sound work. The "Hardware" tab shows one device: "Internal Audio 1 Output / 1 Input Analog Stereo Duplex." There are two output profiles listed in the selection box at the bottom of the tab: Analog Stereo Duplex and Analog Stereo Output. Neither causes sound to come from the speakers. I've also run alsamixer on the command line and ensured that everything is set to maximum and nothing is muted. Update 2011-11-09 5:15 PM PST: I've replicated the exact same symptoms in 11.10. Update 2011-11-10 11:31 AM PST: I've filed a bug in Launchpad: https://launchpad.net/ubuntu/+source/alsa-driver/+bug/888703

    Read the article

  • Skype sounds sizzle/distorted/bad

    - by Filubuntu
    I have the same problem as described in the questions "skype notification sounds sizzled" and "bad sound on login to skype", but it is not only the login and notification sounds; it also happens when talking to somebody. I tried the suggested solution of removing and re-installing Skype, and most of the solutions in those questions, e.g. checking the mixer, the sound settings, and installing alsa-hda-dkms (including a system restart). After installing Skype (and even after upgrading to Skype 4.0) on Ubuntu 12.04 (AMD64) there was no sound at all. I followed the first step of the SoundTroubleshootingProcedure, and at least there is sound now:

        sudo add-apt-repository ppa:ubuntu-audio-dev/ppa; sudo apt-get update; sudo apt-get dist-upgrade;
        sudo apt-get install linux-sound-base alsa-base alsa-utils gdm ubuntu-desktop linux-image-`uname -r` libasound2;
        sudo apt-get -y --reinstall install linux-sound-base alsa-base alsa-utils gdm ubuntu-desktop linux-image-`uname -r` libasound2;
        killall pulseaudio; rm -r ~/.pulse*;
        sudo usermod -aG `cat /etc/group | grep -e '^pulse:' -e '^audio:' -e '^pulse-access:' -e '^pulse-rt:' -e '^video:' | awk -F: '{print $1}' | tr '\n' ',' | sed 's:,$::g'` `whoami`

    The jittering sound sometimes disappears, e.g. on the Echo test call after replaying the recorded part. And I noticed that if I let music play in Rhythmbox and then start Skype, the sound is fine. So I have a workaround, but I would be glad if it worked without this detour. As requested: my sound card is an "AMD High Definition Audio Device", listed as Advanced Micro Devices (AMD) Hudson Azalia controller (rev 01), subsystem Lenovo Device 21ea (according to sysinfo), on a Lenovo ThinkPad Edge 525.

    Read the article

  • Can't access external hard drives or thumb drives

    - by calden
    I am not a complete Linux noob, but I don't know a lot either, and I would greatly appreciate some help with this. I just installed Ubuntu 10.10 onto my laptop. Everything is working great; however, USB devices such as thumb drives and external hard drives won't show up. I have been looking around a bit, and when I run sudo fdisk -l it displays this:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00065684

        Device Boot    Start     End      Blocks   Id  System
        /dev/sda1   *      1   29255   234983424   83  Linux
        /dev/sda2      29255   30402     9212929    5  Extended
        /dev/sda5      29255   30402     9212928   82  Linux swap / Solaris

        Disk /dev/sdb: 16.0 GB, 16026435072 bytes
        64 heads, 32 sectors/track, 15283 cylinders
        Units = cylinders of 2048 * 512 = 1048576 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000df90d

        Device Boot    Start     End      Blocks   Id  System
        /dev/sdb1   *      1   15283    15649776    7  HPFS/NTFS

    It does seem to list my 16 GB thumb drive, but other than seeing it here I can't access it to read and write files. It does the same with my external hard drive. I know those devices work, as I have tried them on my other computer and they work fine. Also, this is what is in fstab, if it helps anybody help me:

        proc /proc proc nodev,noexec,nosuid 0 0
        /dev/sdb1 / ext4 errors=remount-ro 0 1
        /dev/sdb5 none swap sw 0 0

    Thank you very much for the help, everyone.
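
    As a quick experiment, a manual mount would show whether the drive itself is readable. The names below are examples; note that fdisk reports the thumb drive's partition as NTFS, and the device letter can change between plug-ins, so check dmesg right after inserting it:

        dmesg | tail -n 20                         # confirm which device node the kernel assigned
        sudo mkdir -p /media/usbdrive
        sudo mount -t ntfs-3g /dev/sdb1 /media/usbdrive
        ls /media/usbdrive
        sudo umount /media/usbdrive

    If that works, the data is fine and the problem is only with automounting; if it fails, the error message it prints is the next clue.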

    Read the article

  • How to Use a PIN Instead of a Password in Windows 8

    - by Taylor Gibb
    Entering your full password on a touch screen device can really become a pain in the neck. Luckily for us, we can link a short 4-digit PIN to our user account and log in with that instead. Note: PIN codes are nowhere near as safe as an alphanumeric password; however, they still have a purpose when you don't want to enter your 15-character password on a touch screen. Creating a PIN: press the Win + I keyboard combination to bring up the Settings charm, then click on the Change PC settings link. This will open the Modern UI PC Settings app, where you can click on the Users section. On the right-hand side you will see a Create a PIN button; click on it. Now you will need to verify that you are the owner of this user account by entering your password. Then you can choose a PIN; remember that it can only contain digits. Now when you get to the login screen you will have the option to use a PIN.

    Read the article

  • SD Card only mounted after a reboot

    - by hattenn
    I have a Kingston 2 GB MicroSD card, and I plug it in via an inconix MicroSD adapter into the internal card reader of my Samsung N210 netbook running Ubuntu 10.10, but it doesn't show up. It only shows up if I reboot the system with the card plugged in. Why does it need a reboot to be mounted? sudo fdisk -l gives the output below, but I can only see the drive when I reboot the computer while the card is plugged in.

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x9a5a7990

        Device Boot    Start     End      Blocks   Id  System
        /dev/sda1          1    1959    15728640   27  Unknown
        Partition 1 does not end on cylinder boundary.
        /dev/sda2   *   1959    1972      102400    7  HPFS/NTFS
        /dev/sda3       1972   18992   136718750   83  Linux
        /dev/sda4      18992   19458     3738625    5  Extended
        /dev/sda5      18992   19458     3738624   82  Linux swap / Solaris

        Disk /dev/sdb: 1973 MB, 1973420032 bytes
        60 heads, 59 sectors/track, 1088 cylinders
        Units = cylinders of 3540 * 512 = 1812480 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Device Boot    Start     End      Blocks   Id  System
        /dev/sdb1          1    1089     1927100+   6  FAT16
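
    One way to narrow it down, sketched with example names, is to watch the kernel log when the card is inserted and then try mounting it by hand:

        dmesg | tail -n 20                     # run right after inserting the card: does a new sdb/mmcblk device appear?
        sudo fdisk -l                          # check whether /dev/sdb1 is listed without a reboot
        sudo mkdir -p /media/sdcard
        sudo mount /dev/sdb1 /media/sdcard     # manual mount of the FAT16 partition
        ls /media/sdcard

    If dmesg shows nothing at all on insertion, the card reader is not generating a hotplug event and the issue is at the driver level rather than with the automounter.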

    Read the article

  • Creating floppy drive special devices under Quantal

    - by JCCyC
    First, I'd like for the various special devices for different floppy capacities (like /dev/fd0u720 etc.) to be available. I tried to adapt some udev rules I found online. I tried this, which I saved as /etc/udev/rules.d/70-persistent-floppy.rules: # change floppy device ownership and permissions # default permissions are 640, which prevents group users from having write access # first fix primary devices (/dev/fd0, /dev/fd1, etc.) # also change group ownership from disk to floppy SUBSYSTEM=="block", KERNEL=="fd[0-9]*", GROUP="floppy", MODE="0660" # next recreate secondary devices (/dev/fd0u720, /dev/fd0u1440, etc.) SUBSYSTEM=="block", KERNEL=="fd[0-9]*", ACTION=="add", RUN+="create_floppy_devices -c -t $attr{cmos} -m %M -M 0660 -G floppy $root/%k" But to no avail. It seems the create_floppy_devices script isn't provided with 12.10. How do I obtain it? Second: I'm using MATE, and whenever I log in I get a message box saying it tried to mount the drive but failed. How do I disable this? Third (which is probably related to the second): Whenever there's a disk in the drive, the motor won't stop spinning. If I do a mdir of it, after it returns, the motor stops, and then starts again. I suspect there's some process in MATE doing this. UPDATE: For CentOS 6 (who does have a create_floppy_devices program) the following rules file worked. Saved as /etc/udev/rules.d/98-floppy.rules: # change floppy device ownership and permissions # default permissions are 640, which prevents group users from having write access # first fix primary devices (/dev/fd0, /dev/fd1, etc.) # also change group ownership from disk to floppy KERNEL=="fd[0-9]*", GROUP="floppy", MODE="0660" # next recreate secondary devices (/dev/fd0u720, /dev/fd0u1440, etc.) # drive A: is type 4 (1.44MB) - add other lines for other drives KERNEL=="fd0*", ACTION=="add", RUN+="/lib/udev/create_floppy_devices -c -t 4 -m %M -M 0660 -G floppy $root/%k"

    Read the article

  • XNA: after a mouse click, CPU usage goes to 100%

    - by kosnkov
    Hi, I have the following code, and it is enough to just click on the blue window for the CPU to go to 100% for at least a minute, even with my quad-core i7. I checked with an empty project and it is the same!

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            private Texture2D cursorTex;
            private Vector2 cursorPos;
            GraphicsDevice device;
            float xPosition;
            float yPosition;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void Initialize()
            {
                Viewport vp = GraphicsDevice.Viewport;
                xPosition = vp.X + (vp.Width / 2);
                yPosition = vp.Y + (vp.Height / 2);
                device = graphics.GraphicsDevice;
                base.Initialize();
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                cursorTex = Content.Load<Texture2D>("strzalka");
            }

            protected override void UnloadContent()
            {
                // TODO: Unload any non ContentManager content here
            }

            protected override void Update(GameTime gameTime)
            {
                // Allows the game to exit
                if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                    this.Exit();
                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(cursorTex, cursorPos, Color.White);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }

    Read the article

  • the correct way to deal with gtk_events_pending and gtk_main_iteration

    - by abd alsalam
    I have a program that sends files, and I want to make a progress bar for it, but the progress bar only updates after the transfer is complete. So I put the gtk_events_pending() and gtk_main_iteration() functions in the sending loop to go back to the GTK main loop and update the progress bar, but that also seems not to work. EDIT: the send function runs in a separate thread. Here is a snippet from my code:

        float Percent = 0.0;
        float Interval = 0.0;

    The sending function:

        gint SendTheFile()
        {
            char FileBlockBuffer[512];
            bzero(FileBlockBuffer, 512);
            int FileBlockSize;
            FILE *FilePointer;
            int filesize = 0;
            FilePointer = fopen(LocalFileName, "r");
            struct stat st;
            stat(LocalFileName, &st);
            filesize = st.st_size;
            Interval = (512 / (float)filesize);
            while((FileBlockSize = fread(FileBlockBuffer, sizeof(char), 512, FilePointer)) > 0)
            {
                send(SocketDiscriptor, FileBlockBuffer, FileBlockSize, 0);
                bzero(FileBlockBuffer, 512);
                Percent = Percent + Interval;
                if (Percent > 1.0) Percent = 0.0;
                while(gtk_events_pending())
                {
                    gtk_main_iteration();
                }
            }
        }

    The update-progress-bar function:

        gint UpdateProgressBar(gpointer data)
        {
            gtk_progress_bar_set_fraction(GTK_PROGRESS_BAR(data), Percent);
        }

    Updating the progress bar in the main function:

        g_timeout_add(50, (GSourceFunc)UpdateProgressBar, SendFileProgressBar);

    Read the article

  • Battery is drained too quickly

    - by LucaB
    I'm getting really low battery life under ubuntu, not even close to windows. I tried powertop, and I saw that my laptop is consuming in idle nearly 20 watts (a bit more). I tried to install laptop-mode-tools, change "good" into "bad" in powertop, but nothing changes. I see that I have the the HD audio output device which is running at 100% every time. Could this be the problem? This is a report from powertop. The battery reports a discharge rate of 22.8 W The estimated remaining time is 33 minutes Summary: 381.8 wakeups/second, 0.0 GPU ops/second and 0.0 VFS ops/sec Usage Events/s Category Description 3.2 ms/s 182.7 Timer tick_sched_timer 100.0% Device Audio codec hwC0D3: Intel 7.9 ms/s 25.1 Process /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background no 1.9 ms/s 24.2 Interrupt [6] tasklet(softirq) 2.9 ms/s 23.2 Process /usr/lib/chromium-browser/chromium-browser --type=zygote 8.1 ms/s 20.3 Process /usr/lib/unity/unity-panel-service 0.7 ms/s 17.4 Timer hrtimer_wakeup 4.2 ms/s 12.6 Process unity-2d-panel 604.4 µs/s 9.7 Process syndaemon -i 2.0 -K -R -t 149.7 µs/s 9.7 kWork ieee80211_iface_work 0.8 ms/s 8.7 Process metacity 19.5 ms/s 1.0 Process powertop 3.0 ms/s 6.8 Process //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session 699.0 µs/s 6.8 Process /usr/lib/thunderbird/thunderbird 4.3 ms/s 4.8 Process gnome-terminal 658.9 µs/s 2.9 Interrupt [1] timer(softirq) 75.1 µs/s 2.9 kWork iwl_bg_run_time_calib_work 163.8 µs/s 1.9 Process /usr/lib/accountsservice/accounts-daemon 70.6 µs/s 1.9 Process [ksoftirqd/2] 25.8 µs/s 1.9 Process [ksoftirqd/0] 1.0 ms/s 1.0 Process /usr/bin/python /usr/sbin/powernapd 408.2 µs/s 1.0 Process unity-2d-shell 189.8 µs/s 1.0 Process /usr/lib/chromium-browser/chromium-browser 124.4 µs/s 1.0 Process /usr/lib/unity-lens-applications/unity-applications-daemon 113.3 µs/s 1.0 Process /usr/lib/gnome-settings-daemon/gnome-settings-daemon 112.0 µs/s 1.0 Process nautilus -n 104.9 µs/s 1.0 Process /usr/lib/gvfs/gvfsd-trash --spawner :1.2 /org/gtk/gvfs/exec_spaw/0 77.5 µs/s 1.0 Process /usr/lib/x86_64-linux-gnu/colord/colord 75.6 µs/s 1.0 Process /usr/lib/gvfs/gvfs-gdu-volume-monitor 75.0 µs/s 1.0 Interrupt [53] i915 74.9 µs/s 1.0 Process /usr/lib/gvfs/gvfs-afc-volume-monitor What should I do to make the battery consumption lower?
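
    Given that powertop shows the HD audio codec awake 100% of the time, one low-risk experiment is to enable the audio codec's power saving and apply powertop's suggested tunables. This is only a sketch, and the --auto-tune flag requires a powertop 2.x build:

        # let the HDA codec power down after 1 second of silence
        echo 1 | sudo tee /sys/module/snd_hda_intel/parameters/power_save

        # apply all of powertop's "Good" tunables in one go (powertop 2.x only)
        sudo powertop --auto-tune

    These settings do not persist across reboots on their own; if they help, they can be made permanent with a modprobe option (options snd-hda-intel power_save=1) or an rc.local entry.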

    Read the article

  • Sharing an internet connection through the Ethernet port

    - by Bob Cunningham
    I have a small living room PC (Bohica, running fully updated Ubuntu 10.10/Maverick) connected to my HDTV that I use for web browsing and media streaming. It connects via WiFi (wlan0) to my Fedora server (Snafu), which in turn connects to the internet. I use static addressing, and everything has been working fine. I just got a Blu-ray player, and I'd like to give it wired network access to the internet via Bohica's available wired ethernet port (eth0). So far, I haven't been able to get eth0 and the network configured so that the Blu-ray player can talk to the internet. Here's my wlan0 configuration:

        ip4 addr: 192.168.0.100
        mask: /24 (255.255.255.0)
        gateway: 192.168.0.4 (fedora box)

    The Blu-ray player is set to an IP of 192.168.0.98/24, with the same gateway as above. I want eth0 set to an IP of 192.168.0.99/24, but when I do this using nm-connection-editor I lose internet access (the system tries to use eth0 as the default internet access interface). How do I get my Blu-ray player to talk to the internet through Bohica, and do so without disrupting my current (working) network? Thanks. Edit: Here's the relevant output from nm-tool with the Blu-ray player connected:

        $ nm-tool
        NetworkManager Tool
        State: connected

        - Device: eth0
          Type: Wired
          Driver: forcedeth
          State: disconnected
          Default: no
          HW Address: 90:FB:A6:2C:94:32
          Capabilities:
            Carrier Detect: yes
            Speed: 100 Mb/s
          Wired Properties
            Carrier: on

        - Device: wlan0 [wlan0]
          Type: 802.11 WiFi
          Driver: ndiswrapper
          State: connected
          Default: yes
          HW Address: 00:26:5A:C0:D0:05
          IPv4 Settings:
            Address: 192.168.0.100
            Prefix: 24 (255.255.255.0)
            Gateway: 192.168.0.4
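
    For reference, the usual way to share a connection like this is to give eth0 its own subnet and NAT it out through wlan0, rather than putting both interfaces on 192.168.0.x (which is what confuses the default route). A rough sketch with example addresses, assuming the standard iproute2/iptables tools on the PC:

        sudo ip addr add 192.168.1.1/24 dev eth0            # eth0 gets its own subnet
        sudo sysctl -w net.ipv4.ip_forward=1                # let the kernel forward packets
        sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
        sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

    The Blu-ray player would then be configured with something like 192.168.1.10/24, gateway 192.168.1.1, and a DNS server it can reach (the Fedora box or a public resolver). NetworkManager's "Shared to other computers" IPv4 method for eth0 automates roughly the same thing.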

    Read the article

  • Importing a tab-delimited file into an array in Visual Basic 2013

    - by JaceG
    I am needing to import a tab delimited text file that has 11 columns and an unknown number of rows (always minimum 3 rows). I would like to import this text file as an array and be able to call data from it as needed, throughout my project. And then, to make things more difficult, I need to replace items in the array, and even add more rows to it as the project goes on (all at runtime). Hopefully someone can suggest code corrections or useful methods. I'm hoping to use something like the array style sMyStrings(3,2), which I believe would be the easiest way to control my data. Any help is gladly appreciated, and worthy of a slab of beer. Here's the coding I have so far: Imports System.IO Imports Microsoft.VisualBasic.FileIO Public Class Main Dim strReadLine As String Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load Dim sReader As IO.StreamReader = Nothing Dim sRawString As String = Nothing Dim sMyStrings() As String = Nothing Dim intCount As Integer = -1 Dim intFullLoop As Integer = 0 If IO.File.Exists("C:\MyProject\Hardware.txt") Then ' Make sure the file exists sReader = New IO.StreamReader("C:\MyProject\Hardware.txt") Else MsgBox("File doesn't exist.", MsgBoxStyle.Critical, "Error") End End If Do While sReader.Peek >= 0 ' Make sure you can read beyond the current position sRawString = sReader.ReadLine() ' Read the current line sMyStrings = sRawString.Split(New Char() {Chr(9)}) ' Separate values and store in a string array For Each s As String In sMyStrings ' Loop through the string array intCount = intCount + 1 ' Increment If TextBox1.Text <> "" Then TextBox1.Text = TextBox1.Text & vbCrLf ' Add line feed TextBox1.Text = TextBox1.Text & s ' Add line to debug textbox If intFullLoop > 14 And intCount > -1 And CBool((intCount - 0) / 11 Mod 0) Then cmbSelectHinge.Items.Add(sMyStrings(intCount)) End If Next intCount = -1 intFullLoop = intFullLoop + 1 Loop End Sub

    Read the article

  • CUDA & MSI GT60 with Optimus enabled GTX670M?

    - by user1076693
    I have a MSI GT60 Laptop with an Optimus enabled GTX 670M GPU, and I have been trying to get CUDA going in Ubuntu 12.04 environment. I realize that Optimus is not supported in Linux, but I have read the following post suggesting that CUDA works for hybrid GPUs. How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics? I installed the NVIDIA driver via sudo add-apt-repository ppa:ubuntu-x-swat/x-updates sudo apt-get update sudo apt-get install nvidia-current The resulting driver version is 302.17, and supposedly GTX 670M is supported since 295.59. I also downloaded CUDA 4.2 from the NVIDIA site, and compiled it against nvidia-current libraries. Unfortunately, when I run deviceQuery in the CUDA SDK, I get the following output cudaGetDeviceCount returned 38 -> no CUDA-capable device is detected Checking /proc/driver/nvidia/gpus/0/information gives the following Model: GeForce GTX 670M IRQ: 16 GPU UUID: GPU-????????-????-????-????-???????????? Video BIOS: ??.??.??.??.?? Bus Type: PCI-E DMA Size: 32 bits DMA Mask: 0xffffffffff Bus Location: 0000:01.00.0 Here is the output of "lspci | grep VGA" 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation Device 1213 (rev ff) So... what am I doing wrong? Thanks!
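
    For what it's worth, on Optimus laptops of this generation the discrete GPU is usually powered down until something explicitly routes work to it, which would explain the "(rev ff)" in lspci and the missing GPU UUID. One approach people have had success with is Bumblebee, sketched below on the assumption that the stable PPA packages are available for 12.04:

        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        # run CUDA programs through optirun so the NVIDIA GPU is powered up and used:
        optirun ./deviceQuery

    If deviceQuery reports the GTX 670M under optirun, the driver and CUDA toolkit are fine and the only problem was Optimus power management.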

    Read the article

  • Ubuntu 13.04 to 13.10: Filesystem check or mount failed

    - by SamHuckaby
    I attempted to upgrade from Ubuntu 13.04 to 13.10 today, and mid-upgrade the system started flaking out and eventually locked up entirely. I was forced to restart the computer and am now unable to get it to boot at all. When I boot, it takes me to the GRUB menu and I can choose to boot normally or boot an older version. I have tried several things, which I list below, but no matter what, when I try to finish booting into Ubuntu I receive the following error:

        Filesystem check or mount failed.
        A maintenance shell will now be started.
        CONTROL-D will terminate this shell and continue booting after re-trying filesystems.
        Any further errors will be ignored
        root@ubuntu-computername:~#

    I have run fsck -f and everything appears correct; no errors are reported and it passes all 5 checks. If I run fdisk -l I get the following information:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 4096 bytes / 4096 bytes
        Disk identifier: 0x00010824

        Device Boot       Start        End      Blocks   Id  System
        /dev/sda1   *      2048  608456703   304227328   83  Linux
        /dev/sda2     608458750  625141759     8341505    5  Extended
        Partition 2 does not start on physical sector boundary.
        /dev/sda5     608458752  625141759     8341504   82  Linux swap / Solaris

        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0fb4b7e8

        Device Boot       Start        End      Blocks   Id  System
        /dev/sdb1          8192  625139711   312565760    7  HPFS/NTFS/exFAT

    I am considering just installing a new OS on the other disk, which currently has nothing on it, and then attempting to scrape my data off the old disk (thankfully I didn't encrypt the files). Really my question is this: can I salvage this Ubuntu install, or should I give up and just reinstall?
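
    A few things are worth trying from that maintenance shell before giving up; this is only a sketch of the usual checks after an interrupted upgrade, with device names taken from the fdisk output above:

        fsck -f /dev/sda1            # re-check the root filesystem and accept any offered fixes
        blkid                        # list the UUIDs the kernel currently sees
        mount -o remount,rw /        # make the root filesystem writable
        cat /etc/fstab               # every UUID/device in fstab should match a blkid entry
        dpkg --configure -a          # if the filesystem is fine, finish the half-completed upgrade
        apt-get -f install

    A stale fstab entry (for example a swap UUID that changed) or a half-configured package set are common reasons for landing in that shell after a failed upgrade, and both can often be repaired without reinstalling.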

    Read the article

  • Non use of persisted data – Part deux

    - by Dave Ballantyne
    In my last blog I showed how persisted data may not be used if you have used the base data in an include on an index. That wasn't the only problem I've had that showed the same symptom. Using the same code as before, I was executing something similar to the below:

        select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
        from sales.salesorderheader SOH
        join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, due to a distribution error in statistics, I found it necessary to use a table hint. In this case, I wanted to force a loop join:

        select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
        from sales.salesorderheader SOH
        inner loop join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, being the diligent TSQL developer that I am, looking at the execution plan I noticed that the 'compute scalar' operator was again calling the function. Again, Profiler is a more graphic way to view this. All very odd: just because I've forced a join, which has NOTHING to do with my persisted data, something is causing the data to be re-evaluated. Not sure if there is any easy fix you can make to the TSQL here, but again it's a lesson learned (or rather reinforced): examine the execution plan of every query you write to ensure that it is operating as you thought it would.

    Read the article

  • Finding Buried Controls

    - by Bunch
    This post is pretty specific to an issue I had but still has some ideas that could be applied in other scenarios. The problem I had was updating a few buttons so their Text values could be set in the code behind which had a method to grab the proper value from an external source. This was so that if the application needed to be installed by a customer using a language other than English or needed a different notation for the button's Text they could simply update the database. Most of the time this was no big deal. However I had one instance where the button was part of a control, the button had no set ID and that control was only found in a dll. So there was no markup to edit for the Button. Also updating the dll was not an option so I had to make the best of what I had to work with. In the cs file for the aspx file with the control on it I added the Page_LoadComplete. The problem button was within a GridView so I added a foreach to go through each GridViewRow and find the button I needed. Since I did not have an ID to work with besides a random ctl00$main$DllControl$gvStuff$ctl03$ctl05 using the GridView's FindControl was out. I ended up looping through each GridViewRow, then if a RowState equaled Edit loop through the Cells, each control in the Cell and check each control to see if it held a Panel that contained the button. If the control was a Panel I could then loop through the controls in the Panel, find the Button that had text of "Update" (that was the hard coded part) and change it using the method to return the proper value from the database. if (rowState.Contains("Edit")){  foreach (DataControlFieldCell rowCell in gvr.Cells)  {   foreach (Control ctrl in rowCell.Controls)   {    if (ctrl.GetType() == typeof(Panel))     {     foreach (Control childCtrl in ctrl.Controls)     {      if (childCtrl.GetType() == typeof(Button))      {       Button update = (Button)childCtrl;       if (update.Text == "Update")       {        update.Text = method to return the external value for the button's text;       }      }     }    }   }  }} Tags: ASP.Net, CSharp

    Read the article

  • Why does Ubuntu 13.10 not detect my Win7 partition?

    - by goutham
    I'm trying to install Ubuntu 13.10 alongside Windows 7 on my Dell Inspiron 14z 5423 laptop, and I'm new to all of this. I'm using the Ubuntu 13.10 64-bit ISO burned onto a CD. The first time I tried to install it, Ubuntu said it did not detect any other OS, which meant I only had 4 options:

    1. Erase disk and install Ubuntu (I don't want to do this)
    2. Encrypt the new Ubuntu installation
    3. Use LVM
    4. Something else

    If I choose the Something else option, it brings me to the partition menu and says that I have 1 disk with 500 GB of free space, but that's not true because I have Windows 7 on it. I restarted the laptop several times and booted the CD again, and I got exactly the same result. How do I fix this problem and install Ubuntu alongside Windows 7? After executing the "sudo fdisk -l" command in a terminal:

        ubuntu@ubuntu:~$ sudo fdisk -l

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0xd2b811c5

        Device Boot       Start        End      Blocks   Id  System
        /dev/sda1   *      2048     206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2        206848  314574847   157184000    7  HPFS/NTFS/exFAT
        /dev/sda3     314574848  629147647   157286400    7  HPFS/NTFS/exFAT
        /dev/sda4     629147648  976771071   173811712    7  HPFS/NTFS/exFAT

    After removing one partition, I executed the command once again:

        ubuntu@ubuntu:~$ sudo fdisk -l

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0xd2b811c5

        Device Boot       Start        End      Blocks   Id  System
        /dev/sda1   *      2048     206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2        206848  629145599   314469376    7  HPFS/NTFS/exFAT
        /dev/sda3     629147648  976771071   173811712    7  HPFS/NTFS/exFAT
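
    The WARNING line is a strong hint about the likely cause: leftover GPT signatures on a disk that is now partitioned as MBR, which is a common reason the installer reports no other operating system. One commonly suggested remedy, sketched here as something to research (and back up before trying), is the fixparts tool from the gdisk package, run from the live CD:

        sudo apt-get install gdisk      # provides fixparts; needs network access in the live session
        sudo fixparts /dev/sda          # offers to delete the stray GPT data if the disk is really MBR

    Once the stray GPT data is gone, restarting the installer should let it detect the existing Windows 7 partitions and offer the usual "Install alongside" option.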

    Read the article

  • USB and CD data cannot be read or mounted in 12.10

    - by aravind4j
    I'm using Ubuntu 12.10. I wrote a data disk using Brasero Disk Burner, but the system cannot read the CD. I tried to write the same data to my HP v220w pen drive, but now there is an error with that too. It shows the following:

        Error mounting /dev/sdb at /media/aravind4j/Aravind4j:
        Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdb" "/media/aravind4j/Aravind4j"'
        exited with non-zero exit status 13: $MFTMirr does not match $MFT (record 0).
        Failed to mount '/dev/sdb': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
        In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f
        parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount
        a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1).
        Please see the 'dmraid' documentation for more details. (udisks-error-quark, 0)

    As you can see from the message above, I used an NTFS file system for the pen drive. I would like to recover the files, so please help me recover them without formatting the drive.
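
    Two approaches worth knowing about, sketched here without any guarantee for this particular drive: the error message's own suggestion (run chkdsk /f on the stick from a Windows machine, then reboot Windows twice) is the most thorough fix, while on Ubuntu the ntfsfix utility from the ntfs-3g tools can repair a simple $MFTMirr/$MFT mismatch in place:

        sudo ntfsfix /dev/sdb        # the whole stick was formatted as one NTFS volume, hence no partition number

    ntfsfix only corrects a few common inconsistencies, so if it reports anything it cannot handle, stop and copy the data off on a Windows machine (or with a read-only ntfs-3g mount) before experimenting further.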

    Read the article
