Search Results

Search found 48823 results on 1953 pages for 'run loop'.


  • glibc not found when trying to install Absinthe?

    - by RafLance
    I am trying to install Absinthe 2.0.4 on Ubuntu 11.10 on a netbook. When I try to run the install file, this keeps on happening: rafael@RafLaptop:~/Desktop/absinthe-linux-2.0.4$ ./absinthe.x86 ./absinthe.x86: /lib/i386-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by ./absinthe.x86) Do I need to upgrade GLIBC? If so, how do I do that? Since I'm on a netbook I can't use a LiveCD, so I wanted to know if there was a way I could fix this issue without reinstalling my whole OS. Any explanation of what GLIBC is exactly would be great too, since this is a learning experience for me. I know that GLIBC is a part of libc.so.6, and so I tried to run sudo apt-get install libc.so.6 but was told that it was up to date. But GLIBC isn't? I hope this articulates my problem well; if there are any pieces of missing info or questions to clarify my question, please let me know! ~-~ EDIT/UPDATE: After some help in the Ask Ubuntu chat room from user izx, I have gathered the following information/understanding: I need to run this program on Ubuntu 12.04 or recompile it from source, and upgrading libc on Oneiric to 2.15, while possible, is not an easy task and is not officially supported.

    Read the article

  • Using SPServices & jQuery to Find My Stuff from Multi-Select Person/Group Field

    - by Mark Rackley
    Okay… quick blog post for all you SPServices fans out there. I needed to quickly write a script that would return all the tasks currently assigned to me. I also wanted it to return any task that was assigned to a group I belong to. This can actually be done with a CAML query, so no big deal, right? The rub is that the “assigned to” field is a multi-select person or group field. As far as I know (and I actually know so little) you cannot just write a CAML query to return this information. If you can, please leave a comment below and disregard the rest of this blog post… So… what’s a hacker to do? As always, I break things down to their most simple components (I really love the KISS principle and would get it tattooed on my back if people wouldn’t think it meant “Knights In Satan’s Service”. You really gotta be an old fart to get that reference). Here’s what we’re going to do: get the currently logged in user’s name as it is stored in a person field; find all the SharePoint groups the current user belongs to; retrieve a set of assigned tasks from the task list and then find those that are assigned to the current user or a group the current user belongs to. Nothing too hairy… So let’s get started. Some Caveats before I continue: There are some obvious performance implications with this solution, as I make a total of four SPServices calls and there’s a lot of looping going on. Also, the CAML query in this blog has NOT been optimized. If you move forward with this code, tweak it so that it returns a further subset of data or you will see horrible performance if you have a few hundred entries in your task list. Add a date range to the CAML or something. Find some way to limit the results as much as possible. Lastly, if you DO have a better solution, I would like you to share. Iron sharpens iron and all…   Alright, let’s really get started. Get the currently logged in user’s name as it is stored in a person field: First thing we need to do is understand how a person or group field looks in the XML returned from a SharePoint Web Service call. It turns out it’s stored like any other multi-select item in SharePoint, which is <id>;#<value>, and when you assign a person to that field the <value> equals the person’s name – “Mark Rackley” in my case. This is for Windows Authentication; I would expect this to be different in FBA, but I’m not using FBA. If you want to know what it looks like with FBA you can use the code in this blog and strategically place an alert to see the value. Anyway… I need to find the name of the user who is currently logged in as it is stored in the person field. This turns out to be one SPServices call: var userName = $().SPServices.SPGetCurrentUser({ fieldName: "Title", debug: false }); As you can see, the “Title” field has the information we need. I suspect (although again, I haven’t tried) that the Title field also contains the user’s name as we need it if I were using FBA. Okay… the last thing we need to do is store our user’s name in an array for processing later: myGroups = new Array(); myGroups.push(userName); Find all the SharePoint groups the current user belongs to: Now for the groups. How are groups returned in that XML stream? Same as the person: <ID>;#<Group Name>, and if it’s a multi-select it’s all returned in one big long string “<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>”. So, how do we find all the groups the current user belongs to?
This is also a simple SPServices call. Using the “GetGroupCollectionFromUser” operation we can find all the groups a user belongs to. So, let’s execute this method and store all our groups. $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });         }     }); So, all we did in the above code was execute the “GetGroupCollectionFromUser” operation and look for the each “Group” node (row) and store the name for each group in our array that we put the user’s name in previously (myGroups). Now we have an array that contains the current user’s name as it will appear in the person field XML and  all the groups the current user belongs to. The Rest Now comes the easy part for all of you familiar with SPServices. We are going to retrieve our tasks from the Task list using “GetListItems” and look at each entry to see if it belongs to this person. If it does belong to this person we are going to store it for later processing. That code looks something like this: // get list of assigned tasks that aren't closed... *modify the CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                        //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                             //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                              
   var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 }); Some notes on why I did certain things and additional caveats. You will notice in my code that I’m doing an AssignedToString.indexOf(GroupName) to see if the task belongs to the person. This could possibly return bad results if you have SharePoint Group names that are named in such a way that the “IndexOf” returns a false positive.  For example if you have a Group called “My Users” and a group called “My Users – SuperUsers” then if a user belonged to “My Users” it would return a false positive on executing “My Users – SuperUsers”.IndexOf(“My Users”). Make sense? Just be aware of this when naming groups, we don’t have this problem. This is where also some fine-tuning can probably be done by those smarter than me. This is a pretty inefficient method to determine if a task belongs to a user, I mean what if a user belongs to 20 groups? That’s a LOT of looping.  See all the opportunities I give you guys to do something fun?? Also, why am I storing my values in an array instead of just writing them out to a Div? Well.. I want to pass my data to a jQuery library to format it all nice and pretty and an Array is a great way to do that. When all is said and done and we put all the code together it looks like:   $(document).ready(function() {         var userName = $().SPServices.SPGetCurrentUser({                     fieldName: "Title",                     debug: false                     });         myGroups = new Array();     myGroups.push(userName );       $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });                      // get list of assigned tasks that aren't closed... 
*modify this CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                         //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                            //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                                 var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 });       }    });  }); Final Thoughts So, there you have it. Take it and run with it. Make it something cool (and tell me how you did it). Another possible way to improve performance in this scenario is to use a DVWP to display the tasks and use jQuery and the “myGroups” array from this blog post to hide all those rows that don’t belong to the current user. I haven’t tried it, but it does move some of the processing off to the server (generating the view) so it may perform better.  As always, thanks for stopping by… hope you have a Merry Christmas…

    Read the article

  • double-click does not open the default program

    - by Chang
    I installed Ubuntu 12.04, with texlive-full and texworks. When I double-click a .tex file in Nautilus, it pops up a dialog: Do you want to run "xxxxxxxx.tex", or display its contents? "xxxxxxxx.tex" is an executable text file. Run in Terminal / Display / Cancel / Run. If I choose Display, it opens texworks. How can I make it open without seeing the above dialog? By the way, is a .tex file indeed an executable file? ADDED Just in case, my ~/.local/share/applications/mimeapps.list file looks like the following: [Default Applications] text/html=google-chrome.desktop x-scheme-handler/http=google-chrome.desktop x-scheme-handler/https=google-chrome.desktop x-scheme-handler/about=google-chrome.desktop x-scheme-handler/unknown=google-chrome.desktop text/x-tex=texworks.desktop x-scheme-handler/mailto=google-chrome.desktop [Added Associations] text/x-tex=texworks.desktop; text/x-bibtex=jabref.desktop;gedit.desktop; I see texworks.desktop listed in both the [Default Applications] and [Added Associations] sections. Should I remove one? ADDED This does not happen with all .tex files. In fact, I am currently using Ubuntu 12.04 under VirtualBox with a Windows 7 host. I have my Dropbox account synced with the Windows 7 host, and I access files in Dropbox in Ubuntu through VirtualBox's shared folder functionality. (I didn't install the Dropbox client in Ubuntu.) Files in Dropbox are owned by root with group vboxsf. My personal account is in the group vboxsf. It seems that I have to uncheck the "executable" option, but I have all my .tex files in the Dropbox shared folder. Would there be any workaround?

    Read the article

  • Working with Timelines with LINQ to Twitter

    - by Joe Mayo
    When first working with the Twitter API, I thought that using SinceID would be an effective way to page through timelines. In practice it doesn’t work well for various reasons. To explain why, Twitter published an excellent document that is a must-read for anyone working with timelines: Twitter Documentation: Working with Timelines This post shows how to implement the recommended strategies in that document by using LINQ to Twitter. You should read the document in it’s entirety before moving on because my explanation will start at the bottom and work back up to the top in relation to the Twitter document. What follows is an explanation of SinceID, MaxID, and how they come together to help you efficiently work with Twitter timelines. The Role of SinceID Specifying SinceID says to Twitter, “Don’t return tweets earlier than this”. What you want to do is store this value after every timeline query set so that it can be reused on the next set of queries.  The next section will explain what I mean by query set, but a quick explanation is that it’s a loop that gets all new tweets. The SinceID is a backstop to avoid retrieving tweets that you already have. Here’s some initialization code that includes a variable named sinceID that will be used to populate the SinceID property in subsequent queries: // last tweet processed on previous query set ulong sinceID = 210024053698867204; ulong maxID; const int Count = 10; var statusList = new List<status>(); Here, I’ve hard-coded the sinceID variable, but this is where you would initialize sinceID from whatever storage you choose (i.e. a database). The first time you ever run this code, you won’t have a value from a previous query set. Initially setting it to 0 might sound like a good idea, but what if you’re querying a timeline with lots of tweets? Because of the number of tweets and rate limits, your query set might take a very long time to run. A caveat might be that Twitter won’t return an entire timeline back to Tweet #0, but rather only go back a certain period of time, the limits of which are documented for individual Twitter timeline API resources. So, to initialize SinceID at too low of a number can result in a lot of initial tweets, yet there is a limit to how far you can go back. What you’re trying to accomplish in your application should guide you in how to initially set SinceID. I have more to say about SinceID later in this post. The other variables initialized above include the declaration for MaxID, Count, and statusList. The statusList variable is a holder for all the timeline tweets collected during this query set. You can set Count to any value you want as the largest number of tweets to retrieve, as defined by individual Twitter timeline API resources. To effectively page results, you’ll use the maxID variable to set the MaxID property in queries, which I’ll discuss next. Initializing MaxID On your first query of a query set, MaxID will be whatever the most recent tweet is that you get back. Further, you don’t know what MaxID is until after the initial query. The technique used in this post is to do an initial query and then use the results to figure out what the next MaxID will be.  
Here’s the code for the initial query: var userStatusResponse = (from tweet in twitterCtx.Status where tweet.Type == StatusType.User && tweet.ScreenName == "JoeMayo" && tweet.SinceID == sinceID && tweet.Count == Count select tweet) .ToList(); statusList.AddRange(userStatusResponse); // first tweet processed on current query maxID = userStatusResponse.Min( status => ulong.Parse(status.StatusID)) - 1; The query above sets both SinceID and Count properties. As explained earlier, Count is the largest number of tweets to return, but the number can be less. A couple reasons why the number of tweets that are returned could be less than Count include the fact that the user, specified by ScreenName, might not have tweeted Count times yet or might not have tweeted at least Count times within the maximum number of tweets that can be returned by the Twitter timeline API resource. Another reason could be because there aren’t Count tweets between now and the tweet ID specified by sinceID. Setting SinceID constrains the results to only those tweets that occurred after the specified Tweet ID, assigned via the sinceID variable in the query above. The statusList is an accumulator of all tweets receive during this query set. To simplify the code, I left out some logic to check whether there were no tweets returned. If  the query above doesn’t return any tweets, you’ll receive an exception when trying to perform operations on an empty list. Yeah, I cheated again. Besides querying initial tweets, what’s important about this code is the final line that sets maxID. It retrieves the lowest numbered status ID in the results. Since the lowest numbered status ID is for a tweet we already have, the code decrements the result by one to keep from asking for that tweet again. Remember, SinceID is not inclusive, but MaxID is. The maxID variable is now set to the highest possible tweet ID that can be returned in the next query. The next section explains how to use MaxID to help get the remaining tweets in the query set. Retrieving Remaining Tweets Earlier in this post, I defined a term that I called a query set. Essentially, this is a group of requests to Twitter that you perform to get all new tweets. A single query might not be enough to get all new tweets, so you’ll have to start at the top of the list that Twitter returns and keep making requests until you have all new tweets. The previous section showed the first query of the query set. The code below is a loop that completes the query set: do { // now add sinceID and maxID userStatusResponse = (from tweet in twitterCtx.Status where tweet.Type == StatusType.User && tweet.ScreenName == "JoeMayo" && tweet.Count == Count && tweet.SinceID == sinceID && tweet.MaxID == maxID select tweet) .ToList(); if (userStatusResponse.Count > 0) { // first tweet processed on current query maxID = userStatusResponse.Min( status => ulong.Parse(status.StatusID)) - 1; statusList.AddRange(userStatusResponse); } } while (userStatusResponse.Count != 0 && statusList.Count < 30); Here we have another query, but this time it includes the MaxID property. The SinceID property prevents reading tweets that we’ve already read and Count specifies the largest number of tweets to return. Earlier, I mentioned how it was important to check how many tweets were returned because failing to do so will result in an exception when subsequent code runs on an empty list. The code above protects against this problem by only working with the results if Twitter actually returns tweets. 
    Reasons why there wouldn’t be results include: the first query may have already returned all the new tweets, so there is nothing more to get, or there might not have been any new tweets between the SinceID and MaxID settings of the most recent query. The code for loading the returned tweets into statusList and getting the maxID is the same as previously explained. The important point here is that MaxID is being reset, not SinceID. As explained in the Twitter documentation, paging occurs from the newest tweets to oldest, so setting MaxID lets us move from the most recent tweets down to the oldest as specified by SinceID. The two loop conditions cause the loop to continue as long as tweets are being read and the maximum number of tweets has not been reached. Logically, you want to stop reading when you’ve read all the tweets, and that’s indicated by the fact that the most recent query did not return results. I put the check to stop after 30 tweets are reached to keep the demo from running too long – in the console the response scrolls past the available buffer and I wanted you to be able to see the complete output. Yet, there’s another point to be made about constraining the number of items you return at one time. The Twitter API has rate limits, and making too many queries per minute will result in an error from Twitter that LINQ to Twitter raises as an exception. To use the API properly, you’ll have to ensure you don’t exceed this threshold. Looking at the statusList.Count as done above is rather primitive, but you can implement your own logic to properly manage your rate limit. Yeah, I cheated again. Summary Now you know how to use LINQ to Twitter to work with Twitter timelines. After reading this post, you have a better idea of the role of SinceID - the newest tweet already received, so that queries only return tweets posted after it. You also know that MaxID is the largest tweet ID to retrieve in a query. Together, these settings allow you to page through results via one or more queries. You also understand what factors affect the number of tweets returned and considerations for potential error handling logic. The full example of the code for this post is included in the downloadable source code for LINQ to Twitter.   @JoeMayo

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In What would you choose for your project between .NET and Java at this point in time? I said that I consider "Will you always deploy to Windows?" to be the single most important (EDIT: technical) decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is that "If we ever want to run on Linux/OS X/Whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons. OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK. Mono trails the .NET releases. What .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono? Businesses may not want to depend on open source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle, the future is uncertain, but e.g. IBM provides JDKs for many platforms, including Linux. They are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications? Edit: Mark H summarized it as: "If the claim is that 'I have a Windows application written in .NET, it should run on Mono', then no, it's not a valid claim - but Mono has made efforts to make porting such applications simpler."

    Read the article

  • Using prefix incremented loops in C#

    - by KChaloux
    Back when I started programming in college, a friend encouraged me to use the prefix incrementation operator ++i instead of the postfix i++, citing that there was a slight chance of better performance with no real chance of a downside. I realize this is true in C++, and it's become a general habit that I continue to do. I'm led to believe that it makes little to no difference when used in a loop in C#, regardless of data type. Apparently the ++ operator can't be overridden. Nevertheless, I like the appearance more, and don't see a direct downside to it. It did astonish a coworker just a moment ago, though; he made the (fairly logical) assumption that my loop would terminate early as a result. He's a self-taught programmer, and apparently never came across the C++ convention. That made me question whether or not the equivalent behavior of pre- and postfix increment and decrement operators in loops is well known enough. Is it acceptable for me to continue using ++i in looping constructs because of style preference, even though it has no real performance benefit? Or is it likely to cause confusion amongst other programmers? Note: This is assuming the ++i convention is used consistently throughout all code.
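
    For illustration, a minimal C# sketch of the point about loops - in a for statement the value of the increment expression is discarded, so i++ and ++i iterate identically (the loop bodies are just placeholders):

        using System;

        class PrefixVsPostfixDemo
        {
            static void Main()
            {
                // Both loops print 0 1 2 3 4 - the for statement never uses the
                // value of the increment expression, only its side effect.
                for (int i = 0; i < 5; i++) Console.Write(i + " ");
                Console.WriteLine();
                for (int i = 0; i < 5; ++i) Console.Write(i + " ");
                Console.WriteLine();

                // The difference only appears when the expression's value is used:
                int a = 0, b = 0;
                int x = a++;   // x == 0, a == 1 (old value is yielded)
                int y = ++b;   // y == 1, b == 1 (new value is yielded)
                Console.WriteLine(x + " " + y);
            }
        }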

    Read the article

  • Is it OK to use dynamic typing to reduce the amount of variables in scope?

    - by missingno
    Often, when I am initializing something I have to use a temporary variable, for example: file_str = "path/to/file" file_file = open(file_str) or regexp_parts = ['foo', 'bar'] regexp = new RegExp( regexp_parts.join('|') ) However, I like to reduce the scope of my variables to the smallest scope possible, so there are fewer places where they can be (mis-)used. For example, I try to use for(var i ...) in C++ so the loop variable is confined to the loop body. In these initialization cases, if I am using a dynamic language, I am then often tempted to reuse the same variable in order to prevent the initial (and now useless) value from being used later in the function. file = "path/to/file" file = open(file) regexp = ['...', '...'] regexp = new RegExp( regexp.join('|') ) The idea is that by reducing the number of variables in scope I reduce the chances to misuse them. However this sometimes makes the variable names look a little weird, as in the first example, where "file" refers to a "filename". I think perhaps this would be a non-issue if I could use non-nested scopes: begin scope1 filename = ... begin scope2 file = open(filename) end scope1 //use file here //can't use filename on accident end scope2 but I can't think of any programming language that supports this. What rules of thumb should I use in this situation? When is it best to reuse the variable? When is it best to create an extra variable? What other ways do we solve this scope problem?
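
    For what it's worth, the closest thing I can picture in a statically scoped language is an explicit block that confines the temporary - a minimal C# sketch (the file path is just a placeholder), though this is still a nested scope rather than the overlapping scopes sketched above:

        using System.IO;

        class ScopeDemo
        {
            static void Main()
            {
                FileStream file;
                {
                    // 'path' exists only inside this block
                    string path = "path/to/file";
                    file = File.OpenRead(path);
                }
                // 'file' is still usable here, but 'path' can no longer be referenced
                file.Dispose();
            }
        }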

    Read the article

  • Matrix Multiplication with C++ AMP

    - by Daniel Moth
    As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now. The code for serial and accelerated Consider a naïve (or brute force) serial implementation of matrix multiplication  0: void MatrixMultiplySerial(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W) 1: { 2: for (int row = 0; row < M; row++) 3: { 4: for (int col = 0; col < N; col++) 5: { 6: float sum = 0.0f; 7: for(int i = 0; i < W; i++) 8: sum += vA[row * W + i] * vB[i * N + col]; 9: vC[row * N + col] = sum; 10: } 11: } 12: } We notice that each loop iteration is independent from each other and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you included at the top of your file #include <amp.h> 13: void MatrixMultiplySimple(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W) 14: { 15: concurrency::array_view<const float,2> a(M, W, vA); 16: concurrency::array_view<const float,2> b(W, N, vB); 17: concurrency::array_view<concurrency::writeonly<float>,2> c(M, N, vC); 18: concurrency::parallel_for_each(c.grid, 19: [=](concurrency::index<2> idx) restrict(direct3d) { 20: int row = idx[0]; int col = idx[1]; 21: float sum = 0.0f; 22: for(int i = 0; i < W; i++) 23: sum += a(row, i) * b(i, col); 24: c[idx] = sum; 25: }); 26: } First a visual comparison, just for fun: The beginning and end is the same, i.e. lines 0,1,12 are identical to lines 13,14,26. The double nested loop (lines 2,3,4,5 and 10,11) has been transformed into a parallel_for_each call (18,19,20 and 25). The core algorithm (lines 6,7,8,9) is essentially the same (lines 21,22,23,24). We have extra lines in the C++ AMP version (15,16,17). Now let's dig in deeper. Using array_view and extent When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(direct3d) function. So we had a choice of copying the data to the the concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h – here we used the latter (lines 15,16,17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18) and the data will get copied on demand for us to the accelerator. Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one: extent<2> e(M, W); array_view<const float, 2> a(e, vA); In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two dimensional, i.e a matrix. When we were using a vector object we could not do that, and instead we had to track via additional unrelated variables the dimensions of the matrix (i.e. 
with the integers M and W) – aren't you loving C++ AMP already? Note that the const before the float when creating a and b, will result in the underling data only being copied to the accelerator and not be copied back – a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, only copy it back. The kernel dispatch On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel. The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a 2-dimensional manner we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (a grid) since the array_view object helpfully exposes that as a property. Note that instead of c.grid we could have written grid<2>(c.extent) or grid<2>(extent<2>(M, N)) – the result is the same in that we have specified M*N threads to execute our lambda. The second argument is a restrict(direct3d) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional and as covered in the previous posts it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view. The kernel itself The lambda body (lines 20-24), or as some may say, the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads and we can use those threads to index into the two input array_views (a,b) and write results into the output array_view ( c ). The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a,b,c versus how we index into vA,vB,vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner). I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms to break down the index into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may i,j,k or x,y,z, or M,N or whatever. Also note that we could have written line 24 as: c(idx[0], idx[1])=sum  or  c(row, col)=sum instead of the simpler c[idx]=sum Targeting a specific accelerator Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on. 
So there would be some code like this anywhere before line 18: vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators(); accelerator acc = accs[0]; …and then we would modify line 18 so we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become: concurrency::parallel_for_each(acc.default_view, c.grid, ...and the rest of your code remains the same… how simple is that? Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • What is the best way to render a 2d game map?

    - by Deukalion
    I know efficiency is key in game programming and I've had some experience with rendering a "map" earlier, but probably not in the best of ways. For a 2D top-down game: (simply render the textures/tiles of the world, nothing else) Say you have a map of 1000x1000 (tiles or whatever). If a tile isn't in the view of the camera, it shouldn't be rendered - it's that simple. No need to render a tile that won't be seen. But since you have 1000x1000 objects in your map, or perhaps fewer, you probably don't want to loop through all 1000*1000 tiles just to see if they're supposed to be rendered or not. Question: What is the best way to implement this efficiency, so that it can quickly determine which tiles are supposed to be rendered? Also, I'm not building my game around tiles rendered with a SpriteBatch, so there are no rectangles; the shapes can be different sizes and have multiple points, say a curved object of 10 points and a texture inside that shape. Question: How do you determine whether this kind of object is "inside" the view of the camera? It's easy with a 48x48 rectangle - just see if its X+Width or Y+Height is in the view of the camera. It's different with multiple points. Simply put, how do you manage the code and the data efficiently so you don't have to loop through a million objects at the same time?
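
    For the tile case, a grid makes this cheap: the camera rectangle can be converted straight into a range of tile indices, so only the tiles that can possibly be on screen are visited. A minimal C# sketch, assuming square tiles of size tileSize and an XNA-style Rectangle for the camera (all of the names here are made up for the example):

        using System;
        using Microsoft.Xna.Framework;   // Rectangle - assumed, since the question mentions SpriteBatch

        static class TileCulling
        {
            // Calls drawTile(col, row) only for tiles the camera rectangle overlaps.
            public static void DrawVisibleTiles(Rectangle camera, int tileSize,
                                                int mapWidth, int mapHeight,
                                                Action<int, int> drawTile)
            {
                int firstCol = Math.Max(0, camera.X / tileSize);
                int firstRow = Math.Max(0, camera.Y / tileSize);
                int lastCol  = Math.Min(mapWidth  - 1, (camera.X + camera.Width)  / tileSize);
                int lastRow  = Math.Min(mapHeight - 1, (camera.Y + camera.Height) / tileSize);

                for (int row = firstRow; row <= lastRow; row++)
                    for (int col = firstCol; col <= lastCol; col++)
                        drawTile(col, row);   // at most (visible width x height) calls, not 1000x1000
            }
        }

    For arbitrary shapes, the usual approach is to precompute an axis-aligned bounding box per shape (the min/max of its points) and test that box against the camera, ideally through a spatial structure such as a uniform grid or quadtree so only nearby shapes are even considered.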

    Read the article

  • How to make Box2D bodies automatically return to a initial rotation

    - by sm4
    I have two long Box2D bodies that can collide while moving one of them around with a MouseJoint. I want them to try to hold their position and rotation. The Blue body is moved using a MouseJoint (yellow) towards the Red body. The Red body has another MouseJoint - Blue can push Red, but Red will try to return to the start point thanks to the MouseJoint - this works just fine. Both bodies correctly rotate about the middle. This is still as I want. I change the MouseJoint to move the Blue away. What I need is for both bodies to return to their initial rotation (green arrows - the desired positions and rotations). Is there anything in Box2D that could do this automatically? The MouseJoint does that nicely for position. I need it in the AndEngine (Java, Android) port, but any Box2D solution is fine. EDIT: By automatically I mean having something I can add to the object "Paddle" without the need to change the game loop. I want to encapsulate this functionality in the object itself. I already have an object Paddle that has its own UpdateHandler which is being called from the game loop. What would be much nicer is to attach some kind of "spring" joint to both the left and right sides of the paddle that would automatically level the paddle. I will be exploring this option soon.
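
    One way to think about the "return to initial rotation" behaviour is as an angular spring applied from the paddle's own UpdateHandler. Here is a minimal, framework-agnostic C# sketch of just the torque calculation; reading the body's angle and angular velocity and applying the torque would go through whichever Box2D port is in use, so those calls are assumptions and not shown:

        static class AngularSpring
        {
            // Proportional-derivative style restoring torque: pulls the body back toward
            // targetAngle while damping the spin so it settles instead of oscillating forever.
            public static float RestoringTorque(float angle, float angularVelocity,
                                                float targetAngle, float stiffness, float damping)
            {
                return -stiffness * (angle - targetAngle) - damping * angularVelocity;
            }
        }

    Each update, the paddle would compute this from its body's current angle and angular velocity, apply it as a torque, and tune stiffness and damping until it levels out the way the MouseJoint already does for position.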

    Read the article

  • Cannot access the filesystems using LiveCD (LVM2,EXT2)

    - by ftkg
    I have to restore the /etc/passwd file I accidentally renamed in my Ubuntu Server, so I booted the machine using a LiveCD. Problem is, the system filesystem does not appear in Nautilus, under 'Devices'. Am I missing anything? ubuntu@ubuntu:~$ sudo fdisk -l Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000956dc Device Boot Start End Blocks Id System /dev/sda1 * 2048 499711 248832 83 Linux /dev/sda2 501758 625141759 312320001 5 Extended /dev/sda5 501760 625141759 312320000 8e Linux LVM ubuntu@ubuntu:~$ mount /cow on / type overlayfs (rw) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) /dev/sr0 on /cdrom type iso9660 (ro,noatime) /dev/loop0 on /rofs type squashfs (ro,noatime) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) tmpfs on /tmp type tmpfs (rw,nosuid,nodev) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) gvfs-fuse-daemon on /home/ubuntu/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=ubuntu) ubuntu@ubuntu:~$ sudo blkid /dev/loop0: TYPE="squashfs" /dev/sda1: UUID="aad69790-198d-45bc-9ccd-e4cba7456914" TYPE="ext2" /dev/sda5: UUID="wbIDX7-RILL-VtFT-gX15-N1GJ-Yyfg-V8Oe5m" TYPE="LVM2_member" /dev/sr0: LABEL="Ubuntu 12.04 LTS i386" TYPE="iso9660" ubuntu@ubuntu:~$ cat /etc/fstab overlayfs / overlayfs rw 0 0 tmpfs /tmp tmpfs nosuid,nodev 0 0

    Read the article

  • Circle to Circle collision, checking each circle against all others

    - by user14861
    I'm currently coding a little circle to circle collision demo but I've got a little stuck. I think I currently have the code to detect collision but I'm not sure how to loop through my list of circles and check them off against one another. Collision check code: public static Vector2 GetIntersectionDepth(Circle a, Circle b) { float xValue = a.Center.X - b.Center.X; float yValue = a.Center.Y - b.Center.Y; Vector2 depth = Vector2.Zero; float distance = Vector2.Distance(a.Center, b.Center); if (a.Radius + b.Radius > distance) { float result = (a.Radius + b.Radius) - distance; depth.X = (float)Math.Cos(result); depth.Y = (float)Math.Sin(result); } return depth; } Loop through code: Vector2 depth = Vector2.Zero; for (int i = 0; i < bounds.Count; i++) for (int j = i+1; j < bounds.Count; j++) { depth = CircleToCircleIntersection.GetIntersectionDepth(bounds[i], bounds[j]); } Clearly I'm a complete newbie, wondering if anyone can give any suggestions or point out my errors, thanks in advance. :)
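
    For what it's worth, a more common way to express the depth as a vector is to scale the collision normal by the overlap amount, rather than taking the cosine and sine of it - a minimal C# sketch assuming the same Circle type (Center, Radius) and XNA's Vector2:

        public static Vector2 GetIntersectionDepth(Circle a, Circle b)
        {
            Vector2 delta = a.Center - b.Center;
            float distance = delta.Length();
            float overlap = (a.Radius + b.Radius) - distance;

            if (overlap <= 0f || distance == 0f)
                return Vector2.Zero;               // not touching, or exactly concentric

            // Direction from b towards a, scaled by how deeply the circles overlap.
            return delta * (overlap / distance);
        }

    The pairwise loop shown above (j starting at i+1) already visits each pair exactly once; the remaining step is to do something with each non-zero depth (separate the circles, trigger a response, and so on) rather than overwriting the variable on the next iteration.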

    Read the article

  • MySQL - Configuration

    - by Stuart Brierley
    Having previously detailed how to install MySQL Server, the next step is configuring MySQL. The MySQL configuration wizard can either be run immediately following installation from the MySQL installation wizard, or manually from the Start Menu. Following the splash screen you can then choose whether to run a detailed or standard configuration. The detailed configuration allows you to create the optimal configuration for your specific machine, whereas the standard configuration creates a general configuration that can then be manually tuned. I chose detailed. You are then asked to choose the type of server instance that is being configured. In this case it is a developer machine. Following this you are asked to choose the type of database usage that you expect on the server. I opted for multifunctional. You must then specify the location of the InnoDB tablespace. Next, specify the number of concurrent connections to the server. Now you must configure the networking options. I left Strict mode enabled as this is the recommended option, but I disabled TCP/IP networking as I wanted to restrict this MySQL installation to the local machine only. Set the character set that is best suited to your use - for me this was the default standard character set. Next up is the option to run MySQL as a service and whether or not to include the MySQL directories in the Windows PATH. I kept the "install as a Windows service" option enabled, but unchecked the "Launch MySQL server automatically" option. This is because I only wanted MySQL running when I specifically want to use it. I also enabled the "include in Windows PATH" option. You can then change the security settings for the MySQL installation. I opted to change the root password, disable root access from remote machines and disable anonymous access. You are now ready to execute the configuration. Once completed you should hopefully see the completed screen with lots of nice ticks against the various configuration tasks.

    Read the article

  • How to write your unit tests to switch between NUnit and MSTest

    - by Justin Jones
    On my current project I found it useful to use both NUnit and MSTest for unit testing. When using ReSharper for running unit tests, it simply works better with NUnit, and on large scale projects NUnit tends to run faster. We would have just used NUnit for everything, but MSTest gave us a few bonuses out of the box that were hard to pass up: namely code coverage (without having to shell out thousands of extra dollars for the privilege) and tests integrated into the build process. I'm one of those guys who wants the build to fail if the unit tests don't pass. If they don't pass, there's no point in sending that build on to QA. So making the build work with MSTest is easiest if you just create a unit test project in your solution. This adds the right references and project type Guids in the project file so that everything just automagically works. Then (using NuGet of course) you add in NUnit. At the top of your test file, remove the using statements that refer to MSTest and replace them with the following: #if NUNIT using NUnit.Framework; #else using TestFixture = Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute; using Test = Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute; using TestFixtureSetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute; using SetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute; using Microsoft.VisualStudio.TestTools.UnitTesting; #endif Basically I'm taking the NUnit naming conventions and redirecting them to MSTest. You can go the other way, of course. I only chose this direction because I had already written the tests as NUnit tests. NUnit and MSTest provide largely the same functionality with slightly differing class names. There are few actual differences between them, and I have not run into them on this project so far. To run the tests as NUnit tests, simply open up the project properties tab and add the compiler directive NUNIT. Remove it, and you're back in MSTest land.
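
    For example, with those aliases in place a test written against the NUnit attribute names compiles under either framework. A quick hypothetical sketch (the test itself is just a placeholder; Assert.AreEqual exists in both frameworks):

        [TestFixture]
        public class CalculatorTests
        {
            [SetUp]
            public void Init()
            {
                // runs before each test - NUnit's SetUp, or MSTest's TestInitialize via the alias
            }

            [Test]
            public void Add_ReturnsSum()
            {
                Assert.AreEqual(4, 2 + 2);
            }
        }

    Wrapped in the same conditional using block shown above, this file needs no other changes when you add or remove the NUNIT compilation symbol.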

    Read the article

  • Finding shapes in 2D Array, then optimising

    - by assemblism
    I'm new so I can't post an image, but below is a diagram for a game I am working on, moving bricks into patterns. I currently have my code checking for rotated instances of a "T" shape of any colour. The X and O blocks would be the same colour, and my last batch of code would find the "T" shape where the X's are, but what I wanted was more like the second diagram, with two "T"s:

    Current result    Desired Result
    [X][O][O]         [1][1][1]
    [X][X][_]         [2][1][_]
    [X][O][_]         [2][2][_]
    [O][_][_]         [2][_][_]

    My code loops through x/y, marks blocks as used, rotates the shape, repeats, changes colour, repeats. I have started trying to fix this checking with great trepidation. The current idea is to: (1) loop through the grid and note all pattern occurrences (NOT marking blocks as used), putting these into an array; (2) loop through the grid again, this time noting which blocks are occupied by which patterns, and therefore which are occupied by multiple patterns; (3) loop through the grid again, this time noting which patterns obstruct which patterns. That much feels right... What do I do now? I think I would have to try various combinations of conflicting shapes, starting with those that obstruct the most other patterns first. How do I approach this one? Use the rationale that says: I have 3 conflicting shapes occupying 8 blocks, and the shapes are 4 blocks each, therefore I can only have a maximum of two shapes. (I also intend to incorporate other shapes, and there will probably be score weighting which will need to be considered when going through the conflicting shapes, but that can be another day.) I don't think it's a bin packing problem, but I'm not sure what to look for. Hope that makes sense, thanks for your help.
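
    To make the "which blocks are occupied by which patterns" step concrete, here is a small hypothetical C# sketch - it assumes each found occurrence is represented as a list of the (x, y) cells it covers, which is purely an assumption for the example:

        using System.Collections.Generic;
        using System.Linq;

        static class PatternConflicts
        {
            // Returns only the cells claimed by two or more occurrences, with the
            // indexes of the occurrences that want each cell.
            public static Dictionary<(int x, int y), List<int>> FindContestedCells(
                List<List<(int x, int y)>> occurrences)
            {
                var claims = new Dictionary<(int x, int y), List<int>>();
                for (int shape = 0; shape < occurrences.Count; shape++)
                {
                    foreach (var cell in occurrences[shape])
                    {
                        if (!claims.TryGetValue(cell, out var list))
                            claims[cell] = list = new List<int>();
                        list.Add(shape);   // remember which occurrences want this cell
                    }
                }
                return claims.Where(kv => kv.Value.Count > 1)
                             .ToDictionary(kv => kv.Key, kv => kv.Value);
            }
        }

    Occurrences whose cells never appear in that dictionary can be accepted outright; only the ones sharing contested cells need the combination search described above.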

    Read the article

  • HTC Android device mounted as USB drive is read-only unless I'm root

    - by Ian Dickinson
    When I connect my HTC Incredible S to my Ubuntu 10.10 system as a USB drive, the device seems to mount OK, but is read-only unless I access it as root. For example, if I run nautilus, I can't drag and drop files to the SD-card in the phone, but if I run sudo nautilus I can. I have USB debug support set on the phone (Applications > Development > USB debugging) and I have added a rule for the device in /etc/udev/rules.d/51-android.rules on my Ubuntu system. Any suggestions as to how I can mount the drive so that I can copy content to the SD card without needing to sudo? Update Following advice from waltinator, I added the following line to my /etc/fstab: UUID=3537-3834 /media/usb1 vfat rw,user,noexec,nodev,nosuid,noauto However, the Android device is still being auto-mounted on /media/usb1 with uid and gid root. Update 2 syslog output: Nov 21 23:38:40 rowan-15 usbmount[4352]: executing command: mount -tvfat -osync,noexec,nodev,noatime,nodiratime /dev/sdd1 /media/usb1 Nov 21 23:38:40 rowan-15 usbmount[4352]: executing command: run-parts /etc/usbmount/mount.d

    Read the article

  • How to retain secondary hard drive mounts at reboot and keep shares?

    - by Tom
    I'm running Ubuntu 12.04. A second hard drive connected to this computer does not mount when the computer boots. Additionally, I have set up the drive to be shared but the share is not retained, the share is lost after each boot. My main system drive and a removable drive mount OK and shares remain between boots. Additional information follows: D2Linux sda1 is the secondary hard drive L-Freeagent sdc1 is the removeable drive Here is the contents of fstab immediately after booting (D2Linux /dev/sda1 not yet mounted): '# /etc/fstab: static file system information. ' '# ' '# Use 'blkid' to print the universally unique identifier for a ' '# device; this may be used with UUID= as a more robust way to name devices ' '# that works even if disks are added and removed. See fstab(5). ' '# ' '# ' proc /proc proc nodev,noexec,nosuid 0 0 '# / was on /dev/sdb1 during installation ' UUID=43d29a82-66b3-40f3-91ed-735a27a60004 / ext4 errors=remount-ro 0 1 '# swap was on /dev/sdb5 during installation UUID=cf8e3351-11d0-487a-8a6e-e499c2e88a10 none swap sw ' 0 0 Here is the output of mount with all drives mounted (I did not restore the share): /dev/sdb1 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) gvfs-fuse-daemon on /home/tom/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tom) /dev/sdc1 on /media/L-Freeagent type ext4 (rw,nosuid,nodev,uhelper=udisks) /dev/sda1 on /media/D2Linux type ext4 (rw,nosuid,nodev,uhelper=udisks) Thank you!

    Read the article

  • Looking for someone to point me in the right direction. I want to learn how to use hosted servers

    - by Leisure
    TL;DR: I want a Java program to run on a server, I want the server to forward a particular port from an external to an internal IP, and I want to store a few files on the server. Guides please. So I made a hack-job Java program that acts as a server for my Android application. It stores data in text files and HTML files, uploads them via FTP to my webhost, and manages socket connections (using port forwarding) with any phones connected. Right now I'm running it in NetBeans on my home computer. I know that it will probably slow down or crash once about 50 phones are connected at once. Is there any way I can run this program on a server with high bandwidth? Can someone please find me a guide for that? I'm a noob and don't know where to start looking. I seriously don't know anything about renting or using servers - I need a nice guide, and recommendations. My requirements for the server: can handle about 2k socket connections at once; can run my Java code and store my txt files; can give me a port and an IP address so TCP/IP clients can be connected. My budget: $50 CAD per month. Please, someone set my ship sailing in the right direction; I really don't know where to look for resources.

    Read the article

  • Docker vs ESXi for Startup Projects - Deploying Code for Dev Testing

    - by JasonG
    Why hello there little programmer dude! I have a question for you and all of your experience and knowledge. I have an ESXi whitebox that I built which is an 8 dude that sits in the corner. I made a mistake recently and took the key that had ESXi, formatted it and used it for something else. No big deal, because the last project I worked on had stalled out. I'm about to pick up another project and now I need to spin up a whole bunch of stuff for CI, QA + DB, ticket tracker, wikis, etc. I've been hearing a lot about Docker recently and, as this is just a consumer-grade machine, I'm wondering if it may make more sense for me to use Docker on CoreOS and then put everything there - Bamboo or Hudson, JIRA, Confluence, Postgres for the tools to use, then a QA env. I can't really seem to find any documents that directly compare traditional VM infrastructure vs. Docker solutions, and I'm wondering if it is fair to compare. Is there any reason why CoreOS w/ containers would be a strictly worse solution? Or do you have any insight into why I may want to stick with ESXi? I've looked on multiple occasions and can't find a good reason not to. I'm not going to run a production env on the server, so I don't need to have HA when updating security patches or the OS, for example, where ESXi would allow me to restart one VM at a time. I can just shut the thing down and bring it back up if I need a reboot, no problem. So what's up with this container stuff? Is it a fair replacement for ESXi? I'm guessing the Atlassian products would run much better and my RAM would go a lot farther using Docker. Probably the CPU would run much cooler too, and my expensive HDD space would be better utilized.

    Read the article

  • Unable to start VMWare Workstation after upgrade to 13.04

    - by pst007x
    After upgrading to 13.04 I am unable to start VMWorkstation. I get the following message: Before you can run VMware, several modules must be compiled and loaded into the running kernel. Kernel Headers 3.8.0-19-generic Kernel headers for version 3.8.0-19-generic were not found. If you have installed them in a non-default path you can specify the path below. Does anyone have any idea what to do next? Ubuntu 13.04 64bit If I direct the path to: /usr/src/linux-headers-3.8.0-19-generic I get the following message: C header files matching your running kernel were not found. Thanks Additional: As suggested I run this in terminal: cd /lib/modules/$(uname -r)/build/include/linux sudo ln -s ../generated/utsrelease.h sudo ln -s ../generated/autoconf.h sudo ln -s ../generated/uapi/linux/version.h However, now I get the following: Before you can run VMware, several modules must be compiled and loaded into the kernel CANCEL / INSTALL I INSTALL, the window closes and nothing happens.... Any ideas? ADDITIONAL: I installed this: sudo apt-get install open-vm-tools open-vm-tools-dev open-vm-dkms open-vm-toolbox open-vm-tools-dev And it all launched... Many thanks for the suggestions and help... This is what I love about Ubuntu... it has a great helpful community... ! Note: Also found this which may help others too: HERE ADDITIONAL ERROR: Could not open /dev/vmmon: Is a directory. Please make sure that the kernel module `vmmon' is loaded. Failed to initialize monitor device. Monitor settings all greyed out RESOLUTION: Re-installation of Nvidia Drivers

    Read the article

  • Why does running this program on 11.10 give a 'GLIBC_2.15 not found' error?

    - by RafLance
    I am trying to install Absinthe 2.0.4 on Ubuntu 11.10 on a netbook. When I try to run the install file, this keeps on happening: rafael@RafLaptop:~/Desktop/absinthe-linux-2.0.4$ ./absinthe.x86 ./absinthe.x86: /lib/i386-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by ./absinthe.x86) Do I need to upgrade GLIBC? If so, how do I do that? Since I'm on a netbook I can't use a LiveCD, so I wanted to know if there was a way I could fix this issue without reinstalling my whole OS. Any explanation of what GLIBC is exactly would be great too, since this is a learning experience for me. I know that GLIBC is part of libc.so.6, so I tried to run sudo apt-get install libc.so.6 but was told that it was up to date. But GLIBC isn't? I hope this articulates my problem well; if there are any pieces of missing info or questions to clarify my question, please let me know! ~-~ EDIT/UPDATE: After some help in the Ask Ubuntu chat room from user izx, I have gathered the following information/understanding: I need to run this program on Ubuntu 12.04 or recompile it from source; upgrading libc on Oneiric to 2.15, while possible, is not an easy task and is not officially supported.

    Read the article

  • DVD-drive detected by BIOS and UEFI but not by Ubuntu 12.04

    - by user97591
    I have built a new tower containing: ASRock Z77 Extreme4 (motherboard), Core i7 (processor), Hitachi LGE-DMGH 12 L (B) GH15F SATA (SATA DVD drive). The problem is that the BIOS and UEFI have no trouble detecting the DVD drive, but it is not detected by Ubuntu 12.04. It is not present in /etc/fstab or /etc/mtab. Below are the contents of fstab and mtab. Thanks for all your help. fstab: # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda1 during installation UUID=f007bc60-da4c-4f36-99a7-77083c5f3654 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=5d59949c-aed9-442a-877d-5abf1ccaadc3 none swap sw 0 0 mtab: /dev/sda1 / ext4 rw,errors=remount-ro 0 0 proc /proc proc rw,noexec,nosuid,nodev 0 0 sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0 none /sys/fs/fuse/connections fusectl rw 0 0 none /sys/kernel/debug debugfs rw 0 0 none /sys/kernel/security securityfs rw 0 0 udev /dev devtmpfs rw,mode=0755 0 0 devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0 tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0 none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0 none /run/shm tmpfs rw,nosuid,nodev 0 0 binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0 gvfs-fuse-daemon /home/tom/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=tom 0 0

    Read the article

  • Non use of persisted data – Part deux

    - by Dave Ballantyne
    In my last blog I showed how persisted data may not be used if you have used the base data in an include on an index. That wasn't the only problem I've had that showed the same symptom. Using the same code as before, I was executing something similar to the below: select BillToAddressID,SOD.SalesOrderDetailID,SOH.CleanedGuid from sales.salesorderheader SOH join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID But, due to a distribution error in statistics, I found it necessary to use a table hint. In this case, I wanted to force a loop join: select BillToAddressID,SOD.SalesOrderDetailID,SOH.CleanedGuid from sales.salesorderheader SOH inner loop join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID But, being the diligent TSQL developer that I am, looking at the execution plan I noticed that the 'compute scalar' operator was again calling the function. Again, Profiler is a more graphic way to view this... All very odd: just because I've forced a join, which has NOTHING to do with my persisted data, something is causing the data to be re-evaluated. I'm not sure there is any easy fix you can make to the TSQL here, but again it's a lesson learned (or rather reinforced): examine the execution plan of every query you write to ensure that it is operating as you thought it would.

    Read the article

  • Should you create a class within a method?

    - by Amndeep7
    I have made a program in Java that is an implementation of this project: http://nifty.stanford.edu/2009/stone-random-art/sml/index.html. Essentially, you create a mathematical expression and, using the pixel coordinate as input, make a picture. After I initially implemented this in serial, I then implemented it in parallel, because if the picture size is too large or the mathematical expression is too complex (especially considering that I built the expression recursively), it takes a really long time. During this process, I realized that I needed two classes that implemented the Runnable interface, as I had to pass parameters to the run method, which you aren't allowed to do directly. One of these classes ended up being a medium-sized static inner class (not large enough to make an independent class file for it, though). The other, though, just needed a few parameters to determine some indexes and the size of the for loop that I was making run in parallel - here it is: class DataConversionRunnable implements Runnable { int jj, kk, w; DataConversionRunnable(int column, int matrix, int wid) { jj = column; kk = matrix; w = wid; } public void run() { for(int i = 0; i < w; i++) colorvals[kk][jj][i] = (int) ((raw[kk][jj][i] + 1.0) * 255 / 2.0); increaseCounter(); } } My question is: should I make it a static inner class, or can I just create it in a method? What is the general programming convention followed in this case?
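    As a concrete illustration of the trade-off being asked about, the sketch below contrasts a static nested class with a local class declared inside the method that launches the work, plus the lambda shortcut that often replaces a tiny parameter-carrying Runnable entirely. The class names, array-free bodies, and pool sizes are made up for the example and are not taken from the original program; the usual convention is that a small helper used by exactly one method can live in that method (or be a lambda), while anything reused or non-trivial reads better as a named static nested class.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class RunnableScopeDemo {
        // Option 1: a named static nested class - easy to reuse and to unit test.
        static class ColumnTask implements Runnable {
            private final int column;
            ColumnTask(int column) { this.column = column; }
            public void run() {
                System.out.println("processing column " + column);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(4);

            // Option 2: a local class, visible only inside this method.
            class RowTask implements Runnable {
                private final int row;
                RowTask(int row) { this.row = row; }
                public void run() {
                    System.out.println("processing row " + row);
                }
            }

            for (int i = 0; i < 3; i++) {
                pool.execute(new ColumnTask(i));
                pool.execute(new RowTask(i));
                final int idx = i;
                // Option 3: Runnable has a single method, so a lambda can carry the
                // captured parameter without declaring a class at all.
                pool.execute(() -> System.out.println("processing cell " + idx));
            }

            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
        }
    }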

    Read the article

  • A short but intense GCC Gathering in London

    - by user817571
    About one week ago I joined many long-time GCC friends and acquaintances in London for a gathering organized by Google (in particular, I guess Diego and Ian should be thanked). Only a weekend, and I wasn't able to attend on Sunday morning, but it was a very good occasion to raise some issues in a very relaxed way, in particular those at the border between areas of competence, which are the most difficult to discuss during normal work days. If you are interested in a general overview and some notes, this is a good link: http://gcc.gnu.org/wiki/GCCGathering2011 As you may easily guess, the third topic is mine, which I managed to have up quite early on Friday morning thanks to the votes of some good friends like Dodji (the ordering of the topics resulted from democratic voting on Friday evening!). I learned a lot from the discussion: for example, that the new C++11 'final' should certainly be exploited more in the C++ front-end; the various reasons why devirtualization can be quite tricky (but I'm really confident that Martin and Honza are going to make good progress, also building on a set of short testcases which I promised to collect); and that, as explained by Ian, the gold linker already implements the nice --icf (Identical Code Folding) facility, which some friends of mine are definitely going to like (however, see: http://sourceware.org/bugzilla/show_bug.cgi?id=12919). I also enjoyed the observations made by Lawrence, who remarked that in C++11 we are going to see more pointer iterations implicitly produced by the new range-based for loop, and we really want to make sure the loop optimizers are able to deal with those as well as with loops explicitly using a counter. All in all, I really hope we are going to do it again!

    Read the article
