Search Results

Search found 926 results on 38 pages for 'ignoring'.


  • How to configure a NSPopupButton for displaying multiple values in a TableView?

    - by jekmac
    Hi there! I'm using two entities A and B with a many-to-many relationship. Let's say I have an entity A with attribute aAttrib and a to-many relationship aRelat to another entity B, which has an attribute bAttrib and a to-many relationship bRelat back to entity A. Now I am building an interface with two tables, one for entity A and another for entity B. The table for entity B has two columns, one for bAttrib and one for the relationship aRelat. The aRelat column should be an NSPopupButtonCell that displays multiple aAttrib values. I'd like to set all the bindings in Interface Builder. Table column bindings: -- I have two NSArrayControllers, one for each entity: Object Controller Mode: Entity; Array Controller Bindings: Parameters, Managed Object Context bound to File's Owner -- One table column with an NSPopUpButtonCell: Table Column Bindings: Content bound to Entity A with ControllerKey arrangedObjects; Content Values bound to Entity A with ModelKeyPath aAttrib; Selected Object bound to Entity B with ModelKeyPath bRelat. I know that this configuration doesn't allow setting multiple values, but I don't know how to do it right. I'm getting the following message: HIToolbox: ignoring exception 'Unacceptable type of value for to-many relationship: property = "bRelat"; desired type = NSSet; given type = NSCFString; value = testValue.' that raised inside Carbon event dispatch... Does anyone have any idea?

    Read the article

  • Spinner setOnItemSelectedListener is not called

    - by Gabrielle
    I have a strange problem. I need to do something when an item from spinner is selected. Here is my code : language = (Spinner) findViewById(R.id.current_language_text); ArrayAdapter adapter = new ArrayAdapter(this, com.Orange.R.layout.my_spinner_textview, languages); adapter.setDropDownViewResource(com.Orange.R.layout.multiline_spinner_dropdown_item); language.setAdapter(adapter); language.setSelection(Integer.valueOf(language_id) - 1); language.setOnItemSelectedListener(new OnItemSelectedListener() { @Override public void onItemSelected(AdapterView<?> parentView, View selectedItemView, int position, long id) { System.out.println("position "+position); Toast.makeText(Settings.this, "Hello Toast",Toast.LENGTH_SHORT).show(); } @Override public void onNothingSelected(AdapterView<?> parentView) { // your code here } }); The problem is that onItemSelectedListener is not called. I put System.out.println in onItemSelected() but I don't get it in LogCat. I tried with Toast, and I get the same, it doesn't appear. Every time I select an item from spinner, in LogCat I get this warning : Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@2b1dabd0 Any idea why onItemSelectedListener is not called ?

    Read the article

  • How to refactor models without breaking WPF views?

    - by Tim Murphy
    I've just started learning WPF and like the power of the databinding it provides; that is, ignoring the complexity and confusion for a newcomer. My concern is: how do you safely refactor your models/viewmodels without breaking the views that use them? Take the following snippet of a view for example: <Grid> <ListView ItemsSource="{Binding Contacts}"> <ListView.View> <GridView> <GridViewColumn Header="First Name" DisplayMemberBinding="{Binding Path=FirstName}"/> <GridViewColumn Header="Last Name" DisplayMemberBinding="{Binding Path=LastName}"/> <GridViewColumn Header="DOB" DisplayMemberBinding="{Binding Path=DateOfBirth}"/> <GridViewColumn Header="# Pets" DisplayMemberBinding="{Binding Path=NumberOfPets}"/> <GridViewColumn Header="Male" DisplayMemberBinding="{Binding Path=IsMale}"/> </GridView> </ListView.View> </ListView> </Grid> The list is bound to the Contacts property, IList(Of Contact), of the window's DataSource, and each property of a Contact is bound to a GridViewColumn. Now if I rename the NumberOfPets property in the Contact model to PetCount, the view will break. How do I prevent the view from breaking?

    Read the article

  • Adding a column to a model at runtime (without additional tables) in rails

    - by Marek
    I'm trying to give admins of my web application the ability to add some new fields to a model. The model is called Artwork and I would like to add, for instance, a test_column column at runtime. I'm just testing, so I added a simple link to do it; it will of course be parameterized later. I managed to do it through migrations: def test_migration_create Artwork.add_column :test_column, :integer flash[:notice] = "Added Column test_column to artworks" redirect_to :action => 'index' end def test_migration_delete Artwork.remove_column :test_column flash[:notice] = "Removed column test_column from artworks" redirect_to :action => 'index' end It works; the column gets added/removed to/from the database without issues. I'm using active_scaffold at the moment, so I get the test_column field in the form without adding anything. When I submit a create or an update, however, test_column does not get updated and stays empty. Inspecting the parameters, I can see: Parameters: {"commit"=>"Update", "authenticity_token"=>"37Bo5pT2jeoXtyY1HgkEdIhglhz8iQL0i3XAx7vu9H4=", "id"=>"62", "record"=>{"number"=>"test_artwork", "author"=>"", "title"=>"Opera di Test", "test_column"=>"TEEST", "year"=>"", "description"=>""}} The test_column parameter is passed correctly, so why does Active Record keep ignoring it? I tried restarting the server too, without success. I'm using Ruby 1.8.7, Rails 2.3.5, and Mongrel with an SQLite3 database. Thanks

    Read the article

  • change postgres date format

    - by Jay
    Is there a way to change the default format of a date in Postgres? Normally when I query a Postgres database, dates come out as yyyy-mm-dd hh:mm:ss+tz, like 2011-02-21 11:30:00-05. But in one particular program the dates come out as yyyy-mm-dd hh:mm:ss.s; that is, there is no time zone and it shows tenths of a second. Apparently something is changing the default date format, but I don't know what or where. I don't think it's a server-side configuration parameter, because I can access the same database with a different program and I get the format with the time zone. I care because it appears to be ignoring my "set timezone" calls in addition to changing the format. All times come out EST. Additional info: if I write "select somedate from sometable" I get the "no timezone" format. But if I write "select to_char(somedate::timestamptz, 'yyyy-mm-dd hh24:mi:ss-tz')" then time zones work as I would expect. This really sounds to me like something is setting all timestamps to implicitly be "to_char(date::timestamp, 'yyyy-mm-dd hh24:mi:ss.m')". But I can't find anything in the documentation about how I would do this if I wanted to, nor can I find anything in the code that appears to do this. Though since I don't know what to look for, that doesn't prove much.
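
    One way to narrow this down is to ask the suspect program's own session what it thinks its settings are, since a client library (or its connection options) can override TimeZone and DateStyle after connecting, and many drivers also parse timestamps into native date objects and print them in their own format. A minimal sketch of that check, assuming Python with psycopg2 and a placeholder connection string:

        import psycopg2  # assumed driver; any client that lets you run raw SQL works

        conn = psycopg2.connect("dbname=mydb user=me")  # hypothetical DSN
        cur = conn.cursor()

        for setting in ("TimeZone", "DateStyle"):
            cur.execute("SHOW " + setting)
            print(setting, "=", cur.fetchone()[0])

        # If TimeZone is not what the server's postgresql.conf says, something on
        # the client side is changing it; "SET TIME ZONE ..." should then stick
        # for the rest of this session.
        cur.execute("SET TIME ZONE 'America/New_York'")
        cur.execute("SELECT now()")
        print(cur.fetchone()[0])  # printed by Python, i.e. in the driver's own format

        conn.close()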

    Read the article

  • IntelliJ inspection -- non-thrown exception

    - by skiaddict1
    This is a follow-up question to 1832203. I'm making it a new question as well, because it seems that posting an answer to a question doesn't change its position on the Java page, and so I'm worried that it won't get seen. Apologies if I've just stepped on some etiquette toes. I'm an IntelliJ newbie -- started using it two days ago and I'm absolutely head-over-heels in love! One of the things I adore is the code inspections. However... In one of my classes I often create exceptions without throwing them. If I can't turn off (or downgrade) the inspection warning for this then I can see I'm going to end up ignoring inspections on at least that file (if not the entire project), which would be a real pity. I've done a search in the inspection settings for "exception" and found nothing that relates exactly, so I turned them all off just to see, and it's still doing it (even after a rebuild... BTW, when are inspections redone? At save? At rebuild?), so I would really like some help on how to make this one into an info/typo level -- which I can then ignore. Using the free version, if that makes any difference. TIA to all those experienced IntelliJ warriors out there!

    Read the article

  • jQuery .load() call doesn't execute javascript in loaded html file

    - by Mike
    This seems to be a problem related to Safari only. I've tried Safari 4 on Mac and 3 on Windows and am still having no luck. What I'm trying to do is load an external HTML file and have the JavaScript embedded in it execute. The code I'm trying to use is this: $("#myBtn").click(function() { $("#myDiv").load("trackingCode.html"); }); trackingCode.html looks like this (simple now, but it will expand once/if I get this working): <html> <head> <title>Tracking HTML File</title> <script language="javascript" type="text/javascript"> alert("outside the jQuery ready"); $(function() { alert("inside the jQuery ready"); }); </script> </head> <body> </body> </html> I'm seeing both alert messages in IE (6 & 7) and Firefox (2 & 3). However, I am not able to see the messages in Safari (the last browser that I need to be concerned with - project requirements - please no flame wars). Any thoughts on why Safari is ignoring the JavaScript in the trackingCode.html file? Eventually I'd like to be able to pass JavaScript objects to this trackingCode.html file to be used within the jQuery ready call, but I'd like to make sure this is possible in all browsers before I go down that road. Thanks for your help!

    Read the article

  • How do I call Matlab in a script on Windows?

    - by Benjamin Oakes
    I'm working on a project that uses several languages: SQL for querying a database Perl/Ruby for quick-and-dirty processing of the data from the database and some other bookkeeping Matlab for matrix-oriented computations Various statistics languages (SAS/R/SPSS) for processing the Matlab output Each language fits its niche well and we already have a fair amount of code in each. Right now, there's a lot of manual work to run all these steps that would be much better scripted. I've already done this on Linux, and it works relatively well. On Linux: matlab -nosplash -nodesktop -r "command" or echo "command" | matlab -nosplash -nodesktop ...opens Matlab in a "command line" mode. (That is, no windows are created -- it just reads from STDIN, executes, and outputs to STDOUT/STDERR.) My problem is that on Windows (XP and 7), this same code opens up a window and doesn't read from / write to the command line. It just stares me blankly in the face, totally ignoring STDIN and STDOUT. How can I script running Matlab commands on Windows? I basically want something that will do: ruby database_query.rb perl legacy_code.pl ruby other_stuff.rb matlab processing_step_1.m matlab processing_step_2.m # etc, etc. I've found out that Matlab has an -automation flag on Windows to start an "automation server". That sounds like overkill for my purposes, and I'd like something that works on both platforms. What options do I have for automating Matlab in this workflow?
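
    One hedged approach is to drive the whole pipeline from a single script and, on Windows, launch MATLAB with switches that make it block and log instead of sitting in a window; a rough sketch in Python, where the -wait, -r and -logfile switches are what I recall from the Windows MATLAB launcher documentation, so verify them against your installed version:

        import subprocess
        import sys

        steps = [
            ["ruby", "database_query.rb"],
            ["perl", "legacy_code.pl"],
            ["ruby", "other_stuff.rb"],
            # On Windows, matlab.exe normally returns immediately; -wait makes the
            # call block, -r runs a command, and -logfile captures the output that
            # would otherwise only appear in MATLAB's own window.
            ["matlab", "-wait", "-nosplash", "-nodesktop", "-logfile", "step1.log",
             "-r", "run('processing_step_1.m'); exit"],
            ["matlab", "-wait", "-nosplash", "-nodesktop", "-logfile", "step2.log",
             "-r", "run('processing_step_2.m'); exit"],
        ]

        for cmd in steps:
            print("running:", " ".join(cmd))
            if subprocess.call(cmd) != 0:
                sys.exit("step failed: %r" % (cmd,))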

    Read the article

  • On counting pairs of words that differ by one letter

    - by Quintofron
    Let us consider n words, each of length k. Those words consist of letters over an alphabet (whose cardinality is n) with a defined order. The task is to derive an O(nk) algorithm to count the number of pairs of words that differ by one position (no matter which one exactly, as long as it's only a single position). For instance, in the following set of words (n = 5, k = 4): abcd, abdd, adcb, adcd, aecd there are 5 such pairs: (abcd, abdd), (abcd, adcd), (abcd, aecd), (adcb, adcd), (adcd, aecd). So far I've managed to find an algorithm that solves a slightly easier problem: counting the number of pairs of words that differ by one GIVEN position (the i-th). In order to do this I swap the letter at the i-th position with the last letter within each word, perform a radix sort (ignoring the last position in each word - formerly the i-th position), linearly detect words whose letters at the first 1 to k-1 positions are the same, and finally count the number of occurrences of each letter at the last (originally i-th) position within each set of duplicates and calculate the desired pairs (the last part is simple). However, the algorithm above doesn't seem to be applicable to the main problem (under the O(nk) constraint) - at least not without some modifications. Any idea how to solve this?
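
    For what it's worth, if expected (rather than worst-case) O(nk) is acceptable, the per-position grouping can be done with hashing instead of radix sort: for each position i, group the words by the string obtained by deleting position i, and count pairs inside each group. A rough sketch in Python, assuming the n words are distinct; note that the string slicing used for the keys makes this O(nk^2) as written, and a rolling hash over the prefix/suffix would be needed to bring it back to expected O(nk):

        from collections import defaultdict

        def count_one_letter_pairs(words):
            # words: n distinct strings, all of length k
            k = len(words[0])
            total = 0
            for i in range(k):
                groups = defaultdict(int)  # key: the word with position i deleted
                for w in words:
                    groups[w[:i] + w[i + 1:]] += 1
                for m in groups.values():
                    # words sharing a key agree everywhere except position i, and
                    # since the words are distinct they must differ there
                    total += m * (m - 1) // 2
            return total

        print(count_one_letter_pairs(["abcd", "abdd", "adcb", "adcd", "aecd"]))  # prints 5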

    Read the article

  • Does ReleaseStringUTF do more than free memory?

    - by Bayou Bob
    Consider the following C code segments. Segment 1: char * getSomeString(JNIEnv *env, jstring jstr) { char * retString; retString = (char *)(*env)->GetStringUTFChars(env, jstr, NULL); return retString; } void useSomeString(JNIEnv *env, jobject jobj, char *mName) { jclass cl = (*env)->GetObjectClass(env, jobj); jmethodID mId = (*env)->GetMethodID(env, cl, mName, "()Ljava/lang/String;"); jstring jstr = (*env)->CallObjectMethod(env, jobj, mId, NULL); char * myString = getSomeString(env, jstr); /* ... use myString without modifying it */ free(myString); } Because myString is freed in useSomeString, I do not think I am creating a memory leak; however, I am not sure. The JNI spec specifically requires the use of ReleaseStringUTFChars. Since I am getting a C-style 'char *' pointer from GetStringUTFChars, I believe the memory reference exists on the C side and not in the Java heap, so it is not in danger of being garbage collected; however, I am not sure. I know that changing getSomeString as follows would be safer (and probably preferable). Segment 2: char * getSomeString(JNIEnv *env, jstring jstr) { char * retString; const char * intermedString; intermedString = (*env)->GetStringUTFChars(env, jstr, NULL); retString = strdup(intermedString); (*env)->ReleaseStringUTFChars(env, jstr, intermedString); return retString; } Because of our 'process' I need to build an argument for why getSomeString in Segment 2 is preferable to Segment 1. Is anyone aware of any documentation or references which detail the behavior of GetStringUTFChars and ReleaseStringUTFChars in relation to where memory is allocated, or what (if any) additional bookkeeping is done (i.e. a local reference to the Java heap being created, etc.)? What are the specific consequences of ignoring that bookkeeping? Thanks in advance.

    Read the article

  • http connection timeout issues

    - by Mark
    I'm running into an issue when I try to use HttpClient to connect to a URL. The HTTP connection takes longer to time out than it should, even after I set a connection timeout. int timeoutConnection = 5000; HttpConnectionParams.setConnectionTimeout(httpParameters, timeoutConnection); int timeoutSocket = 5000; HttpConnectionParams.setSoTimeout(httpParameters, timeoutSocket); It works perfectly most of the time. However, every once in a while the HTTP connection runs forever and ignores setConnectionTimeout, especially when the phone is connected to Wi-Fi and the phone was idling. So after the phone has been idling, the first time I try to connect, the HTTP connection ignores setConnectionTimeout and runs forever; after I cancel it and try again, it works like a charm every time. But that one time that doesn't work, it creates a thread timeout error. I tried using a different thread; that works, but I know that the thread keeps running for a long time. I understand that the Wi-Fi goes to sleep on idle, but I don't understand why it's ignoring setConnectionTimeout. If anyone can help, I'd really appreciate it.

    Read the article

  • git contributors not showing up properly in github/etc.

    - by RobH
    I'm working in a team on a big project, but when I'm doing the merges I'd like the developers' names to appear on GitHub as the authors -- currently, I'm the only one showing up since I'm merging. Context: there are 4 developers, and we're using the "integration manager" workflow on GitHub. Our "blessed" repo is under the organization, and each developer manages their public/private repo. I've been tasked with being the integration manager, so I'm doing the merges, etc. Where I could be messing up is that I'm basically working out of my rob/project.git instead of the org/project.git -- so when I do local merges I operate on my repo and then push to both my public repo and the org public repo. (Make sense?) When I push to the blessed repo nobody else shows up as an author, since all commits are coming from me -- how can I get around this? -- Also, we all forked org/project.git, yet in the network graph nobody is showing up -- did we mess this up too? I'm used to working with git solo and don't have too much experience with handling a team of devs. Merging seems like the right thing to do, but I'm being thrown off since GitHub is kind of ignoring the other contributors. If this makes no sense at all, how do you use GitHub to manage a single project across 4 developers? (Preferably the integration manager workflow; branching, I think, would solve the problem.) Thanks for any help

    Read the article

  • Using ddply() to Get Frequency of Certain IDs, by Appearance in Multiple Rows (in R)

    - by EconomiCurtis
    Goal: If the following description is hard to follow, please see the example "before" and "after" for a straightforward example. I have bartering data, with unique trade IDs, and two sides of the trade. Side1 and Side2 are baskets, lists of item IDs that represent both sides of the barter transaction. I'd like to count the frequency with which each ITEM appears in TRADES. E.g., if item "001" appeared in 3 trades, I'd have a count of 3 (ignoring how many times the item appeared in each trade). Further, I'd like to do this with the plyr ddply function. (If you're interested as to my motivation, I'm working over many hundreds of thousands of transactions and am already using a ddply to calculate several other summary statistics. I'd like to add this to the ddply I'm already using, rather than calculate it afterwards and merge it into the ddply output... sorry if that was difficult to follow.) In terms of pseudo code, I'm working off of: merge each row of Side1 and Side2, by row; get unique() appearances of each item id; apply the table() function; transpose and relabel the output from table(). Example of the structure of my data, and the output I desire. Data Example (before): df <- data.frame(TradeID = c("01","02","03","04")) df$Side1 = list(c("001","001","002"), c("002","002","003"), c("001","004"), c("001","002","003","004")) df$Side2 = list(c("001"),c("007"),c("009"),c()) Desired Output (after): df.ItemRelFreq_byTradeID <- data.frame(ItemID = c("001","002","003","004","007","009"), RelFreq_byTrade = c(3,3,2,2,1,1)) One method to do this without ddply: I've worked out one way to do it below. My problem is that I can't quite seem to get ddply to do this for me. temp <- table(unlist(sapply(mapply(c,df$Side1,df$Side2), unique))) df.ItemRelFreq_byTradeID <- data.frame(ItemID = names(temp), RelFreq_byTrade = temp[]) Thanks for any help you can offer! Curtis
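
    Not the ddply formulation being asked for, but just to pin down the counting rule (each item counts at most once per trade, regardless of which side it is on or how many times it appears), here is the same relative-frequency computation sketched in Python purely for illustration:

        from collections import Counter

        trades = {
            "01": (["001", "001", "002"], ["001"]),
            "02": (["002", "002", "003"], ["007"]),
            "03": (["001", "004"], ["009"]),
            "04": (["001", "002", "003", "004"], []),
        }

        freq = Counter()
        for side1, side2 in trades.values():
            # union of both sides, de-duplicated within the trade
            freq.update(set(side1) | set(side2))

        for item, count in sorted(freq.items()):
            print(item, count)
        # 001 3, 002 3, 003 2, 004 2, 007 1, 009 1 -- matches the desired output above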

    Read the article

  • int foo(type& bar); is a bad practice?

    - by Earlz
    Well, here we are. Yet another proposed practice that my C++ book has an opinion on. It says "a returning-value (non-void) function should not take reference types as a parameter." So basically if you were to implement a function like this: int read_file(int& into){ ... } and used the integer return value as some sort of error indicator (ignoring the fact that we have exceptions), then that function would be poorly written and it should actually be like void read_file(int& into, int& error){ } Now to me, the first one is much clearer and nicer to use. If you want to ignore the error value, you do so with ease. But this book suggests the latter. Note that this book does not say value-returning functions are bad. Rather, it says that you should either only return a value or only use references. What are your thoughts on this? Is my book full of crap? (Again)

    Read the article

  • Oracle Blob as img src in PHP page

    - by menkes
    I have a site that currently uses images on a file server. The images appear on a page where the user can drag and drop each one as needed. This is done with jQuery, and the images are enclosed in a list. Each image is pretty standard: <img src='//network_path/image.png' height='80px'> Now, however, I need to reference images stored as BLOBs in an Oracle database (no choice on this, so it's not a merit discussion). I have no problem retrieving the BLOB and displaying it on its own using: $sql = "SELECT image FROM images WHERE image_id = 123"; $stid = oci_parse($conn, $sql); oci_execute($stid); $row = oci_fetch_array($stid, OCI_ASSOC+OCI_RETURN_NULLS); $img = $row['IMAGE']->load(); header("Content-type: image/jpeg"); print $img; But I need to [efficiently] get that image as the src attribute of the img tag. I tried imagecreatefromstring() but that just returns the image in the browser, ignoring the other HTML. I looked at data URIs, but the IE8 size limit rules that out. So now I am kind of stuck. My searches keep coming up with using a src attribute that loads another page that contains the image. But I need the image itself to actually show on the page. (Note: I say image, meaning at least one image, but as many as eight on a page.) Any help would be greatly appreciated.
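
    For what it's worth, the "src points at another page" suggestions usually mean a dedicated endpoint that returns only the image bytes with the right Content-Type (essentially the snippet above on its own URL); the browser then renders it inline exactly like a file-based image, so it does show on the page. A rough sketch of that pattern, written in Python/Flask purely for illustration since the actual stack here is PHP and OCI8, with fetch_blob standing in as a hypothetical data-access helper:

        from flask import Flask, Response

        app = Flask(__name__)

        # hypothetical stand-in for "SELECT image FROM images WHERE image_id = :id"
        SAMPLE_BLOBS = {123: b"...jpeg bytes loaded from the database..."}

        def fetch_blob(image_id):
            return SAMPLE_BLOBS.get(image_id)

        @app.route("/artwork-image/<int:image_id>")
        def artwork_image(image_id):
            img = fetch_blob(image_id)
            if img is None:
                return "not found", 404
            return Response(img, mimetype="image/jpeg")

        # The page itself then emits <img src="/artwork-image/123" height="80px">
        # for each image, and the drag-and-drop markup stays unchanged.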

    Read the article

  • GPGPU

    What: GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on a GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn't have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.

    Why: When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal"). The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, can perform the costly operation and return the output. The kernels are the things that execute on the GPGPU leveraging its power (and hence execute faster than they could on the CPU) while the host CPU program waits for the results or asynchronously performs other tasks. However, GPGPUs have different characteristics from CPUs, which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and vice versa the results back to the CPU), so the computation itself has to be long enough to justify the overhead transfer costs. If your problem space fits the criteria then you probably want to check out this technology.

    How: So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors ATI (owned by AMD) and NVIDIA are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs. If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology accessible via a language they call Brook+; NVIDIA offers their CUDA platform which is accessible from CUDA C. Choosing either of those locks you into the GPU vendor and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next. On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side, to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute. Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.
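
    As a concrete (if tiny) illustration of the paradigm described above, the sketch below offloads a data-parallel elementwise computation to the GPU and copies the result back to the host. It uses Python with the PyCUDA gpuarray layer, so it assumes an NVIDIA card and the PyCUDA package rather than the DirectCompute/HLSL route the post describes; the shape of the workflow (copy in, compute on the device, copy out) is the same either way:

        import numpy as np
        import pycuda.autoinit          # creates a CUDA context on the default GPU
        import pycuda.gpuarray as gpuarray

        # host (CPU) data
        x = np.random.randn(1_000_000).astype(np.float32)

        # copy to the GPU, run the data-parallel computation there, copy the result back
        x_gpu = gpuarray.to_gpu(x)
        y_gpu = x_gpu * x_gpu + 2.0 * x_gpu + 1.0   # executed as elementwise GPU kernels
        y = y_gpu.get()

        # the transfer overhead only pays off when the computation is heavy enough
        print(np.allclose(y, x * x + 2.0 * x + 1.0))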

    Read the article

  • My ASP.NET news sources

    - by Jon Galloway
    I just posted about the ASP.NET Daily Community Spotlight. I was going to list a bunch of my news sources at the end, but figured this deserves a separate post. I've been following a lot of development blogs for a long time - for a while I subscribed to over 1500 feeds and read them all. That doesn't scale very well, though, and it's really time consuming. Since the community spotlight requires an interesting ASP.NET post every day of the year, I've come up with a few sources of ASP.NET news.

    Top Link Blogs: Chris Alcock's The Morning Brew is a must-read blog which highlights each day's best blog posts across the .NET community. He covers the entire Microsoft development space, but generally any of the top ASP.NET posts I see either have already been listed on The Morning Brew or will be there soon. Elijah Manor posts a lot of great content, which is available in his Twitter feed at @elijahmanor, on his Delicious feed, and on a dedicated website - Web Dev Tweets. While not 100% ASP.NET focused, I've been appreciating Joe Stagner's Weekly Links series, partly since he includes a lot of links that don't show up on my other lists.

    Twitter: Over the past few years, I've been getting more and more of my information from my Twitter network (as opposed to RSS or other means). Twitter is as good as your network, so if getting good information off Twitter sounds crazy, you're probably not following the right people. I already mentioned Elijah Manor (@elijahmanor). I follow over a thousand people on Twitter, so I'm not going to try to pick and choose a list, but one good way to get started building out a Twitter network is to follow active Twitter users on the ASP.NET team at Microsoft: @scottgu (well, not on the ASP.NET team, but their great grand boss, and always a great source of ASP.NET info) @shanselman @haacked @bradwilson @davidfowl @InfinitiesLoop @davidebbo @marcind @DamianEdwards @stevensanderson @bleroy @humancompiler @osbornm @anurse I'm sure I'm missing a few, and I'll update the list. Building a Twitter network that follows topics you're interested in allows you to use other tools like Cadmus to automatically summarize top content by leveraging the collective input of many users.

    Twitter Search with Topsy: You can search Twitter for hashtags (like #aspnet, #aspnetmvc, and #webmatrix) to get a raw view of what people are talking about on Twitter. Twitter's search is pretty poor; I prefer Topsy. Here's an example search for the #aspnetmvc hashtag: http://topsy.com/s?q=%23aspnetmvc You can also do combined queries for several tags: http://topsy.com/s?q=%23aspnetmvc+OR+%23aspnet+OR+%23webmatrix

    Paper.li: Paper.li is a handy service that builds a custom daily newspaper based on your social network. They've turned a lot of people off by automatically tweeting "The SuperDevFoo Daily is out!!!" messages (which can be turned off), but if you're ignoring them because of those messages, you're missing out on a handy, free service. My paper.li page includes content across a lot of interests, including ASP.NET: http://paper.li/jongalloway When I want to drill into a specific tag, though, I'll just look at the Paper.li post for that hashtag. For example, here's the #aspnetmvc paper.li page: http://paper.li/tag/aspnetmvc

    Delicious: I mentioned previously that I use Delicious for managing site links. I also use their network and search features. The tag-based search is pretty good. Even better, though, is that I can see who's bookmarked these links, and add them to my Delicious network. After having built out a network, I can optimize by doing less searching and more leveraging of collective intelligence.

    Community Sites: I scan DotNetKicks, the weblogs.asp.net combined feed, the ASP.NET Community page, CodeBetter, Los Techies, CodeProject, and DotNetSlackers from time to time. They're hit and miss, but they do offer more of an opportunity for finding original content which others may have missed.

    Terms of Enrampagement: When someone's on a tear, I just manually check their sites more often. I could use RSS for that, but it changes pretty often. I just keep a mental note of people who are cranking out a lot of good content and check their sites more often. What works for you?

    Read the article

  • Grouping data in LINQ with the help of group keyword

    - by vik20000in
    When working with any kind of advanced query, grouping is a very important factor. Grouping lets aggregate functions like sum, max, average, etc. be applied to certain groups of data inside the result set. Grouping is done with the help of the group keyword (or the GroupBy method). Below is an example of the basic group functionality. int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; var numberGroups = from num in numbers group num by num % 5 into numGroup select new { Remainder = numGroup.Key, Numbers = numGroup }; In the above example we have grouped the values based on the remainder left over when divided by 5: we group the values by the remainder into the numGroup variable. numGroup.Key gives the value of the key on which the grouping has been applied, and numGroup itself contains all the records that belong to that group. Below is another example to explain the same. string[] words = { "blueberry", "abacus", "banana", "apple", "cheese" }; var wordGroups = from num in words group num by num[0] into grp select new { FirstLetter = grp.Key, Words = grp }; In the above example we are grouping the values by the first character of the string (num[0]). Just like the ordering operators, the group by clause also allows us to write our own logic for the equality comparison (that means we can group items while ignoring case by writing our own implementation). For this we need to pass an object that implements the IEqualityComparer<string> interface. Below is an example. public class AnagramEqualityComparer : IEqualityComparer<string> { public bool Equals(string x, string y) { return getCanonicalString(x) == getCanonicalString(y); } public int GetHashCode(string obj) { return getCanonicalString(obj).GetHashCode(); } private string getCanonicalString(string word) { char[] wordChars = word.ToCharArray(); Array.Sort<char>(wordChars); return new string(wordChars); } } string[] anagrams = {"from   ", " salt", " earn", "  last   ", " near "}; var orderGroups = anagrams.GroupBy(w => w.Trim(), new AnagramEqualityComparer()); Vikram

    Read the article

  • SQL SERVER – How to Ignore Columnstore Index Usage in Query

    - by pinaldave
    Earlier I wrote about SQL SERVER – Fundamentals of Columnstore Index, and the very first question I received in email was the following. "We are using SQL Server 2012 CTP3 and so far so good. In our data warehouse solution we have created one non-clustered columnstore index on our large fact table. We have a very unique situation, but your article did not cover it. We are running a few queries on our fact table which work very efficiently, but there is one query which earlier was running very fine and after creating this non-clustered columnstore index is running very slow. We dropped the columnstore index and suddenly this one query runs fast, but the other queries which benefited from the columnstore index now run slow. Any workaround in this situation?" In summary, the question in simple words is: "How can we ignore the columnstore index in selective queries?" Very interesting question. I can understand there may be cases when the columnstore index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index; the SQL Server engine will then use whichever other index is best. Here is a quick script to prove the same. We will first create a sample table and then create a columnstore index on it. Once the columnstore index is created we will write a simple query; this query will use the columnstore index. We will then show the usage of the query hint. USE AdventureWorks GO -- Create New Table CREATE TABLE [dbo].[MySalesOrderDetail]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [LineTotal] [numeric](38, 6) NOT NULL, [rowguid] [uniqueidentifier] NOT NULL, [ModifiedDate] [datetime] NOT NULL ) ON [PRIMARY] GO -- Create clustered index CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ( [SalesOrderDetailID]) GO -- Create Sample Data Table -- WARNING: This query may run up to 2-10 minutes based on your system's resources INSERT INTO [dbo].[MySalesOrderDetail] SELECT S1.* FROM Sales.SalesOrderDetail S1 GO 100 -- Create ColumnStore Index CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore] ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID) GO Now that we have created the columnstore index, if we run the following query it will surely use that index. -- Select Table with regular Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID GO We can specify the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX as shown in the following query, and it will not use the columnstore index. -- Select Table with regular Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX) GO Let us clean up the database. -- Cleanup DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] GO TRUNCATE TABLE dbo.MySalesOrderDetail GO DROP TABLE dbo.MySalesOrderDetail GO Again, make sure that you use the hint sparingly and understand the proper implications of doing so. 
    Make sure that you test with and without the hint and select the best option after a review with your administrator. Here is the question for you – have you started to use SQL Server 2012 for your validation and development (not in production)? It will be interesting to know the answer. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Ubuntu-one syncs single files, but not directories [closed]

    - by Luiz Cláudio Duarte
    I'm using Ubuntu 10.10, fully updated. I have tried to sync my ~/Documents and ~/Pictures folders; U1 replicates the directory structure, but no files are uploaded. Next I tried to sync a single file inside ~/Ubuntu One and it was synced. Then I tried to put a directory inside ~/Ubuntu One and, again, the directory structure was replicated, but no files were synced. All the files have the "syncing" icon, however. The latest syncdaemon.log is below: 2011-03-30 07:41:50,752 - ubuntuone.SyncDaemon.fsm - INFO - loading updated metadata 2011-03-30 07:41:55,081 - ubuntuone.SyncDaemon.fsm - INFO - initialized: idx_path: 266, idx_node_id: 266, shares: 1 2011-03-30 07:41:55,082 - ubuntuone.SyncDaemon.GeneralINotProc - INFO - Ignoring files: ['\\A#.*\\Z', '\\A.*~\\Z', '\\A.*\\.py[oc]\\Z', '\\A.*\\.sw[nopx]\\Z', '\\A.*\\.swpx\\Z', '\\A\\..*\\.tmp\\Z'] 2011-03-30 07:41:55,083 - ubuntuone.SyncDaemon.HQ - INFO - HashQueue: _hasher started 2011-03-30 07:41:55,902 - ubuntuone.SyncDaemon.DBus - INFO - DBusInterface initialized. 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - INFO - Using '/home/l_claudius/Ubuntu One' as root dir 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - INFO - Using '/home/l_claudius/.local/share/ubuntuone/syncdaemon' as data dir 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - INFO - Using '/home/l_claudius/.local/share/ubuntuone/shares' as shares root dir 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'INIT' (queues IDLE connection 'Not User Not Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1 miss=266) ---- 2011-03-30 07:41:55,904 - ubuntuone.SyncDaemon.Main - NOTE - Local rescan starting... 2011-03-30 07:41:55,904 - ubuntuone.SyncDaemon.local_rescan - INFO - start scan all volumes 2011-03-30 07:41:55,906 - ubuntuone.SyncDaemon.local_rescan - INFO - processing trash 2011-03-30 07:41:56,044 - ubuntuone.SyncDaemon.local_rescan - INFO - processing move limbo 2011-03-30 07:41:56,491 - ubuntuone.SyncDaemon.Main - NOTE - Local rescan finished! 2011-03-30 07:41:56,492 - ubuntuone.SyncDaemon.Main - INFO - hash queue empty. We are ready! 2011-03-30 07:42:15,583 - ubuntuone.SyncDaemon.DBus - INFO - u'CredentialsFound': callbacking with credentials (token_name: None). 2011-03-30 07:42:15,584 - ubuntuone.SyncDaemon.DBus - INFO - connect: credential request was successful, pushing SYS_USER_CONNECT. 2011-03-30 07:42:15,617 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection started to host fs-1.one.ubuntu.com, port 443. 2011-03-30 07:42:15,977 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection made. 2011-03-30 07:42:15,978 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection made. 2011-03-30 07:42:16,581 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'protocol_version' finished OK. 2011-03-30 07:42:16,774 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'caps_raising_if_not_accepted' finished OK. 2011-03-30 07:42:16,966 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'caps_raising_if_not_accepted' finished OK. 2011-03-30 07:42:17,722 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'oauth_authenticate' finished OK. 2011-03-30 07:42:17,723 - ubuntuone.SyncDaemon.ActionQueue - NOTE - Session ID: '563bc960-35fa-4f44-b9b6-125819656dc3' 2011-03-30 07:42:19,258 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'list_volumes' finished OK. 
2011-03-30 07:43:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:45:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:47:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:49:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:51:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ----

    Read the article

  • How to detect and configure an output with xrandr?

    - by ysap
    I have a DELL U2410 monitor connected to a Compaq 100B desktop equipped with an integrated AMD/ATI graphics card (AMD E-350). The installed O/S is Ubuntu 10.04 LTS. The computer is connected to the monitor via the DVI connection. The problem is that I cannot set the desktop resolution to the native 1920x1200. The maximum allowed resolution is 1600x1200. Doing some research I found about the xrandr utility. Unfortunately, when trying to use it I cannot configure it to the required resolution. First, it does not report the output name (which supposed to be DVI-0), saying default instead. Without it I cannot use the --fb option. The EDID utility seems to identify the monitor well. Here's the output from get-edid: # EDID version 1 revision 3 Section "Monitor" # Block type: 2:0 3:ff # Block type: 2:0 3:fc Identifier "DELL U2410" VendorName "DEL" ModelName "DELL U2410" # Block type: 2:0 3:ff # Block type: 2:0 3:fc # Block type: 2:0 3:fd HorizSync 30-81 VertRefresh 56-76 # Max dot clock (video bandwidth) 170 MHz # DPMS capabilities: Active off:yes Suspend:yes Standby:yes Mode "1920x1200" # vfreq 59.950Hz, hfreq 74.038kHz DotClock 154.000000 HTimings 1920 1968 2000 2080 VTimings 1200 1203 1209 1235 Flags "-HSync" "+VSync" EndMode # Block type: 2:0 3:ff # Block type: 2:0 3:fc # Block type: 2:0 3:fd EndSection but the xrandr -q command returns: Screen 0: minimum 640 x 400, current 1600 x 1200, maximum 1600 x 1200 default connected 1600x1200+0+0 0mm x 0mm 1600x1200 0.0* 1280x1024 0.0 1152x864 0.0 1024x768 0.0 800x600 0.0 640x480 0.0 720x400 0.0 When I try to set the resolution, I get: $ xrandr --fb 1920x1200 xrandr: screen cannot be larger than 1600x1200 (desired size 1920x1200) $ xrandr --output DVI-0 --auto warning: output DVI-0 not found; ignoring How can I set the screen resolution to 1920x1200? Why doesn't xrandr identify the DVI-0 output? Note that the same computer running Ubuntu version higher than 10.04 detects the correct resolution with no problems. On this machine I cannot upgrade due to some legacy hardware compatibility problems. Also, I don't see any optional screen drivers available in the Hardware Drivers dialog. ---- UPDATE: following the answer to this question, I got some advance. Now the required mode is listed in the xrandr -q list, but I can't switch to that mode. Using the Monitors applet (which now shows the new mode), I get the response that: The selected configuration for displays could not be applied. Could not set the configuration to CRTC 262. 
From the command line it looks like this: $ cvt 1920 1200 60 # 1920x1200 59.88 Hz (CVT 2.30MA) hsync: 74.56 kHz; pclk: 193.25 MHz Modeline "1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync $ xrandr --newmode "1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync $ xrandr -q Screen 0: minimum 640 x 400, current 1600 x 1200, maximum 1600 x 1200 default connected 1600x1200+0+0 0mm x 0mm 1600x1200 0.0* 1280x1024 0.0 1152x864 0.0 1024x768 0.0 800x600 0.0 640x480 0.0 720x400 0.0 1920x1200_60.00 (0x120) 193.0MHz h: width 1920 start 2056 end 2256 total 2592 skew 0 clock 74.5KHz v: height 1200 start 1203 end 1209 total 1245 clock 59.8Hz $ xrandr --addmode default 1920x1200_60.00 $ xrandr -q Screen 0: minimum 640 x 400, current 1600 x 1200, maximum 1600 x 1200 default connected 1600x1200+0+0 0mm x 0mm 1600x1200 0.0* 1280x1024 0.0 1152x864 0.0 1024x768 0.0 800x600 0.0 640x480 0.0 720x400 0.0 1920x1200_60.00 59.8 $ xrandr --output default --mode 1920x1200_60.00 xrandr: Configure crtc 0 failed Another piece of info (if it helps anyone): $ sudo lshw -c video *-display UNCLAIMED description: VGA compatible controller product: ATI Technologies Inc vendor: ATI Technologies Inc physical id: 1 bus info: pci@0000:00:01.0 version: 00 width: 32 bits clock: 33MHz capabilities: pm pciexpress msi bus_master cap_list configuration: latency=0 resources: memory:c0000000-cfffffff(prefetchable) ioport:f000(size=256) memory:feb00000-feb3ffff

    Read the article

  • Help trying to get two-finger scrolling to work on Asus UL80VT

    - by Dan2k3k4
    Multi-touch works fine on Windows 7 with: two-fingers scroll vertical and horizontally, two-finger tap for middle click, and three-finger tap for right click. However with Ubuntu, I've never been able to get multi-touch to "save" and work, I was able to get it to work a few times but after restarting - it would just reset back. I have the settings for two-finger scrolling on: Mouse and Touchpad Touchpad Two-finger scrolling (selected) Enable horizontal scrolling (ticked) The cursor stops moving when I try to scroll with two fingers, but it doesn't actually scroll the page. When I perform xinput list, I get: Virtual core pointer id=2 [master pointer (3)] ? Virtual core XTEST pointer id=4 [slave pointer (2)] ? ETPS/2 Elantech ETF0401 id=13 [slave pointer (2)] I've tried to install some 'synaptics-dkms' bug-fix (from a few years back) but that didn't work, so I removed that. I've tried installing 'uTouch' but that didn't seem to do anything so removed it. Here's what I have installed now: dpkg --get-selections installed-software grep 'touch\|mouse\|track\|synapt' installed-software libsoundtouch0 --- install libutouch-evemu1 --- install libutouch-frame1 --- install libutouch-geis1 --- install libutouch-grail1 --- install printer-driver-ptouch --- install ptouch-driver --- install xserver-xorg-input-multitouch --- install xserver-xorg-input-mouse --- install xserver-xorg-input-vmmouse --- install libnetfilter-conntrack3 --- install libxatracker1 --- install xserver-xorg-input-synaptics --- install So, I'll start again, what should I do now to get two-finger scrolling to work and ensure it works after restarting? Also doing: synclient TapButton1=1 TapButton2=2 TapButton3=3 ...works but doesn't save after restarting. However doing: synclient VertTwoFingerScroll=1 HorizTwoFingerScroll=1 Does NOT work to fix the two-finger scrolling. Output of: cat /var/log/Xorg.0.log | grep -i synaptics [ 4.576] (II) LoadModule: "synaptics" [ 4.577] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so [ 4.577] (II) Module synaptics: vendor="X.Org Foundation" [ 4.577] (II) Using input driver 'synaptics' for 'ETPS/2 Elantech ETF0401' [ 4.577] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: x-axis range 0 - 1088 [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: y-axis range 0 - 704 [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: pressure range 0 - 255 [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: finger width range 0 - 16 [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: buttons: left right middle double triple scroll-buttons [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: Vendor 0x2 Product 0xe [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: touchpad found [ 4.588] (**) synaptics: ETPS/2 Elantech ETF0401: (accel) MinSpeed is now constant deceleration 2.5 [ 4.588] (**) synaptics: ETPS/2 Elantech ETF0401: MaxSpeed is now 1.75 [ 4.588] (**) synaptics: ETPS/2 Elantech ETF0401: AccelFactor is now 0.154 [ 4.589] (--) synaptics: ETPS/2 Elantech ETF0401: touchpad found Tried installing synaptiks but that didn't seem to work either, so removed it. Temporary Fix (works until I restart) Doing the following commands: modprobe -r psmouse modprobe psmouse proto=imps Works but now xinput list shows up as: Virtual core pointer id=2 [master pointer (3)] ? Virtual core XTEST pointer id=4 [slave pointer (2)] ? ImPS/2 Generic Wheel Mouse id=13 [slave pointer (2)] Instead of Elantech, and it gets reset when I reboot. 
Solution (not ideal for most people) So, I ended up reinstalling a fresh 12.04 after indirectly playing around with burg and plymouth then removing plymouth which removed 50+ packages (I saw the warnings but was way too tired and assumed I could just 'reinstall' them all after (except that didn't work). Right now xinput list shows up as: ? Virtual core pointer --- id=2 [master pointer (3)] ? ? Virtual core XTEST pointer --- id=4 [slave pointer (2)] ? ? ETPS/2 Elantech Touchpad --- id=13 [slave pointer (2)] grep 'touch\|mouse\|track\|synapt' installed-software libnetfilter-conntrack3 --- install libsoundtouch0 --- install libutouch-evemu1 --- install libutouch-frame1 --- install libutouch-geis1 --- install libutouch-grail1 --- install libxatracker1 --- install mousetweaks --- install printer-driver-ptouch --- install xserver-xorg-input-mouse --- install xserver-xorg-input-synaptics --- install xserver-xorg-input-vmmouse --- install cat /var/log/Xorg.0.log | grep -i synaptics [ 4.890] (II) LoadModule: "synaptics" [ 4.891] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so [ 4.892] (II) Module synaptics: vendor="X.Org Foundation" [ 4.892] (II) Using input driver 'synaptics' for 'ETPS/2 Elantech Touchpad' [ 4.892] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so [ 4.956] (II) synaptics: ETPS/2 Elantech Touchpad: ignoring touch events for semi-multitouch device [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: x-axis range 0 - 1088 [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: y-axis range 0 - 704 [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: pressure range 0 - 255 [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: finger width range 0 - 15 [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: buttons: left right double triple [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: Vendor 0x2 Product 0xe [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: touchpad found [ 4.980] () synaptics: ETPS/2 Elantech Touchpad: (accel) MinSpeed is now constant deceleration 2.5 [ 4.980] () synaptics: ETPS/2 Elantech Touchpad: MaxSpeed is now 1.75 [ 4.980] (**) synaptics: ETPS/2 Elantech Touchpad: AccelFactor is now 0.154 [ 4.980] (--) synaptics: ETPS/2 Elantech Touchpad: touchpad found So, if all else fails, reinstall Linux :/

    Read the article

  • LINQ and ordering of the result set

    - by vik20000in
    After filtering and retrieving the records, most of the time (if not always) we have to sort them in a certain order. The sort order is very important for displaying records or for major calculations. In LINQ, the orderby keyword is used for sorting data. With the help of the orderby keyword we can decide on the ordering of the result set that is retrieved by the query. Below is a simple example of the orderby keyword in LINQ. string[] words = { "cherry", "apple", "blueberry" }; var sortedWords = from word in words orderby word select word; Here we are ordering the retrieved data based on the string ordering. If required, the ordering can also be based on any property of the individual items, like the length of the string. var sortedWords = from word in words orderby word.Length select word; You can also make the order descending or ascending by adding the keyword after the ordering expression. var sortedWords = from word in words orderby word descending select word; But the best part is that instead of just passing a field, you can also pass the OrderBy method an instance of any class that implements the IComparer<string> interface. The IComparer interface holds a method Compare that has to be implemented; in that method we can write any logic whatsoever for the comparison. In the example below we are making a string comparison that ignores case. string[] words = { "aPPLE", "AbAcUs", "bRaNcH", "BlUeBeRrY", "cHeRry"}; var sortedWords = words.OrderBy(a => a, new CaseInsensitiveComparer()); public class CaseInsensitiveComparer : IComparer<string> { public int Compare(string x, string y) { return string.Compare(x, y, StringComparison.OrdinalIgnoreCase); } } While sorting the data, many times we want to provide more than one sort key so that the data is sorted based on more than one condition. This can be achieved by providing the next ordering key after a comma. var sortedWords = from word in words orderby word, word.Length select word; We can also use the Reverse() method to reverse the full order of the result set. var sortedWords = words.Reverse(); Vikram

    Read the article

  • initrd.lz is corrupted error occured while installing 11.10

    - by zubendra
    C:\ubuntu\install\boot\initrd.lz is corrupted. Error pop-up comes up every time i am trying to install ubuntu-11.10-desktop-i386 using wubi. error comes when the installation process is almost completed. can anyone suggest a solution for this problem. Its occurring regularly. 03-19 18:01 DEBUG TaskList: ## Running copy_installation_files... 03-19 18:01 DEBUG WindowsBackend: Copying C:\DOCUME~1\HP_OWN~1.YOU\LOCALS~1\Temp\pyl59.tmp\data\custom-installation -> C:\ubuntu\install\custom-installation 03-19 18:01 DEBUG WindowsBackend: Copying C:\DOCUME~1\HP_OWN~1.YOU\LOCALS~1\Temp\pyl59.tmp\winboot -> C:\ubuntu\winboot 03-19 18:01 DEBUG WindowsBackend: Copying C:\DOCUME~1\HP_OWN~1.YOU\LOCALS~1\Temp\pyl59.tmp\data\images\Ubuntu.ico -> C:\ubuntu\Ubuntu.ico 03-19 18:01 DEBUG TaskList: ## Finished copy_installation_files 03-19 18:01 DEBUG TaskList: ## Running get_iso... 03-19 18:01 DEBUG CommonBackend: Trying to use pre-specified ISO X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG TaskList: New task is_valid_iso 03-19 18:01 DEBUG TaskList: ### Running is_valid_iso... 03-19 18:01 DEBUG Distro: checking Ubuntu ISO X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 INFO Distro: Found a valid iso for Ubuntu: X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG TaskList: ### Finished is_valid_iso 03-19 18:01 DEBUG TaskList: New task check_iso 03-19 18:01 DEBUG TaskList: ### Running check_iso... 03-19 18:01 DEBUG CommonBackend: Checking X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG Distro: checking Ubuntu ISO X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 INFO Distro: Found a valid iso for Ubuntu: X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG CommonBackend: Using distro Ubuntu i386 instead of Ubuntu amd64 03-19 18:01 DEBUG TaskList: New task get_metalink 03-19 18:01 DEBUG TaskList: #### Running get_metalink... 03-19 18:01 DEBUG downloader: downloading http://releases.ubuntu.com/11.10/ubuntu-11.10-desktop-i386.metalink > C:\ubuntu\install 03-19 18:01 ERROR CommonBackend: Cannot download metalink file http://releases.ubuntu.com/11.10/ubuntu-11.10-desktop-i386.metalink err=[Errno 4] IOError: <urlopen error (7, 'getaddrinfo failed')> 03-19 18:01 DEBUG downloader: downloading http://cdimage.ubuntu.com/daily-live/current/oneiric-desktop-i386.metalink > C:\ubuntu\install 03-19 18:01 ERROR CommonBackend: Cannot download metalink file2 http://cdimage.ubuntu.com/daily-live/current/oneiric-desktop-i386.metalink err=[Errno 4] IOError: <urlopen error (7, 'getaddrinfo failed')> 03-19 18:01 DEBUG TaskList: #### Finished get_metalink 03-19 18:01 ERROR CommonBackend: ERROR: the metalink file is not available, cannot check the md5 for X:\ubuntu-11.10-desktop-i386.iso, ignoring 03-19 18:01 DEBUG TaskList: ### Finished check_iso 03-19 18:01 DEBUG TaskList: New task copy_file 03-19 18:01 DEBUG CommonBackend: Copying X:\ubuntu-11.10-desktop-i386.iso > C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG TaskList: ### Running copy_file... 03-19 18:01 DEBUG TaskList: ### Finished copy_file 03-19 18:01 DEBUG TaskList: ## Finished get_iso 03-19 18:01 DEBUG TaskList: ## Running extract_kernel... 
03-19 18:01 DEBUG CommonBackend: Extracting files from ISO C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG WindowsBackend: extracting md5sum.txt from C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG WindowsBackend: extracting casper\vmlinuz from C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG WindowsBackend: extracting casper\initrd.lz from C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG CommonBackend: Checking kernel, initrd and md5sums 03-19 18:01 DEBUG CommonBackend: checking C:\ubuntu\install\boot\vmlinuz 03-19 18:01 DEBUG CommonBackend: C:\ubuntu\install\boot\vmlinuz md5 = fde150f5c6fd2de66ed7876efbfcc4c7 == fde150f5c6fd2de66ed7876efbfcc4c7 03-19 18:01 DEBUG CommonBackend: checking C:\ubuntu\install\boot\initrd.lz 03-19 18:01 DEBUG CommonBackend: C:\ubuntu\install\boot\initrd.lz md5 = 8900200c764438c1b124dff5ae92c763 != d6baee1e11f1d6de6eba6bd43dbde352 03-19 18:01 ERROR TaskList: File C:\ubuntu\install\boot\initrd.lz is corrupted Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 623, in extract_kernel Exception: File C:\ubuntu\install\boot\initrd.lz is corrupted 03-19 18:01 DEBUG TaskList: # Cancelling tasklist 03-19 18:01 ERROR root: File C:\ubuntu\install\boot\initrd.lz is corrupted Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 132, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 623, in extract_kernel Exception: File C:\ubuntu\install\boot\initrd.lz is corrupted 03-19 18:01 DEBUG TaskList: # Finished tasklist

    Read the article

  • Encode two integers into colour values and compare them in a HLSL shader

    - by Ben Slinger
    I am writing a 2D point and click adventure game in MonoGame, and I'd like to be able to create an image mask for every room which defines which parts of the background a character can walk behind, and the Y value a character needs to be at for the background to be drawn above the character. I haven't done any shader work before, but after doing some reading I thought the following solution should work: 1) create a mask for the room with the different walk-behind areas painted in a colour that defines the baseline Y value (Walk Behind Mask); 2) render all objects to a RenderTarget2D (Base Texture); 3) render all objects to a different RenderTarget2D, but changing every pixel of each object to a colour that defines its Y value (Position Mask); 4) pass these two textures plus the image mask into the shader, and for each pixel compare the colour of the Position Mask to the colour of the Walk Behind Mask - if the Position Mask pixel is larger (thus lower on the screen and closer to the camera) than the Walk Behind Mask, draw the pixel from the Base Texture, otherwise draw a transparent pixel (allowing the background to show through). I've got it mostly working, but I'm having trouble packing and unpacking the Y values into colours and retrieving them correctly in the shader. Here are some code examples of how I'm doing it so far: (When drawing to the Position Mask RenderTarget2D) Color posColor = new Color(((int)Position.Y >> 16) & 255, ((int)Position.Y >> 8) & 255, (int)Position.Y & 255); So as far as I can tell, this should be taking the first 3 bytes of the position integer and encoding them into a 4-byte colour (ignoring the alpha as the 4th byte). This seems to work fine, as when my character is at Y = 600, the resulting Color from this is: {[Color: R=0, G=2, B=88, A=255, PackedValue=4283957760]}. I then have an area in my Walk Behind Mask that I only want the character to be displayed behind if his Y value is lower than 655, so I've painted it with R=0, G=2, B=143, A=255. Now, I think I have the shader OK as well; here's what I have: sampler BaseTexture : register(s0); sampler MaskTexture : register(s1); sampler PositionTexture : register(s2); float4 mask( float2 coords : TEXCOORD0 ) : COLOR0 { float4 color = tex2D(BaseTexture, coords); float4 maskColor = tex2D(MaskTexture, coords); float4 positionColor = tex2D(PositionTexture, coords); float maskCompare = (maskColor.r * pow(2,24)) + (maskColor.g * pow(2,16)) + (maskColor.b * pow(2,8)); float positionCompare = (positionColor.r * pow(2,24)) + (positionColor.g * pow(2,16)) + (positionColor.b * pow(2,8)); return positionCompare < maskCompare ? float4(0,0,0,0) : color; } technique Technique1 { pass NoEffect { PixelShader = compile ps_3_0 mask(); } } This isn't working, however - currently all characters are displayed behind the walk-behind area, regardless of their Y value. I tried printing out some debug info by grabbing the pixel from both the Position Mask and the Walk Behind Mask under the current mouse position, and it seems like maybe the colours aren't being rendered to the Position Mask correctly? When calculating the colour in the code above I'm getting R=0, G=2, B=88, A=255, but when I mouse over my character I get R=0, G=0, B=30, A=255. Any ideas what I'm doing wrong? It seems like maybe I'm losing some information when rendering to the RenderTarget2D, but I'm not knowledgeable enough to figure out what's happening. Also, I should probably ask, is this an efficient way to do this? Will there be a performance impact?
Edit: Whoops, turns out there was a bug that I'd introduced myself, I was drawing out the Position Mask with the position Color, left over from some early testing I was doing. So this solution is working perfectly, though I'm still interested in whether this is an efficient solution performance wise.

    Read the article
