Search Results

Search found 34207 results on 1369 pages for 'query output'.

  • A LINQ join combined with a regex

    - by Geert Beckx
    Is it possible to combine these two queries, or would that make my code too complex? I also think there should be a performance gain from combining them, since my source table could grow to over 11000 records in the near future. This is what I came up with so far:

        Dim lit As LiteralControl

        ' check characters not in alphabet
        Dim r As New Regex("^[^a-zA-Z]+")
        Dim query = From o In source.ToTable _
                    Where r.IsMatch(o.Field(Of String)("nam"))
        lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", "0-9", query.Count))
        plhAlpabetLinks.Controls.Add(lit)

        Dim q = From l In "ABCDEFGHIJKLMNOPQRSTUVWXYZ".ToLower.ToCharArray _
                Group Join o In source.ToTable _
                On l Equals o.Field(Of String)("nam").ToLowerInvariant()(0) Into g = Group _
                Select l, g.Count

        ' iterate the alphabet to generate all the links.
        For Each letter In q.AsEnumerable
            lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", letter.l, letter.Count))
            plhAlpabetLinks.Controls.Add(lit)
        Next

    Kind regards, G.
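
    One possible consolidation, as a hedged sketch only (untested against the original DataTable; it assumes every "nam" value is non-empty, and it drops letters whose count is zero, which the Group Join version kept): bucket every row by a computed key, treating non-alphabetic initials as "0-9", so one grouping query replaces both passes over the table.

        Dim buckets = From o In source.ToTable _
                      Let first = o.Field(Of String)("nam").ToLowerInvariant()(0) _
                      Group By key = If(Char.IsLetter(first), first.ToString(), "0-9") Into Count() _
                      Order By key
        For Each b In buckets
            plhAlpabetLinks.Controls.Add(New LiteralControl( _
                String.Format("letter: {0}, count: {1}<br />", b.key, b.Count)))
        Next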

  • determine which value produced a hit in a Solr multivalued field

    - by harschware
    If I have a multiValued field of type text and I put the values [cat,dog,green,blue] in it, is there a way to tell, when I execute a query against that field for dog, that it was in the 1st element position for that multiValued field?

    Assumption: the client does not have any pre-knowledge of the field type of the field being queried (i.e. Solr must provide the answer; the client can't post-process the returned doc to figure it out, because it would not know how Solr matched the query to the result).

    Disclosure: I posted this to the solr-user list and am getting no traction, so I'm posting here now.
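
    One avenue worth checking (a hedged suggestion, not a confirmed answer): if the field is stored, Solr's highlighter returns the matching value as a fragment, which at least identifies which of the multiple values matched, though not its numeric position. Field name, host and port below are placeholders:

        http://localhost:8983/solr/select?q=myfield:dog&hl=true&hl.fl=myfield

    The highlighting section of the response would then contain the matched value with the hit wrapped in <em> tags.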

  • How to determine the end of an integer array when manipulating it with an integer pointer?

    - by AKN
    Here is the code:

        int myInt[] = { 1, 2, 3, 4, 5 };
        int *myIntPtr = &myInt[0];
        while( *myIntPtr != NULL )
        {
            cout << *myIntPtr << endl;
            myIntPtr++;
        }

    Output: 12345....<junks>..........

    For a character array there is no problem while iterating, since we have a NULL character at the end:

        char myChar[] = { 'A', 'B', 'C', 'D', 'E', '\0' };
        char *myCharPtr = &myChar[0];
        while( *myCharPtr != NULL )
        {
            cout << *myCharPtr << endl;
            myCharPtr++;
        }

    Output: ABCDE

    My question: since the rule of adding a NULL character at the end of strings rules out such issues, wouldn't a similar rule of adding 0 at the end of integer arrays have avoided this problem? What say?
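
    For what it's worth, a minimal sketch of the idiomatic C++ alternative: iterate by the array's size, which the compiler knows, rather than by a sentinel, since 0 can be a perfectly legitimate element value in an int array.

        #include <cstddef>
        #include <iostream>

        int main()
        {
            int myInt[] = { 1, 2, 3, 4, 5 };
            const std::size_t n = sizeof(myInt) / sizeof(myInt[0]);  // element count, known at compile time
            for (std::size_t i = 0; i < n; ++i)
                std::cout << myInt[i] << std::endl;
            return 0;
        }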

  • Does SQL Server 2005 return error message numbers back to the asp.net application?

    - by Duke
    I'd like to get the message number and severity level information from SQL Server upon execution of an erroneous query. For example, when a user attempts to delete a row being referenced by another record, and the cascade relationship is "no action", I'd like the application to be able to check for error message 547 ("The DELETE statement conflicted with the REFERENCE constraint...") and return a user friendly and localized message to the user. When running such a query directly on SQL Server, the following message is printed:

        Msg 547, Level 16, State 0, Line 1
        <Error message...>

    In an ASP.NET app, is this information available in an event handler parameter or elsewhere? Also, I don't suppose anyone knows where I can find a definitive reference of SQL Server message numbers?
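
    ADO.NET does surface this: a failed command throws a SqlException whose Number and Class properties carry the message number and severity. A minimal sketch of the pattern (assumes System.Data.SqlClient and a prepared SqlCommand named cmd; ShowLocalizedMessage is a hypothetical helper):

        try
        {
            cmd.ExecuteNonQuery();
        }
        catch (SqlException ex)
        {
            if (ex.Number == 547)                                    // REFERENCE constraint conflict
                ShowLocalizedMessage("CannotDeleteReferencedRow");   // hypothetical localized lookup
            else
                throw;
        }

    As for a definitive list of message numbers, SQL Server 2005 keeps it in the sys.messages catalog view (e.g. SELECT * FROM sys.messages WHERE language_id = 1033).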

  • How Serializable works with insert in SQL Server 2005

    - by Spence
    G'day. I think I have a misunderstanding of SERIALIZABLE. I have two tables (data, transactions) which I insert information into in a serializable transaction (either they are both in, or both out, but not in limbo):

        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
        BEGIN TRANSACTION
            INSERT INTO dbo.data (ID, data) VALUES (@Id, @data)
            INSERT INTO dbo.transactions (ID, info) VALUES (@ID, @info)
        COMMIT TRANSACTION

    I have a reconcile query which checks the data table for entries where there is no transaction, at READ COMMITTED isolation level:

        INSERT INTO reconciles (ReconcileID, DataID)
        SELECT Reconcile = @ReconcileID, ID
        FROM Data
        WHERE NOT EXISTS (SELECT 1 FROM transactions WHERE data.id = transactions.id)

    Note that the ID is actually a composite (2-column) key, so I can't use a NOT IN operator. My understanding was that the second query would exclude any values written into data without their transaction, as the insert was happening at serializable and the read was occurring at read committed. But I have evidence that the reconcile is picking up such entries.
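
    One hedged explanation: SERIALIZABLE governs only the writer's own locking; a READ COMMITTED reader releases shared locks as it goes, so within the single reconcile statement it can scan transactions before the writer commits and data afterwards, seeing the new data row without its transaction row. Running the reconcile under SNAPSHOT isolation gives it one point-in-time view of both tables (a sketch; MyDb is a placeholder, and the database must allow snapshot isolation):

        ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;   -- one-time setup

        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRANSACTION;
            INSERT INTO reconciles (ReconcileID, DataID)
            SELECT @ReconcileID, ID
            FROM Data
            WHERE NOT EXISTS (SELECT 1 FROM transactions WHERE data.id = transactions.id);
        COMMIT TRANSACTION;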

  • Python instances and attributes: is this a bug, or did I get it totally wrong?

    - by Mirko Rossini
    Suppose you have something like this:

        class intlist:
            def __init__(self, l=[]):
                self.l = l

            def add(self, a):
                self.l.append(a)

        def appender(a):
            obj = intlist()
            obj.add(a)
            print obj.l

        if __name__ == "__main__":
            for i in range(5):
                appender(i)

    A function creates an instance of intlist and calls, on this fresh instance, the method add, which appends to the instance attribute l. How come the output of this code is:

        [0]
        [0, 1]
        [0, 1, 2]
        [0, 1, 2, 3]
        [0, 1, 2, 3, 4]

    If I switch obj = intlist() to obj = intlist(l=[]), I get the desired output:

        [0]
        [1]
        [2]
        [3]
        [4]

    Why does this happen? Thanks
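
    This is Python's well-known mutable default argument behaviour rather than a bug: the default list is built once, when the def statement is executed, and every call that omits l shares that one object. The usual idiom is a None sentinel (a minimal sketch):

        class intlist:
            def __init__(self, l=None):
                # a fresh list per instance; the shared default is the immutable None
                self.l = [] if l is None else l

            def add(self, a):
                self.l.append(a)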

  • transferring subversion changes between linux and windows

    - by andreas buykx
    Hi all, what is the best way to transfer changes that include new and deleted directories and/or new and deleted (actually moved) files in those directories from a subversion repository on Linux to Windows? I do my development on Linux using a subversion repository, but I have to test my changes on Windows as well. My Windows machine has a TortoiseSVN working copy, which I tried to patch with svn diff output. This failed miserably, since my patch contains a renamed (i.e. deleted and added under a different name) directory, a new directory, and the files in there. Am I doing things wrong by just applying the svn diff output as a patch in TortoiseSVN? For now I think my best option is to have the Windows tree at the same svn revision as the Linux tree and just copy the entire changed directory over the existing directory. Would that work?
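
    A hedged sketch of the repository-based alternative (URLs and the branch name are placeholders): plain svn diff patches indeed don't carry adds, deletes and renames well, but committing the work in progress to a scratch branch moves every tree change intact.

        # on linux: branch, switch the working copy to it, commit the pending changes
        svn copy http://server/repo/trunk http://server/repo/branches/win-test -m "scratch branch for windows testing"
        svn switch http://server/repo/branches/win-test
        svn commit -m "work in progress, to be tested on windows"

        # on windows (a TortoiseSVN checkout of the same URL works identically):
        svn checkout http://server/repo/branches/win-test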

  • fread: how to count a total from counter text

    - by snikolov
    Hey, I have a counter text file here and I need to know how to calculate the total. This is my code:

        $filename = "data.txt";
        $handle = fopen($filename, "r");
        $contents = fread($handle, filesize($filename));
        $explode = explode("\n", $contents);

        /** output
        1024
        1024
        1024
        1024
        1024
        1024
        1024
        1024
        1024
        1024
        1024
        1024
        1024
        */

    I need to calculate the total of the values exploded on "\n", so that I output 12288. I need to understand how to do this. I have tried:

        foreach ($explode as $v) {
            $total = $total + $v;
            echo $total;
        }

    but I did not get good results with this.
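
    A minimal sketch of the fix: initialize the total and echo it once after the loop, or let array_sum do both at once (blank trailing lines from a final newline contribute 0 either way):

        $total = array_sum(array_map('intval', explode("\n", $contents)));
        echo $total;   // the grand total, printed once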

  • MySQL break out group clause from subquery

    - by Anton Gildebrand
    Here is my query:

        SELECT COALESCE(js.name, 'Lead saknas'), COUNT(j.id)
        FROM jobs j
        LEFT JOIN job_sources js ON j.job_source = js.id
        LEFT JOIN (SELECT * FROM quotes GROUP BY job_id) q ON j.id = q.job_id
        GROUP BY j.job_source

    The problem is that each job is allowed to have more than one quote; because of that, I group the quotes by job_id in the subquery. Sure, this works, but I don't like the solution with a subquery. How can I move the group clause out of the subquery into the main query? I have tried adding q.job_id to the main GROUP BY clause, both before and after the existing column, but I don't get the same results.
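
    One way to drop the subquery, as a hedged sketch (it assumes the quotes join exists only so each job is counted once, not to select quote columns): join quotes directly and de-duplicate inside the aggregate instead.

        SELECT COALESCE(js.name, 'Lead saknas'), COUNT(DISTINCT j.id)
        FROM jobs j
        LEFT JOIN job_sources js ON j.job_source = js.id
        LEFT JOIN quotes q ON j.id = q.job_id
        GROUP BY j.job_source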

  • What is a django QuerySet?

    - by gath
    Guys, when I do this:

        >>> b = Blog.objects.all()
        >>> b

    I get this:

        [<Blog: Blog Title>, <Blog: Blog Title>]

    When I query what type b is:

        >>> type(b)
        <class 'django.db.models.query.QuerySet'>

    What does this mean? Is it a data type like dict, list etc.? An example of how I can build a data structure like a QuerySet would be appreciated. I would want to know how django builds that QuerySet (the gory details). Gath.
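
    In rough terms, a QuerySet is a lazy, chainable wrapper around a SQL query: it stores a description of the query and only talks to the database when you iterate, slice or print it, caching the rows it gets back. A toy sketch of the idea (hypothetical and hugely simplified; the real thing lives in django/db/models/query.py):

        class MiniQuerySet:
            def __init__(self, sql="SELECT * FROM blog"):
                self.sql = sql                      # no database work yet

            def filter(self, **kwargs):
                where = " AND ".join("%s = %r" % kv for kv in kwargs.items())
                return MiniQuerySet(self.sql + " WHERE " + where)   # chainable, still lazy

            def __iter__(self):
                print("executing: " + self.sql)     # a real QuerySet runs the SQL here and caches rows
                return iter([])

        qs = MiniQuerySet().filter(title="Blog Title")   # builds SQL, executes nothing
        list(qs)                                         # iteration is what triggers execution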

  • How to create an initializeDB() method for a Java database

    - by Holly
    I am working on a Java project for class and have not worked much with incorporating databases into Java. I can't find much on the initializeDB() method, but if I could get some help I would really appreciate it. Below is the code being used for the initializeDB() method:

        private void initializeDB() {
            try {
                // Load the JDBC driver
                System.out.println("Driver loaded");

                // Establish a connection
                System.out.println("Database connected");

                // Create a statement

                // Create a SQL Query string

                // Execute the query to create a recordset
            }
            catch (Exception ex) {
                ex.printStackTrace();
            }
        }
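
    A sketch of what those commented steps usually look like with plain JDBC (hedged: the driver class, connection URL, credentials and table name are placeholders for whatever database the course uses, and the java.sql.* imports are assumed):

        private void initializeDB() {
            try {
                // Load the JDBC driver (placeholder: Derby's client driver)
                Class.forName("org.apache.derby.jdbc.ClientDriver");
                System.out.println("Driver loaded");

                // Establish a connection
                Connection conn = DriverManager.getConnection(
                        "jdbc:derby://localhost:1527/mydb", "user", "password");
                System.out.println("Database connected");

                // Create a statement
                Statement stmt = conn.createStatement();

                // Create a SQL query string and execute it to get a result set
                ResultSet rs = stmt.executeQuery("SELECT * FROM mytable");
                while (rs.next()) {
                    System.out.println(rs.getString(1));   // print the first column of each row
                }
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }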

  • Find groups with both validated, unvalidated users

    - by Matchu
    (Not my real MySQL schema, but it illustrates what needs to be done.) Users can belong to many groups, and groups have many users:

        users:        id INT, validated TINYINT(1)
        groups:       id INT, name VARCHAR(20)
        groups_users: group_id INT, user_id INT

    I need to find groups that contain both validated and unvalidated users (validated being 1 or 0, respectively), in order to perform a specific manual maintenance task. There are thousands of users, all belonging to at least one group, but a group usually has only 2-5 users. This is a live production server, so I could probably craft a query myself, but the last one I tried ran for a matter of minutes before I killed it. (I'm not one of those brilliant SQL wizards.) I suppose I could take the server down for maintenance but, if possible, a query that gets this job done in a matter of seconds would be fantastic. Thanks!
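
    A sketch that should finish quickly at this scale (hedged: it assumes the usual indexes on groups_users(group_id, user_id) and users(id)): aggregate the membership table once and keep only the groups whose members span both validated values.

        SELECT gu.group_id
        FROM groups_users gu
        JOIN users u ON u.id = gu.user_id
        GROUP BY gu.group_id
        HAVING MIN(u.validated) = 0 AND MAX(u.validated) = 1;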

  • incorrect variable value outside main()

    - by cru3l
    I have this code:

        #import <Foundation/Foundation.h>

        int testint;
        NSString *teststring;

        int Test() {
            NSLog(@"%d", testint);
            NSLog(@"%@", teststring);
        }

        int main (int argc, const char * argv[]) {
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

            testint = 5;
            NSString *teststring = [[NSString alloc] initWithString:@"test string"];

            Test();

            [pool drain];
            return 0;
        }

    In the output I have:

        5
        (null)

    Why doesn't the Test function see the correct teststring value? What should I do to get the correct "test string" in the output?
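
    The culprit is shadowing: the NSString *teststring line inside main declares a new local variable with the same name, so the global teststring is never assigned and stays nil. Dropping the type from that line makes the assignment hit the global (a minimal sketch of just the changed lines):

        testint = 5;
        teststring = [[NSString alloc] initWithString:@"test string"];   // assigns the global; no local declared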

  • Using a singleton database class in functions and multiple scripts (PHP) - best use methods

    - by dscher
    I have a singleton db connection which I get with:

        $dbConnect = myDatabase::getInstance();

    which is easy enough. My question is: what is the least repetitive, legitimate way of using this connection in functions and classes? It seems silly to have to declare the variable global, pass it into every single function, and/or recreate this variable within every function. Is there another answer for this? Obviously I'm a noob, and I can work my way around this problem 10 different ways, none of which is really attractive to me. It would be a lot easier if I could have that $dbConnect variable accessible in any function without needing to declare it global or pass it in. I do know I can add the variable to the $_SERVER array... is there something wrong with doing that? It seems somewhat inappropriate to me.

    Another quick question: is it bad practice to do this directly within a function?

        $result = myDatabase::getInstance()->query($query);
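
    For what it's worth, since getInstance() hands back the same object every time, calling it inside each function is the common answer and costs almost nothing; a thin static wrapper can shorten it further (a sketch, reusing the myDatabase class from the question):

        class DB {
            public static function query($sql) {
                // always the same underlying connection; no globals, nothing passed around
                return myDatabase::getInstance()->query($sql);
            }
        }

        function someFunction() {
            $result = DB::query("SELECT * FROM users");   // usable from any scope
        }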

  • Interesting Row_Number() bug

    - by Joel Coehoorn
    I was playing with the Stack Exchange Data Explorer and ran this query: http://odata.stackexchange.com/stackoverflow/q/2828/rising-stars-top-50-users-ordered-on-rep-per-day Notice down in the results, rows 11 and 12 have the same value and so are mis-numbered, even though the row_number() function takes the same order by parameter as the query. I know the correct fix here is to specify an additional tie-breaker column in the order by clauses, but I'm more curious as to why/how the row_number() function returned different results on the same data? If it makes a difference anywhere, this runs on Azure.
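
    For illustration, a hedged sketch of the tie-breaker fix mentioned above (table and column names are hypothetical): unless the ORDER BY is deterministic, SQL Server is free to break ties one way while numbering the rows and another way while sorting the final output.

        SELECT u.UserId,
               u.RepPerDay,
               ROW_NUMBER() OVER (ORDER BY u.RepPerDay DESC, u.UserId) AS rn
        FROM Users u
        ORDER BY u.RepPerDay DESC, u.UserId;   -- the same deterministic order in both places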

  • PHP: URL detection (regexp) includes line breaks

    - by marco92w
    I want to have a function which takes a text as input and gives back the text with URLs turned into HTML links as the output. My draft is as follows:

        function autoLink($text) {
            return preg_replace('/https?:\/\/[\S]+/i', '<a href="\0">\0</a>', $text);
        }

    But this doesn't work properly. For an input text which contains ... http://www.google.de/ ... I get the following output:

        <a href="http://www.google.de/<br">http://www.google.de/<br</a> />

    Why does it include the line breaks? How could I limit it to the real URL? Thanks in advance!
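
    A hedged guess at the cause: if the text was already run through something like nl2br(), the <br /> sits directly after the URL with no whitespace in between, and [\S]+ happily swallows the < and everything after it. Excluding markup delimiters from the character class stops the match at the real end of the URL (a sketch):

        function autoLink($text) {
            // stop at whitespace, '<', '>' and quotes so trailing markup is not captured
            return preg_replace('/https?:\/\/[^\s<>"\']+/i', '<a href="$0">$0</a>', $text);
        }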

  • Eager load this rails association

    - by dombesz
    Hi, I have a rails app which lists users. I have different relations between users, for example worked with, friend, preferred. When listing the users I have to decide whether the current user can add a specific user to his friends:

        -if current_user.can_request_friendship_with(user)
          =add_to_friends(user)
        -else
          =remove_from_friends(user)

        -if current_user.can_request_worked_with(user)
          =add_to_worked_with(user)
        -else
          =remove_from_worked_with(user)

    The can_request_friendship_with(user) method looks like:

        def can_request_friendship_with(user)
          !self.eql?(user) && !self.friendships.find_by_friend_id(user)
        end

    My problem is that this means, in my case, 4 queries per user; listing 10 users means 40 queries. Could I somehow eager load this?
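
    A sketch of one way around it (hedged; it assumes friendships exposes a friend_id column, as the finder above suggests): fetch the current user's friend ids once per request, then test membership in memory instead of running a finder per listed user. The same pattern repeats for worked_with and preferred.

        # in the controller action that renders the list: one query per relation type
        @friend_ids = current_user.friendships.map(&:friend_id)

        # in the view: no per-user query any more
        -if !current_user.eql?(user) && !@friend_ids.include?(user.id)
          =add_to_friends(user)
        -else
          =remove_from_friends(user)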

  • Python IPC, popen too slow

    - by UnableToLoad
    I need to run a subprocess (./generate_out) from a python script and get its output. At the moment I do this:

        import subprocess

        proc = subprocess.Popen('./generate_out',
                                shell=False,
                                stdout=subprocess.PIPE,
                                )
        while proc.poll() is None:
            out = proc.stdout.readline()
            data = doStuff(out)
            print(data)

    But it is slow: sometimes a lot of time passes between the output produced by ./generate_out and the print(data). Knowing that my doStuff() function is very fast, I think there is some buffer slowing down my pipe...

    Notes:

      - ./generate_out generates a potentially unlimited number of lines, each of finite length.
      - It seems that when too few chars are put in the pipe between the two processes, nothing happens; then, when enough is produced, I get a huge print (not the expected behaviour!).
      - Sometimes I wait many seconds (10-20 and more) between a generate_out print and the python print.

    What can I do? Maybe communicate() is faster? Anything else? Thank you a lot!
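
    A hedged sketch of the usual fix: the symptoms point at the child block-buffering its stdout once it is connected to a pipe instead of a terminal, so data only arrives in multi-kilobyte chunks. If generate_out can't be changed, stdbuf from GNU coreutils can force line buffering from the outside (communicate() would not help here, as it waits for the process to finish):

        import subprocess

        # force the child's stdout into line-buffered mode (assumes GNU coreutils' stdbuf is installed)
        proc = subprocess.Popen(['stdbuf', '-oL', './generate_out'],
                                stdout=subprocess.PIPE)

        for out in iter(proc.stdout.readline, b''):   # also avoids the poll()/readline race at exit
            data = doStuff(out)
            print(data)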

  • Wrapping quotes around a php id in XML

    - by Simon Hume
    I've got this line of code:

        $xml_output .= "\t<Event=" . $x . ">\n";

    And it will output:

        <Event=0>
        <Event=1>
        <Event=2>

    etc. etc. through my loop. I need it to output like this (with quotes around the number):

        <Event="0">
        <Event="1">
        <Event="2">

    Any help, and I'm sure it's simple, would be greatly appreciated!
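
    Escaping the quotes inside the double-quoted string does it; as a hedged aside, <Event="0"> is still not well-formed XML, so if a parser will ever read this output, an attribute form is safer (both lines sketched below):

        $xml_output .= "\t<Event=\"" . $x . "\">\n";     // literal quotes, exactly as asked
        $xml_output .= "\t<Event id=\"" . $x . "\">\n";  // well-formed alternative: <Event id="0">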

  • JavaScript-based diff utility

    - by poke
    I'm looking for a diff equivalent written in JavaScript that only returns/prints the relevant lines. I don't want both full texts displayed next to each other with the differences highlighted; I just want the actual differences (plus a few buffer lines to know where each difference is), similar to the output of the Linux diff utility. Does anybody know a JavaScript function that does this? All differences should be recognized (even changed whitespace). Thanks.

    Edit: I have seen jsdifflib, but in its example it always shows the full source, so unless there is a way to change the output somehow, it doesn't fully meet my requirements.

  • How to print lines from a file that have repeated more than six times

    - by Mike
    I have a file containing the data shown below. The first comma-delimited field may be repeated any number of times, and I want to print only the lines after the sixth repetition of any value of this field. For example, there are eight records with 1111111 as the first field, and I want to print only the seventh and eighth of these records.

    Input file:

        1111111,aaaaaaaa,14
        1111111,bbbbbbbb,14
        1111111,cccccccc,14
        1111111,dddddddd,14
        1111111,eeeeeeee,14
        1111111,ffffffff,14
        1111111,gggggggg,14
        1111111,hhhhhhhh,14
        2222222,aaaaaaaa,14
        2222222,bbbbbbbb,14
        2222222,cccccccc,14
        2222222,dddddddd,14
        2222222,eeeeeeee,14
        2222222,ffffffff,14
        2222222,gggggggg,14
        3333333,aaaaaaaa,14
        3333333,bbbbbbbb,14
        3333333,cccccccc,14
        3333333,dddddddd,14
        3333333,eeeeeeee,14
        3333333,ffffffff,14
        3333333,gggggggg,14
        3333333,hhhhhhhh,14

    Output:

        1111111,gggggggg,14
        1111111,hhhhhhhh,14
        2222222,gggggggg,14
        3333333,gggggggg,14
        3333333,hhhhhhhh,14

    What I have tried is to transpose the 2nd and 3rd fields with respect to the 1st, so that I can use nawk on the 7th or 8th field:

        #!/usr/bin/ksh
        awk -F"," '{
            a[$1]
            b[$1] = b[$1] "," $2
            c[$1] = c[$1] "," $3
        }
        END {
            for (i in a) { print i "," b[i] "," c[i] }
        }' file > output.txt
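
    If the goal is simply "print each line past the sixth occurrence of its key", a single pass with a per-key counter is enough (a sketch; a true pattern with no action makes awk print the line):

        awk -F"," '++count[$1] > 6' file > output.txt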

  • How to run a jar file in hadoop

    - by Arihant
    I have created a jar file from the java file in this blog, using the following statements:

        javac -classpath /usr/local/hadoop/hadoop-core-1.0.3.jar -d /home/hduser/dir Dictionary.java
        /usr/lib/jvm/jdk1.7.0_07/bin/jar cf Dictionary.jar /home/hduser/dir

    Now I have tried running this jar in hadoop by trial and error with various commands.

    1.

        hduser@ubuntu:~$ /usr/local/hadoop/bin/hadoop jar Dictionary.jar

    Output:

        Warning: $HADOOP_HOME is deprecated.
        RunJar jarFile [mainClass] args...

    2.

        hduser@ubuntu:~$ /usr/local/hadoop/bin/hadoop jar Dictionary.jar Dictionary

    Output:

        Warning: $HADOOP_HOME is deprecated.
        Exception in thread "main" java.lang.ClassNotFoundException: Dictionary
            at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:264)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:149)

    How can I run the jar in hadoop? I have the right DFS locations, as needed by my program.
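
    A hedged guess at the cause: jar cf Dictionary.jar /home/hduser/dir stores the class files under the entry path home/hduser/dir/..., so a class named Dictionary does not exist at the jar root and RunJar can't find it. Rebuilding the jar relative to the classes directory usually fixes this (a sketch; prepend the package name to "Dictionary" if Dictionary.java declares one):

        /usr/lib/jvm/jdk1.7.0_07/bin/jar cf Dictionary.jar -C /home/hduser/dir .
        /usr/local/hadoop/bin/hadoop jar Dictionary.jar Dictionary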

  • How to learn more XMPP/Jabber commands

    - by user359277
    Hi, I am using ejabberd as a chat server now, and I am writing a client to chat and to register new users. Right now I know some of the protocol, like sending the following command to register a new user:

        <iq type="set">
          <query xmlns="jabber:iq:register">
            <username>wfwfewegwegwewefg</username>
            <password>wfwefwefwefwef</password>
          </query>
        </iq>

    My question is: I want to learn more commands/protocol to talk to the server, so where can I learn more? For example:

      - How can I ask the server whether a user name exists or not?
      - How can I ask the server to unregister a user?

    What keywords should I search for? Should I search for "Jabber XMPP protocol" or what? Thanks
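
    For orientation: the stanza above comes from XEP-0077 ("In-Band Registration"), and the XEP index at xmpp.org together with RFC 3920/3921 covers the core protocol. XEP-0077 also defines account removal, sketched here:

        <iq type="set">
          <query xmlns="jabber:iq:register">
            <remove/>
          </query>
        </iq>

    Trying to register a username that already exists comes back as a conflict error, which doubles as an existence check.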

  • Neural network inputs and outputs mapped to meaningful values

    - by Micheal
    I'm trying to determine how to transform my "meaningful input" into data for an artificial neural network, and how to turn the output back into "meaningful output". The only way I can see of doing it is by converting everything to categories with binary values. For example, rather than outputting an age, having a 0-1 output for <10, a 0-1 for 10-19, etc. Same with the inputs, where I might be using, for example, hair colour: is the only way to turn this into input to have blonde 0-1, brown 0-1, etc.? Am I missing some entire topic of ANNs? Most of the books and similar material I read use theoretical examples.
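
    For what it's worth, the binary-per-category scheme is the standard "one-hot" encoding for nominal attributes, while numeric attributes such as age are usually fed as a single input scaled into a fixed range rather than binned; a small illustrative sketch of both (the category list and ranges are made up):

        # one-hot encoding for a nominal attribute
        colours = ['blonde', 'brown', 'black', 'red']
        def one_hot(value, categories):
            return [1.0 if c == value else 0.0 for c in categories]

        # min-max scaling for a numeric attribute
        def scale(x, lo, hi):
            return (x - lo) / float(hi - lo)

        inputs = one_hot('brown', colours) + [scale(34, 0, 100)]
        # -> [0.0, 1.0, 0.0, 0.0, 0.34]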

  • Scalable way of doing self join with many to many table

    - by johnathan
    I have a table structure like the following:

        user:               id, name
        profile_stat:       id, name
        profile_stat_value: id, name
        user_profile:       user_id, profile_stat_id, profile_stat_value_id

    My question is: how do I write a query to find all users matching a given profile_stat_id and profile_stat_value_id, for many stats at once? I've tried doing an inner self join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, and that's much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible?
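
    The count approach is the usual "relational division" idiom; a hedged sketch of its shape (MySQL flavour, the id pairs are made up, and it assumes user_profile has no duplicate rows plus a composite index on (profile_stat_id, profile_stat_value_id, user_id)):

        SELECT user_id
        FROM user_profile
        WHERE (profile_stat_id, profile_stat_value_id) IN ((1, 10), (2, 20), (3, 30))
        GROUP BY user_id
        HAVING COUNT(*) = 3;   -- the user must match all three requested stat/value pairs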
