Search Results

Search found 17593 results on 704 pages for 'wmi query'.

Page 366/704 | < Previous Page | 362 363 364 365 366 367 368 369 370 371 372 373  | Next Page >

  • A linq join combined with a regex

    - by Geert Beckx
    Is it possible to combine these two queries, or would that make my code too complex? I also think there should be a performance gain from combining them, since in the near future my source table could grow past 11,000 records. This is what I came up with so far:

      Dim lit As LiteralControl

      ' check characters not in alphabet
      Dim r As New Regex("^[^a-zA-Z]+")
      Dim query = From o In source.ToTable _
                  Where r.IsMatch(o.Field(Of String)("nam"))
      lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", "0-9", query.Count))
      plhAlpabetLinks.Controls.Add(lit)

      Dim q = From l In "ABCDEFGHIJKLMNOPQRSTUVWXYZ".ToLower.ToCharArray _
              Group Join o In source.ToTable _
              On l Equals o.Field(Of String)("nam").ToLowerInvariant(0) Into g = Group _
              Select l, g.Count

      ' iterate the alphabet to generate all the links.
      For Each letter In q.AsEnumerable
          lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", letter.l, letter.Count))
          plhAlpabetLinks.Controls.Add(lit)
      Next

    Kind regards, G.

    Read the article

  • How Serializable works with insert in SQL Server 2005

    - by Spence
    G'day. I think I have a misunderstanding of SERIALIZABLE. I have two tables (data, transaction) which I insert information into in a serializable transaction (either they are both in, or both out, but never in limbo):

      SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
      BEGIN TRANSACTION
          INSERT INTO dbo.data (ID, data) VALUES (@Id, data)
          INSERT INTO dbo.transactions (ID, info) VALUES (@ID, @info)
      COMMIT TRANSACTION

    I have a reconcile query which checks the data table for entries where there is no transaction, running at READ COMMITTED isolation level:

      INSERT INTO reconciles (ReconcileID, DataID)
      SELECT Reconcile = @ReconcileID, ID
      FROM Data
      WHERE NOT EXISTS (SELECT 1 FROM Transactions WHERE data.id = transactions.id)

    Note that the ID is actually a composite (2-column) key, so I can't use a NOT IN operator. My understanding was that the second query would exclude any values written into data without their transaction, as the insert happens at SERIALIZABLE and the read occurs at READ COMMITTED. I have evidence that reconcile is picking up entries
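
    A minimal sketch of one way to make the reconcile read see a consistent snapshot instead of racing the writers (this assumes SNAPSHOT isolation is acceptable and has been enabled on the database; table and column names follow the question, and the single-column comparison shown would need to cover both columns of the real composite key):

      -- Requires: ALTER DATABASE <db> SET ALLOW_SNAPSHOT_ISOLATION ON
      SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
      BEGIN TRANSACTION;
          INSERT INTO reconciles (ReconcileID, DataID)
          SELECT @ReconcileID, d.ID
          FROM dbo.data AS d
          WHERE NOT EXISTS (SELECT 1
                            FROM dbo.transactions AS t
                            WHERE t.ID = d.ID);   -- repeat the predicate for the second key column as well
      COMMIT TRANSACTION;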

    Read the article

  • Does SQL Server 2005 return error message numbers to the ASP.NET application?

    - by Duke
    I'd like to get the message number and severity level information from SQL Server upon execution of an erroneous query. For example, when a user attempts to delete a row being referenced by another record, and the cascade relationship is "no action", I'd like the application to be able to check for error message 547 ("The DELETE statement conflicted with the REFERENCE constraint...") and return a user-friendly, localized message to the user. When running such a query directly on SQL Server, the following message is printed:

      Msg 547, Level 16, State 0, Line 1
      <Error message...>

    In an ASP.NET app, is this information available in an event handler parameter or elsewhere? Also, I don't suppose anyone knows where I can find a definitive reference of SQL Server message numbers?
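
    One place the full list of message numbers and severities lives is the sys.messages catalog view (SQL Server 2005 and later); a minimal lookup for error 547 might look like this sketch:

      SELECT message_id, severity, text
      FROM sys.messages
      WHERE message_id = 547
        AND language_id = 1033;   -- 1033 = US English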

    Read the article

  • Need to count the frequency of each term inside a document

    - by Wai Loon II
    Hi, I need to calculate the frequency of all the terms inside a document. How can I do that? I'm not asking for code, just for guidance. I am doing a similarity calculation between a document and a query. I have calculated the term frequency for the query, but I do not know how to calculate the term frequency for EACH word inside a document. Can anyone guide me? Thank you for your attention.

    Read the article

  • Interesting Row_Number() bug

    - by Joel Coehoorn
    I was playing with the Stack Exchange Data Explorer and ran this query: http://odata.stackexchange.com/stackoverflow/q/2828/rising-stars-top-50-users-ordered-on-rep-per-day Notice down in the results, rows 11 and 12 have the same value and so are mis-numbered, even though the row_number() function takes the same order by parameter as the query. I know the correct fix here is to specify an additional tie-breaker column in the order by clauses, but I'm more curious as to why/how the row_number() function returned different results on the same data? If it makes a difference anywhere, this runs on Azure.
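
    A minimal sketch of the tie-breaker fix the question already acknowledges (the table and column names here are hypothetical stand-ins, not the Data Explorer schema): adding a unique column to both ORDER BY clauses makes the numbering deterministic when two users have the same rep-per-day.

      SELECT ROW_NUMBER() OVER (ORDER BY RepPerDay DESC, UserId) AS RowNum,
             UserId,
             RepPerDay
      FROM RisingStars
      ORDER BY RepPerDay DESC, UserId;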

    Read the article

  • Find groups with both validated and unvalidated users

    - by Matchu
    (Not my real MySQL schema, but it illustrates what needs to be done.) Users can belong to many groups, and groups have many users.

      users:        id INT, validated TINYINT(1)
      groups:       id INT, name VARCHAR(20)
      groups_users: group_id INT, user_id INT

    I need to find groups that contain both validated and unvalidated users (validated being 1 or 0, respectively), in order to perform a specific manual maintenance task. There are thousands of users, all belonging to at least one group, but a group usually only has 2-5 users. This is a live production server, so I could probably craft a query myself, but the last one I tried took a matter of minutes before I killed it. (I'm not one of those brilliant SQL wizards.) I suppose I could take the server down for maintenance, but, if possible, a query that gets this job done in a matter of seconds would be fantastic. Thanks!
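
    A minimal sketch of one single-pass query shape for this, assuming the schema above; MIN and MAX over the 0/1 flag can only both hold when a group has at least one member of each kind.

      SELECT gu.group_id
      FROM groups_users AS gu
      JOIN users AS u ON u.id = gu.user_id
      GROUP BY gu.group_id
      HAVING MIN(u.validated) = 0    -- at least one unvalidated member
         AND MAX(u.validated) = 1;   -- at least one validated member

    An index on groups_users (group_id, user_id) plus the users primary key usually keeps this in the seconds range.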

    Read the article

  • How to create initializeDB() method for java database

    - by Holly
    I am working on a Java project for class and have not worked much with incorporating databases into Java. I can't find much on the initializeDB() method, but if I could get some help I would really appreciate it. Below is the code being used for the initializeDB() method:

      private void initializeDB() {
          try {
              // Load the JDBC driver
              System.out.println("Driver loaded");
              // Establish a connection
              System.out.println("Database connected");
              // Create a statement
              // Create a SQL Query string
              // Execute the query to create a recordset
          } catch (Exception ex) {
              ex.printStackTrace();
          }
      }

    Read the article

  • Issues accessing an object's array values - returns null or 0s

    - by PhatNinja
    The function below should return an array of objects with this structure:

      TopicFrequency = {
          name: "Chemistry",                       // This is dependent on topic
          data: [1,2,3,4,5,6,7,8,9,10,11,12]       // This would be real data
      };

    So when I do this:

      myData = this.getChartData("line");

    it should return two objects:

      {name: "Chemistry", data: [1,2,3,4,51,12,0,0,2,1,41,31]}
      {name: "Math",      data: [0,0,41,4,51,12,0,0,2,1,41,90]}

    When I do console.log(myData) it's perfect; it returns exactly this. However, when I do console.log(myData[0].data) it returns all 0s, not the values. I'm not sure what this issue is known as, and my question is simply: what is this problem called? Here is the full function. Some things were hardcoded and other variables (notably server and queryContent) removed; those parts work fine. It is only when manipulating or retrieving the returned array's values that I run into problems. Note this is async, so I'm not sure if that is also part of the problem.

      getChartData: function (chartType) {
          var TopicsFrequencyArray = new Array();
          timePairs = this.newIntervalSet("Month");
          topicList = new Array("Chemistry", "Math"); // Hard coded for now
          var queryCopy = {
              // sensitive information
          };
          for (i = 0; i < topicList.length; i++) {
              var TopicFrequency = {
                  name: null,
                  data: this.newFilledArray(12, 0)
              };
              j = 0;
              TopicFrequency.name = topicList[i];
              while (j < timePairs.length) {
                  queryCopy.filter = TopicFrequency.name;
                  // additional queryCopy parameter changes made here
                  var request = esri.request({
                      url: server,
                      content: queryCopy,
                      handleAs: "json",
                      load: sucess,
                      error: fail
                  });
                  j = j + 1;
                  function sucess(response, io) {
                      var topicCountData = 0;
                      query = esri.urlToObject(io.url);
                      var dateString = query.query.fromDate.replace("%", " ");
                      dateString = dateString.replace(/-/g, "/");
                      dateString = dateString.split(".");
                      date = new Date(dateString[0]);
                      dojo.forEach(response.features, function (feature) {
                          if (feature.properties.count > 0) {
                              topicCountData = feature.properties.count;
                          }
                          TopicFrequency.data[date.getMonth()] = topicCountData;
                      });
                  }
                  function fail(error) {
                      j = j + 1;
                      alert("There was an unspecified error with this request");
                      console.log(error);
                  }
              }
              TopicsFrequencyArray.push(TopicFrequency);
          }
      },

    Read the article

  • determine which value produced a hit in SOLR multivalued field type

    - by harschware
    If I have a multiValued field type of text, and I put values [cat,dog,green,blue] in it. Is there a way to tell when I execute a query against that field for dog, that it was in the 1st element position for that multiValued field? Assumption: client does not have any pre-knowledge of what the field type of the field being queried is. (i.e. Solr must provide the answer and the client can't post process the return doc to figure it out because it would not know how SOLR matched the query to the result). Disclosure: I posted to solr-user list and am getting no traction so I post here now.

    Read the article

  • Using a singleton database class in functions and multiple scripts(PHP) - best use methods

    - by dscher
    I have a singleton DB connection which I get with:

      $dbConnect = myDatabase::getInstance();

    which is easy enough. My question is: what is the least repetitive and most legitimate way of using this connection in functions and classes? It seems silly to have to declare the variable global, pass it into every single function, and/or recreate this variable within every function. Is there another answer for this? Obviously I'm a noob and I can work my way around this problem 10 different ways, none of which is really attractive to me. It would be a lot easier if I could have that $dbConnect variable accessible in any function without needing to declare it global or pass it in. I do know I can add the variable to the $_SERVER array... is there something wrong with doing this? It seems somewhat inappropriate to me. Another quick question: is it bad practice to do this directly within a function?

      $result = myDatabase::getInstance()->query($query);

    Read the article

  • How to learn more XMPP/Jabber commands

    - by user359277
    Hi, I am using ejabberd as a chat server now, and I am writing a client to chat and register new users. Right now, I know some of the protocol for registering a new account, like sending the following stanza to register a new user:

      <iq type="set">
        <query xmlns="jabber:iq:register">
          <username>wfwfewegwegwewefg</username>
          <password>wfwefwefwefwef</password>
        </query>
      </iq>

    My question is: I want to learn more commands/protocol to talk to the server, so where can I learn more? For example: How can I ask the server whether a user name exists or not? How can I ask the server to unregister a user? What keywords should I search for? Should I search for "Jabber XMPP protocol" or what? Thanks

    Read the article

  • Eager load this rails association

    - by dombesz
    Hi, I have a Rails app which has a list of users. I have different relations between users, for example worked-with, friend, preferred. When listing the users I have to decide if the current user can add a specific user to his friends:

      - if current_user.can_request_friendship_with(user)
        = add_to_friends(user)
      - else
        = remove_from_friends(user)

      - if current_user.can_request_worked_with(user)
        = add_to_worked_with(user)
      - else
        = remove_from_worked_with(user)

    The can_request_friendship_with(user) method looks like:

      def can_request_friendship_with(user)
        !self.eql?(user) && !self.friendships.find_by_friend_id(user)
      end

    My problem is that this means, in my case, 4 queries per user; listing 10 users means 40 queries. Could I somehow eager load this?

    Read the article

  • acts-as-taggable-on: find tags with name LIKE, sort by tag_counts?

    - by James
    Hi, I'm using the Rails plugin acts-as-taggable-on and I'm trying to find the top 5 most-used tags whose names match or partially match a given query. When I do

      User.skill_counts.order('count DESC').limit(5).where('name LIKE ?', params[:query])

    it returns the following error:

      ActiveRecord::StatementInvalid: SQLite3::SQLException: ambiguous column name: name:
      SELECT tags.*, COUNT(*) AS count FROM "tags"
      INNER JOIN users ON users.id = taggings.taggable_id
      LEFT OUTER JOIN taggings ON tags.id = taggings.tag_id AND taggings.context = 'skills'
      WHERE (taggings.taggable_type = 'User')
        AND (taggings.taggable_id IN (SELECT users.id FROM "users"))
        AND (name LIKE 'asd')
      GROUP BY tags.id, tags.name
      HAVING COUNT(*) > 0
      ORDER BY count DESC LIMIT 5

    But when I do User.skill_counts.first.name, it returns "alliteration". I'd appreciate any help on this matter.

    Read the article

  • Is there a way to give a subquery an alias in Oracle 11g SQL?

    - by Matt Pascoe
    Is there a way to give a subquery in Oracle 11g an alias, like:

      select *
      from (select client_ref_id, request  from some_table where message_type = 1) abc,
           (select client_ref_id, response from some_table where message_type = 2) defg
      where abc.client_ref_id = def.client_ref_id;

    Otherwise, is there a way to join the two subqueries based on the client_ref_id? I realize there is a self join, but on the database I am running on, a self join can take up to 5 minutes to complete (there is some extra logic in the actual query I am running, but I have determined the self join is what is causing the issue). The individual subqueries only take a few seconds to complete by themselves. The self-join query looks something like:

      select st.request, st1.request
      from some_table st, some_table st1
      where st.client_ref_id = st1.client_ref_id;
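
    A minimal sketch of the same join written with subquery factoring (the WITH clause, available in Oracle 11g), which gives each subquery a reusable name; the alias names here are just illustrative:

      WITH req AS (SELECT client_ref_id, request  FROM some_table WHERE message_type = 1),
           res AS (SELECT client_ref_id, response FROM some_table WHERE message_type = 2)
      SELECT req.client_ref_id, req.request, res.response
      FROM req
      JOIN res ON res.client_ref_id = req.client_ref_id;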

    Read the article

  • SQL Native Client 10 performance is miserable (due to server-side cursors)

    - by namezero
    We have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented, MSSQL and MySQL. Now, when we use MSSQL (with the Native Client 10.0), retrieving records with SELECT is dramatically slow over slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example:

      CRecordset r(&m_db);
      r.Open(CRecordset::snapshot,
             L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref");
      r.MoveFirst();
      while(!r.IsEOF())
      {
          // Retrieve
          CString strData;
          crs.GetFieldValue(L"a.something", strData);
          crs.MoveNext();
      }

    Now, with the MySQL driver, everything runs as it should: the query is returned, and everything is lightning fast. However, with the MSSQL Native Client, things slow down, because on every MoveNext() the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using:

      ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER);

    but this didn't help either. There are still long-running execs of sp_cursorfetch() et al. in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time too (even if there is only one record in the result set). This, however, only happens on queries with LEFT JOINs, table-valued functions, etc. Note that the query doesn't take that long - executing the same SQL via SQL Studio over the same connection returns in a reasonable time.

    Question 1: Is it possible to somehow get the Native Client to "cache" all results locally / use local cursors in a similar fashion to what the MySQL driver seems to do? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk to the server again until the next query. We don't care about recordset updates, deletes, etc. or any of that nonsense. We only want to retrieve data: we take that recordset, get all the data, and delete it.

    Question 2: Is there a more efficient way to just retrieve data in MFC with ODBC?

    Read the article

  • SQL join from multiple tables

    - by Kenny Anderson
    Hi all. We've got a system (MS SQL 2008 R2-based) that has a number of "input" databases and one "output" database. I'd like to write a query that will read from the output DB and JOIN it to data in one of the source DBs. However, the source table may be one or more individual tables :( The name of the source DB is included in the output DB; ideally, I'd like to do something like the following (pseudo-SQL ahoy):

      SELECT output.UID, output.description, input.data
      FROM output.dbo.description
      LEFT JOIN (SELECT input.UID, input.data
                 FROM [output.sourcedb].dbo.datatable) AS input
          ON input.UID = output.UID

    Is there any way to do something like the above - "dynamically" specify the database and table to be joined on for each row in the query?
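
    A minimal sketch of the usual workaround: a three-part name can't vary per row, so one option is to build and run the statement once per distinct source database with dynamic SQL. Everything below is illustrative; the column holding the source DB name (SourceDbName) is hypothetical, and QUOTENAME guards the injected identifier.

      DECLARE @sourceDb sysname, @sql nvarchar(max);

      -- one iteration per distinct source database; shown here for a single value
      SELECT TOP (1) @sourceDb = SourceDbName FROM dbo.description;

      SET @sql = N'SELECT o.UID, o.description, i.data
                   FROM dbo.description AS o
                   LEFT JOIN ' + QUOTENAME(@sourceDb) + N'.dbo.datatable AS i
                       ON i.UID = o.UID
                   WHERE o.SourceDbName = @db;';

      EXEC sys.sp_executesql @sql, N'@db sysname', @db = @sourceDb;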

    Read the article

  • Scalable way of doing self join with many to many table

    - by johnathan
    I have a table structure like the following:

      user:               id, name
      profile_stat:       id, name
      profile_stat_value: id, name
      user_profile:       user_id, profile_stat_id, profile_stat_value_id

    My question is: how do I evaluate a query where I want to find all users with a given profile_stat_id and profile_stat_value_id for many stats? I've tried doing an inner self join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, and that's much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible?
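
    A minimal sketch of the count-based shape, which avoids N self joins; the (stat, value) pairs are illustrative, and the row-constructor IN syntax assumes MySQL or PostgreSQL (SQL Server would need ORed predicates or a join against a values list instead).

      -- Users matching all N requested (stat, value) pairs; here N = 2.
      SELECT up.user_id
      FROM user_profile AS up
      WHERE (up.profile_stat_id, up.profile_stat_value_id) IN ((1, 10), (2, 20))
      GROUP BY up.user_id
      HAVING COUNT(DISTINCT up.profile_stat_id) = 2;

    A composite index on (profile_stat_id, profile_stat_value_id, user_id) usually lets the filter and the grouping run off the index alone.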

    Read the article

  • Linq to SQL Strange SQL Translation

    - by Master Morality
    I have a simple query that is generating some odd SQL translations, which is blowing up my code when the object is saturated.

      from x in DataContext.MyEntities
      select new
      {
          IsTypeCDA = x.EntityType == "CDA" // x.EntityType is a string and EntityType.CDA is a const string...
      }

    I would expect this query to translate to:

      SELECT (CASE WHEN [t0].[EntityType] = @p1 THEN 1 ELSE 0 END) AS [IsTypeCDA] ...

    Instead I get this:

      SELECT (CASE
                  WHEN @p1 = [t0].[EntityType] THEN 1
                  WHEN NOT (@p1 = [t0].[EntityType]) THEN 0
                  ELSE NULL
              END) AS [IsTypeCDA] ...

    Since I'm saturating a POCO where IsTypeCDA is a bool, it blows up stating I can't assign null to bool. Any thoughts?

    Edit: fixed the property names so they make sense...

    Read the article

  • Post High Score and Retrieve Position

    - by majman
    I'm not so savvy with MySQL, so my apologies in advance if this is a dumb question. I've created a super basic PHP high-scores table. Upon inserting a new score into the DB table, I'd like to retrieve the position of that score so that I can get 10 results with the person's score falling within that range. My INSERT query looks something like:

      $stmt = $mysqli->prepare("INSERT INTO highscores (name, time, score) VALUES (?, ?, ?)");
      $stmt->bind_param('sdi', $name, $time, $score);

    UPDATE - I'm looking for a way to do this with as few queries as possible. I recall reading something about getting an INSERT ID when making an insert, but I would then still have to make a second query to get those results.
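
    A minimal sketch of one way to get the position right after the insert (this counts strictly higher scores, so ties share a position; @new_score is only a placeholder for the value just inserted):

      SELECT COUNT(*) + 1 AS position
      FROM highscores
      WHERE score > @new_score;

    The surrounding ten rows can then come from an ORDER BY score DESC query with LIMIT/OFFSET centered on that position.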

    Read the article

  • Insert record into mysql db with Entity Framework

    - by sanfra1983
    Hi, the problem is that I need to insert a new record into a MySQL table. I have already mapped the MySQL DB with the Entity Framework and have already run tests returning data, and everything works. Now I read from a .txt file in which queries are written, run them one by one, and want a true/false result back based on the outcome of each single query in the file. I did this:

      using (var we = new demotestEntities())
      {
          foreach (var l in listaqueri)
          {
              var p = we.CreateQuery<category>(l);
              we.SaveChanges();
              result = true;
          }
      }

    but it does not work. Strangely, it returns no errors, yet the result of the query written in the .txt file never appears. The file contains the following query:

      INSERT INTO category (id, name) VALUES (null, 'test2')

    Can anyone help me?

    Read the article

  • Insert a date into a MySQL database

    - by kawtousse
    I use a jQuery datepicker, then I read it in my servlet like this:

      String dateimput = request.getParameter("datepicker"); //1

    then parse it like this:

      System.out.println("datepicker:" + dateimput);
      DateFormat df = new SimpleDateFormat("MM/dd/yyyy");
      java.util.Date dt = null;
      try {
          dt = df.parse(dateimput);
          System.out.println("date imput parssé1 est:" + dt);
          System.out.println("date imput parsée2 est:" + df.format(dt));
      } catch (ParseException e) {
          e.printStackTrace();
      }

    and build the insert query like this:

      String query = "Insert into dailytimesheet(trackingDate,activity,projectCode) values ("
                     + df.format(dt) + ", \"" + activity + "\" ,\"" + projet + "\")";

    It passes successfully up to this point, but when I check the inserted record I find the date 01/01/0001 00:00:00. I've tried to fix it but it's still a mess for me.
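
    A minimal sketch of the SQL shape MySQL actually expects here (the values are placeholders): the date literal must be quoted, and is safest in ISO YYYY-MM-DD form, because an unquoted MM/dd/yyyy value like 05/12/2010 is evaluated as division and ends up stored as a nonsense date. (On the Java side, a PreparedStatement with setDate would sidestep the string building entirely.)

      INSERT INTO dailytimesheet (trackingDate, activity, projectCode)
      VALUES ('2010-05-12', 'meeting', 'PRJ01');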

    Read the article

  • Problems with paging in SQL Server

    - by Manh Trinh
    I have searched for paging in SQL Server. Most of the solutions I found look like "What is the best way to paginate results in SQL Server", but they don't meet my expectations. Here is my situation: I work with JasperReports; to export a report I just need to pass any SELECT query into the template, and it will auto-generate the report. For example, I have a select query like this:

      Select * from tableA

    I don't know any of the column names in tableA, so I can't use

      Select ROW_NUMBER() Over (Order By columnName)

    and I also don't want to order by any particular column. Can anyone help me do this? PS: In Oracle, ROWNUM is very helpful in this case (Paging with Oracle):

      Select * from tableA where rownum > 100 and rownum < 200
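
    A minimal sketch of the trick that usually covers this on SQL Server 2005 and later (tableA and the page bounds are placeholders): numbering over (SELECT NULL) imposes no meaningful order, much like ROWNUM, so the inner query's columns never need to be named.

      SELECT *
      FROM (
          SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn, q.*
          FROM (SELECT * FROM tableA) AS q      -- the arbitrary SELECT goes here
      ) AS numbered
      WHERE rn > 100 AND rn < 200;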

    Read the article

  • Parsing XML file using a for loop

    - by Johnny Spintel
    I have been working on this program, which inserts an XML file into a MySQL database. I'm new to the whole .jar idea of adding packages. I'm having an issue with parse(), select(), and children(). Can someone tell me how I could fix this issue? Here are my stack trace and my program below:

      Exception in thread "main" java.lang.Error: Unresolved compilation problems:
          The method select(String) is undefined for the type Document
          The method children() is undefined for the type Element
          The method children() is undefined for the type Element
          The method children() is undefined for the type Element
          The method children() is undefined for the type Element
          at jdbc.parseXML.main(parseXML.java:28)

      import java.io.*;
      import java.sql.*;
      import org.jsoup.Jsoup;
      import org.w3c.dom.*;
      import javax.xml.parsers.*;

      public class parseXML {
          public static void main(String xml) {
              try {
                  BufferedReader br = new BufferedReader(new FileReader(new File("C:\\staff.xml")));
                  String line;
                  StringBuilder sb = new StringBuilder();
                  while ((line = br.readLine()) != null) {
                      sb.append(line.trim());
                  }
                  Document doc = Jsoup.parse(line);
                  StringBuilder queryBuilder;
                  StringBuilder columnNames;
                  StringBuilder values;
                  for (Element row : doc.select("row")) {
                      // Start the query
                      queryBuilder = new StringBuilder("insert into customer(");
                      columnNames = new StringBuilder();
                      values = new StringBuilder();
                      for (int x = 0; x < row.children().size(); x++) {
                          // Append the column name and it's value
                          columnNames.append(row.children().get(x).tagName());
                          values.append(row.children().get(x).text());
                          if (x != row.children().size() - 1) {
                              // If this is not the last item, append a comma
                              columnNames.append(",");
                              values.append(",");
                          } else {
                              // Otherwise, add the closing paranthesis
                              columnNames.append(")");
                              values.append(")");
                          }
                      }
                      // Add the column names and values to the query
                      queryBuilder.append(columnNames);
                      queryBuilder.append(" values(");
                      queryBuilder.append(values);
                      // Print the query
                      System.out.println(queryBuilder);
                  }
              } catch (Exception err) {
                  System.out.println(" " + err.getMessage());
              }
          }
      }

    Read the article

  • Propel: How the "Affected Rows" Returned from doUpdate is defined

    - by Ngu Soon Hui
    In Propel there is this doUpdate function, which returns the number of rows affected by the query. The question is: if there is no need to update a row (because the set value is already the same as the field value), will those rows be counted as affected rows? Take for example the following table:

      ID | Name  | Books
      1  | S1oon | Me
      2  | S1oon | Me

    Let's assume that I write the ORM equivalent of the following query:

      update `new table` set Books='Me' where Name='S1oon';

    What will the doUpdate result be? Will it return 0 (because all the Books columns are already 'Me', hence there is no need to update), or will it be 2 (because there are 2 rows that fulfill the WHERE condition)?
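
    A minimal sketch of how to see what the driver reports, assuming a MySQL backend (which the backtick quoting suggests): by default MySQL counts only rows that actually changed, while connections opened with the CLIENT_FOUND_ROWS flag count matched rows instead, so the answer depends on how Propel's connection is configured.

      UPDATE `new table` SET Books = 'Me' WHERE Name = 'S1oon';
      SELECT ROW_COUNT();   -- 0 by default here (both rows already had Books = 'Me');
                            -- 2 if the connection was made with CLIENT_FOUND_ROWS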

    Read the article
