Search Results

Search found 4685 results on 188 pages for 'queries'.

  • TSQL - compare tables

    - by Rya
    I want to create a stored procedure that compares the results of two queries. If the results of the second query can be found in the first, print 'YES'; otherwise print 'No'.
    Table 1:
    SELECT dbo.Roles.RoleName, dbo.UserRoles.RoleID
    FROM dbo.Roles
    LEFT OUTER JOIN dbo.UserRoles ON dbo.Roles.RoleID = dbo.UserRoles.RoleID
    WHERE (dbo.Roles.PortalID = 0) AND (dbo.UserRoles.UserID = 2)
    Table 2:
    DECLARE @RowData AS nvarchar(2000)
    SET @RowData = (SELECT EditPermissions FROM vw_XMP_DMS_Documents WHERE DocumentID = 2)
    SELECT Data FROM dbo.split(@RowData, ',')
    For example, if Table 1 returns John, Jack, James and Table 2 returns John, Sally, Jane, print 'YES'. Is this possible? Thank you all very much. -R
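
    A minimal T-SQL sketch of one way to do this, assuming "found" means that at least one value from the second result set also appears in the first (which is what the John/Jack/James example suggests); the table, view, and dbo.split names are taken from the question:

        -- Capture the comma-separated values exactly as in the question.
        DECLARE @RowData nvarchar(2000);
        SET @RowData = (SELECT EditPermissions FROM vw_XMP_DMS_Documents WHERE DocumentID = 2);

        -- Print YES if any split value matches a role name from the first query.
        IF EXISTS (
            SELECT 1
            FROM dbo.split(@RowData, ',') AS s
            WHERE s.Data IN (SELECT r.RoleName
                             FROM dbo.Roles AS r
                             LEFT OUTER JOIN dbo.UserRoles AS ur ON r.RoleID = ur.RoleID
                             WHERE r.PortalID = 0 AND ur.UserID = 2))
            PRINT 'YES';
        ELSE
            PRINT 'No';

    If "found" instead means that every row of the second result set must appear in the first, replace the IN test with a NOT EXISTS check over the second set and invert the branches.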

  • Specify sorting order for a GROUP BY query to retrieve oldest or newest record for each group

    - by Beau Simensen
    I need to get the most recent record for each device from an upgrade request log table. A device is uniquely identified by the combination of its hardware ID and its MAC address. I have been attempting to do this with GROUP BY, but I am not convinced this is safe, since it looks like it may simply be returning the "top record" (whatever SQLite or MySQL thinks that is). I had hoped that this "top record" could be hinted at by way of ORDER BY, but that does not seem to have any impact, as both of the following queries return the same record for each device, just in opposite order:
    SELECT extHwId, mac, created FROM upgradeRequest GROUP BY extHwId, mac ORDER BY created DESC
    SELECT extHwId, mac, created FROM upgradeRequest GROUP BY extHwId, mac ORDER BY created ASC
    Is there another way to accomplish this? I've seen several somewhat related posts that all involved subselects. If possible, I would like to learn how to do this without one.
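
    The standard safe pattern on both MySQL and SQLite does use a grouped subquery (which the question hoped to avoid), joined back to the log table; shown here as a sketch with the table and column names from the question:

        -- Find each device's latest timestamp, then pull the matching row.
        SELECT u.extHwId, u.mac, u.created
        FROM upgradeRequest AS u
        JOIN (SELECT extHwId, mac, MAX(created) AS latest
              FROM upgradeRequest
              GROUP BY extHwId, mac) AS m
          ON m.extHwId = u.extHwId
         AND m.mac     = u.mac
         AND m.latest  = u.created;

    If two rows for the same device share the same created value, both come back; deduplicate on a unique key if that matters. The bare GROUP BY in the question is unsafe because the non-grouped created column is chosen arbitrarily, and ORDER BY is applied after grouping, so it cannot influence which row is picked.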

  • Is there a way to split the results of a select query into two equal halves?

    - by Matthias
    I'd like to have a query returning two result sets, each of which holds exactly half of all records matching a certain criterion. I tried using TOP 50 PERCENT in conjunction with an ORDER BY, but if the number of records in the table is odd, one record shows up in both result sets. Example: I've got a simple table with TheID (PK) and TheValue (varchar(10)) fields and 5 records. Skip the WHERE clause for now.
    SELECT TOP 50 PERCENT * FROM TheTable ORDER BY TheID ASC -- returns IDs 1, 2, 3
    SELECT TOP 50 PERCENT * FROM TheTable ORDER BY TheID DESC -- returns IDs 3, 4, 5
    3 is a dup. In real life, of course, the queries are fairly complicated, with a ton of WHERE clauses and subqueries.
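
    One sketch that avoids the overlap, assuming SQL Server 2005 or later (ROW_NUMBER and COUNT(*) OVER are needed); the odd middle row lands in the second half only:

        WITH numbered AS (
            SELECT *,
                   ROW_NUMBER() OVER (ORDER BY TheID) AS rn,
                   COUNT(*)     OVER ()               AS total
            FROM TheTable
            -- the real WHERE clauses and subqueries go here
        )
        SELECT * FROM numbered WHERE rn <= total / 2;   -- first half
        -- second half: ... WHERE rn > total / 2

    A CTE only covers the single statement that follows it, so the second half needs the CTE repeated or the numbered rows materialized into a temp table. With 5 rows, total / 2 is 2 (integer division), so the halves are 2 and 3 rows with no duplicate.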

  • How can I sqldump a huge database?

    - by meder
    SELECT count(*) FROM the table gives me 3,296,869 rows. The table only contains 4 columns, storing dropped domains. I tried to dump the SQL with:
    $backupFile = $dbname . date("Y-m-d-H-i-s") . '.gz';
    $command = "mysqldump --opt -h $dbhost -u $dbuser -p $dbpass $dbname | gzip > $backupFile";
    However, this just dumps an empty 20 KB gzipped file. My client is using shared hosting, so the server specs and resource usage aren't top of the line. I'm not even given SSH access or direct access to the database, so I have to make queries through PHP scripts I upload via FTP (SFTP isn't an option, again). Is there some way I can perhaps sequentially download portions of it, or pass an argument to mysqldump that will optimize it? I came across http://jeremy.zawodny.com/blog/archives/000690.html which mentions the -q flag and tried that, but it didn't seem to do anything differently.
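
    One thing worth checking in the quoted command: the MySQL clients expect the password to follow -p with no space (-p$dbpass); with a space, mysqldump waits for an interactive password prompt and treats $dbpass as a database name, which can easily yield an empty dump. If mysqldump stays unusable on the shared host, a fallback that works through ordinary PHP-issued queries is to export the table in fixed-size chunks; the table name dropped_domains and its id column below are purely illustrative:

        -- Fetch the table a chunk at a time; the PHP script writes each result
        -- out (and gzips it) before requesting the next chunk.
        SELECT * FROM dropped_domains ORDER BY id LIMIT 100000 OFFSET 0;
        SELECT * FROM dropped_domains ORDER BY id LIMIT 100000 OFFSET 100000;
        -- ...keep adding 100000 to the OFFSET until a query returns no rows.

    Keyset paging (WHERE id > last_seen_id ORDER BY id LIMIT 100000) scales better than a growing OFFSET on a 3.3-million-row table, assuming the chunking column is indexed.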

  • [MySQL, InnoDB] Rating place

    - by Pavel
    I'm trying to generate a rating-place table using the following recipe: http://stackoverflow.com/questions/1776821/assign-places-in-the-rating-mysql-php but my database is under heavy load. Instead of creating the table each time, I tried using a MEMORY table and updating it with the following SQL query:
    insert into tops (uid) select uid from users order by exp desc;
    but got the following MySQL error:
    Deadlock found when trying to get lock; try restarting transaction
    because there are too many queries running while the SELECT is being executed. How can I solve this problem? P.S. CREATE TABLE tops AS SELECT works almost fine, except for the high server load... up to load average: 50 if tops is a non-MEMORY table. My users table has nearly 4.5 million rows. Thanks for any advice.
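
    One possible mitigation, sketched with the table names from the question: under the default InnoDB settings, INSERT ... SELECT typically takes shared locks on the rows it reads from users, which is what collides with concurrent writes and produces the deadlock. Building the ranking into a scratch table and swapping it in atomically keeps tops readable during the rebuild and makes a failed attempt cheap to retry:

        -- Build into a scratch table, then swap atomically so readers of tops
        -- never see a half-filled table while the rebuild runs.
        CREATE TABLE tops_new LIKE tops;
        INSERT INTO tops_new (uid) SELECT uid FROM users ORDER BY exp DESC;
        RENAME TABLE tops TO tops_old, tops_new TO tops;
        DROP TABLE tops_old;

    Reading the uids with a plain SELECT (a consistent, non-locking read) and inserting them in batches from the application avoids the shared locks on users entirely, at the cost of more round trips.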

  • Neo4j+Gremlin : Trouble with T.gte and floating point node attributes

    - by letronje
    For one type of node in my graph, the values of an attribute (named 'some_count') are either missing, an integer, or a float. I'm trying to write a Gremlin query to filter these nodes based on a minimum value for this attribute. I first verified that the values are indeed present by running the following Gremlin:
    g.v(XXX)._().in('category').hasNot('some_count', T.eq, null).back(1).some_count
    Next, I tried filtering by exact value, and that works: it shows me the matching nodes, or gives an empty array if there is no match:
    g.v(XXX)._().in('category').hasNot('some_count', T.eq, null).back(1).has('some_count', T.eq, 120000.0d)
    But the following query, which uses the 'greater than or equal to' comparator, doesn't work:
    g.v(XXX)._().in('category').hasNot('some_count', T.eq, null).back(1).has('some_count', T.gte, 1.0d)
    This returns nil (I'm querying via Ruby/Rails using the Neo4j AR Adapter). Instead of returning an empty array for no match, it returns nil, which tells me something could be wrong with the query itself. I'm running Neo4j community server 1.8. Is there a way I can ask Neo4j to log errors/queries, to see what could be going wrong?

  • Index for wildcard match of end of string

    - by Anders Abel
    I have a table of phone numbers, storing the phone number as varchar(20). I have a requirement to support searching both on the entire number and on only the last part of it, so a typical query will be:
    SELECT * FROM PhoneNumbers WHERE Number LIKE '%1234'
    How can I put an index on the Number column to make those searches efficient? Is there a way to create an index that sorts the records on the reversed string? Another option might be to reverse the numbers before storing them, which would give queries like:
    SELECT * FROM PhoneNumbers WHERE ReverseNumber LIKE '4321%'
    However, that would require every user of the database to always reverse the string. It could be solved by storing both the normal and the reversed number and having the reversed number kept up to date by a trigger on insert/update, but that kind of solution is not very elegant. Any other suggestions?
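
    A sketch assuming SQL Server: a computed column holds the reversed number, so callers never reverse anything themselves, and an ordinary index on it turns the suffix search into a range seek (PostgreSQL expression indexes and MySQL generated columns offer the same idea):

        -- REVERSE is deterministic, so the computed column can be indexed.
        ALTER TABLE PhoneNumbers
            ADD ReverseNumber AS REVERSE(Number) PERSISTED;

        CREATE INDEX IX_PhoneNumbers_ReverseNumber
            ON PhoneNumbers (ReverseNumber);

        -- Suffix search: reverse the search term once, inside the query.
        SELECT *
        FROM PhoneNumbers
        WHERE ReverseNumber LIKE REVERSE('1234') + '%';

    The leading-wildcard form LIKE '%1234' can never use a normal B-tree index, which is why some transformation of the stored value is needed.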

  • Aggregate Functions in an Index with IBM DB2

    - by Erkan
    Is there any way to pre-aggregate the results of aggregate functions (e.g. COUNT()) and store them in an index? The background: I want to speed up COUNT() queries such as
    SELECT COUNT(users) FROM TE123 WHERE region = 'A';
    so that the query would be supported by an index like:
    Region  Count(Users)
    A       548
    E       458
    I know that MQTs would also help with this problem. However, in this case it is not possible to use an MQT, as we use a kind of ORM and we don't want to define entities on MQTs. I vaguely remember one DBA telling me that such a feature is planned for DB2 V10.
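
    If neither an index nor an MQT is an option, a plain summary table kept up to date by a scheduled job is a possible workaround (a sketch only; it is not the index feature being asked about, and the refresh interval determines how stale the counts can be). The TE123 names come from the question; everything else is illustrative:

        -- Small lookup table the application can query instead of counting TE123.
        CREATE TABLE region_user_counts (
            region     CHAR(1) NOT NULL PRIMARY KEY,
            user_count INTEGER NOT NULL
        );

        -- Periodic refresh, e.g. from a scheduled job:
        DELETE FROM region_user_counts;
        INSERT INTO region_user_counts (region, user_count)
            SELECT region, COUNT(users)
            FROM TE123
            GROUP BY region;

    region is typed CHAR(1) only because the example values are single letters; adjust it to the real column type.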

  • What good ORM API will work well with Scala or Erlang?

    - by Emotu Balogun
    I'm considering taking up Scala programming, but I'm really concerned about what will become of my ORM-based applications. I currently use Hibernate as my ORM and I find it a really reliable tool. I'd like to know whether there's any ORM tool as efficient but written in Scala, or whether Hibernate will work seamlessly with it. I don't want to have to start writing endless SQL queries again (like in the days of JDBC). I have the same question about Erlang: is there a good ORM out there for Erlang, and can I use Erlang with other DBMSs like Oracle and MySQL through an ORM?

  • Enumerable.Range in an Expression and Entity Framework

    - by eka808
    I'm currently developing an expression method (used in LINQ to Entities queries) that has to give me a day count for a given period (start date and end date), decrementing that count if special days fall within the period. My idea was the following: generate an enumerable with all the dates (with Enumerable.Range), apply a .Where to the enumerable to remove the special dates, like MyEnumerable.Where(a => a != "20120101"), and after that return MyEnumerable.Count(). I came up with this code:
    return (p) => Enumerable
        .Range(1, 4)
        .Where(a => a != 20120101)
        .AsQueryable()
        .Count()
    I tried casting it to a list, to a queryable, both (like the example), and no way! It doesn't work! I always get this error:
    LINQ to Entities does not recognize the method 'System.Collections.Generic.IEnumerable`1[System.Int32] Range(Int32, Int32)' method, and this method cannot be translated into a store expression.
    Have you got an idea about that? Using an enumerable is of course not mandatory; any working solution is good ^^ Thanks in advance!

  • Is there an open source repository for SQL code?

    - by morpheous
    I find myself writing SQL code (queries or stored procs) to solve problems that can definitely be defined as 'patterns' that occur frequently in business. Rather than having to rack my brain each time I encounter a new problem (which must have been solved countless times by other coders/DB analysts), I wondered if there was a repository where I could check out (peer-reviewed) code - and maybe add my two pence every now and then. I know different DB vendors tend to write slightly variant forms of SQL, but there could still be a repository with ANSI stuff and proprietary stuff. Hopefully, such a site would encourage more people to write standardized SQL. Is there such a site? If not, why not? (Would anyone else be interested in such a site?) If such a site exists, please provide link(s), as Google is not finding anything remotely useful.

  • How to recalculate primary index?

    - by JohnM2
    I have a table in a MySQL database with an auto-increment PRIMARY KEY. Rows in this table are regularly deleted and added, so the PK value of the latest row grows very fast even though there are not that many rows in the table. What I want to do is "renumber" the PK so that the first row has PK = 1, the second PK = 2, and so on. There are no external dependencies on the PK of this table, so it would be "safe". Is there any way this can be done using only MySQL queries/tools, or do I have to do it from my code?
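
    A hedged MySQL-only sketch, assuming the table is called mytable with an AUTO_INCREMENT primary key named id and, as stated, nothing references those ids (try it on a copy first):

        -- Renumber in ascending id order; each row's new value is never held
        -- by a row that has not yet been renumbered, so no key collisions occur.
        SET @n := 0;
        UPDATE mytable SET id = (@n := @n + 1) ORDER BY id;

        -- Let inserts continue from the new maximum; InnoDB clamps the counter
        -- to MAX(id) + 1, so the next insert gets the next free number.
        ALTER TABLE mytable AUTO_INCREMENT = 1;

    Dropping the id column and re-adding it as an AUTO_INCREMENT PRIMARY KEY achieves the same renumbering in two ALTER statements, at the cost of rebuilding the table.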

  • Why does LINQ to SQL sometimes add a field like "select 1 as test" before the valid fields?

    - by Fredou
    I have to concatenate two LINQ to SQL queries, and I have an issue because the two queries don't return the same number of columns. What is weird is that after a .ToList() on the queries, they can be concatenated without problems. The reason is that, for some LINQ to SQL reason, I have two extra columns named test and test2, which come from two left outer joins that LINQ to SQL creates automatically - something like "select 1 as test, tablefields". Is there any good reason for that? How can I remove this extra "1 as test" field? Here are a few examples of what it looks like: google result for linq 2 sql "select 1 as test"

  • Slow server response time - CakePHP

    - by Hasan
    I am using CakePHP 1.2 to build a website. The problem I am facing is that whenever a page loads, it spends a lot of time in "waiting for www.example.com"; the server response time is very slow. First I thought it was my database queries, but they were executing in under a second. Next I contacted the hosting people, and they said it was not the server. Now I am stuck very badly with "waiting for www.example.com". Is the problem in my code, or is CakePHP misconfigured? I need help badly and fast. Thanks

  • Finding group maxes in SQL join result

    - by Gene
    Two SQL tables. One contestant has many entries:
    Contestants               Entries
    Id  Name                  Id  Contestant_Id  Score
    --  ----                  --  -------------  -----
    1   Fred                  1   3              100
    2   Mary                  2   3              22
    3   Irving                3   1              888
    4   Grizelda              4   4              123
                              5   1              19
                              6   3              50
    Low score wins. I need to retrieve the current best scores of all contestants, ordered by score:
    Best Entries Report
    Name      Entry_Id  Score
    ----      --------  -----
    Fred      5         19
    Irving    2         22
    Grizelda  4         123
    I can certainly get this done with many queries. My question is whether there's a way to get the result with one efficient SQL query. I can almost see how to do it with GROUP BY, but not quite. In case it's relevant, the environment is Rails ActiveRecord and PostgreSQL.
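
    A single-query sketch, using the table and column names above: group Entries by contestant to find each minimum score, then join back to pick up the entry id and name.

        SELECT c.Name, e.Id AS Entry_Id, e.Score
        FROM Entries e
        JOIN Contestants c ON c.Id = e.Contestant_Id
        JOIN (SELECT Contestant_Id, MIN(Score) AS BestScore
              FROM Entries
              GROUP BY Contestant_Id) b
          ON b.Contestant_Id = e.Contestant_Id
         AND b.BestScore = e.Score
        ORDER BY e.Score;

    On PostgreSQL specifically, SELECT DISTINCT ON (e.Contestant_Id) ... ORDER BY e.Contestant_Id, e.Score is a shorter alternative that also picks exactly one entry per contestant when scores tie.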

  • NHibernate 3 weaknesses

    - by Adrakadabra
    Since we migrated from NHibernate 2 to NHibernate 3, around 30% of our queries can no longer execute, while there was no problem with the previous version. Does anybody else have such problems? For example, some of the errors we see are like these:
    "Specified cast is not valid":
    Repository<CountrySubdivision>.Find(p => p.Parent.Id == parentId);
    "Specified method is not supported":
    public JsonResult AllEducationDegree(string search)
    {
        var data = Repository<EducationDegree>
            .FindBySpecification(new EducationDegreeSpecification().Search(search))
            .Take(10)
            .Select(p => new NameValue(p.Title, (int)p.Id))
            .ToList();
        // .AsDropdown(" ");
        return Json(data, JsonRequestBehavior.AllowGet);
    }
    public class EducationDegreeSpecification : FluentSpecification<EducationDegree>
    {
        public EducationDegreeSpecification Search(string EducationDegreeSearch)
        {
            if (!String.IsNullOrEmpty(EducationDegreeSearch))
            {
                string[] searchs = EducationDegreeSearch.Split(' ');
                foreach (string search in searchs)
                {
                    if (!String.IsNullOrEmpty(search))
                    {
                        AddExpression(p => p.Title.Contains(search));
                    }
                }
            }
            return this;
        }
    }

  • Field contains foreign IDs for different tables

    - by Rich
    I am developing a PHP/MySQL-driven Facebook game. I am stuck on an element of the table design. When a user completes a task I want to trigger any number of events. I was thinking of something like so:
    tbl_events
    * event_id - surrogate primary ID
    * task_id - foreign ID of the task just completed
    * event_type - what type of event, e.g. is it a Facebook stream publish, a message to the user, or does it unlock a new element of the game?
    * event_param - this is where it gets tricky... the event parameter is a problem for two reasons: 1) it will contain different foreign IDs depending on the event_type, and thus it will not be possible to join to table x, meaning I would have to run two queries; 2) most events require a single ID or text, however some events require multiple parameters - like the Facebook stream publish.
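
    One possible shape for this (a sketch, with illustrative names that are not from the game): drop event_param from tbl_events and keep parameters in a child table, one row per parameter, so an event can carry zero, one, or many values and each row records what its value means:

        CREATE TABLE tbl_events (
            event_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            task_id    INT UNSIGNED NOT NULL,
            event_type VARCHAR(32)  NOT NULL   -- 'stream_publish', 'message', 'unlock', ...
        );

        CREATE TABLE tbl_event_params (
            event_id    INT UNSIGNED NOT NULL,
            param_name  VARCHAR(32)  NOT NULL,   -- e.g. 'message_id', 'unlock_item_id', 'text'
            param_value VARCHAR(255) NOT NULL,   -- a foreign ID or literal text
            PRIMARY KEY (event_id, param_name),
            FOREIGN KEY (event_id) REFERENCES tbl_events (event_id)
        );

    The trade-off is that param_value stays untyped, so joins to the real target tables still depend on event_type and param_name; the gain is that multi-parameter events such as the stream publish fit without adding columns.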

  • DB management for Heroku apps

    - by zetarun
    Hi all, I'm fairly new to both Rails and Heroku, but I'm seriously thinking of using it as the platform to deploy my Ruby/Rails applications. I want to use all the power of Heroku, so I prefer the "embedded" PostgreSQL managed by Heroku over the Amazon RDS add-on for MySQL, but I'm not so confident without the ability to reach my data from a SQL client... I know that in a well-made app you have no need to access the DB directly, but there are some situations (adding rows to a config table, looking at data not exposed in any view, updating some columns while debugging, performance monitoring, running queries for reporting, etc.) when it can be useful... How do you solve this problem? What's your experience with a real-life app powered by Heroku? Thanks!

  • Deserializing only select properties of an Entity using JDOQL query string?

    - by user246114
    Hi, I have a rather large class stored in the datastore, for example a User class with lots of fields (I'm using Java; all decorations except the first are omitted from the example below for clarity):
    @PersistenceCapable
    class User {
        private String username;
        private String city;
        private String state;
        private String country;
        private String favColor;
    }
    For some user queries, I only need the favColor property, but right now I'm doing this:
    SELECT FROM " + User.class.getName() + " WHERE username == 'bob'
    which should deserialize all of the entity properties. Is it possible to do something like:
    SELECT username, favColor FROM " + User.class.getName() + " WHERE username == 'bob'
    so that in this case all of the returned User instances only spend time deserializing the username and favColor properties, and not the city/state/country properties? If so, I suppose all the other properties would be null (in the case of objects) or 0 for int/long/float? Thank you

  • How to change language/region in a YQL search.spelling/search.suggestion query?

    - by Francisco Noriega
    Hello, I'm trying to use YQL's spelling and search suggestions, but as much as I try I can't find a way to change the language/region for the query. How is this done? I want to look for spelling suggestions in Spanish/Mexico ("es-MX"). I'm pretty happy with the results I get for queries in English, but when looking in Spanish I get no results:
    select * from search.suggest where query="dolor de cabeza"
    <?xml version="1.0" encoding="UTF-8"?>
    <query xmlns:yahoo="http://www.yahooapis.com/v1/base.rng" yahoo:count="0" yahoo:created="2010-11-22T17:41:13Z" yahoo:lang="en-US">
        <results/>
    </query>
    I've looked around for a way to change yahoo:lang="en-US" to yahoo:lang="es-MX" but I can't find any documentation about it. Thanks!

  • How do I get every nth row in a table, or how do I break up a subset of a table into sets or rows of

    - by Jherico
    I have a table of heterogeneous pieces of data identified by a primary key (ID) and a type identifier (TYPE_ID). I would like to be able to perform a query that returns a set of ranges for a given type, broken into even page sizes. For instance, if there are 10,000 records of type '1' and I specify a page size of 1000, I want 10 pairs of numbers back, representing values I can use in a BETWEEN clause in subsequent queries to read the DB 1000 records at a time. My initial attempt was something like this:
    select id, rownum from CONTENT_TABLE where type_id = ? and mod(rownum, ?) = 0
    But this doesn't work.
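
    One reason the attempt returns nothing (assuming Oracle, given rownum): rownum is assigned only as rows are accepted by the WHERE clause, so the first candidate gets rownum 1, fails mod(rownum, 1000) = 0, the counter never advances, and no row ever qualifies. A sketch using the NTILE window function instead (available in Oracle, SQL Server, and PostgreSQL), with the table and column names from the question:

        -- 10 buckets = record count / page size; each bucket yields one BETWEEN range.
        SELECT MIN(id) AS range_start,
               MAX(id) AS range_end
        FROM (SELECT id,
                     NTILE(10) OVER (ORDER BY id) AS bucket
              FROM CONTENT_TABLE
              WHERE type_id = 1) t
        GROUP BY bucket
        ORDER BY range_start;

    Each (range_start, range_end) pair feeds a BETWEEN clause in the follow-up queries; bucket sizes differ by at most one row when the count is not an exact multiple of the page size.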

  • SQL Profiler showing high activity

    - by Wong Chi
    I am running my application locally -- i.e. no external traffic and a very low number of queries, fully under my control. I see tons of 'Audit Login' and 'Audit Logout' events. What are these, and where are they actually stored (i.e. where is this audit log)? Are they a hint of a problem with connections? I have only a simple connection string in my app and thought that connections would remain active throughout the operation of the app (i.e. a single login at launch and a single logout when terminating).

  • Building an ASP.NET MVC site, should I go with LINQ to SQL?

    - by aspm
    So I'm about to start a new website from scratch and I've spent about a week trying to figure out what technology to go with. I'm sold on ASP.NET MVC; I'm 100% sure I'm going to love using that. But what I am not so sure about yet is using LINQ to SQL. So far I've gathered some data:
    1) Stack Overflow uses it - can't be that bad.
    2) It can be REALLY slow if you don't take advantage of compiled queries.
    3) It will always be slower than ADO.NET, but can be almost as fast if you use #2 in the proper places.
    4) It is NOT the preferred MS solution (there was a thread here on SO about dropping support).
    I'm itching to use it, but just want to make sure it's the best choice for me. I come from a heavy ADO/stored-procedure and traditional ASP.NET background (this will be my first experience with ASP.NET MVC).

  • DB2: increased bufferpool size and compressed tables do not equal better performance. Why?

    - by Mestika
    Hi, I'm working on tuning and increasing the performance of my IBM DB2 version 9.7 database. I've been searching around the net for the last couple of days and learned that if I created my tables in COMPRESS mode and created one more bufferpool, giving both of them 1024 MB, then the performance of my queries should increase because of fewer I/Os to disk. However, when I run my timing analysis, the performance decreases. I added the new additions to my regular database with the indexes I've used all along. Every time I search Google I come up with the statement that an increased bufferpool size, several bufferpools, AND table compression SHOULD give better performance. I'm very puzzled by this totally unexpected result. Are there some tuning mechanisms I've forgotten, or does anyone have an explanation for this odd behavior? Sincerely, Mestika

  • get n records at a time from a temporary table

    - by Claudiu
    I have a temporary table with about 1 million entries; it stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries so that I get the first 1000 rows, then the next 1000, and so on? They are not inherently ordered, but the temporary table just has one column with an ID, so I can order it if necessary. I was thinking of adding an extra column to the temporary table to number all the rows, something like:
    CREATE TEMP TABLE tmptmp AS
    SELECT ##autonumber somehow##, id FROM .... --complicated query
    Then I can do:
    SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000
    etc... How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
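
    A sketch of the numbering approach, assuming a PostgreSQL version with window functions (8.4 or later); the source query stays as the placeholder from the question:

        CREATE TEMP TABLE tmptmp AS
        SELECT row_number() OVER (ORDER BY id) - 1 AS autonumber,
               id
        FROM (
            -- the complicated query goes here
            SELECT ...
        ) AS src;

        -- Then page through it 1000 rows at a time:
        SELECT id FROM tmptmp WHERE autonumber >= 0    AND autonumber < 1000;
        SELECT id FROM tmptmp WHERE autonumber >= 1000 AND autonumber < 2000;

    Since the temp table already has an orderable id, plain LIMIT/OFFSET paging (SELECT id FROM tmptmp ORDER BY id LIMIT 1000 OFFSET 2000) also works without the extra column, at the cost of re-sorting for each page.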
