Search Results

Search found 4685 results on 188 pages for 'queries'.

Page 131/188 | < Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >

  • [MySQL, InnoDb] Rating place

    - by Pavel
    I'm trying to generate a table of rating places (positions) using the recipe from http://stackoverflow.com/questions/1776821/assign-places-in-the-rating-mysql-php but my database is under heavy load. Instead of creating a regular table, I tried using a MEMORY table and updating it with the following SQL query:

        insert into tops (uid) select uid from users order by exp desc;

    but I got the MySQL error "Deadlock found when trying to get lock; try restarting transaction", because too many other queries run while the SELECT is executing. How can I solve this problem? P.S. CREATE TABLE tops AS SELECT ... works almost fine, except for the high server load (up to a load average of 50 when tops is a non-MEMORY table). My users table has nearly 4.5 million rows. Thanks for any advice.
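
    One workaround sometimes suggested for this is to keep the long SELECT out of the INSERT (a plain SELECT is a non-locking consistent read in InnoDB, so it takes no locks on users) and then swap the finished table in atomically. A minimal sketch, assuming the scratch-table name and file path are free to choose and the server may write to that path (FILE privilege):

        -- dump the ranking order without locking users
        SELECT uid INTO OUTFILE '/tmp/tops.txt'
        FROM users ORDER BY exp DESC;

        -- load it into a scratch copy, then swap atomically so readers never wait
        CREATE TABLE tops_new LIKE tops;
        LOAD DATA INFILE '/tmp/tops.txt' INTO TABLE tops_new (uid);
        RENAME TABLE tops TO tops_old, tops_new TO tops;
        DROP TABLE tops_old;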

    Read the article

  • Suggest Sphinx index scheme

    - by htf
    Hi. In a MySQL database I have documents of different types: some have text content, meta keys and descriptions; others have code, an SKU number, a size, a brand name and so on. The problem is, I have to search for something across all of these documents and then display a single page where the results are grouped by document type, such as help page, blog post, item... It's not clear to me how to implement the Sphinx index: I want a single index to speed up queries, but since different docs have different structures, how can I group them? I was thinking about just concatenating them, but it doesn't feel right.

    Read the article

  • SQL COUNT records in table 2 JOINS away

    - by Fred K
    Using MySQL, I have three tables:

        projects:  ID  name
                   1   "birthday party"
                   2   "soccer match"
                   3   "wine tasting evening"
                   4   "dig out garden"
                   5   "mountainbiking"
                   6   "making music"

        batches:   ID  projectID  templateID  when
                   1   1          1           7 days before
                   2   1          1           1 day before
                   3   4          2           21 days before
                   4   4          1           7 days before
                   5   5          1           7 days before
                   6   3          5           7 days before
                   7   3          3           14 days before
                   8   5          1           14 days before

        templates: ID  message
                   1   "Hi, I'd like to invite ..."
                   2   "Dear Sir, Madam, ..."
                   3   "Can you please ..."
                   4   "Would you like to ..."
                   5   "To all dear friends ..."
                   6   "Does any of you guys ..."

    I would like to display a table of templates and the number of projects they're used in. So, the result should be:

        templateID  projectCount
        1           3
        2           1
        3           1
        4           0
        5           1
        6           0

    I've tried all kinds of SQL queries using various JOINs, but I guess this is too complicated for me. Is it possible to get this result using a single SQL statement?
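
    A minimal sketch of one way to do this in a single statement, using the table and column names above: LEFT JOIN templates to batches so unused templates survive, then count distinct projects per template.

        SELECT t.ID AS templateID,
               COUNT(DISTINCT b.projectID) AS projectCount
        FROM templates t
        LEFT JOIN batches b ON b.templateID = t.ID
        GROUP BY t.ID
        ORDER BY t.ID;

    COUNT ignores the NULLs produced by the LEFT JOIN, so templates 4 and 6 come back with a projectCount of 0.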

    Read the article

  • Lucene search on specific field name?

    - by Rachel
    I have been playing around with an installation of Solr that indexes some data from my database. I am able to index data and query it back, but I was wondering how field-name queries work. For certain fields I can specify their name and the search text and the results return as expected; for other fields, when I specify their name and search text, no results are returned.

        q=type:book                        //(this will work)
        q=type:book AND title:"The Title"  //(no results returned)

    In this example, type is a required field and title is not. For the example where I search by title, I can see a document with the given title in the results of the first query, so I know that a document matching this search exists. Is making a field 'required' the only way to be able to search by field name? [edit] I'm using the default installation and the 'example' folder inside of Solr, editing the XML files and using the interface available through start.jar to run, index and query.

    Read the article

  • DB2: increased bufferpool size and compressed tables do not equal better performance. Why?

    - by Mestika
    Hi, I'm working on tuning and increasing the performance of my IBM DB2 version 9.7 database. I've been searching around the net for the last couple of days and learned that if I created my tables in COMPRESS mode and created one more bufferpool, giving both of them 1024 MB, then the performance of my queries should increase because of fewer I/Os to disk. However, when I run my time analysis, the performance decreases. I added the new additions to my regular database with the indexes I've used all the time. Each time I search Google I come across the claim that an increased bufferpool size, several bufferpools AND table compression should give better performance. I'm very puzzled by this totally unexpected result. Are there some tuning mechanisms I've forgotten, or does anyone have an explanation for this odd behavior? Sincerely Mestika

    Read the article

  • SQL Profiler showing high activity

    - by Wong Chi
    I am running my application locally -- i.e. no external traffic and a very low number of queries, fully under my control. I see tons of 'Audit Login' and 'Audit Logout' events. What are these, and where are they actually stored (i.e. where is this audit log)? Are they a hint of a problem with connections? I have only a simple connection string within my app and thought that connections would remain active throughout the operation of my app (i.e. a single login at launch, and then a single logout when terminating).

    Read the article

  • Field contains foreign IDs for different tables

    - by Rich
    I am developing a PHP/MySQL driven Facebook game. I am stuck on an element of the table design. When a user completes a task I want to trigger any number of events. I was thinking of something like so:

        tbl_events
        * event_id    - surrogate primary ID
        * task_id     - foreign ID of the task just completed
        * event_type  - what type of event, e.g. is it a Facebook stream publish, a message to the user, or does it unlock a new element of the game?
        * event_param - this is where it gets tricky...

    The event parameter is a problem for two reasons: 1) it will contain different foreign IDs depending on the event_type, so it will not be possible to join to a fixed table, meaning I would have to run two queries; 2) most events require a single ID or text, but some events require multiple parameters - like the Facebook stream publish.
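
    One alternative sometimes used for this kind of polymorphic parameter is to move the parameters into their own table, one row per parameter, so events with several parameters simply get several rows. A rough sketch with hypothetical names:

        CREATE TABLE tbl_event_params (
          event_id    INT          NOT NULL,   -- FK to tbl_events
          param_name  VARCHAR(50)  NOT NULL,   -- e.g. 'target_table', 'target_id', 'message'
          param_value VARCHAR(255) NOT NULL,
          PRIMARY KEY (event_id, param_name)
        );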

    Read the article

  • How do I get every nth row in a table, or how do I break up a subset of a table into sets or rows of

    - by Jherico
    I have a table of heterogeneous pieces of data identified by a primary key (ID) and a type identifier (TYPE_ID). I would like to perform a query that returns a set of ranges for a given type, broken into even page sizes. For instance, if there are 10,000 records of type '1' and I specify a page size of 1000, I want 10 pairs of numbers back representing values I can use in a BETWEEN clause in subsequent queries to fetch 1000 records at a time. My initial attempt was something like this:

        select id, rownum
        from CONTENT_TABLE
        where type_id = ? and mod(rownum, ?) = 0

    But this doesn't work.
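
    A sketch of one way to get the (start, end) pairs directly, assuming Oracle since the attempt uses ROWNUM: number the rows of the given type with ROW_NUMBER(), bucket them by page size, and take MIN/MAX per bucket.

        SELECT MIN(id) AS range_start, MAX(id) AS range_end
        FROM (
          SELECT id,
                 CEIL(ROW_NUMBER() OVER (ORDER BY id) / :page_size) AS page_no
          FROM CONTENT_TABLE
          WHERE type_id = :type_id
        )
        GROUP BY page_no
        ORDER BY range_start;

    Each returned pair can then feed a BETWEEN clause in a follow-up query. (The original attempt fails because ROWNUM is only incremented after a row passes the WHERE clause; every candidate row gets ROWNUM 1, fails the MOD test, and is discarded, so nothing is ever returned.)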

    Read the article

  • Oracle: Finding Columns with only null values

    - by Jorge Valois
    I have a table with a lot of columns and a type column. Some columns seem to be always empty for a specific type. I want to create a view for each type and show only the columns relevant to that type. Working under the assumption that if a column has ONLY null values for a specific type, then that column should not be part of the view, how can you find that out with queries? Is there something like SELECT [columnName] FROM [table] WHERE [columnValues] ARE ALL [null]? I know I COMPLETELY made that up above... I'm just trying to get the idea across. Thanks in advance!
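
    A minimal sketch of one way to check, relying on the fact that COUNT(column) skips NULLs, so a column is all-NULL for a type exactly when its count is 0 (the table and column names here are made up):

        SELECT COUNT(col_a) AS col_a_values,
               COUNT(col_b) AS col_b_values,
               COUNT(col_c) AS col_c_values
        FROM   my_table
        WHERE  type = :some_type;

    Any column whose count comes back as 0 can be left out of that type's view.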

    Read the article

  • usage of intval & real_escape_string when sanitizing integers

    - by paulus
    Dear all, I'm using integer PKs in some tables of a MySQL database. Before input from a PHP script, I do some sanitizing, which includes intval($id) and $mysqli->real_escape_string(). The queries are quite simple:

        insert into `tblproducts`(`supplier_id`,`description`) values('$supplier_id','$description')

    In this example, $description goes through real_escape_string(), while $supplier_id is only intval()'ed. I'm just curious whether there are any situations when I need to apply both intval() and real_escape_string() to an integer I'm inserting into the DB. So basically, do I really need to use $supplier_id = intval($mysqli->real_escape_string($supplier_id)); ? Thank you.

    Read the article

  • Using a custom URL parameter in Wordpress (with permalinks)?

    - by kiko
    Hello - Is there a way, perhaps by editing .htaccess, to add a custom URL parameter to Wordpress so that the parameter is not stripped out by permalinks? I have a Wordpress site with a page that queries a separate (non-Wordpress) database. I pass the URL parameter "pubID" to display individual books and it is working OK. Example: http://www.uglyducklingpresse.org/catalog/browse/item/?pubID=63 But the individual books are not showing up properly in Google - maybe because they all have the same auto-generated "canonical" URL meta tag, one with the "pubID" parameter stripped out. Thank you for any help.

    Read the article

  • Selecting previous and next row using sp

    - by davor
    I want to select the previous and next row from a table based on the current id. I send the current id to a stored procedure and use these queries:

        -- previous
        select top 1 id from table where id < @currentId order by id desc
        -- next
        select top 1 id from table where id > @currentId order by id asc

    The problem is when I send a currentId which is the last id in the table and want to select the next row: then nothing is selected. Same problem for the previous row, when I send a currentId which is the first id in the table. Is it possible to solve this in SQL without additional queries?
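
    One single-statement trick (a sketch; T-SQL assumed because of TOP, and the table name items is hypothetical) is to push the wrap-around into the ORDER BY, so the query falls back to the first or last row when there is no strict next or previous one:

        -- next: the first id greater than @currentId, or wrap around to the smallest id
        SELECT TOP 1 id
        FROM items
        ORDER BY CASE WHEN id > @currentId THEN 0 ELSE 1 END, id ASC;

        -- previous: the last id smaller than @currentId, or wrap around to the largest id
        SELECT TOP 1 id
        FROM items
        ORDER BY CASE WHEN id < @currentId THEN 0 ELSE 1 END, id DESC;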

    Read the article

  • Single logical SQL Server possible from multiple physical servers?

    - by TuffyIsHere
    Hi, With Microsoft SQL Server 2005, is it possible to combine the processing power of multiple physical servers into a single logical SQL server? Is it possible with SQL Server 2008? I'm thinking that if the database files were located on a SAN and somehow one of the SQL servers acted as a kind of master, then processing could be spread out over multiple physical servers - for instance, even allowing simultaneous updates where there was no overlap, and with no limit for read-only queries on unlocked tables. We have an application that is limited by the speed of our SQL server, and we are probably stuck with SQL Server 2005 for now. Is the only option to get a single, more powerful physical server? Sorry, I'm not an expert; I'm not sure if the question is a stupid one. TIA

    Read the article

  • Oracle SQL: Multiple Subqueries Unioned Without Running Original Query Multiple Times.

    - by Bob
    So I've got a very large database and need to work on a subset, roughly 1% of the data, to dump into an Excel spreadsheet and make a graph. Ideally, I could select out that subset of data once and then run multiple SELECT queries against it, which are then UNIONed together. Is this even possible? I can't seem to find anyone else trying to do this, and it would improve the performance of my current query quite a bit. Right now I have something like this:

        SELECT (
          SELECT (
            SELECT ( long list of requirements )
            UNION
            SELECT ( slightly different long list of requirements )
          )
        )

    and it would be nice if I could group the commonalities of the two long requirement lists and keep only the small differences between the two SELECT statements being unioned.
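
    Oracle's subquery factoring (the WITH clause) does exactly this: the shared subset is written once and both branches of the UNION read from it, and Oracle can materialize it once. A minimal sketch with made-up table and column names:

        WITH subset AS (
          SELECT id, category, amount
          FROM   big_table
          WHERE  sample_flag = 'Y'        -- the common ~1% filter, written once
        )
        SELECT id, amount FROM subset WHERE category = 'A'
        UNION
        SELECT id, amount FROM subset WHERE category = 'B';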

    Read the article

  • advanced search with mysql

    - by Arsenal
    I'm creating a search function for my website where the user can type anything he likes into a text field. It gets matched against anything (name, title, job, car brand, ... you name it). I initially wrote the query with an INNER JOIN on every table that needed to be searched:

        SELECT column1, column2, ...
        FROM person
        INNER JOIN person_car ON ...
        INNER JOIN car ...

    This ended up as a query with 6 or 8 INNER JOINs and a whole lot of WHERE ... LIKE '%searchvalue%' conditions. Now this query seems to cause a timeout in MySQL, and I even got a warning from my hosting provider that the queries are taking up too many resources. Obviously I'm doing this very wrong, but I was wondering what the correct approach to this kind of search function is. Thanks!
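
    One alternative often suggested for LIKE '%...%' searches spread across many joined tables is a denormalized search table with a FULLTEXT index, refilled as the source rows change. A rough sketch with hypothetical names (MyISAM assumed, since InnoDB only gained FULLTEXT support in MySQL 5.6):

        CREATE TABLE search_index (
          entity_type VARCHAR(20) NOT NULL,   -- 'person', 'car', ...
          entity_id   INT         NOT NULL,
          search_text TEXT        NOT NULL,   -- concatenated searchable fields
          PRIMARY KEY (entity_type, entity_id),
          FULLTEXT KEY ft_search (search_text)
        ) ENGINE=MyISAM;

        SELECT entity_type, entity_id
        FROM   search_index
        WHERE  MATCH(search_text) AGAINST ('searchvalue');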

    Read the article

  • Even lighter than SQLite

    - by Richard Fabian
    I've been looking for a C++ SQL library that is as simple to hook in as SQLite, but faster and smaller. My projects are in games development, and there's definitely a cutoff point between needing to pass the ACID test and wanting some extreme performance. I'm willing to move away from SQL-string-style queries and let it be code driven, but I haven't found anything out there that provides SQL-like flexibility while preferring performance over the ACID test. I don't want to go reinventing the wheel, and the idea of implementing an SQL library on my own is quite daunting, even if it's only going to be a simple subset of all the calls you could make. I need the basic commands (SELECT, MODIFY, DELETE, INSERT, with JOIN and WHERE), not data operations (like sorting, min, max, count), and I don't need the database to be atomic or even to enforce consistency (I can use a real SQL service while I'm testing and debugging).

    Read the article

  • get n records at a time from a temporary table

    - by Claudiu
    I have a temporary table with about 1 million entries. The temporary table stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries so that I get the first 1000 rows, then the next 1000, etc.? They are not inherently ordered, but the temporary table just has one column with an ID, so I can order it if necessary. I was thinking of adding an extra column to the temporary table to number all the rows, something like:

        CREATE TEMP TABLE tmptmp AS
        SELECT ##autonumber somehow##, id
        FROM ....  --complicated query

    Then I can do:

        SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000

    etc... How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
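
    A sketch of the autonumber idea, assuming PostgreSQL 8.4+ for window functions; big_source_table stands in for the complicated query:

        CREATE TEMP TABLE tmptmp AS
        SELECT row_number() OVER (ORDER BY id) - 1 AS autonumber, id
        FROM   big_source_table;

        SELECT id FROM tmptmp WHERE autonumber >= 0    AND autonumber < 1000;   -- first batch
        SELECT id FROM tmptmp WHERE autonumber >= 1000 AND autonumber < 2000;   -- next batch, and so on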

    Read the article

  • Data storage advice needed: Best way to store location + time data?

    - by sobedai
    I have a project in mind that will require the majority of queries to be keyed off of lat/long as well as date + time. Initially, I was thinking of a standard RDBMS where the lat, long and datetime fields are properly indexed. Then I began thinking of a document-based system where the document is essentially a timestamp and each document has lat/long within it. Each document could have n objects associated with it. I'm looking for advice on what the best type of storage engine for this sort of thing would be - which of the above ideas would be better, or whether there is something else entirely that is the ideal solution. Thanks

    Read the article

  • Create a complex SQL query?

    - by mazzzzz
    Hey guys, I have a program that allows me to run queries against a large database. I have two tables that are important right now, Deposit and WithDraw. Each contains a history of every user. I need to take each table, add up every deposit and withdrawal (per user), then subtract the withdrawals from the deposits. I then need to return every user whose result is negative (i.e. they withdrew more than they deposited). Is this possible in one query? Example:

        Deposit table:
        |ID|UserName|Amount|
        |1 | Use1   |100.00|
        |2 | Use1   | 50.00|
        |3 | Use2   | 25.00|
        |4 | Use1   |  5.00|

        WithDraw table:
        |ID|UserName|Amount|
        |2 | Use2   |  5.00|
        |1 | Use1   |100.00|
        |4 | Use1   |  5.00|
        |3 | Use2   | 25.00|

    So then the result would output:

        |OverWithdrawers|
        |     Use2      |

    Is this possible (I sure don't know how to do it)? Thanks for any help, Max
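
    A minimal sketch of one way to do it in a single query, using the table and column names shown above: total each table per user in a derived table, then compare the totals.

        SELECT d.UserName AS OverWithdrawers
        FROM (SELECT UserName, SUM(Amount) AS total FROM Deposit  GROUP BY UserName) d
        JOIN (SELECT UserName, SUM(Amount) AS total FROM WithDraw GROUP BY UserName) w
          ON w.UserName = d.UserName
        WHERE w.total > d.total;

    Note that a user who has withdrawn but never deposited would be missed by this inner join; a LEFT JOIN from the WithDraw side, treating a missing deposit total as 0, covers that case.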

    Read the article

  • connecting to oracle database from c# asp.net mvc website

    - by ooo
    I am trying to connect to an Oracle database. I am able to connect to it from a local SQL Developer tool by adding an entry to the tnsnames.ora file. My question is that I will be deploying this website to a number of places. A few questions: What is the simplest way to connect to this database and run very basic queries? I see some examples that have me referencing Oracle client DLLs, and others that don't - is there a best practice here? Am I going to have to update the tnsnames.ora file on every one of the machines that I deploy to, or is there any simpler way?

    Read the article

  • sql: DELETE + INSERT vs UPDATE + INSERT

    - by user93422
    A similar question has been asked, but since the answer always depends, I'm asking about my specific situation separately. I have a web-site page that shows some data that comes from a database, and to generate that data I have to run some fairly complex queries with multiple joins. The data is updated once a day (nightly). I would like to pre-generate the data for that view to speed up page access, so I am creating a table that contains exactly the data I need. Question: for my situation, is it reasonable to do a complete table wipe followed by an insert, or should I do update + insert? SQL-wise it seems like DELETE + INSERT will be easier (it is a single SQL expression). EDIT: RDBMS: MS SQL Server 2008 Ent
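
    A sketch of the wipe-and-reload variant on SQL Server 2008, with hypothetical object names; TRUNCATE is minimally logged, can be rolled back inside the transaction, and avoids per-row delete overhead (it does require that no foreign keys reference the table):

        BEGIN TRANSACTION;

        TRUNCATE TABLE dbo.PageDataCache;

        INSERT INTO dbo.PageDataCache (col1, col2, col3)
        SELECT col1, col2, col3
        FROM   dbo.vComplexJoins;   -- stands in for the complex multi-join query, run nightly

        COMMIT TRANSACTION;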

    Read the article

  • Passing GPS speed via the Eclipse Emulator Control

    - by Nick
    Hi, my app queries the GPS-speed using .getSpeed() on a LocationListener. Is there a way to set this speed using the Eclipse Emulator Control or the command line? I tried to feed multiple sets of coordinates to the emulator via the manual GPS-control, but it didn't pick up a speed from that. Also, using a pre-defined GPX-file and playing it doesn't work for me. I would like to test my app without having to take it on a test-drive in my car every time ;)! Thanks!

    Read the article

  • create temporary table from cursor

    - by Claudiu
    Is there any way, in PostgreSQL accessed from Python using SQLObject, to create a temporary table from the results of a cursor? Previously, I had a query and created the temporary table directly from that query. I then had many other queries interacting with that temporary table. Now I have much more data, so I want to process only about 1000 rows at a time. However, as far as I can see, I can't do CREATE TEMP TABLE ... AS ... from a cursor. Is the only option something like:

        rows = cur.fetchmany(1000)
        cur2 = conn.cursor()
        cur2.execute("""CREATE TEMP TABLE foobar (id INTEGER)""")
        for row in rows:
            cur2.execute("""INSERT INTO foobar VALUES (%s)""", row)

    or is there a better way? This seems awfully inefficient.

    Read the article

  • Which index is used in select and why?

    - by Lukasz Lysik
    I have a table of ZIP codes with the following columns:

        id   - PRIMARY KEY
        code - NONCLUSTERED INDEX
        city

    When I execute the query

        SELECT TOP 10 * FROM ZIPCodes

    I get the results sorted by the id column. But when I change the query to

        SELECT TOP 10 id FROM ZIPCodes

    I get the results sorted by the code column. Again, when I change the query to

        SELECT TOP 10 code FROM ZIPCodes

    I get the results sorted by the code column again. And finally, when I change it to

        SELECT TOP 10 id, code FROM ZIPCodes

    I get the results sorted by the id column. My question is in the title of the question. I know which indexes are used in the queries, but why are those indexes used? In the second query (SELECT TOP 10 id FROM ZIPCodes), wouldn't it be faster if the clustered index were used? How does the query engine choose which index to use?

    Read the article

  • which is better, creating a view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same datasets from several MySQL tables. I am thinking of creating a table or a view that gathers all of the heavily used columns from those other tables, so as to increase performance. If I create that table, I may need to do an extra insert / update / delete operation each time the other tables are updated. If I create a view, I am not sure the performance will improve much, because data in the other tables changes very frequently and the view's query would most likely have to be re-run every time I select from it. Any ideas? E.g. how could I cache this? Are there other measures I can take?

    Read the article
