Search Results

Search found 4788 results on 192 pages for 'adhoc queries'.

  • Specify sorting order for a GROUP BY query to retrieve oldest or newest record for each group

    - by Beau Simensen
    I need to get the most recent record for each device from an upgrade request log table. A device is unique based on a combination of its hardware ID and its MAC address. I have been attempting to do this with GROUP BY, but I am not convinced this is safe, since it looks like it may simply be returning the "top record" (whatever SQLite or MySQL thinks that is). I had hoped that this "top record" could be hinted at by way of ORDER BY, but that does not seem to have any impact, as both of the following queries return the same record for each device, just in opposite order:

        SELECT extHwId, mac, created FROM upgradeRequest GROUP BY extHwId, mac ORDER BY created DESC
        SELECT extHwId, mac, created FROM upgradeRequest GROUP BY extHwId, mac ORDER BY created ASC

    Is there another way to accomplish this? I've seen several somewhat related posts that have all involved subselects. If possible, I would like to do this without subselects, as I would like to learn how to solve it another way.
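    For reference, a minimal sketch of the usual greatest-per-group pattern, using the table and column names mentioned above; it does rely on a derived table, and it assumes created is the value that decides "most recent" (ties would return multiple rows per device):

        SELECT r.extHwId, r.mac, r.created
        FROM upgradeRequest r
        JOIN (
            SELECT extHwId, mac, MAX(created) AS maxCreated
            FROM upgradeRequest
            GROUP BY extHwId, mac
        ) latest
          ON latest.extHwId = r.extHwId
         AND latest.mac = r.mac
         AND latest.maxCreated = r.created;

    The same query works in both MySQL and SQLite.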

    Read the article

  • Offline database access

    - by dtech
    I have a small application which basically consists of a user-friendly CRUD interface to a few tables (and joined tables). It currently works with a MySQL database, but I would like to make it available offline. My first thought was to create a SQLite "buffer" in between the MySQL database and the application, e.g. by executing all queries on the SQLite side but also storing them in a log table so that they can be executed later in the main database, with very basic conflict resolution (I will basically let the user solve it if a conflict is detected). Due to the simplicity of the application this shouldn't be too difficult and would be good exercise, but I think I would be re-inventing the wheel. So my question is: are there existing solutions or other approaches for this problem?
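    A minimal sketch of the kind of replay log described above, in SQLite DDL; the table and column names are made up for illustration:

        CREATE TABLE pending_statements (
            id          INTEGER PRIMARY KEY AUTOINCREMENT,       -- replay order
            executed_at TEXT NOT NULL DEFAULT (datetime('now')), -- when it ran locally
            sql_text    TEXT NOT NULL,                           -- the statement executed against SQLite
            synced      INTEGER NOT NULL DEFAULT 0               -- set to 1 once replayed against MySQL
        );

    On reconnect, the unsynced rows would be replayed against MySQL in id order, pausing for manual conflict resolution where needed.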

    Read the article

  • What's a good way to set up a development environment on OS X for ruby, rails, and git?

    - by Ein2015
    I'm going to start development on a web app using ruby, rails, probably either postgres or mysql, and most likely apache. I'll be using a git repository with the master repo on another server. I've searched through stackoverflow and done some Googling... so here's what I have so far... What are your opinions on what's described on this page?: http://robots.thoughtbot.com/post/159805668/2009-rubyists-guide-to-a-mac-os-x-development What about this one?: http://www.buildingwebapps.com/articles/79197-setting-up-rails-on-leopard-mac I don't need help finding an editor, there are plenty out there (TextMate, TextWrangler, MacVim), but I do need help making sure I'm setting things up correctly to code, build, and run the web app from my mac. Here's a specific set of scenarios I could use some help on: testing various versions of rails and/or ruby; testing performance, vulnerabilities, monitoring queries, etc.; testing different versions of gems; and working on other projects on this same machine.

    Read the article

  • Why does LINQ to SQL sometimes add a field like "select 1 as test" alongside the valid fields?

    - by Fredou
    I have to concatenate 2 linq2sql queries, and I have an issue because the 2 queries don't return the same number of columns. What is weird is that after a .ToList() on the queries, they can be concatenated without problem. The reason is that, for some linq2sql reason, I have 2 extra columns named test and test2, which come from 2 left outer joins that linq2sql automatically creates, producing something like "select 1 as test, tablefields". Is there any good reason for that? How can I remove this extra "1 as test" field? Here are a few examples of what it looks like: google result for linq 2 sql "select 1 as test"

    Read the article

  • Adding more OR searches with CONTAINS Brings Query to Crawl

    - by scolja
    I have a simple query that relies on two full-text indexed tables, but it runs extremely slowly when the CONTAINS is combined with any additional OR search. As seen in the execution plan, the two full-text searches crush the performance. If I query with just one of the CONTAINS clauses, or neither, the query is sub-second, but the moment you add OR into the mix the query becomes ill-fated. The two tables are nothing special; they're not overly wide (42 cols in one, 21 in the other; maybe 10 cols are FT-indexed in each) and don't contain very many records (36k in the bigger of the two). I was able to solve the performance problem by splitting the two CONTAINS searches into their own SELECT queries and then UNIONing the three together. Is this UNION workaround my only hope? Thanks.

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
           OR CONTAINS(a.*, '"*fa*"')
           OR CONTAINS(b.*, '"*fa*"')

    Execution plan (guess I need more reputation before I can post the image):
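    A sketch of the UNION workaround described above, splitting each predicate into its own SELECT so that each full-text search can run on its own plan (UNION also removes duplicate CollectionIDs):

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(a.*, '"*fa*"')
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(b.*, '"*fa*"')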

    Read the article

  • Fusion Table Layer Caching issue

    - by user1854791
    I have created a series of fusion table layers and apply filtering to one of the layers in response to checkbox click events. The base layer of the map contains a gradient applied over county boundaries. When the base layer is unchecked, the filtered layer loses its styling. I have added unique timestamps to all queries to avoid caching, however I get the feeling that these image tiles are still being cached in this situation. Is there any way to force the Google Fusion Tables API to invalidate a cached image? Test site here: http://map.inquestmarketing.com/new.html Unchecking the Other - Consumer Prospects checkbox reproduces the issue. This is a pure client-side app; all of the source is in the single page.

    Read the article

  • Oracle: Finding Columns with only null values

    - by Jorge Valois
    I have a table with a lot of columns and a type column. Some columns seem to be always empty for a specific type. I want to create a view for each type and only show the relevant columns for each type. Working under the assumption that if a column has ONLY null values for a specific type, then that column should not be part of the view, how can you find that out with queries? Is there something like SELECT [columnName] FROM [table] WHERE [columnValues] ARE ALL [null]? I know I COMPLETELY made that up... I'm just trying to get the idea across. Thanks in advance!
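    One way to check this, sketched with made-up column and type names: COUNT(column) ignores NULLs, so any column whose count comes back as 0 for a given type contains only NULLs for that type.

        SELECT COUNT(col_a) AS col_a_non_nulls,
               COUNT(col_b) AS col_b_non_nulls,
               COUNT(col_c) AS col_c_non_nulls
        FROM   my_table
        WHERE  type_col = 'SOME_TYPE';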

    Read the article

  • Is there a way to split the results of a select query into two equal halves?

    - by Matthias
    I'd like to have a query returning two ResultSets, each of which holds exactly half of all records matching a certain criterion. I tried using TOP 50 PERCENT in conjunction with an ORDER BY, but if the number of records in the table is odd, one record will show up in both result sets. Example: I've got a simple table with TheID (PK) and TheValue (varchar(10)) fields and 5 records. Skip the WHERE clause for now.

        SELECT TOP 50 PERCENT * FROM TheTable ORDER BY TheID asc   -- returns IDs 1, 2, 3
        SELECT TOP 50 PERCENT * FROM TheTable ORDER BY TheID desc  -- returns IDs 3, 4, 5

    3 is a dup. In real life, of course, the queries are fairly complicated with a ton of WHERE clauses and subqueries.
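    A sketch of one way to avoid the overlap, assuming SQL Server 2005 or later: number the matching rows once and split on the midpoint (for an odd count, one half simply gets the extra row).

        WITH numbered AS (
            SELECT *,
                   ROW_NUMBER() OVER (ORDER BY TheID) AS rn,
                   COUNT(*) OVER () AS total
            FROM TheTable
        )
        SELECT * FROM numbered WHERE rn <= total / 2;
        -- second half: ... WHERE rn > total / 2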

    Read the article

  • What good ORM will work well with Scala or Erlang?

    - by Emotu Balogun
    I'm considering taking up Scala programming, but I'm really concerned about what will become of my ORM-based applications. I currently use Hibernate as my ORM and I find it a really reliable tool. I'd like to know if there's any ORM tool as efficient but written in Scala, or whether Hibernate will work seamlessly with it. I don't want to have to start writing endless SQL queries again (like in the days of plain JDBC). I also have the same thought about Erlang: is there a good ORM out there for Erlang, and can I use Erlang with other DBMSs like Oracle and MySQL through an ORM?

    Read the article

  • Best .NET Solution for Frequently Changed Database

    - by sestocker
    I am currently architecting a small CRUD application. Their database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer: 1) ORM (if so, which one?), 2) Linq2Sql, 3) stored procedures, or 4) parametrized queries? I really need a solution that will be dynamic enough (both fast and easy) that I can replace tables and add/delete columns frequently. Note: I do not have much experience with ORMs (only a little SubSonic) and generally tend to use stored procedures, so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would allow for the situation I've described above.

    Read the article

  • SQL Profiler showing high activity

    - by Wong Chi
    I am running my application locally, i.e. no external traffic and a very low number of queries, fully under my control. I see tons of 'Audit Login' and 'Audit Logout' events. What are these, and where are they actually stored (i.e. where is this audit log)? Are these a hint of a problem with connections? I have only a simple connection string within my app and thought that connections would remain active throughout the operation of my app (i.e. a single login at launch, and then a single logout when terminating).

    Read the article

  • Oracle SQL: Multiple Subqueries Unioned Without Running Original Query Multiple Times.

    - by Bob
    So I've got a very large database, and need to work on a subset (~1% of the data) to dump into an Excel spreadsheet to make a graph. Ideally, I could select out the subset of data and then run multiple SELECT queries on that, which are then UNIONed together. Is this even possible? I can't seem to find anyone else trying to do this, and it would improve the performance of my current query quite a bit. Right now I have something like this:

        SELECT (
            SELECT (
                SELECT ( long list of requirements )
                UNION
                SELECT ( slightly different long list of requirements )
            )
        )

    and it would be nice if I could group the commonalities of the two long requirement lists and keep only the small differences between the two SELECT statements being unioned.
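    Oracle's subquery factoring (the WITH clause) is one way to express the shared subset once and reuse it in both branches; a sketch with placeholder table, column, and filter names:

        WITH subset AS (
            SELECT *
            FROM big_table
            WHERE common_filter = 'Y'          -- the shared ~1% selection
        )
        SELECT col_a, col_b FROM subset WHERE extra_condition_1 = 'X'
        UNION
        SELECT col_a, col_b FROM subset WHERE extra_condition_2 = 'Z';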

    Read the article

  • How to change language/region in a YQL search.spelling/search.suggestion query?

    - by Francisco Noriega
    Hello, I'm trying to use YQL's spelling and search suggestions, but as much as I try I can't find a way to change the language/region for the query. How is this done? I want to look for spelling/suggestions in Spanish/Mexico ("es-MX"). I'm pretty happy with the results I get for queries in English, but when looking for suggestions in Spanish I get no results:

        select * from search.suggest where query="dolor de cabeza"

        <?xml version="1.0" encoding="UTF-8"?>
        <query xmlns:yahoo="http://www.yahooapis.com/v1/base.rng" yahoo:count="0" yahoo:created="2010-11-22T17:41:13Z" yahoo:lang="en-US">
            <results/>
        </query>

    I've looked around for a way to change yahoo:lang="en-US" to yahoo:lang="es-MX" but I can't find any documentation about it. Thanks!

    Read the article

  • SQL COUNT records in table 2 JOINS away

    - by Fred K
    Using MySQL, I have three tables:

        projects:
        ID  name
        1   "birthday party"
        2   "soccer match"
        3   "wine tasting evening"
        4   "dig out garden"
        5   "mountainbiking"
        6   "making music"

        batches:
        ID  projectID  templateID  when
        1   1          1           7 days before
        2   1          1           1 day before
        3   4          2           21 days before
        4   4          1           7 days before
        5   5          1           7 days before
        6   3          5           7 days before
        7   3          3           14 days before
        8   5          1           14 days before

        templates:
        ID  message
        1   "Hi, I'd like to invite ..."
        2   "Dear Sir, Madam, ..."
        3   "Can you please ..."
        4   "Would you like to ..."
        5   "To all dear friends ..."
        6   "Does any of you guys ..."

    I would like to display a table of templates and the number of projects they're used in. So, the result should be:

        templateID  projectCount
        1           3
        2           1
        3           1
        4           0
        5           1
        6           0

    I've tried all kinds of SQL queries using various JOINs, but I guess this is too complicated for me. Is it possible to get this result using a single SQL statement?
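    A sketch of one single-statement approach: LEFT JOIN templates to batches so templates with no batches still appear, then count the distinct projects per template (COUNT ignores the NULLs produced by unmatched templates, giving 0).

        SELECT t.ID AS templateID,
               COUNT(DISTINCT b.projectID) AS projectCount
        FROM templates t
        LEFT JOIN batches b ON b.templateID = t.ID
        GROUP BY t.ID
        ORDER BY t.ID;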

    Read the article

  • DB2: increased bufferpool size and compressed tables do not equal better performance. Why?

    - by Mestika
    Hi, I'm working on tuning and increasing the performance of my IBM DB2 version 9.7 database. I've been searching around the net for the last couple of days and learned that if I created my tables in COMPRESS mode and created one more bufferpool and set both of them to 1024 MB, then the performance of my queries should increase because of fewer I/Os to the disks. However, when I run my time analysis, the performance decreases. I added the new additions to my regular database with the indexes I've used all the time. Each time I search Google I come up with the statement that increased bufferpool size, several bufferpools, AND table compression SHOULD give better performance. I'm very puzzled about the totally unexpected result. Are there some tuning mechanisms I've forgotten, or does anyone have an explanation for this odd behavior? Sincerely, Mestika
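    For reference, a rough sketch of the kind of setup described above; the object names and sizes are placeholders, the tablespace page size must match the bufferpool's, and row compression generally only applies to existing rows after a REORG.

        -- create a second bufferpool (~1 GB of 4 KB pages) and point a tablespace at it
        CREATE BUFFERPOOL bp_data IMMEDIATE SIZE 262144 PAGESIZE 4096;
        ALTER TABLESPACE userspace1 BUFFERPOOL bp_data;

        -- enable row compression on a table, then rebuild it so existing rows get compressed
        ALTER TABLE orders COMPRESS YES;
        CALL SYSPROC.ADMIN_CMD('REORG TABLE orders');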

    Read the article

  • Field contains foreign IDs for different tables

    - by Rich
    I am developing a PHP/MySQL-driven Facebook game. I am stuck on an element of the table design. When a user completes a task I want to trigger any number of events. I was thinking of something like so:

        tbl_events
        * event_id - surrogate primary ID
        * task_id - foreign ID of the task just completed
        * event_type - what type of event, e.g. is it a Facebook stream publish, a message to the user, or does it unlock a new element of the game?
        * event_param - this is where it gets tricky... the event parameter is a problem for two reasons: 1) it will contain different foreign IDs depending on the event_type, and thus it will not be possible to join it to table X, meaning I would have to run two queries; 2) most events require a single ID or text, however some events require multiple parameters - like the Facebook stream publish.
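    One common way around both problems sketched above is to move the parameters into a child table, one row per parameter; the table and column names here are made up for illustration:

        CREATE TABLE tbl_event_params (
            param_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            event_id    INT UNSIGNED NOT NULL,    -- FK to tbl_events.event_id
            param_name  VARCHAR(50)  NOT NULL,    -- e.g. 'message_text', 'unlock_id'
            param_value VARCHAR(255) NOT NULL,    -- stored as text; interpreted per event_type
            KEY idx_event (event_id)
        ) ENGINE=InnoDB;

    This keeps tbl_events fixed-width and lets an event carry zero, one, or many parameters; the trade-off is that joins to the "real" target tables still depend on event_type.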

    Read the article

  • laravel multiple where clauses within a loop

    - by user1424508
    Pretty much I want the query to select all records of users that are 25 years old AND are either between 150-170cm OR 190-200cm tall. I have this query written down below. However, the problem is it keeps getting 25-year-olds OR people who are 190-200cm, instead of 25-year-olds that are 150-170cm OR 25-year-olds that are 190-200cm tall. How can I fix this? Thanks

        $heightarray = array(array(150,170), array(190,200));
        $user->where('age', 25);
        for ($i = 0; $i < count($heightarray); $i++) {
            if ($i == 0) {
                $user->whereBetween('height', $heightarray[$i]);
            } else {
                $user->orWhereBetween('height', $heightarray[$i]);
            }
        }
        $user->get();

    Edit: I tried advanced wheres (http://laravel.com/docs/queries#advanced-wheres) and it doesn't work for me, as I cannot pass the $heightarray parameter into the closure. From the Laravel documentation:

        DB::table('users')
            ->where('name', '=', 'John')
            ->orWhere(function($query)
            {
                $query->where('votes', '>', 100)
                      ->where('title', '<>', 'Admin');
            })
            ->get();
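    For clarity, the SQL being asked for groups the height conditions in parentheses so the OR cannot escape the age filter; a sketch using the column names above:

        SELECT *
        FROM users
        WHERE age = 25
          AND (height BETWEEN 150 AND 170
               OR height BETWEEN 190 AND 200);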

    Read the article

  • Embedded Java Databases for Large Data Sets

    - by ExAmerican
    I would like to port a PHP/MySQL-based client/server application to be a standalone desktop application written in Java. The database has grown to be fairly large, with several tables with hundreds of thousands of rows. I expect these could grow to over a million entries for certain tables. What embedded database would best handle this? HSQLDB and Sqlite seem to be the obvious choices, though I'm guessing there are others out there as well. My main priorities are the ability to perform queries on large amounts of data efficiently (this thread seems to confirm Sqlite can handle this) and the ease with which I can import old data from MySQL (I remember HSQLDB being kind of a pain for that). Note: I am aware that similar questions comparing embedded databases have been posted before (for example here and here), but since my priorities differ somewhat from those of most applications, given the large data migration, I thought it justified a new question.

    Read the article

  • Using a custom URL parameter in Wordpress (with permalinks)?

    - by kiko
    Hello - Is there a way to perhaps edit .htaccess to add a custom URL parameter to WordPress, so that the parameter is not stripped out by permalinks? I have a WordPress site with a page that queries a separate (non-WordPress) database. I pass the URL parameter "pubID" to display individual books, and it is working OK. Example: http://www.uglyducklingpresse.org/catalog/browse/item/?pubID=63 But the individual books are not showing up properly in Google - maybe because they all have the same auto-generated "canonical" URL meta tag - one with the "pubID" parameter stripped out. Thank you for any help.

    Read the article

  • [MySQL, InnoDB] Rating place

    - by Pavel
    I'm trying to generate a rating-place table using the following recipe: http://stackoverflow.com/questions/1776821/assign-places-in-the-rating-mysql-php, but my database is highly loaded. I tried not to create a table, but to use a MEMORY TABLE and update it using the following SQL query:

        insert into tops (uid) select uid from users order by exp desc;

    but I got the following MySQL error:

        Deadlock found when trying to get lock; try restarting transaction

    because there are too many queries running while the SELECT is being executed. How can I solve this problem? P.S. CREATE TABLE tops AS SELECT works almost fine, except for the high server load... up to load average: 50 if tops is a non-memory table. My users table has nearly 4.5 million rows. Thanks for any advice.
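    As a point of comparison, the rating can also be computed on the fly with a user variable instead of materialising a tops table, so no insert locks are taken on a shared table; a sketch (the derived table forces the ordering before the ranks are assigned, which holds on the MySQL versions of that era but is not guaranteed by newer optimizers):

        SELECT ranked.uid,
               (@place := @place + 1) AS place
        FROM (SELECT uid FROM users ORDER BY exp DESC) AS ranked,
             (SELECT @place := 0) AS init;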

    Read the article

  • Create a complex SQL query?

    - by mazzzzz
    Hey guys, I have a program that allows me to run queries against a large database. I have two tables that are important right now, Deposits and Withdraws. Each contains a history for every user. I need to take each table, add up every deposit and withdrawal (per user), then subtract the withdrawals from the deposits. I then need to return every user whose result is negative (aka they withdrew more than they deposited). Is this possible in one query? Example:

        Deposit table:
        |ID|UserName|Amount|
        |1 | Use1   |100.00|
        |2 | Use1   | 50.00|
        |3 | Use2   | 25.00|
        |4 | Use1   |  5.00|

        WithDraw table:
        |ID|UserName|Amount|
        |2 | Use2   |  5.00|
        |1 | Use1   |100.00|
        |4 | Use1   |  5.00|
        |3 | Use2   | 25.00|

    So then the result would output:

        |OverWithdrawers|
        | Use2          |

    Is this possible (I sure don't know how to do it)? Thanks for any help, Max
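    A sketch of one single-query approach, assuming the tables are literally named Deposit and WithDraw as in the example: treat withdrawals as negative amounts, sum per user, and keep the users whose balance goes negative.

        SELECT UserName AS OverWithdrawers
        FROM (
            SELECT UserName, Amount AS signed_amount FROM Deposit
            UNION ALL
            SELECT UserName, -Amount FROM WithDraw
        ) AS movements
        GROUP BY UserName
        HAVING SUM(signed_amount) < 0;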

    Read the article

  • What approach is suitable for user authentication in a simple client/server app?

    - by TerryS
    My previous question was closed, so I will be more specific. I need to create an application, a desktop one written in C#, that will ask for user credentials and, after verification, opens the GUI allowing the user to work with a DB (a black box for users). It should be usable from everywhere, not just a LAN or SQL domain. I assume I would need to do the following: create a client and a server application that will deal with authentication (that would mean a lot of socket work); once the user is verified, send the client queries to the database (client-server-DB); and have the server send the DB data sets back to the client. As you can see, this is just my guess, but I have no idea whether it's too complicated or completely wrong. The main thing is that it must be a desktop app (not a web-based one) and accessible from everywhere. I am interested in the main points of how to design the system and will be extremely grateful for any advice.

    Read the article

  • Running script constantly in background: daemon, lock file with crontab, or simply loop?

    - by Mauritz Hansen
    I have a Perl script that queries a database for a list of files to process, processes the files, and then exits. Upon startup this script creates a file (let's say script.lock), and upon exit it removes this file. I have a crontab entry that runs this script every minute. If the lockfile exists then the script exits, assuming that another instance of itself is running. The above process works fine, but I am not very happy with the robustness of this approach. Specifically, if for some reason the script exits prematurely and the lockfile is not removed, then a new instance will not execute properly. I would appreciate some advice on the following: Is using the lock file a good approach, or is there a better/more robust way to do this? Is using crontab for this a good idea, or would I be better off writing an endless loop with sleep()? Should I use the GNU 'daemon' program or the Perl Proc::Daemon module (or some other equivalent) for this?

    Read the article

  • which is better, creating a view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same datasets from several MySQL tables. I am thinking of creating a table or view to gather all the demanded columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert/update/delete operations each time the other tables are updated. If I create a view, I am worried whether the performance can really be greatly improved, because the data in the other tables changes very frequently; most likely the view would effectively have to be recomputed every time before selecting from it. Any ideas? E.g. how to cache? Other extra measures I can take?

    Read the article

  • Which index is used in select and why?

    - by Lukasz Lysik
    I have a table with ZIP codes with the following columns: id (PRIMARY KEY), code (NONCLUSTERED INDEX), and city. When I execute the query

        SELECT TOP 10 * FROM ZIPCodes

    I get the results sorted by the id column. But when I change the query to:

        SELECT TOP 10 id FROM ZIPCodes

    I get the results sorted by the code column. Again, when I change the query to:

        SELECT TOP 10 code FROM ZIPCodes

    I get the results sorted by the code column again. And finally when I change it to:

        SELECT TOP 10 id, code FROM ZIPCodes

    I get the results sorted by the id column. My question is in the title of the question. I know which indexes are used in the queries, but my question is why those indexes are used. In the second query (SELECT TOP 10 id FROM ZIPCodes), wouldn't it be faster if the clustered index was used? How does the query engine choose which index to use?
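    Not an answer to the optimizer question, but a related point: without an ORDER BY the rows come back in whatever order the chosen index scan happens to produce, so if a particular order matters it has to be requested explicitly; a sketch:

        -- guarantees the order regardless of which index the engine scans
        SELECT TOP 10 id
        FROM ZIPCodes
        ORDER BY id;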

    Read the article
