Search Results

Search found 4788 results on 192 pages for 'adhoc queries'.

Page 136/192

  • create temporary table from cursor

    - by Claudiu
    Is there any way, in PostgreSQL accessed from Python using SQLObject, to create a temporary table from the results of a cursor? Previously I had a query and I created the temporary table directly from that query; I then had many other queries interacting with that temporary table. Now I have much more data, so I want to process only about 1000 rows at a time. However, as far as I can see I can't do CREATE TEMP TABLE ... AS ... from a cursor. Is the only option something like: rows = cur.fetchmany(1000); cur2 = conn.cursor(); cur2.execute("""CREATE TEMP TABLE foobar (id INTEGER)"""); for row in rows: cur2.execute("""INSERT INTO foobar VALUES (%d)""" % row) or is there a better way? This seems awfully inefficient.
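
    A rough server-side sketch of the batching idea, assuming the rows originally come from a table called source_data (a hypothetical name): LIMIT/OFFSET stands in for fetchmany, and filling the temp table with INSERT ... SELECT avoids one round trip per row.

      -- Create the temp table once (same id column as in the question).
      CREATE TEMP TABLE foobar (id INTEGER);

      -- Load roughly 1000 rows at a time entirely on the server;
      -- source_data and the ORDER BY column are assumptions for illustration.
      INSERT INTO foobar (id)
      SELECT id
      FROM source_data
      ORDER BY id
      LIMIT 1000 OFFSET 0;  -- bump the offset (or use a keyset filter) for each batch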

    Read the article

  • which is better, creating a view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same datasets from several MySQL tables. I am thinking of creating a table or a view that gathers all the demanding columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert / update / delete operations each time the other tables are updated. If I create a view, I am worried whether performance can really be improved, because the data in the other tables changes very frequently; most likely the view would have to be rebuilt every time before selecting from it. Any ideas? E.g. how to cache, or other extra measures I could take?
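
    For comparison, a minimal sketch of both options using hypothetical table and column names; note that a plain MySQL view is not materialized, so selecting from it re-runs the underlying joins every time.

      -- Option 1: a view - always fresh, but no faster than the query it wraps.
      CREATE VIEW demanding_view AS
      SELECT o.id, o.col_a, d.col_b
      FROM orders o
      JOIN details d ON d.order_id = o.id;

      -- Option 2: a summary table - fast to read, but it must be refreshed
      -- (or kept in sync with triggers) whenever the source tables change.
      CREATE TABLE demanding_summary AS
      SELECT o.id, o.col_a, d.col_b
      FROM orders o
      JOIN details d ON d.order_id = o.id;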

    Read the article

  • MySQLi String comparisons using keys

    - by asdasd
    I have a table with, let's say, 2 columns: id (number) and value (a varchar string). Let's say I have a number x and a list of numbers a1, a2, a3, a4, a5..., where x is not in the list. All of these numbers correspond to a unique row in the table. I want to know if the string value for x in the table is contained in one of the string values for any table entry for a1, a2, a3, a4... Let's say I have these rows: x, aaa; a1, bbb; a2, ccc; a3, ddd; a4, aaabbbcc. Then I want some confirmation that yes, the value for x is included in one of the values in my list of numbers (a4 contains x). I know I can do this in a couple of queries and shove it through some PHP to get my answer, but can I do it with one query?
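
    One way to do it in a single query, sketched against the two-column table from the question (the table name t and the example ids are assumptions): self-join the row for x against the rows in the list and test containment with LIKE.

      -- Returns the listed rows whose value contains the value of row x.
      SELECT a.id, a.value
      FROM t AS x
      JOIN t AS a
        ON a.id IN (1, 2, 3, 4)                      -- the list a1, a2, a3, a4
       AND a.value LIKE CONCAT('%', x.value, '%')    -- a's value contains x's value
      WHERE x.id = 5;                                -- the row for x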

    Read the article

  • Which index is used in select and why?

    - by Lukasz Lysik
    I have a table with zip codes with the following columns: id - PRIMARY KEY, code - NONCLUSTERED INDEX, city. When I execute the query SELECT TOP 10 * FROM ZIPCodes I get the results sorted by the id column. But when I change the query to SELECT TOP 10 id FROM ZIPCodes, I get the results sorted by the code column. Again, when I change the query to SELECT TOP 10 code FROM ZIPCodes, I get the results sorted by the code column. And finally, when I change it to SELECT TOP 10 id,code FROM ZIPCodes, I get the results sorted by the id column. My question is in the title of the question: I know which indexes are used in the queries, but why are those indexes used? In the second query (SELECT TOP 10 id FROM ZIPCodes), wouldn't it be faster if the clustered index were used? How does the query engine choose which index to use?

    Read the article

  • How to return relationships in a custom un-typed dataservice provider

    - by monkey_p
    I have a custom .Net DataService and can't figure out how to return the data for relationships. The database has 2 tables (Customer, Address). A Customer can have multiple Addresses, but each Address can only have one Customer. I'm using Dictionary<string,object> as my data type. My question: for the following 2 URLs, how do I return the data? http://localhost/DataService/Customer(1)/Address http://localhost/DataService/Address(1)/Customer For the non-relational queries I return a List<Dictionary<string,object>>, so I imagined that for the relations I should just populate the element with either a Dictionary<string,object> for the to-one case or a List<Dictionary<string,object>> for to-many relationships. But this just gives me a NullReferenceException. So what am I doing wrong?

    Read the article

  • Fusion Table Layer Caching issue

    - by user1854791
    I have created a series of Fusion Table layers and apply filtering to one of the layers in response to checkbox click events. The base layer of the map contains a gradient applied over county boundaries. When the base layer is unchecked, the filtered layer loses its styling. I have added unique timestamps to all queries to avoid caching, but I get the feeling that these image tiles are still being cached in this situation. Is there any way to force the Google Fusion Tables API to invalidate a cached image? Test site here: http://map.inquestmarketing.com/new.html Unchecking the Other - Consumer Prospects checkbox reproduces the issue. This is a pure client-side app; all of the source is in the single page.

    Read the article

  • How can an SQL query return data from multiple tables

    - by Fluffeh
    I would like to know how to get data from multiple tables in my database, what methods there are to do this, and what joins and unions are and how they differ from one another. When should I use each one compared to the others? I am planning to use this in my (for example, PHP) application, but I don't want to run multiple queries against the database; what options do I have to get data from multiple tables in a single query? Note: I am writing this because I would like to be able to link to a well-written guide on the numerous questions that I constantly come across in the PHP queue, so I can point to it for further detail when I post an answer. The answers cover the following: Part 1 - Joins and Unions; Part 2 - Subqueries; Part 3 - Tricks and Efficient Code
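
    A tiny illustration of the two concepts, using hypothetical tables: a JOIN combines columns from related rows side by side, while a UNION stacks compatible result sets on top of each other.

      -- JOIN: one row per matching user/order pair, columns from both tables.
      SELECT u.name, o.total
      FROM users u
      JOIN orders o ON o.user_id = u.id;

      -- UNION: rows from two queries with matching column lists,
      -- duplicates removed (UNION ALL keeps them).
      SELECT name FROM customers
      UNION
      SELECT name FROM suppliers;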

    Read the article

  • getting rid of repeated customer id's in mysql query

    - by bsandrabr
    I originally started by selecting customers from a group of customers and then, for each customer, querying the records for the past few days and presenting them in a table row. All working fine, but I think I might have got too ambitious, as I tried to pull in all the records at once, having heard that multiple queries are a big no-no. Here is the MySQL query I came up with to pull in all the records at once: SELECT morning, afternoon, date, date2, fname, lname, customers.customerid FROM customers LEFT OUTER JOIN attend ON ( customers.customerid = attend.customerid ) RIGHT OUTER JOIN noattend ON ( noattend.date2 = attend.date ) WHERE noattend.date2 BETWEEN '$date2' AND '$date3' AND DayOfWeek( date2 ) %7 >1 AND group ={$_GET['group']} ORDER BY lname ASC , fname ASC , date2 DESC The tables are: customers (customerid, fname, lname); attend (customerid, morning, afternoon, date); noattend (date2 - a table of all the days, to fill in the blanks). Now the problem I have is how to start a new table row when the customer id changes. My query above pulls in: customer 1 morning 2; customer 1 morning 1; customer 2 morning 2; customer 2 morning 1; whereas I'm trying to get: customer1 morning2 morning1; customer2 morning2 morning1. I don't know whether this is possible in the SQL or, more likely, in the PHP.
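
    One common way to collapse the repeated ids on the SQL side, sketched with GROUP_CONCAT (table and column names taken from the question, the date filter left out for brevity): this returns one row per customer with the per-day values packed into a string that PHP can split, though simply detecting the change of customerid while looping in PHP works just as well.

      -- One row per customer; morning values packed newest-first.
      SELECT c.customerid, c.fname, c.lname,
             GROUP_CONCAT(a.morning ORDER BY a.date DESC) AS mornings
      FROM customers c
      LEFT JOIN attend a ON a.customerid = c.customerid
      GROUP BY c.customerid, c.fname, c.lname
      ORDER BY c.lname, c.fname;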

    Read the article

  • What approach would be suitable for user authentication in a simple client/server app

    - by TerryS
    My previous question was closed, so I will be more specific. I need to create a desktop application written in C# that will ask for user credentials and, after verification, open a GUI for working with a DB (a black box for users). It should be usable from anywhere, not just a LAN or SQL domain. I assume I would need to do the following: create a client and a server application that deal with authentication (that would mean a lot of socket work); once the user is verified, the client's queries would be sent to the database (client-server-DB); the server would then need to send the DB data sets back to the client. As you can see, this is just my guess, and I have no idea whether it's too complicated or completely wrong. The main thing is that it must be a desktop app (not a web-based one) and accessible from everywhere. I am interested in the main points of how to design the system and would be extremely grateful for any guidance.

    Read the article

  • What is a good solution to link different tables in Hibernate based on some field value?

    - by serg555
    I have an article table and several user tables a_user, b_user, ... with exactly the same structure (but different data). I can't change anything in the *_user tables except their table name prefix, but I can change everything else (the user tables contain only user information; there is nothing about article or user_type in them). I need to link an article to a user (many-to-one), but the user table name is determined by the user_type field. For example, an article table record with ... user_id="5" user_type="a" means that it is linked to the user with id=5 from the a_user table (id 5 is not unique across users; each user table can have its own id 5). Any suggestions on how to handle this situation? How can I map this relation in Hibernate (XML mapping, no annotations) so it will automatically pick the correct user for an article during select/update? How should I map the user tables (one class or multiple)? I would need to run queries like this: from Article a where a.userType=:type and a.user.name=:name Thanks.

    Read the article

  • globally get any field value in user table of logged in user

    - by Jugga
    I'm making a gaming community and I want to be able to grab any info about the logged-in user on any page, so instead of having lots of queries on every page I made this function. Is it better to do this? Will this slow down the site? /** * Returns one field from the users table for the authed (logged-in) user. */ function UserData($f) { global $_SESSION; return mysql_result(mysql_query("SELECT `$f` FROM `users` WHERE `id` = ".intval($_SESSION['id'])), 0, $f); }

    Read the article

  • What will happen if I change the type of a column from int to year?

    - by MachinationX
    I have a table in MySQL 4.0 which currently has a year field of type smallint(6). What will happen if I convert it directly to a YEAR type with a query like the following: ALTER TABLE t MODIFY y YEAR(4) NOT NULL DEFAULT CURRENT_TIMESTAMP; when the current values in column y are things like 2010? I assume that because the YEAR type is technically stored as values from 1-255, values above that will be truncated or broken. So if MySQL isn't smart enough to realize that 2010 (int) = 110 (year), what would be the simplest query or queries to convert the values? Thanks for your help!
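
    A hedged sketch of what should happen: YEAR(4) accepts four-digit values from 1901 to 2155, and MySQL converts in-range integers as-is (the 1-255 encoding is only the internal storage format), so existing values such as 2010 survive the ALTER unchanged. Checking for out-of-range values first is cheap; note also that CURRENT_TIMESTAMP is not a valid default for a YEAR column, so the default is dropped here.

      -- Values outside 1901-2155 would not convert cleanly, so look for them first.
      SELECT COUNT(*) FROM t WHERE y NOT BETWEEN 1901 AND 2155;

      ALTER TABLE t MODIFY y YEAR(4) NOT NULL;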

    Read the article

  • SSAS Reporting Services - Set specific language / translation

    - by Chris
    Hi all, in the data warehouse there's a default language for the measures, and I added a translation for German captions. In a Visual Studio Report Server project, when creating a query with my German OS, the cube and its measures are displayed in German. When dragging measures into the MDX query window, the default measure name is used. That's what I want and what I expect, since when writing MDX queries I would like to use the default measure names. But when executing the query, the column created for each measure is translated to German again. This results in German column names within my dataset, which I don't want; I'd like to have the English column names. I already tried to change the connection string to: Data Source=server;Initial Catalog=DataWarehouse;LocaleIdentifier=1033 But that doesn't help; I still see the German translations. Does anyone know how to set a specific translation?

    Read the article

  • Dynamic SQL Rows & Columns...cells require subsequent query. Best approach?

    - by Pyrrhonist
    I have the following tables: City (CityID, StateID, Name, Description) and Reports (ReportID, HeaderID, FooterID, Description). I'm trying to generate a grid for use in a .Net control (GridView, ListView... which of the two is 'best' for my purposes is a separate issue) which will use the reports as the columns and the cities as the rows. Which cities get displayed is based on the state selected, and is easy enough: SELECT * FROM CITIES WHERE STATEID=@StateID However, the user is able to select which reports are generated for each city (Demographics, Sales, Land Area, etc.). Further, each resulting cell (City x Report) requires a sub-query on different tables based on the city selected and the report. E.g. selecting the Sales column yields: SELECT * FROM SALES WHERE CITYID=@CityID I've programmed a VERY inelegant solution using multiple queries and brute-forcing the grid to be created (line-by-line, row-by-row creation of data elements), but I'm positive there's got to be a better way of accomplishing this. Any and all suggestions appreciated here, as the brute-force approach I've got is slow and cumbersome, and this will have to be used often by the client, so I'm not sure it'll be acceptable in its current implementation.

    Read the article

  • Offline database access

    - by dtech
    I have a small application which basically consists of a user-friendly CRUD interface to a few tables (and joined tables). It currently works with a MySQL database, but I would like to make it available offline. My first thought was to create a SQLite "buffer" in between the MySQL database and the application, e.g. by executing all queries on the SQLite database but also storing them in a log table so that they can be executed later in the main database, with very basic conflict resolution (I will basically let the user resolve it if a conflict is detected). Due to the simplicity of the application this shouldn't be too difficult and would be good exercise, but I suspect I would be re-inventing the wheel. So my question is: are there existing solutions or other approaches for this problem?
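
    A minimal sketch of the log table this buffer idea implies (all names are made up for illustration): every statement executed against the local SQLite copy is also appended here, then replayed against MySQL in order once a connection is available, with conflicts handed to the user.

      -- Pending statements to replay against the main MySQL database.
      CREATE TABLE pending_changes (
          id          INTEGER PRIMARY KEY AUTOINCREMENT,
          executed_at TEXT    NOT NULL DEFAULT (datetime('now')),
          sql_text    TEXT    NOT NULL,   -- the statement as it was run locally
          replayed    INTEGER NOT NULL DEFAULT 0
      );

      -- After a successful replay (and conflict check) of one entry:
      UPDATE pending_changes SET replayed = 1 WHERE id = ?;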

    Read the article

  • laravel multiple where clauses within a loop

    - by user1424508
    Pretty much I want the query to select all records of users that are 25 years old AND are either between 150-170cm OR 190-200cm tall. I have the query written down below. However, the problem is it keeps getting 25-year-olds OR people who are 190-200cm, instead of 25-year-olds that are 150-170cm OR 25-year-olds that are 190-200cm tall. How can I fix this? Thanks. $heightarray=array(array(150,170),array(190,200)); $user->where('age',25); for($i=0;$i<count($heightarray);$i++){ if($i==0){ $user->whereBetween('height',$heightarray[$i]); }else{ $user->orWhereBetween('height',$heightarray[$i]); } } $user->get(); Edit: I tried advanced wheres (http://laravel.com/docs/queries#advanced-wheres) and it doesn't work for me, as I cannot pass the $heightarray parameter into the closure. From the Laravel documentation: DB::table('users') ->where('name', '=', 'John') ->orWhere(function($query) { $query->where('votes', '>', 100) ->where('title', '<>', 'Admin'); }) ->get();
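
    For reference, a sketch of the SQL the loop is meant to produce (the users table and column names are assumed from the question): the two BETWEEN conditions have to sit inside one set of parentheses so the OR cannot escape the age filter. In the query builder that grouping is exactly what the nested closure in the 'advanced wheres' example provides, and the outer array can be handed into the closure with use ($heightarray).

      SELECT *
      FROM users
      WHERE age = 25
        AND ( height BETWEEN 150 AND 170
           OR height BETWEEN 190 AND 200 );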

    Read the article

  • db optimization - have a total field or query table?

    - by Dorian Fife
    I have an app where users get points for actions they perform: either 1 point for an easy action or 2 for a difficult one. I wish to display to the user the total number of points he got in my app and the points obtained this week (since Monday at midnight). I have a table that records all actions, along with their time and number of points. I have two alternatives and I'm not sure which is better: (1) every time the user views the report, perform a query and sum the points the user got; (2) add two fields to each user that record the number of points obtained so far (total and weekly); the weekly points value will be reset to 0 every Monday at midnight. The first option is easier, but I'm afraid that as I get many users and actions the queries will take a long time. The second option bears the risk of inconsistency between the table of actions and the summary values. I'm very interested in what you think is the best alternative here. Thanks, Dorian
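
    A sketch of option (1) as a single aggregate query (table and column names are assumptions); with an index on (user_id, action_time) this usually stays fast for a long time before the summary-column approach is worth its consistency risk.

      -- Total points and points since Monday midnight for one user.
      SELECT SUM(points) AS total_points,
             SUM(CASE WHEN action_time >= :monday_midnight THEN points ELSE 0 END) AS weekly_points
      FROM actions
      WHERE user_id = :user_id;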

    Read the article

  • Integrate Lucene or any other search product with SQL server 2005

    - by HBACHARYA
    Hi, I need to use full text search with SQL Server 2005, and I have explored its built-in approach (SQL Server full text indexing), but it seems less powerful. I have also looked at the features of Lucene. Now my questions: is it possible to integrate Lucene and SQL Server in any way? 1. Can my T-SQL queries use a Lucene index for returning results (maybe using a CLR-based function internally)? 2. How do I update the Lucene index while data in the tables is being updated? 3. What could the overall architecture look like? 4. Are there any commercial products available which provide this kind of support? Thanks, HB

    Read the article

  • Embedded Java Databases for Large Data Sets

    - by ExAmerican
    I would like to port a PHP/MySQL-based client/server application to be a standalone desktop application written in Java. The database has grown to be fairly large, with several tables with hundreds of thousands of rows. I expect these could grow to over a million entries for certain tables. What embedded database would best handle this? HSQLDB and Sqlite seem to be the obvious choices, though I'm guessing there are others out there as well. My main priorities are the ability to perform queries on large amounts of data efficiently (this thread seems to confirm Sqlite can handle this) and the ease with which I can import old data from MySQL (I remember HSQLDB being kind of a pain for that). Note: I am aware that similar questions comparing embedded databases have been posted before (for example here and here) but as my priorities differ somewhat from most applications considering the large data migration I thought it justified a new question.

    Read the article

  • SQL Server GROUP BY troubles!

    - by Lucas311
    I'm getting a frustrating error in one of my SQL Server 2008 queries. It parses fine, but crashes when I try to execute. The error I get is the following: Msg 8120, Level 16, State 1, Line 4 Column 'customertraffic_return.company' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. SELECT * FROM (SELECT ctr.sp_id AS spid, Substring(ctr.company, 1, 20) AS company, cci.email_address AS tech_email, CASE WHEN rating IS NULL THEN 'unknown' ELSE rating END AS rating FROM customer_contactinfo cci INNER JOIN customertraffic_return ctr ON ctr.sp_id = cci.sp_id WHERE cci.email_address <> '' AND cci.email_address NOT LIKE '%hotmail%' AND cci.email_address IS NOT NULL AND ( region LIKE 'Europe%' OR region LIKE 'Asia%' ) AND SERVICE IN ( '1', '2' ) AND ( rating IN ( 'Premiere', 'Standard', 'unknown' ) OR rating IS NULL ) AND msgcount >= 5000 GROUP BY ctr.sp_id, cci.email_address) AS a WHERE spid NOT IN (SELECT spid FROM customer_exclude) GROUP BY spid, tech_email
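
    The message is the standard one: every non-aggregated expression in the select list has to appear in the GROUP BY (or be wrapped in an aggregate). A hedged sketch of the inner query's GROUP BY with all selected expressions listed; since nothing is actually aggregated there, replacing the GROUP BY with SELECT DISTINCT is the other option, and the outer SELECT * ... GROUP BY spid, tech_email needs the same treatment.

      -- Replacement for the inner query's GROUP BY clause:
      GROUP BY ctr.sp_id,
               SUBSTRING(ctr.company, 1, 20),
               cci.email_address,
               CASE WHEN rating IS NULL THEN 'unknown' ELSE rating END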

    Read the article

  • Some help needed with a SQL query

    - by Psyche
    Hello, I need some help with a MySQL query. I have two tables, one with offers and one with statuses. An offer can have one or more statuses. What I would like to do is get all the offers and their latest status. For each status there's a table field named 'added' which can be used for sorting. I know this can easily be done with two queries, but I need to do it with only one, because I also have to apply some filters later in the project. Here's my setup: CREATE TABLE `test`.`offers` ( `id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY , `client` TEXT NOT NULL , `products` TEXT NOT NULL , `contact` TEXT NOT NULL ) ENGINE = MYISAM ; CREATE TABLE `statuses` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `offer_id` int(11) NOT NULL, `options` text NOT NULL, `deadline` date NOT NULL, `added` datetime NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1
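
    A common single-query shape for "latest row per parent", sketched against the two tables above: join each offer to its statuses and keep only the status whose added value matches that offer's maximum.

      SELECT o.*, s.*
      FROM offers o
      LEFT JOIN statuses s
             ON s.offer_id = o.id
            AND s.added = (SELECT MAX(s2.added)
                           FROM statuses s2
                           WHERE s2.offer_id = o.id);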

    Read the article

  • I want to exchange the Value of a column in two different rows in Microsoft SQL server

    - by Silmaril89
    Hi, I want to run the following two SQL queries in Microsoft SQL Server: UPDATE Partnerships SET sortOrder = 2 WHERE sortOrder = 1; UPDATE Partnerships SET sortOrder = 1 WHERE sortOrder = 2; The only problem is that I don't allow sortOrder to contain duplicate values; it is a unique key. How can I get around this, given that the first query violates the unique key constraint and terminates? Or will I have to get rid of the unique key rule I have? Thanks!
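
    One hedged way around this in SQL Server: do both changes in a single UPDATE, since unique constraints are checked at the end of the statement rather than row by row (the older trick of routing one row through a temporary sentinel value also works).

      UPDATE Partnerships
      SET sortOrder = CASE sortOrder WHEN 1 THEN 2
                                     WHEN 2 THEN 1 END
      WHERE sortOrder IN (1, 2);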

    Read the article

  • Is there a good open source search engine including indexing bot which can be used to make up speci

    - by Skuta
    Hello, our application (C#/.NET) needs to run a lot of search queries. Google's 50,000-per-day policy is not enough. We need something that would crawl websites by specific rules we set (e.g. country domains) and gather URLs, text, keywords, and website names, and create our own internal catalogue so we wouldn't be limited to any massive external search engine like Google or Yahoo. Is there any free open source solution we could install on our server? No point in re-inventing the wheel.

    Read the article

  • How to improve the speed of a loop containing a sqlalchemy query statement as conditional

    - by LtPinback
    This loop checks whether a record is already in the SQLite database, builds a list of dictionaries for the records that are missing, and then executes a multiple-row insert statement with that list. This works, but it is very slow (at least I think it is slow), as it takes 5 minutes to loop over 3500 queries. I am a complete newbie in Python, SQLite and SQLAlchemy, so I wonder if there is a faster way of doing this. list_dict = [] session = Session() for data in data_list: if session.query(Class_object).filter(Class_object.column_name_01 == data[2]).filter(Class_object.column_name_00 == an_id).count() == 0: list_dict.append({'column_name_00':an_id, 'column_name_01':data[2]}) conn = engine.connect() conn.execute(prices.insert(),list_dict) conn.close() session.close() Edit: I moved session = Session() outside the loop. It did not make a difference.
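
    A set-based alternative, sketched as plain SQL (column names copied from the snippet; prices is assumed to be the table Class_object maps to): stage the candidate pairs once, then a single INSERT ... SELECT with NOT EXISTS replaces the 3500 per-row count() queries.

      -- Stage the candidate pairs first, e.g. with one executemany() call.
      CREATE TEMPORARY TABLE staging (
          column_name_00 INTEGER,
          column_name_01 TEXT
      );

      -- Insert only the pairs that are not already present in prices.
      INSERT INTO prices (column_name_00, column_name_01)
      SELECT s.column_name_00, s.column_name_01
      FROM staging s
      WHERE NOT EXISTS (
          SELECT 1
          FROM prices p
          WHERE p.column_name_00 = s.column_name_00
            AND p.column_name_01 = s.column_name_01
      );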

    Read the article

  • mysql stored routine vs. mysql-alternative?

    - by user522962
    We are using a MySQL database with about 150,000 records (names) in total. Our searches on the 'names' field are done through an autocomplete function in PHP. We have the table indexed but still feel that the searching is a bit sluggish (a few full seconds, versus something like Google Finance with near-instant response). We came up with 2 possibilities, but wanted to get more insight: 1. Can we create a bunch (many thousands or more) of stored procedures to speed up searches, or will creating that many stored procedures bog down the DB? 2. Is there a faster alternative to MySQL for "select" statements? (Speed on inserting and updating rows isn't too important, so we can sacrifice that if necessary.) I've vaguely heard of BigTable and others that don't support JOIN statements; we need JOIN statements for some of our other queries. Thanks.

    Read the article
