Search Results

Search found 7116 results on 285 pages for 'nested queries'.


  • Temporary intermediate table

    - by user289429
    In our project to generate massive reports in Oracle, we use permanent tables to hold intermediate results. For example, to generate one report we run a few queries and populate the table, and in the final step we join the intermediate table with huge application tables. These intermediate tables are cleared before the next report run. We have a few performance concerns. The intermediate tables are transactional and don't have statistics. Is it a good idea to join them with application tables that are partitioned and have up-to-date statistics? We need the results stored in the intermediate tables to be available across requests from the UI, so we are not in a position to use Oracle's temporary tables. Any thoughts on what could be done would be appreciated.

    Read the article
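
    One option raised by questions like this is to refresh optimizer statistics on the intermediate table right after it is populated, or to let the optimizer sample it at parse time. The sketch below uses hypothetical table and column names (INTERMEDIATE_RESULTS, APPLICATION_FACTS, request_id); it is a starting point, not the poster's actual schema.

        -- Gather fresh statistics once the intermediate table has been filled,
        -- so the join to the partitioned application tables is costed realistically.
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'INTERMEDIATE_RESULTS');
        END;
        /

        -- Alternatively, ask the optimizer to sample the table at parse time
        -- instead of relying on stored statistics.
        SELECT /*+ DYNAMIC_SAMPLING(i 4) */ i.request_id, f.amount
        FROM   intermediate_results i
        JOIN   application_facts f ON f.request_id = i.request_id;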

  • Combining query rows in a loop

    - by icemanind
    I have the following ColdFusion 9 code:

        <cfloop from="1" to="#arrayLen(tagArray)#" index="i">
            <cfquery name="qryGetSPFAQs" datasource="#application.datasource#">
                EXEC searchFAQ '#tagArray[i]#'
            </cfquery>
        </cfloop>

    The EXEC executes a stored procedure on the database server, which returns rows of data, depending on what the parameter is. What I am trying to do is combine the queries into one query object. In other words, if it loops 3 times and each loop returns 4 rows, I want a query object that has all 12 rows in one object. How do I achieve this?

    Read the article

  • How to limit the number of connections to a SQL Server server from my Tomcat-deployed Java application

    - by CJ
    I have an application that is deployed on Tomcat on server A and sends queries to a huge variety of SQL Server databases on a server B. I am concerned that my application could overload this SQL Server database server, and I would like some way of preventing it from making requests to connect to any database on that server if some arbitrary number of connections were already open. I am looking at using connection pooling, but I am under the impression that this will only pool connections to a specific database on the SQL Server server; I want to control the combined total of connections across many different databases (incidentally, I can only find out the names of individual databases dynamically, as they change from day to day). Will connection pooling take care of this for me, or am I looking at this from the wrong perspective? I have no access to the configuration of the SQL Server server. Links to tutorials or working examples of your suggested solution are most welcome!

    Read the article

  • Which is quicker? Memcache or file query? (using maxmind geoip.dat file)

    - by tomcritchlow
    Hi, I'm using Python on App Engine and am looking up the geolocation of an IP address like this:

        import pygeoip
        gi = pygeoip.GeoIP('GeoIP.dat')
        Location = gi.country_code_by_addr(self.request.remote_addr)

    (pygeoip can be found here: http://code.google.com/p/pygeoip/) I want to geolocate each page of my app for a user, so currently I look up the IP address once and then store it in memcache. My question: which is quicker, looking up the IP address each time from the .dat file or fetching it from memcache? Are there any other pros/cons I need to be aware of? For general queries like this, is there a good guide to teach me how to optimise my code and run speed tests myself? I'm new to Python and coding in general, so apologies if this is a basic concept. Thanks! Tom

    Read the article

  • How can I write a MySQL query to check multiple rows?

    - by Matt
    I have a MySQL table containing data on product features:

        feature_id  feature_product_id  feature_finder_id  feature_text  feature_status_yn
        1           1                   1                  Webcam        y
        2           1                   1                  Speakers      y
        3           1                   1                  Bluray        n

    I want to write a MySQL query that allows me to search for all products that have a 'y' feature_status_yn value for a given feature_product_id and return the feature_product_id. The aim is to use this as a search tool that filters results to product IDs matching only the requested feature set. A query of

        SELECT feature_id FROM product_features WHERE feature_finder_id = '1' AND feature_status_yn = 'y'

    will return all of the features of a given product. But how can I select all products (feature_product_id) that have a 'y' value when those values are on separate rows? Multiple queries might be one way to do it, but I'm wondering whether there's a more elegant solution based purely in SQL.

    Read the article
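
    A common way to express "has a 'y' for every requested feature" in plain SQL is grouping with a HAVING count. The sketch below runs against the poster's product_features table, but the requested feature list ('Webcam', 'Speakers') is only an example.

        -- Return product ids whose status is 'y' for every feature in the requested set.
        SELECT feature_product_id
        FROM   product_features
        WHERE  feature_text IN ('Webcam', 'Speakers')   -- requested feature set
          AND  feature_status_yn = 'y'
        GROUP BY feature_product_id
        HAVING COUNT(DISTINCT feature_text) = 2;        -- number of requested features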

  • Oracle (Old?) Joins - A tool for conversion?

    - by Grasper
    I have been porting Oracle selects, and I have been running across a lot of queries like this:

        SELECT e.last_name, d.department_name
        FROM   employees e, departments d
        WHERE  e.department_id(+) = d.department_id;

    ...and:

        SELECT last_name, d.department_id
        FROM   employees e, departments d
        WHERE  e.department_id = d.department_id(+);

    Are there any guides/tutorials for converting all of the variants of the (+) syntax? What is that syntax even called (so I can scour Google)? Even better, is there a tool that will do this conversion for me? When was this standard phased out? Any info is appreciated.

    Read the article
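
    The (+) marker is Oracle's pre-ANSI outer-join notation; placing it on a column makes that table the optional side of the join. As a sketch (worth verifying against real data before a bulk conversion), the two queries above translate to ANSI joins like this:

        -- (+) on employees: every department is kept, employees are optional.
        SELECT e.last_name, d.department_name
        FROM   departments d
        LEFT JOIN employees e ON e.department_id = d.department_id;

        -- (+) on departments: every employee is kept, departments are optional.
        SELECT e.last_name, d.department_id
        FROM   employees e
        LEFT JOIN departments d ON e.department_id = d.department_id;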

  • Can I improve performance by refactoring SQL commands into C# classes?

    - by Matthew Jones
    Currently, my entire website does its updating through parameterized SQL queries. It works, and we've had no problems with it, but it can occasionally be very slow. I was wondering if it makes sense to refactor some of these SQL commands into classes so that we would not have to hit the database so often; I understand that hitting the database is generally the slowest part of any web application. For example, say we have a class structure like this: Project (comprised of) Tasks (comprised of) Assignments, where Project, Task, and Assignment are classes. At certain points in the site you are only working on one project at a time, so creating a Project object and passing it among pages (using Session, Profile, or something else) might make sense. I imagine this class would have a Save() method to save value changes. Does it make sense to invest the time into doing this? Under what conditions might it be worth it?

    Read the article

  • What techniques are available for filtering collections of objects when using ZODB?

    - by Omega
    As the title says: what techniques are available for filtering objects when using ZODB? The equivalent in SQL terms would be something like filtering results by a date range, or only returning rows with a particular value set in a column. If I had a series of blog posts and only wanted the ones from the past month, what would I have to do? Is there any way to optimize these kinds of "queries"? My gut tells me that iterating over all the objects in a relationship simply to perform a test is less than optimal.

    Read the article

  • Performing complex query with Dynamics CRM 4.0

    - by dub
    Hi, I have two custom entities, Product and ProductType, linked together in a many-to-one relationship: Product has a lookup field to ProductType. I'm trying to write a query that fetches Type1 products with a price over 100 and Type2 products with a price lower than 100. Here's how I would do it in SQL:

        select *
        from Product P
        inner join ProductType T on T.Id = P.TypeId
        where (T.Code = 'Type1' and P.Price >= 100)
           or (T.Code = 'Type2' and P.Price < 100)

    I can't figure out a way to build a QueryExpression that does exactly that. I know I could do it with two queries, but I'd like to minimize round trips to the server. Is there a way to perform this query in only one operation? Thanks!

    Read the article

  • What does this `^` mean here in Solr?

    - by Rahul Mehta
    I am confused here, but I want to clear up my doubt. I think it is a stupid question, but I want to know. The Solr wiki says: "Use a TokenFilter that outputs two tokens (one original and one lowercased) for each input token. For queries, the client would need to expand any search terms containing upper case characters to two terms, one lowercased and one original. The original search term may be given a boost, although it may not be necessary given that a match on both terms will produce a higher score."

        text:NeXT ==> (text:NeXT^10 OR text:next)

    What does this ^ mean here? http://wiki.apache.org/solr/SolrRelevancyCookbook#Relevancy_and_Case_Matching

    Read the article

  • How To Configure Query Caching in EclipseLink

    - by rustyshelf
    I have a collection of states that I want to cache for the life of the application, preferably after it is called for the first time. I'm using EclipseLink as my persistence provider. In my EJB3 entity I have the following code:

        @Cache
        @NamedQueries({
            @NamedQuery(
                name = "State.findAll",
                query = "SELECT s FROM State s",
                hints = {
                    @QueryHint(name=QueryHints.CACHE_USAGE, value=CacheUsage.CheckCacheThenDatabase),
                    @QueryHint(name=QueryHints.READ_ONLY, value=HintValues.TRUE)
                }
            )
        })

    This doesn't seem to do anything, though; if I monitor the SQL queries going to MySQL, it still does a select each time my session bean uses this named query. What is the correct way to configure this query so that it is only ever read once from the database, preferably across all sessions? Edit: I am calling the query like this:

        Query query = em.createNamedQuery("State.findAll");
        List<State> states = query.getResultList();

    Read the article

  • Including associations optimization in Rails

    - by Vitaly
    Hey, I'm looking for help with Ruby optimization regarding loading of associations on demand. This is a simplified example. I have 3 models: Post, Comment, User. The references are: Post has many Comments, and Comment has a reference to User (:author). Now when I go to the post page, I expect to see the post body plus all comments (and their respective authors' names). This requires the following 2 queries:

        select * from Post                     -- to get post data (1 row)
        select * from Comment inner join User  -- to get comments + usernames (N rows)

    In the code I have:

        Post.find(params[:id], :include => { :comments => [:author] })

    But it doesn't work as expected: as I see in the back end, there are still N+1 hits (some of them are cached, though). How can I optimize that?

    Read the article

  • JPA in distributed Java EE configuration

    - by sof
    Hello, I'm developing a Java EE application to run on GlassFish:

        - Database (JavaDB, MS SQL, MySQL or Oracle)
        - EJB layer with JPA (TopLink Essentials, from GlassFish) for database access
        - JSF/ICEfaces based web UI accessing the EJB layer

    The application will have a lot of concurrent web clients, so I want to run it on different physical servers and use a load balancer. My problem now is how to keep the applications synchronized. I intend to set up multiple servers, each running GlassFish with my EAR app installed. Whenever data is added to or removed from the database on one of the servers (via JPA, no direct SQL queries), this change should be reflected in the JPA layer on the other servers. I've been looking around for solutions to this but couldn't find anything I really like (the full TopLink from Oracle claims to have a solution, but I don't know). Doing a refresh before every access to a JPA entity could work, but is far from efficient. Are there any patterns, libraries, etc. that could help here? Thanks a lot!

    Read the article

  • Inheritance in Kohana

    - by Binaryrespawn
    Hi all, I have recently started to use Kohana, and I know inheritance is in its infancy at the moment. The workaround is using a $_has_one annotation on the child class model. In my case I have "page" as the parent of "article". I have something like:

        protected $_has_one = array('mypage' => array('model' => 'page', 'foreign_key' => 'id'));

    In my controller, I have an action which queries the database. In this query I am trying to access fields from the parent of "article", which is "page":

        $n->articles = ORM::factory('article')->where('expires', '=', 0)
            ->where('articledate', '<', date('y-m-d'))
            ->where('expirydate', '>', date('y-m-d'))
            ->where('mypage->status', '=', 'PUBLISHED')
            ->order_by('articledate', 'desc')
            ->find_all();

    The status column resides in the page table, and my query is generating an error to the effect of "cannot find status", clearly because it belongs to the parent. Any ideas?

    Read the article
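
    For reference, the SQL that ORM chain is effectively trying to produce looks roughly like the sketch below; the articles/pages table and column names are assumed from the models mentioned in the question, not taken from the real schema.

        SELECT articles.*
        FROM   articles
        JOIN   pages ON pages.id = articles.page_id
        WHERE  articles.expires = 0
          AND  articles.articledate < CURDATE()
          AND  articles.expirydate  > CURDATE()
          AND  pages.status = 'PUBLISHED'
        ORDER BY articles.articledate DESC;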

  • How do I create a user history?

    - by ggfan
    I want to create a user history function that shows users what they have done, e.g. commented on an ad, posted an ad, voted on an ad, etc. How exactly do I do this? I was thinking about the following: on my site, when users log in it stores their user_id ($_SESSION['user_id']), so I guess whenever a user posts an ad (postad.php) or comments (comment.php), I would just store in a database table "userhistory" what they did, based on whether or not their user_id is active in the session. When they comment, I store the user_id in the comment table, so I'll also store it in the "userhistory" table. Then I would just query all the rows in that table for the user to show it. Any steps/improvements I can make? :)

    Read the article
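
    One minimal way to structure this is a single activity-log table written to from postad.php, comment.php, and so on. The table, column, and action names below are hypothetical, a sketch rather than a finished design.

        CREATE TABLE userhistory (
          history_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          user_id    INT UNSIGNED NOT NULL,
          action     VARCHAR(32)  NOT NULL,   -- e.g. 'posted_ad', 'commented', 'voted'
          target_id  INT UNSIGNED NULL,       -- id of the ad or comment the action refers to
          created_at DATETIME     NOT NULL
        );

        -- Written alongside the normal insert whenever the logged-in user acts:
        INSERT INTO userhistory (user_id, action, target_id, created_at)
        VALUES (42, 'commented', 1337, NOW());

        -- Read back on the user's history page:
        SELECT action, target_id, created_at
        FROM   userhistory
        WHERE  user_id = 42
        ORDER BY created_at DESC;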

  • Could someone give me their two cents on this optimization strategy

    - by jimstandard
    Background: I am writing a matching script in Python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith", and then have all of those records loaded into memory as you go through each one looking for matches for a specific "John Smith" using various data points? Would this be faster, is it feasible in Python, and if so does anyone have any recommendations for how to do it?

    Read the article
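
    The "load all Smiths once, match in memory" idea boils down to one broad indexed query instead of many narrow ones. A sketch of that single query, with hypothetical table and column names:

        -- An index on last_name keeps the broad fetch cheap.
        CREATE INDEX idx_customers_last_name ON customers (last_name);

        SELECT customer_id, first_name, last_name, date_of_birth
        FROM   customers
        WHERE  last_name = 'Smith';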

  • Using multiple databases within one application

    - by Alex
    I have a web application made for several groups of people not connected with each other. Instead of using one database for all of them, I'm thinking about making separate databases. This would improve the speed of the queries and free me from checking which group the user belongs to. But since I'm working with LINQ to SQL, my classes are explicitly tied to the database, so I would have to make separate DataContexts for all of the databases. How can I solve this problem? Or should I just not bother and use one database only?

    Read the article
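
    If the single-database route wins out, the group check usually becomes one indexed column rather than separate schemas. A sketch in SQL Server terms, with hypothetical table and column names:

        CREATE TABLE documents (
          document_id INT IDENTITY PRIMARY KEY,
          group_id    INT NOT NULL,
          title       NVARCHAR(200) NOT NULL
        );
        CREATE INDEX IX_documents_group_id ON documents (group_id);

        -- Every query filters on the caller's group:
        SELECT document_id, title
        FROM   documents
        WHERE  group_id = 3;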

  • Jet Database (MS Access) ExecuteNonQuery - Can I make it faster?

    - by bluebill
    Hi all, I have this generic routine that I wrote that takes a list of SQL strings and executes them against the database. Is there any way I can make this work faster? Typically it'll see maybe 200 inserts or deletes or updates at a time. Sometimes there is a mixture of updates, inserts and deletes. Would it be a good idea to separate the queries by type (i.e. group inserts together, then updates and then deletes)? I am running this against an MS Access database and using VB.NET 2005.

        Public Function ExecuteNonQuery(ByVal sql As List(Of String), ByVal dbConnection As String) As Integer
            If sql Is Nothing OrElse sql.Count = 0 Then Return 0
            Dim recordCount As Integer = 0
            Using connection As New OleDb.OleDbConnection(dbConnection)
                connection.Open()
                Dim transaction As OleDb.OleDbTransaction = connection.BeginTransaction()
                'Using cmd As New OleDb.OleDbCommand()
                Using cmd As OleDb.OleDbCommand = connection.CreateCommand
                    cmd.Connection = connection
                    cmd.Transaction = transaction
                    For Each s As String In sql
                        If Not String.IsNullOrEmpty(s) Then
                            cmd.CommandText = s
                            recordCount += cmd.ExecuteNonQuery()
                        End If
                    Next
                    transaction.Commit()
                End Using
            End Using
            Return recordCount
        End Function

    Read the article

  • MySQL : delete from table that is used in the where clause

    - by Eric
    I am writing a small script to synchronize two MySQL tables (t1 is to be 'mirrored' to t2). In one step I would like to delete rows inside t2 that have been deleted in t1 (same id). I tried this query:

        delete from t2 where t2.id in (
            select t2.id from t2 left join t1 on (t1.id = t2.id) where t1.id is null
        )

    But MySQL forbids me to use t2 at the same time in the delete and in the select (which sounds logical, by the way). Of course, I can split this into 2 queries: first select the IDs, then delete the rows with those IDs. My question: do you have a cleaner way to delete rows from t2 that no longer exist in t1, with one query only?

    Read the article
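
    For what it's worth, MySQL's multi-table DELETE syntax expresses this anti-join in a single statement, and wrapping the subquery in a derived table is another common workaround for the "can't specify target table" restriction. Both are sketches against the poster's t1/t2.

        -- Multi-table DELETE with an anti-join:
        DELETE t2
        FROM   t2
        LEFT JOIN t1 ON t1.id = t2.id
        WHERE  t1.id IS NULL;

        -- Or hide t2 behind a derived table so the DELETE no longer references it twice:
        DELETE FROM t2
        WHERE id IN (
          SELECT id FROM (
            SELECT t2.id
            FROM   t2
            LEFT JOIN t1 ON t1.id = t2.id
            WHERE  t1.id IS NULL
          ) AS missing_rows
        );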

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query, and it will be called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not to normalize, but if normalization becomes a problem in the future I may have to rewrite 40% of the software, and it could take me a long time. Does normalization really hurt in this case? Should I denormalize now, while I have the time?

    Read the article

  • PostgreSQL: How to index all foreign keys?

    - by biggusjimmus
    I am working with a large PostgreSQL database, and I am trying to tune it to get more performance. Our queries and updates seem to be doing a lot of lookups using foreign keys. What I would like is a relatively simple way to add indexes to all of our foreign keys without having to go through every table (~140) and do it manually. In researching this, I've found that there is no way to have Postgres do this for you automatically (like MySQL does), but I would be happy to hear otherwise there, too.

    Read the article
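
    One way to avoid the manual pass over ~140 tables is to generate the DDL from the system catalogs. The sketch below only handles single-column foreign keys, does not check whether a suitable index already exists, and merely prints the statements, so review the output before running any of it.

        SELECT 'CREATE INDEX ' || quote_ident(tbl.relname || '_' || att.attname || '_idx')
               || ' ON ' || quote_ident(ns.nspname) || '.' || quote_ident(tbl.relname)
               || ' (' || quote_ident(att.attname) || ');' AS ddl
        FROM   pg_constraint con
        JOIN   pg_class      tbl ON tbl.oid = con.conrelid
        JOIN   pg_namespace  ns  ON ns.oid  = tbl.relnamespace
        JOIN   pg_attribute  att ON att.attrelid = tbl.oid
                                AND att.attnum   = con.conkey[1]
        WHERE  con.contype = 'f'
          AND  array_length(con.conkey, 1) = 1;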

  • MultiOS "Jet Database" for QtC++?

    - by Airjoe
    Hopefully I can articulate this well: I'm porting an application I made years ago from VB6 (I know, I know!) to Qt C++. In my original application, one thing I liked was that I didn't need an actual SQL server running; I could just use MS Access .mdb files. I was wondering if something similar exists for Qt C++ that will work on multiple OSes: a database stored in a file, pretty much, but that I can still run SQL queries against. Not sure if something like this exists or not, but any help is appreciated, thanks!

    Read the article

  • CakePHP model useTable with SQL Views

    - by Chris
    I'm in the process of converting our CakePHP-built website from Pervasive to SQL Server 2005. After a lot of hassle, the setup I've gotten to work uses the ADODB driver with 'connect' set to odbc_mssql. This connects to our database and builds the SQL queries just fine. However, here's the rub: one of our models was associated with a SQL view in Pervasive. I ported over the view, but with the setup I have it appears that CakePHP can't find the view in SQL Server. I couldn't find much after some Google searches; has anyone else run into a problem like this? Is there a solution/workaround, or is there some redesign in my future?

    Read the article
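
    When a model "can't find" a ported view, it is often a schema or permissions mismatch rather than a CakePHP problem, so it can help to confirm what the application's login actually sees. A sketch, with a hypothetical view and login name:

        -- Is the view there, and under which schema?
        SELECT TABLE_SCHEMA, TABLE_NAME
        FROM   INFORMATION_SCHEMA.VIEWS
        WHERE  TABLE_NAME = 'invoice_summary';

        -- Make sure the application login can read it:
        GRANT SELECT ON dbo.invoice_summary TO cakephp_login;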

  • MySQL does not utilize my CPU and RAM enough?

    - by vick
    Hello everyone! I am importing a 2.5 GB CSV file into a MySQL table. My storage engine is InnoDB. Here is the script:

        use xxx;
        DROP TABLE IF EXISTS `xxx`.`xxx`;
        CREATE TABLE `xxx`.`xxx` (
          `xxx_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(128) NOT NULL,
          `yy` varchar(128) NOT NULL,
          `yyy` varchar(64) NOT NULL,
          `yyyy` varchar(2) NOT NULL,
          `yyyyy` varchar(10) NOT NULL,
          `url` varchar(64) NOT NULL,
          `p` varchar(10) NOT NULL,
          `pp` varchar(10) NOT NULL,
          `category` varchar(256) NOT NULL,
          `flag` varchar(4) NOT NULL,
          PRIMARY KEY (`xxx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        set autocommit = 0;

        load data local infile '/home/xxx/raw.csv' into table company
        fields terminated by ',' optionally enclosed by '"'
        lines terminated by '\r\n'
        (name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag);

        commit;

    Why does my PC (Core i7 920 with 6 GB RAM) only consume 9% CPU power and 60% RAM when running these queries?

    Read the article
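
    A single LOAD DATA statement runs in one thread and is largely disk-bound, which is one reason CPU usage stays low. A common bulk-load pattern is to switch off per-row checks for the duration of the load and restore them afterwards; the sketch below reuses the anonymized names from the question's CREATE TABLE.

        SET unique_checks = 0;
        SET foreign_key_checks = 0;
        SET autocommit = 0;

        LOAD DATA LOCAL INFILE '/home/xxx/raw.csv'
        INTO TABLE xxx
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\r\n'
        (name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag);

        COMMIT;

        SET unique_checks = 1;
        SET foreign_key_checks = 1;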

  • PgJDBC: "no suitable driver found" when following tutorial, why?

    - by Celeritas
    I'm writing a Java program that queries a PostgreSQL database. I'm following this example and have trouble here:

        connection = DriverManager.getConnection(
            "jdbc:postgresql://127.0.0.1:5432/testdb", "mkyong", "123456");

    According to the JavaDoc for DriverManager, the first string is "a database url of the form jdbc:subprotocol:subname". When I connect to the server I type in

        psql -h dataserv.abc.company.com -d app -U emp24

    and give the password qwe123 (for example's sake). What should the first argument of getConnection be? I've tried

        connection = DriverManager.getConnection(
            "jdbc:postgresql://dataserv.abc.company.com", "emp24", "qwe123");

    and get the runtime error: no suitable driver found. I've downloaded the JDBC4 PostgreSQL driver, version 9.2-1000.

    Read the article
