Search Results

Search found 4788 results on 192 pages for 'adhoc queries'.

  • In Java, how do instanceof and type casts (i.e. (ClassName)) work on a proxy object?

    - by learner
    Java generates a proxy class for a given interface and hands back an instance of that proxy class. But when we cast the proxy object to our specific class, how does Java handle this internally? Is it treated as a special scenario? For example, I have a class 'OriginalClass' that implements 'OriginalInterface'. When I create a proxy object by passing the 'OriginalInterface' interface, Java creates a proxy class 'ProxyClass' from the methods in the provided interface and gives me an object of that class (i.e. ProxyClass). If my understanding is correct, can you please answer the following queries:
    1) When I cast an object of ProxyClass to my class OriginalClass it works, but how does Java allow this? The same question applies to instanceof.
    2) To my knowledge, Java creates the proxy class with methods only, so what happens when I try to access attributes on this object?
    3) Only the interface methods are implemented in the proxy, so what happens when I call a method that is not in the interface and is only defined in the class?
    Thanks, Student

    Read the article

  • TSQL - compare tables

    - by Rya
    I want to create a stored procedure that compares the results of two queries. If the results of the second query can be found in the first, print 'YES'; otherwise, print 'No'.
    Table 1:

        SELECT dbo.Roles.RoleName, dbo.UserRoles.RoleID
        FROM dbo.Roles
        LEFT OUTER JOIN dbo.UserRoles ON dbo.Roles.RoleID = dbo.UserRoles.RoleID
        WHERE (dbo.Roles.PortalID = 0) AND (dbo.UserRoles.UserID = 2)

    Table 2:

        DECLARE @RowData AS nvarchar(2000)
        SET @RowData = (SELECT EditPermissions FROM vw_XMP_DMS_Documents WHERE DocumentID = 2)
        SELECT Data FROM dbo.split(@RowData, ',')

    For example: Table 1 returns John, Jack, James; Table 2 returns John, Sally, Jane; print 'YES'. Is this possible? Thank you all very much. -R
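
    One possible shape for the procedure body (a sketch, assuming SQL Server 2005+ and that 'YES' means at least one value from Table 2 appears in Table 1, as in the example above):

        SELECT r.RoleName AS Name
        INTO #FirstSet
        FROM dbo.Roles r
        LEFT OUTER JOIN dbo.UserRoles ur ON r.RoleID = ur.RoleID
        WHERE r.PortalID = 0 AND ur.UserID = 2;

        DECLARE @RowData nvarchar(2000);
        SET @RowData = (SELECT EditPermissions FROM vw_XMP_DMS_Documents WHERE DocumentID = 2);

        SELECT Data AS Name
        INTO #SecondSet
        FROM dbo.split(@RowData, ',');

        -- INTERSECT keeps only the names present in both result sets
        IF EXISTS (SELECT Name FROM #SecondSet INTERSECT SELECT Name FROM #FirstSet)
            PRINT 'YES';
        ELSE
            PRINT 'No';

    Swapping the EXISTS/INTERSECT test for NOT EXISTS with EXCEPT gives the stricter "every value of Table 2 is in Table 1" behaviour instead.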

    Read the article

  • How do I limit an array to a certain number?

    - by mathew
    I have a loop that queries the database. What I need to do is limit the output to a certain number, say 10, but I don't want to set LIMIT in the MySQL query; I need to leave that as it is:

        $result = mysql_query("SELECT * FROM query ORDER BY regtime DESC");
        while($row = mysql_fetch_array($result)) {
            echo "<img src='bullet.gif' align='absmiddle' class='col1ab'><a class='col1ab' href=".$row['web']." >www.".$row['web']."</a><br>";
        }

    How do I limit this loop to a certain number of rows?

    Read the article

  • Solr match all aka *:* does not work

    - by Karussell
    Don't know what I did wrong. I have two indices with identical documents in them. The local index was replicated from a master which responds correctly, so both use the same solrconfig.xml and schema.xml. But if I query the index on my local machine with *:* I get 0 docs (other queries on my local machine work correctly). I tried Jetty and Tomcat for the local index, with no success. The *:* behaviour is crucial for me, because some test cases are failing now. Do you have an idea what could be wrong?

    Read the article

  • Index for wildcard match of end of string

    - by Anders Abel
    I have a table of phone numbers, storing the phone number as varchar(20). I have a requirement to support searching on both the entire number and on only the last part of it, so a typical query will be:

        SELECT * FROM PhoneNumbers WHERE Number LIKE '%1234'

    How can I put an index on the Number column to make those searches efficient? Is there a way to create an index that sorts the records on the reversed string? Another option might be to reverse the numbers before storing them, which gives queries like:

        SELECT * FROM PhoneNumbers WHERE ReverseNumber LIKE '4321%'

    However, that requires all users of the database to always reverse the string. It could be solved by storing both the normal and the reversed number and having the reversed number updated by a trigger on insert/update, but that kind of solution is not very elegant. Any other suggestions?
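
    One way to keep the reversed column in sync without a trigger (a sketch, assuming SQL Server; a persisted computed column is maintained automatically and can be indexed):

        ALTER TABLE PhoneNumbers
            ADD ReverseNumber AS REVERSE(Number) PERSISTED;

        CREATE INDEX IX_PhoneNumbers_ReverseNumber
            ON PhoneNumbers (ReverseNumber);

        -- The suffix search becomes an index-friendly prefix search
        SELECT *
        FROM PhoneNumbers
        WHERE ReverseNumber LIKE REVERSE('1234') + '%';

    Callers keep writing only Number; the computed column and its index take care of the rest.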

    Read the article

  • Database Engine not appearing in SQL Server listing

    - by Jonn
    I don't know if I'm searching for the wrong queries in Google, but I can't seem to find an answer to this. I have SQL Server 2008 installed on my PC and, according to services.msc, I've got two database engines running: SQLEXPRESS (probably the one that came along with Visual Studio) and MSSQLSERVER. When I try to connect, only SQLEXPRESS is visible in the Server Name drop-down list. I tried to explicitly state MSSQLSERVER by typing in MYPCNAME\MSSQLSERVER, which didn't work. The best solution I could find on the internet was to enable things in Configuration Manager. That didn't work either (although I did find that TCP/VIA and all the other protocols were disabled for MSSQLSERVER). Does anyone have any other ideas on what I should try next, or something I may have overlooked?

    Read the article

  • Deserializing only select properties of an Entity using JDOQL query string?

    - by user246114
    Hi, I have a rather large class stored in the datastore, for example a User class with lots of fields (I'm using Java; all decorations are omitted from the example below for clarity):

        @PersistenceCapable
        class User {
            private String username;
            private String city;
            private String state;
            private String country;
            private String favColor;
        }

    For some user queries I only need the favColor property, but right now I'm doing this:

        SELECT FROM " + User.class.getName() + " WHERE username == 'bob'

    which should deserialize all of the entity properties. Is it possible to do something like this instead:

        SELECT username, favColor FROM " + User.class.getName() + " WHERE username == 'bob'

    so that all of the returned User instances only spend time deserializing the username and favColor properties, and not the city/state/country properties? If so, I suppose all the other properties will be null (in the case of objects) or 0 for int/long/float? Thank you

    Read the article

  • Possibility to do type/data conversion of data returned by RIA Services?

    - by reinier
    My service returns a byte array, which I have to convert to an animated GIF (using ImageTools, since Silverlight doesn't support it yet). I was wondering: is it possible to insert some code on the client where I can do the conversion before the actual object is returned to whatever it is bound against? On the server side, the queries can be customized before they are sent over the wire. I'm asking for the exact opposite: can I do some on-the-fly conversions the moment they come off the wire and before they are returned to the controls? If I'm overthinking this and there is a smarter/better/easier way to do it, I'm all for such an answer as well.

    Read the article

  • Is file_get_contents multi line?

    - by Abs
    Hello all, Does file_get_contents maintain line breaks? I thought it did, but I have tried this:

        if($conn){
            $tsql = file_get_contents('scripts/CreateTables/SLR05_MATCH_CREATETABLES.sql');
            $row = sqlsrv_query($conn, $tsql);
            print_r(sqlsrv_errors());
        }

    The error I get is that SQL Server complains about incorrect syntax. I get the same errors when I run the SQL script without any line breaks, which suggests file_get_contents removes them? When I run the script normally (open the file in SQL Server Management Studio and execute it), it works perfectly. So is there something I can use that maintains line breaks? Or is there another problem here in using queries from a file with the SQL Server PHP driver from Microsoft? Thanks all for any help

    Read the article

  • How do you optimize database performance when providing results for autocomplete/iterative search?

    - by Howiecamp
    Note: In this question I'm using the term "autocomplete" (or "iterative search") to refer to returning search-as-you-type results, e.g. like Google Search gives you. Also, my question is not specific to web applications vs. fat client apps. How are SQL SELECT queries normally constructed to provide decent performance for this type of query, especially over arbitrarily large data sets? In the case where the search only queries on the first n characters (the easiest case), am I still issuing a new SELECT result FROM sometable WHERE entry LIKE ... on each keypress? Even with various forms of caching, this seems like it might result in poor performance. In cases where you want your search string to return results with prefix matches, substring matches, etc., it's an even harder problem. Looking at the case of searching a list of contacts, you might return results that match FirstName + LastName, LastName + FirstName, or any other substring.
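
    For the first-n-characters case, a minimal sketch (SQL Server syntax; the Contacts table, column names and @prefix parameter are illustrative) is a plain index on the searched column plus an index-friendly prefix LIKE:

        CREATE INDEX IX_Contacts_LastName ON Contacts (LastName);

        -- Issued per keypress; the trailing-only wildcard lets the index seek,
        -- and TOP keeps the payload small for type-ahead
        SELECT TOP 10 FirstName, LastName
        FROM Contacts
        WHERE LastName LIKE @prefix + '%'
        ORDER BY LastName;

    A leading wildcard ('%abc%') defeats that index, which is where full-text or n-gram style indexing usually comes in.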

    Read the article

  • DB management for Heroku apps

    - by zetarun
    Hi all, I'm fairly new to both Rails and Heroku, but I'm seriously thinking of using it as a platform to deploy my Ruby/Rails applications. I want to use all the power of Heroku, so I prefer the "embedded" PostgreSQL managed by Heroku over the add-on for Amazon RDS for MySQL, but I'm not so confident without the possibility of accessing my data in a SQL client... I know that in a well-made app you have no need to access the DB directly, but there are some situations (adding rows to a config table, seeing data not mapped in a view, updating some columns for debugging, performance monitoring, running queries for reporting, etc.) when this can be good... How do you solve this problem? What's your experience in a real-life app powered by Heroku? Thanks!

    Read the article

  • Django - User account with multiple identities

    - by Scott Willman
    Synopsis: Each User account has a UserProfile to hold extended info like phone numbers, addresses, etc. A User account can then have multiple Identities. There are multiple types of identities that hold different types of information. The structure would be like so:

        User
         |<-FK- UserProfile
         |
         |<-FK- IdentityType1
         |<-FK- IdentityType1
         |<-FK- IdentityType2
         |<-FK- IdentityType3 (current)
         |<-FK- IdentityType3
         |<-FK- IdentityType3

    The User account can be connected to any number of Identities of different types, but can only use one Identity at a time. Seemingly, the Django way would be to collect all of the connected identities (user.IdentityType1_set.select_related()) into a QuerySet and then check each one for some kind of 'current' field. Question: can anyone think of a better way to select the 'current' marked Identity than doing three DB queries (one for each IdentityType)?

    Read the article

  • SQL Query to select upcoming events with a start and end date

    - by Chris T
    I need to display upcoming events from a database. The problem is that, with the query I'm currently using, any event whose start day has passed shows up lower on the list of upcoming events, regardless of the fact that it is still current. My table (YAML):

        columns:
          title:
            type: string(255)
            notnull: true
            default: Untitled Event
          start_time:
            type: time
          end_time:
            type: time
          start_day:
            type: date
            notnull: true
          end_day:
            type: date
          description:
            type: string(500)
            default: This event has no description
          category_id: integer

    My query (Doctrine):

        $results = Doctrine_Query::create()
            ->from("sfEventItem e, e.Category c")
            ->select("e.title, e.start_day, e.description, e.category_id, e.slug")
            ->addSelect("c.title, c.slug")
            ->orderBy("e.start_day, e.start_time, e.title")
            ->limit(5)
            ->execute(array(), Doctrine_Core::HYDRATE_ARRAY);

    Basically, I'd like any event that is currently going on (so today is between start_day and end_day) to be at the top of the list. How would I go about doing this, if it's even possible? Raw SQL queries are good answers too, because they're pretty easy to turn into DQL.
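
    In raw SQL, one way to express the ordering (a sketch, assuming MySQL and an illustrative table name; in-progress events sort first, then everything else by start date):

        SELECT e.title, e.start_day, e.end_day, e.description
        FROM sf_event_item e
        ORDER BY
            CASE WHEN CURDATE() BETWEEN e.start_day AND e.end_day THEN 0 ELSE 1 END,
            e.start_day, e.start_time, e.title
        LIMIT 5;

    The leading CASE term pushes currently running events to the top without filtering anything out of the list.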

    Read the article

  • advanced search with mysql

    - by Arsenal
    I'm creating a search function for my website where the user can put anything he likes in a text field. It gets matched against anything (name, title, job, car brand, ... you name it). I initially wrote the query with an INNER JOIN on every table that needed to be searched:

        SELECT column1, column2, ...
        FROM person
        INNER JOIN person_car ON ...
        INNER JOIN car ...

    This ended up as a query with 6 or 8 INNER JOINs and a whole lot of WHERE ... LIKE '%searchvalue%'. Now this query seems to cause a timeout in MySQL, and I even got a warning from my hosting provider that the queries are just taking up too many resources. Now obviously I'm doing this very wrong, but I was wondering what the correct approach to this kind of search function is. Thanks!
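
    One common alternative to stacking joins and LIKE '%...%' filters is a full-text index per searchable table (a sketch; the column names are illustrative, and FULLTEXT indexes require MyISAM or MySQL 5.6+ for InnoDB):

        ALTER TABLE person ADD FULLTEXT INDEX ft_person (name, title, job);

        SELECT person_id, name, title, job
        FROM person
        WHERE MATCH(name, title, job) AGAINST ('searchvalue');

    Each table can then be searched with its own MATCH ... AGAINST query, with the results merged (e.g. via UNION) instead of one giant join.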

    Read the article

  • Does Oracle 11g automatically index fields frequently used for full table scans?

    - by gustafc
    I have an app using an Oracle 11g database. I have a fairly large table (~50k rows) which I query thus:

        SELECT omg, ponies FROM table WHERE x = 4

    Field x was not indexed, I discovered. This query happens a lot, but the thing is that the performance wasn't too bad. Adding an index on x made the queries approximately twice as fast, which is far less than I expected. On, say, MySQL, it would have made the query ten times faster, at the very least. I suspect Oracle adds some kind of automatic index when it detects that I query a non-indexed field often. Am I correct? I can find nothing even implying this in the docs.
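
    One way to see what the optimizer is actually doing for this statement is to look at its execution plan (a sketch; the table is written as some_table here since TABLE is a reserved word):

        EXPLAIN PLAN FOR
            SELECT omg, ponies FROM some_table WHERE x = 4;

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    The plan output shows whether the query performs a full table scan or an index range scan, and the estimated cost of each, which makes the before/after effect of the new index concrete.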

    Read the article

  • Lucene search on specific field name?

    - by Rachel
    I have been playing around with an installation of Solr that indexes some data from my database. I am able to index data and query it back, but I was wondering how field-name queries work. For certain fields I can specify their name and the search text and the results return as expected; for other fields, when I specify their name and search text, no results are returned.

        q=type:book                        //(this will work)
        q=type:book AND title:"The Title"  //(no results returned)

    In this example, type is a required field and title is not. For the example where I search by title, I can see a document with the given title in the results of the first query, so I know that a matching document exists. Is making a field 'required' the only way to be able to search it by field name? [edit] I'm using the default installation and the 'example' folder inside of Solr, editing the XML files and using the interface available through start.jar to run, index and query.

    Read the article

  • MS Access caching of reports / query results

    - by FrustratedWithFormsDesigner
    Is it possible to cache a query or report the first time it is run? It seems that opening a report re-queries the data source. For certain queries, the data source does not change frequently enough that I'd be worried about a cache being out of date (users are notified when the database changes), and it would be much easier for the users to be able to open the report instantly rather than having to wait several minutes every time they want to see the data (though I realize that if they close the file the caches will be lost - that's OK). Data comes from an ODBC connection to Oracle, using Access 2003.

    Read the article

  • GQL Query with __key__ in List of KEYs

    - by bossylobster
    In the GQL reference [1], it is encouraged to use the IN keyword with a list of values, and to construct a KEY by hand, so the GQL query

        SELECT * FROM MyModel WHERE __key__ = KEY('MyModel', 'my_model_key')

    will succeed. However, using the query you would expect to work:

        SELECT * FROM MyModel WHERE __key__ IN (KEY('MyModel', 'my_model_key1'), KEY('MyModel', 'my_model_key2'))

    the Datastore Viewer complains of an "Invalid GQL query string." What is the correct way to format such a query?

    [1] http://code.google.com/appengine/docs/python/datastore/gqlreference.html

    PS: I know there are more efficient ways to do this in Python (without constructing a GQL query) and using the remote_api, but each call to the remote_api counts against quota. In an environment where quota is not (necessarily) free, quick and dirty queries are very helpful.

    Read the article

  • usage of intval & real_escape_string when sanitizing integers

    - by paulus
    Dear all, I'm using integer PKs in some tables of a MySQL database. Before input from a PHP script, I am doing some sanitizing, which includes intval($id) and $mysqli->real_escape_string(). The queries are quite simple:

        insert into `tblproducts`(`supplier_id`,`description`)
        values('$supplier_id','$description')

    In this example, $description goes through real_escape_string(), while $supplier_id is only intval()'ed. I'm just curious if there are any situations when I need to apply both intval() and real_escape_string() to an integer I'm inserting into the DB. So basically, do I really need to use

        $supplier_id = intval($mysqli->real_escape_string($supplier_id));

    Thank you.

    Read the article

  • How can an SQL query return data from multiple tables

    - by Fluffeh
    I would like to know how to get data from multiple tables in my database, what types of methods there are to do this, what joins and unions are, and how they differ from one another. When should I use each one compared to the others? I am planning to use this in my (for example, PHP) application, but don't want to run multiple queries against the database; what options do I have to get data from multiple tables in a single query? Note: I am writing this as I would like to be able to link to a well-written guide on the numerous questions that I constantly come across in the PHP queue, so I can link to this for further detail when I post an answer. The answers cover the following: Part 1 - Joins and Unions; Part 2 - Subqueries; Part 3 - Tricks and Efficient Code.
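
    As a tiny illustration of the two ideas (the table and column names are made up for the example): a JOIN lines up columns from related rows side by side, while a UNION stacks compatible result sets on top of each other:

        -- JOIN: one row per order, with the matching customer's name alongside
        SELECT c.name, o.order_date, o.total
        FROM customers c
        INNER JOIN orders o ON o.customer_id = c.id;

        -- UNION: a single combined list of names from two separate tables
        SELECT name FROM customers
        UNION
        SELECT name FROM suppliers;

    Both pull from multiple tables in a single query; which one fits depends on whether the tables are related row-to-row (join) or simply hold the same kind of data (union).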

    Read the article

  • Why Does TFS Allow Orphaned Content and How Do I Get Rid of It?

    - by Chad
    My TfsVersionControl database has grown to 40+ GB in size. We recently did a TFS destroy on a folder tree that should have cleared up at least 10 GB, but instead it seemed to have no effect. When I look at the tables in TfsVersionControl, I am first shocked to see that there are no foreign keys at all in the database. Running a few queries, I see that there is some orphaning going on: tbl_Content has 13.9 GB of records that don't have a related tbl_File record, and tbl_File and tbl_Content have 2.4 GB that don't have a related tbl_Namespace record. The cleanup job (prc_DeleteUnusedContent) seems to be running nightly, and running it against the database manually doesn't remove any orphans. I see in the log for the cleanup job that it failed on 3/16, the morning after I destroyed the large amount of data. The error was due to a full transaction log. Could that error be the reason I'm left with all this orphaned data that can't be deleted? How can I permanently destroy this unneeded content?
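
    A rough sketch of the kind of orphan check described above (the join column is a guess written as FileId; the real TfsVersionControl schema is undocumented and may differ):

        -- Content rows with no matching file record (column names hypothetical)
        SELECT COUNT(*) AS OrphanedContentRows
        FROM tbl_Content c
        WHERE NOT EXISTS (SELECT 1 FROM tbl_File f WHERE f.FileId = c.FileId);

    This only measures the orphaning; deleting rows from TFS's tables by hand, outside the supplied cleanup procedures, is generally discouraged.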

    Read the article

  • Suggest Sphinx index scheme

    - by htf
    Hi. In a MySQL database I have documents of different types: some have text content, meta keys and descriptions; others have code, a SKU number, size and brand name; and so on. The problem is, I have to search across all of these documents and then display a single page where the results are grouped by document type, such as help page, blog post, item... It's not clear to me how to implement the Sphinx index: I want a single index to speed up queries, but since different docs have different structures, how can I group them? I was thinking about just concatenating them, but it just doesn't feel right.
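
    One sketch of a single-index approach (table and column names are illustrative) is to feed Sphinx a source query that UNIONs the document types, padding the columns the other types lack and carrying a type discriminator to group results by:

        SELECT id * 10 + 1 AS id, 1 AS doc_type, title, body AS content, '' AS sku
        FROM blog_posts
        UNION ALL
        SELECT id * 10 + 2 AS id, 2 AS doc_type, name AS title, description AS content, sku
        FROM items;

    The id * 10 + n trick keeps document IDs unique across tables, and doc_type can be declared as an integer attribute in the Sphinx source so the application can group the hits by type when rendering the results page.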

    Read the article

  • Wordpress SQL_CALC fix causes PHP error

    - by ok1ha
    I'm looking for some follow-up on an older topic for WordPress where SQL_CALC_FOUND_ROWS was found to slow things down when you have a large DB. I have been using the code at the bottom of this post to get around it, but it does generate an error in my error log. How would I prevent this error?

        PHP Warning: Division by zero in /var/www/vhosts/domain.com/httpdocs/wp-content/themes/greatTheme/functions.php on line 19

    The original thread: http://wordpress.org/support/topic/slow-queries-sql_calc_found_rows-bringing-down-site?replies=25

    The code in my functions.php:

        add_filter('pre_get_posts', 'optimized_get_posts', 100);

        function optimized_get_posts() {
            global $wp_query, $wpdb;
            $wp_query->query_vars['no_found_rows'] = 1;
            $wp_query->found_posts = $wpdb->get_var(
                "SELECT COUNT(*) FROM wp_posts WHERE 1=1 AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')"
            );
            $wp_query->found_posts = apply_filters_ref_array('found_posts', array($wp_query->found_posts, &$wp_query));
            $wp_query->max_num_pages = ceil($wp_query->found_posts / $wp_query->query_vars['posts_per_page']);
            return $wp_query;
        }

    Read the article

  • Slow server response time - CakePHP

    - by Hasan
    I am using CakePHP 1.2 to build a website. The problem I am facing is that whenever a page loads, it spends a lot of time in "waiting for www.example.com"; the server response time is very slow. First I thought it was my database queries, but they execute in well under a second. Next I contacted the hosting company, and they told me it was not the server. Now I am stuck very badly with "waiting for www.example.com". Is the problem in my code, or is CakePHP misconfigured? I need help badly and fast. Thanks

    Read the article

  • Offline database access

    - by dtech
    I have a small application which basically consists of a user-friendly CRUD interface to a few tables (and joined tables). It currently works with a MySQL database, but I would like to make it available offline. My first thought was to create a SQLite "buffer" in between the MySQL database and the application, e.g. by executing all queries on the SQLite database but also storing them in a log table so that they can be executed later in the main database, with very basic conflict resolution (I will basically let the user solve it if a conflict is detected). Due to the simplicity of the application this shouldn't be too difficult, and it would be good exercise, but I think I would be re-inventing the wheel. So my question is: are there existing solutions or other approaches for this problem?
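
    A minimal sketch of the log table such a buffer could keep in SQLite (the schema is illustrative, not an existing tool):

        CREATE TABLE IF NOT EXISTS pending_queries (
            id          INTEGER PRIMARY KEY AUTOINCREMENT,
            executed_at TEXT    NOT NULL DEFAULT (datetime('now')),  -- when it ran locally
            query_text  TEXT    NOT NULL,                            -- statement to replay against MySQL
            synced      INTEGER NOT NULL DEFAULT 0                   -- set to 1 once replayed upstream
        );

    On reconnect, the unsynced rows are replayed against MySQL in insertion order, and anything that fails is surfaced to the user for manual conflict resolution, as described above.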

    Read the article
