Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.

  • Solr match all aka *:* does not work

    - by Karussell
    I don't know what I did wrong. I have two indices with identical documents in them. The local index was replicated from a master that responds correctly, so both use the same solrconfig.xml and schema.xml. But if I query the index on my local machine with *:*, I get 0 docs (other queries on my local machine work correctly). I tried both Jetty and Tomcat for the local index, with no success. The *:* behaviour is crucial for me, because some test cases are failing now. Do you have an idea what could be wrong?

  • usage of intval & real_escape_string when sanitizing integers

    - by paulus
    Dear all. I'm using integer PKs in some tables of a MySQL database. Before input from a PHP script, I do some sanitizing, which includes intval($id) and $mysqli->real_escape_string(). The queries are quite simple:

        insert into `tblproducts`(`supplier_id`,`description`)
        values('$supplier_id','$description')

    In this example, $description goes through real_escape_string(), while $supplier_id is only intval()'ed. I'm just curious whether there are any situations where I need to apply both intval() and real_escape_string() to an integer I'm inserting into the DB. So basically, do I really need to use

        $supplier_id = intval($mysqli->real_escape_string($supplier_id));

    Thank you.
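
    For what it's worth, a parameterized statement sidesteps the question entirely, since the value never enters the SQL string. A minimal sketch using MySQL's own PREPARE/EXECUTE syntax (the values 42 and 'sample description' are made up for illustration):

        -- the placeholders are bound server-side, so no escaping of the
        -- values is needed in application code
        PREPARE ins FROM
          'INSERT INTO tblproducts (supplier_id, description) VALUES (?, ?)';
        SET @sid = 42, @descr = 'sample description';
        EXECUTE ins USING @sid, @descr;
        DEALLOCATE PREPARE ins;

    The same idea is available from PHP through mysqli prepared statements.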

  • GQL Query with __key__ in List of KEYs

    - by bossylobster
    The GQL reference [1] encourages using the IN keyword with a list of values, and a Key constructed by hand works: the GQL query SELECT * FROM MyModel WHERE __key__ = KEY('MyModel', 'my_model_key') succeeds. However, the query you would expect to work:

        SELECT * FROM MyModel WHERE __key__ IN (KEY('MyModel', 'my_model_key1'), KEY('MyModel', 'my_model_key2'))

    draws a complaint of "Invalid GQL query string." in the Datastore Viewer. What is the correct way to format such a query? [1] http://code.google.com/appengine/docs/python/datastore/gqlreference.html PS: I know there are more efficient ways to do this in Python (without constructing a GQL query) and via the remote_api, but each call to the remote_api counts against quota. In an environment where quota is not (necessarily) free, quick-and-dirty queries are very helpful.

  • Create a complex SQL query?

    - by mazzzzz
    Hey guys, I have a program that allows me to run queries against a large database. Two tables are important right now: deposits and withdrawals. Each contains a history of every user. I need to take each table, add up every deposit and withdrawal (per user), then subtract the withdrawals from the deposits. I then need to return every user whose result is negative (i.e. they withdrew more than they deposited). Is this possible in one query? Example:

        Deposit table:

        |ID|UserName|Amount|
        |1 | Use1   |100.00|
        |2 | Use1   | 50.00|
        |3 | Use2   | 25.00|
        |4 | Use1   |  5.00|

        WithDraw table:

        |ID|UserName|Amount|
        |2 | Use2   |  5.00|
        |1 | Use1   |100.00|
        |4 | Use1   |  5.00|
        |3 | Use2   | 25.00|

    So then the result would output:

        |OverWithdrawers|
        |     Use2      |

    Is this possible (I sure don't know how to do it)? Thanks for any help, Max
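
    A sketch of one way this could be written as a single query, using the table and column names from the question; note it assumes every user in WithDraw also appears in Deposit:

        -- total each side per user, then keep users whose withdrawals
        -- exceed their deposits
        SELECT d.UserName AS OverWithdrawers
        FROM (SELECT UserName, SUM(Amount) AS total
              FROM Deposit
              GROUP BY UserName) d
        JOIN (SELECT UserName, SUM(Amount) AS total
              FROM WithDraw
              GROUP BY UserName) w
          ON w.UserName = d.UserName
        WHERE w.total > d.total;

    A user who only ever withdrew would need a LEFT JOIN from WithDraw instead of the inner join shown here.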

  • Lucene search on specific field name?

    - by Rachel
    I have been playing around with an installation of SOLR that indexes some data from my database. I am able to index data and query it back, but I was wondering how field-name queries work. For certain fields I can specify their name and the search text and the results return as expected; for other fields, when I specify their name and search text, no results are returned.

        q=type:book                        (this will work)
        q=type:book AND title:"The Title"  (no results returned)

    In this example, type is a required field and title is not. For the example where I search by title, I can see a document with the given title in the results of the first query, so I know that a matching document exists. Is making a field 'required' the only way to be able to search it by field name? [edit] I'm using the default installation and the 'example' folder inside of solr, editing the xml files and using the interface available through start.jar to run, index and query.

  • Django - User account with multiple identities

    - by Scott Willman
    Synopsis: Each User account has a UserProfile to hold extended info like phone numbers, addresses, etc. A User account can then have multiple Identities. There are multiple types of identities that hold different types of information. The structure would be like so:

        User
        |<-FK- UserProfile
        |
        |<-FK- IdentityType1
        |<-FK- IdentityType1
        |<-FK- IdentityType2
        |<-FK- IdentityType3 (current)
        |<-FK- IdentityType3
        |<-FK- IdentityType3

    The User account can be connected to any number of Identities of different types but can only use one Identity at a time. Seemingly, the Django way would be to collect all of the connected identities (user.IdentityType1_set.select_related()) into a QuerySet and then check each one for some kind of 'current' field. Question: Can anyone think of a better way to select the 'current'-marked Identity than doing three DB queries (one for each IdentityType)?

  • Suggest Sphinx index scheme

    - by htf
    Hi. In a MySQL database I have documents of different types: some have text content, meta keys, and descriptions; others have code, an SKU number, a size and a brand name, and so on. The problem is, I have to search across all of these documents and then display a single page where the results are grouped by document type, such as help page, blog post, or item. It's not clear to me how to implement the Sphinx index: I want a single index to speed up queries, but since different docs have different structures, how can I group them? I was thinking about just concatenating them, but it doesn't feel right.

  • MS Access caching of reports / query results

    - by FrustratedWithFormsDesigner
    Is it possible to cache a query or report the first time it is run? It seems that opening a report re-queries the data source. For certain queries, the data source does not change frequently enough for me to worry about a cache being out of date (users are notified when the database changes), and it would be much easier for the users to open the report instantly rather than waiting several minutes every time they want to see the data (though I realize that if they close the file the caches will be lost, and that's OK). Data comes from an ODBC connection to Oracle, using Access 2003.
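
    One approach sometimes used in Access is a make-table query that snapshots the ODBC data into a local table on first open, after which the report is bound to the local copy. A sketch, where remote_oracle_view stands in for the actual linked Oracle source and local_report_cache is a name made up for illustration:

        -- Access make-table query: copies the remote rows into a new
        -- local table, which the report can then read instantly
        SELECT *
        INTO local_report_cache
        FROM remote_oracle_view;

    The local table would need to be dropped and rebuilt whenever users are told the source data has changed.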

  • Neo4j+Gremlin : Trouble with T.gte and floating point node attributes

    - by letronje
    For one type of node in my graph, the values of an attribute (named 'some_count') are either missing, an integer, or a float. I'm trying to write Gremlin to filter these nodes based on a minimum value for this attribute. I first verified that the values are indeed present by firing the following Gremlin:

        g.v(XXX)._().in('category').hasNot('some_count', T.eq, null).back(1).some_count

    Next, I tried filtering by exact value, and that works: it shows me the matching nodes, or gives an empty array if there is no match.

        g.v(XXX)._().in('category').hasNot('some_count', T.eq, null).back(1).has('some_count', T.eq, 120000.0d)

    But the following query, which uses the 'greater than or equal to' comparator, doesn't work:

        g.v(XXX)._().in('category').hasNot('some_count', T.eq, null).back(1).has('some_count', T.gte, 1.0d)

    This returns nil (I'm querying via Ruby/Rails using the Neo4j AR adapter). Instead of returning an empty array for no match, it returns nil, which tells me something could be wrong with the query itself. I'm running Neo4j community server 1.8. Is there a way I can ask Neo4j to log errors/queries, to see what could be going wrong?

  • Best .NET Solution for Frequently Changed Database

    - by sestocker
    I am currently architecting a small CRUD application. Their database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer:

        1) ORM (if so, which one?)
        2) Linq2Sql
        3) Stored procedures
        4) Parameterized queries

    I really need a solution that is dynamic enough (both fast and easy) that I can replace tables and add/delete columns frequently. Note: I do not have much experience with ORMs (only a little SubSonic) and generally tend to use stored procedures, so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would allow for the situation I've described above.

  • Slow server response time - CakePHP

    - by Hasan
    I am using CakePHP 1.2 to build a website. The problem I am facing is that whenever a page loads, it takes a lot of time in "waiting for www.example.com". The server response time is very slow. First I thought it was my database queries, but they were executing in under a second. Next I contacted the server people; they told me it was not the server. Now I am stuck very badly with "waiting for www.example.com". Is the problem in my code, or is CakePHP misconfigured? Need help badly and fast. Thanks

  • Is it OK to run an array with 22k strings in a PHP code on a shared web host?

    - by kuchikoo
    I'm new to writing code, so kindly bear with me if this is a very noobish question. A couple of days back I asked a question about a PHP script that matches the query entered by users on my website against an array stored within the PHP code and displays the matched queries. Here is the code I'm talking about. Now I've ended up with a rather large list (over 22k) of strings that have to be stored in the array. Is it OK to run it like this? I'm hosting the site on a shared HostGator package; will this cause my site to crash? I don't know too much about DBs, but can I somehow store this in MySQL instead of having it in the code?
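
    If the list does move into MySQL, a sketch of a possible schema (the table and column names are made up for illustration); 22k short strings is a small table, and an indexed equality lookup stays fast even on shared hosting:

        -- one row per searchable term; the unique index makes an
        -- exact-match lookup an index seek instead of an array scan
        CREATE TABLE search_terms (
            id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            term VARCHAR(255) NOT NULL,
            UNIQUE KEY idx_term (term)
        );

        SELECT term FROM search_terms WHERE term = 'user query here';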

  • SQL Profiler showing high activity

    - by Wong Chi
    I am running my application locally, i.e. no external traffic and a very low number of queries, fully under my control. I see tons of 'Audit Login' and 'Audit Logout' events. What are these, and where are they actually stored (i.e. where is this audit log)? Are these a hint of a problem with connections? I have only a simple connection string in my app and thought that connections would remain active throughout the operation of my app (i.e. a single login at launch, and then a single logout when terminating).

  • Why does LINQ to SQL sometimes add a field like 'select 1 as test' among otherwise valid fields?

    - by Fredou
    I have to concat two Linq2Sql queries, and I have an issue because the two queries don't return the same number of columns. What is weird is that after a .ToList() on the queries, they can be concatenated without problem. The reason is that, for some Linq2Sql reason, I have two extra columns named test and test2, which come from two left outer joins that Linq2Sql automatically creates, something like "select 1 as test, tablefields". Is there any good reason for that? How do I remove this extra "1 as test" field? Here are a few examples of what it looks like: google result for linq 2 sql "select 1 as test"

  • Oracle SQL: Multiple Subqueries Unioned Without Running the Original Query Multiple Times

    - by Bob
    So I've got a very large database and need to work on a subset, about 1% of the data, to dump into an Excel spreadsheet to make a graph. Ideally, I could select out the subset of data and then run multiple SELECT queries on that, which are then UNIONed together. Is this even possible? I can't seem to find anyone else trying to do this, and it would improve the performance of my current query quite a bit. Right now I have something like this:

        SELECT (
            SELECT (
                SELECT ( long list of requirements )
                UNION
                SELECT ( slightly different long list of requirements )
            )
        )

    and it would be nice if I could factor out the commonalities of the two long requirement lists and keep only the simple differences between the two SELECT statements being unioned.
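
    A sketch of one way to factor out the shared subset with Oracle's WITH clause (subquery factoring, available since 9i); big_table and all column names here are placeholders for the real ones:

        -- the shared subset is defined once and reused by both branches
        WITH subset AS (
            SELECT *
            FROM big_table
            WHERE region = 'WEST'   -- stands in for the long list of
                                    -- requirements common to both queries
        )
        SELECT item_id, amount FROM subset WHERE status = 'OPEN'
        UNION
        SELECT item_id, amount FROM subset WHERE status = 'CLOSED';

    Oracle decides whether to materialize the factored subquery or inline it, but either way the shared predicate list is written only once.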

  • Running script constantly in background: daemon, lock file with crontab, or simply loop?

    - by Mauritz Hansen
    I have a Perl script that:

        - queries a database for a list of files to process
        - processes the files
        - and then exits

    Upon startup this script creates a file (let's say script.lock), and upon exit it removes this file. I have a crontab entry that runs this script every minute. If the lockfile exists then the script exits, assuming that another instance of itself is running. The above process works fine, but I am not very happy with the robustness of this approach. Specifically, if for some reason the script exits prematurely and the lockfile is not removed, then a new instance will not execute properly. I would appreciate some advice on the following:

        - Is using the lock file a good approach, or is there a better/more robust way to do this?
        - Is using crontab for this a good idea, or could I better write an endless loop with sleep()?
        - Should I use the GNU 'daemon' program or the Perl Proc::Daemon module (or some other equivalent) for this?

  • Using a custom URL parameter in Wordpress (with permalinks)?

    - by kiko
    Hello. Is there a way to perhaps edit .htaccess to add a custom URL parameter to Wordpress, so that the parameter is not stripped out by permalinks? I have a Wordpress site with a page that queries a separate (non-Wordpress) database. I passed the URL parameter "pubID" to display individual books, and it is working OK. Example: http://www.uglyducklingpresse.org/catalog/browse/item/?pubID=63. But the individual books are not showing up properly in Google, maybe because they all have the same auto-generated "canonical" URL meta tag, one with the "pubID" parameter stripped out. Thank you for any help.

  • How to change language/region in a YQL search.spelling/search.suggestion query?

    - by Francisco Noriega
    Hello, I'm trying to use YQL's spelling and search suggestions, but as much as I try I can't find a way to change the language/region for the query. How is this done? I want to look for spelling/suggestions in Spanish/Mexico ("es-MX"). I'm pretty happy with the results I get for queries in English, but when looking in Spanish I get no results: select * from search.suggest where query="dolor de cabeza"

        <?xml version="1.0" encoding="UTF-8"?>
        <query xmlns:yahoo="http://www.yahooapis.com/v1/base.rng"
               yahoo:count="0" yahoo:created="2010-11-22T17:41:13Z"
               yahoo:lang="en-US">
          <results/>
        </query>

    I've looked around for a way to change yahoo:lang="en-US" to yahoo:lang="es-MX", but I can't find any documentation about it. Thanks!

  • Adding more OR searches with CONTAINS brings the query to a crawl

    - by scolja
    I have a simple query that relies on two full-text indexed tables, but it runs extremely slowly when the CONTAINS is combined with any additional OR search. As seen in the execution plan, the two full-text searches crush the performance. If I query with just one of the CONTAINS clauses, or neither, the query is sub-second, but the moment you add OR into the mix the query becomes ill-fated. The two tables are nothing special; they're not overly wide (42 cols in one, 21 in the other; maybe 10 cols are FT-indexed in each) and don't contain very many records (36k in the bigger of the two). I was able to solve the performance problem by splitting the two CONTAINS searches into their own SELECT queries and then UNIONing the three together. Is this UNION workaround my only hope? Thanks.

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
           OR CONTAINS(a.*, '"*fa*"')
           OR CONTAINS(b.*, '"*fa*"')

    Execution plan omitted (I guess I need more reputation before I can post the image).
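
    For reference, a sketch of the UNION rewrite the poster describes; it assumes every determinations.CollectionID also exists in collections (e.g. enforced by a foreign key), so the join can be dropped from the last branch:

        -- each branch gets its own plan, so the full-text predicates are
        -- not forced into a single combined scan
        SELECT a.CollectionID
        FROM collections a
        WHERE a.CollrTeam_Text LIKE '%fa%'
        UNION
        SELECT a.CollectionID
        FROM collections a
        WHERE CONTAINS(a.*, '"*fa*"')
        UNION
        SELECT b.CollectionID
        FROM determinations b
        WHERE CONTAINS(b.*, '"*fa*"');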

  • Oracle: Finding Columns with only null values

    - by Jorge Valois
    I have a table with a lot of columns and a type column. Some columns seem to be always empty for a specific type. I want to create a view for each type and only show the columns relevant to that type. Working under the assumption that if a column has ONLY null values for a specific type, then that column should not be part of the view, how can you find that out with queries? Is there something like

        SELECT [columnName] FROM [table] WHERE [columnValues] ARE ALL [null]

    I know I COMPLETELY made that up; I'm just trying to get the idea across. Thanks in advance!
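
    A sketch of the usual trick for this check, relying on the fact that COUNT(column) counts only non-null values, so a zero means the column is entirely NULL for the chosen type (my_wide_table, type_col, col_a and col_b are names made up for illustration):

        -- one pass over the rows of a single type; any column whose
        -- count comes back 0 can be dropped from that type's view
        SELECT COUNT(col_a) AS col_a_non_nulls,
               COUNT(col_b) AS col_b_non_nulls
        FROM my_wide_table
        WHERE type_col = 'TYPE_1';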

  • PHP array into MySQL

    - by mckenzie
    Hello,

        $sql_where = '';
        $exclude = '30,35,36,122,123,124,125';
        if ($exclude != '') {
            $exclude_forums = explode(',', $exclude);
            foreach ($exclude_forums as $id) {
                if ($id > 0) {
                    $sql_where = ' AND forum_id <> ' . trim($id);
                }
            }
        }

        $sql = 'SELECT topic_title, forum_id, topic_id, topic_type,
                       topic_last_poster_id, topic_last_poster_name,
                       topic_last_poster_colour, topic_last_post_time
                FROM ' . TOPICS_TABLE . '
                WHERE topic_status <> 2
                  AND topic_approved = 1 ' . $sql_where . '
                ORDER BY topic_time DESC';

    I use the above code to exclude those forum IDs from the SQL query. Why doesn't it work, and why are topics from those forums still displayed? Any solution?
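
    Worth noting: the loop assigns with = rather than .=, so each pass overwrites $sql_where and only the last ID (125) survives into the final query. The exclusion the loop appears to intend can also be written directly in SQL with NOT IN; a sketch, assuming TOPICS_TABLE resolves to phpBB's usual phpbb_topics:

        -- excludes every listed forum instead of only the last one
        SELECT topic_title, forum_id, topic_id, topic_type,
               topic_last_poster_id, topic_last_poster_name,
               topic_last_poster_colour, topic_last_post_time
        FROM phpbb_topics
        WHERE topic_status <> 2
          AND topic_approved = 1
          AND forum_id NOT IN (30, 35, 36, 122, 123, 124, 125)
        ORDER BY topic_time DESC;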

  • Is it possible to "group by" without losing the original rows?

    - by toPeerOrNotToPeer
    I have a query like this:

        ID | name                  | commentsCount
        1  | mysql for dummies     | 33
        2  | mysql beginners guide | 22

        SELECT ..., commentsCount  -- will return 33 for the first row, 22 for the second
        FROM mycontents
        WHERE name LIKE "%mysql%"

    I also want to know the total of comments across all rows:

        SELECT ..., SUM(commentsCount) AS commentsCountAggregate  -- should return 55
        FROM mycontents
        WHERE name LIKE "%mysql%"

    but this one obviously returns a single row with the total. Now I want to merge these two queries into one single query, because my actual query is very heavy to execute (it uses boolean full-text search, substring offset search, and sadly a lot more), and I don't want to execute it twice. Is there a way to get the total of comments without making the SELECT twice? Custom functions are welcome! Variable usage is also welcome; I've never used them.
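
    On MySQL 8.0+ a window function can return the total alongside every row in a single pass; a sketch (older MySQL versions would need a user variable or a derived table instead):

        -- SUM(...) OVER () computes the grand total of the filtered rows
        -- without collapsing them into one row
        SELECT ID, name, commentsCount,
               SUM(commentsCount) OVER () AS commentsCountAggregate
        FROM mycontents
        WHERE name LIKE '%mysql%';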

  • What good ORM API will work well with Scala or Erlang?

    - by Emotu Balogun
    I'm considering taking up Scala programming, but I'm really concerned about what will become of my ORM-based applications. I currently use Hibernate as my ORM, and I find it a really reliable tool. I'd like to know if there's any ORM tool as efficient but written in Scala, or whether Hibernate will work seamlessly with it. I don't want to have to start writing endless SQL queries again (like in the days of JDBC). I have the same question about Erlang: is there a good ORM out there for Erlang, and can I use Erlang with other DBMSes like Oracle and MySQL through an ORM?

  • SQL COUNT records in table 2 JOINS away

    - by Fred K
    Using MySQL, I have three tables:

        projects:

        ID  name
        1   "birthday party"
        2   "soccer match"
        3   "wine tasting evening"
        4   "dig out garden"
        5   "mountainbiking"
        6   "making music"

        batches:

        ID  projectID  templateID  when
        1   1          1           7 days before
        2   1          1           1 day before
        3   4          2           21 days before
        4   4          1           7 days before
        5   5          1           7 days before
        6   3          5           7 days before
        7   3          3           14 days before
        8   5          1           14 days before

        templates:

        ID  message
        1   "Hi, I'd like to invite ..."
        2   "Dear Sir, Madam, ..."
        3   "Can you please ..."
        4   "Would you like to ..."
        5   "To all dear friends ..."
        6   "Does any of you guys ..."

    I would like to display a table of templates and the number of projects they're used in. So, the result should be:

        templateID  projectCount
        1           3
        2           1
        3           1
        4           0
        5           1
        6           0

    I've tried all kinds of SQL queries using various JOINs, but I guess this is too complicated for me. Is it possible to get this result using a single SQL statement?
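
    A sketch of one way to get that result with a single statement, using the table and column names from the question; the LEFT JOIN keeps templates that appear in no batch (so they get a count of 0), and COUNT(DISTINCT ...) collapses repeated uses within the same project:

        SELECT t.ID AS templateID,
               COUNT(DISTINCT b.projectID) AS projectCount
        FROM templates t
        LEFT JOIN batches b ON b.templateID = t.ID
        GROUP BY t.ID
        ORDER BY t.ID;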

  • DB2: increased bufferpool size and compressed tables do not equal better performance. Why?

    - by Mestika
    Hi, I'm working on tuning and increasing the performance of my IBM DB2 version 9.7 database. I've been searching around the net for the last couple of days and learned that if I created my tables in COMPRESS mode and created one more bufferpool, setting both to 1024 MB, then the performance of my queries should increase because of the reduced I/O to the disks. However, when I run my timing analysis, the performance decreases. I added the new additions to my regular database with the indexes I've used all the time. Every time I search Google I come up with the statement that an increased bufferpool size, several bufferpools, AND table compression SHOULD give better performance. I'm very puzzled by this totally unexpected result. Are there some tuning mechanisms I've forgotten, or does anyone have an explanation for this odd behavior? Sincerely, Mestika
