Search Results

Search found 16731 results on 670 pages for 'memory limit'.

Page 423 of 670

  • Nested mysql select statements

    - by Jimmy Kamau
    I have a query as below:

        $sult = mysql_query("select * from stories where `categ` = 'businessnews' and `stryid`='"
                . mysql_query("SELECT * FROM comments WHERE `comto`='"
                . mysql_query("select * from stories where `categ` ='businessnews'")
                . " ORDER BY COUNT(comto) DESC")
                . "' LIMIT 3") or die(mysql_error());
        while ($ow = mysql_fetch_array($sult)) {

    The code above should return the top 3 'stories' with the most comments, COUNT(comto). The comments are stored in a different table from the stories. The code above does not return any values and does not show any errors. Could someone please help?
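    This kind of nesting can never work: mysql_query() returns a result resource (or false), not a scalar value, so concatenating it into another query string does not produce the intended SQL. The usual replacement is a single query that joins the two tables and groups per story. A rough sketch, assuming comments.comto holds the story id that matches stories.stryid:

        SELECT s.*, COUNT(c.comto) AS comment_count
        FROM stories s
        JOIN comments c ON c.comto = s.stryid
        WHERE s.categ = 'businessnews'
        GROUP BY s.stryid
        ORDER BY comment_count DESC
        LIMIT 3;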

  • PHP PDO close()?

    - by PHPLOVER
    Can someone tell me: when you, for example, run an UPDATE, INSERT or DELETE, should you then close the statement with $stmt->close()? I checked the PHP manual and don't understand what close() actually does. Example:

        $stmt = $dbh->prepare("SELECT `user_email` FROM `users` WHERE `user_email` = ? LIMIT 1");
        $stmt->execute(array($email));
        $stmt->close();

    The next part of my question: if, as an example, I had multiple UPDATE queries in a transaction, should I close each statement individually after every execute()? Because it's a transaction, I'm not sure whether I need $stmt->close() after each execute(), or just one $stmt->close() after all of them. Thanks once again, phplover

  • pysvn client.log() returning empty dictionary

    - by nashr rafeeg
    I have the following script that I am using to get the log messages from SVN:

        import pysvn

        class svncheck():
            def __init__(self, svn_root="http://10.11.25.3/svn/Moodle/modules", svn_user=None, svn_password=None):
                self.user = svn_user
                self.password = svn_password
                self.root = svn_root

            def diffrence(self):
                client = pysvn.Client()
                client.commit_info_style = 1
                client.callback_notify = self.notify
                client.callback_get_login = self.credentials
                log = client.log(
                    self.root,
                    revision_start=pysvn.Revision(pysvn.opt_revision_kind.number, 0),
                    revision_end=pysvn.Revision(pysvn.opt_revision_kind.number, 5829),
                    discover_changed_paths=True,
                    strict_node_history=True,
                    limit=0,
                    include_merged_revisions=False,
                )
                print log

            def notify(event_dict):
                print event_dict
                return

            def credentials(realm, username, may_save):
                return True, self.user, self.password, True

        s = svncheck()
        s.diffrence()

    When I run this script it returns a list of what look like empty dictionary objects: [<PysvnLog ''>, <PysvnLog ''>, <PysvnLog ''>, ... Any idea what I am doing wrong here? I am using pysvn version 1.7.2 built against svn version 1.6.5. Cheers, Nash

  • Google Datastore low-level api query by key property

    - by Keyur
    I'm using the low-level Google Datastore API and I want to query by the key property and another property (let's call it category). I need to query based on a list of keys, for which I'll use the IN operator. I know that the maximum number of values you can provide for the IN clause is 30. I have two questions: Does the limit of 30 IN values apply to the key property as well? Do I need to create a composite index on {__key__ + category}, or just on {category}, for this query? Thanks, Keyur

  • MySQL Rank Not Matching High Score in Table

    - by boddie
    While making a game, the MySQL call to get the top 10 is as follows:

        SELECT username, score FROM userinfo ORDER BY score DESC LIMIT 10

    This seems to work decently enough, but when paired with a call to get an individual player's rank, the numbers may be different if the player has a tied score with other players. The call to get the player's rank is as follows:

        SELECT (SELECT COUNT(*) FROM userinfo ui
                WHERE (ui.score, ui.username) >= (uo.score, uo.username)) AS rank
        FROM userinfo uo
        WHERE username='boddie';

    Example results from the first call:

        +----------+-------+
        | username | score |
        +----------+-------+
        | admin    |  4878 |
        | test3    |  3456 |
        | char     |   785 |
        | test2    |   456 |
        | test1    |   253 |
        | test4    |    78 |
        | test7    |     0 |
        | boddie   |     0 |
        | Lz       |     0 |
        | char1    |     0 |
        +----------+-------+

    Example results from the second call:

        +------+
        | rank |
        +------+
        |   10 |
        +------+

    As can be seen, the first call ranks the player at number 8 on the list, but the second call puts him at number 10. What can I change so that these calls give matching results? Thank you in advance for any help!
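    The mismatch comes from the two queries breaking ties differently: the rank subquery counts rows whose (score, username) pair is greater than or equal to the player's, while the top-10 query orders by score alone and leaves the order of tied rows up to MySQL. As a sketch, giving the leaderboard the same tiebreaker makes the two agree; since the rank comparison treats a higher username as ranking ahead within a tied score, the matching order is score descending, then username descending:

        SELECT username, score
        FROM userinfo
        ORDER BY score DESC, username DESC
        LIMIT 10;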

  • Will MyISAM type tables work better than InnoDB for large numbers of columns?

    - by Ethan
    I have a MySQL InnoDB table with 238 columns. 56 of them are TEXT type, 27 are VARCHAR(255). I am getting MySQL error 139 when users insert data sometimes. After research I found that I'm probably running into InnoDB row size/column size/column count limitations. (I'm putting it that way because the specific limits among those three things are interdependent.) Docs on InnoDB give an idea of the limits. If I switch this table to MyISAM is it likely to solve the problem? I understand the maximum row size of 65,535 bytes. I think I'm hitting InnoDB's additional 8000 byte limit somehow. Switching to PostgreSQL is also a remote option, but would take much longer.
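    Independent of the MyISAM question, a workaround often suggested for tables with this many TEXT columns is a vertical split: move the bulky TEXT columns into a one-to-one side table keyed on the main table's primary key, so each physical row stays small. A rough sketch with made-up table and column names:

        CREATE TABLE widgets (
            id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255) NOT NULL
            -- ... the other small, frequently read columns ...
        ) ENGINE=InnoDB;

        CREATE TABLE widget_texts (
            widget_id   INT UNSIGNED NOT NULL PRIMARY KEY,
            description TEXT,
            notes       TEXT,
            -- ... the remaining TEXT columns ...
            CONSTRAINT fk_widget_texts FOREIGN KEY (widget_id) REFERENCES widgets (id)
        ) ENGINE=InnoDB;

        -- read both halves together only when the long columns are actually needed
        SELECT w.*, t.description, t.notes
        FROM widgets w
        LEFT JOIN widget_texts t ON t.widget_id = w.id
        WHERE w.id = 42;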

  • calculate distance with linq or subsonic

    - by minus4
    I have this MySQL statement from a search page. The user enters their postcode and it finds the nearest stockists within 15 miles of the entered postcode:

        SELECT *,
               ( ( ACOS( SIN( " + SENTLNG + " * PI() / 180 ) * SIN( s_lat * PI() / 180 )
                       + COS( " + SENTLNG + " * PI() / 180 ) * COS( s_lat * PI() / 180 )
                       * COS( ( " + SENTLANG + " - s_lng ) * PI() / 180 ) ) * 180 / PI() ) * 60 * 1.1515 ) AS distance_miles
        FROM new_stockists
        WHERE s_lat IS NOT NULL
        HAVING distance_miles < 15
        ORDER BY distance_miles ASC
        LIMIT 0, 15

    But now I am using LINQ and SubSonic and haven't got a clue how to do this in LINQ or SubSonic. Your help would be much appreciated. Please also note that I have to send in a dynamic "from" address: that's the postcode mentioned at the top of the page; I do a call to Google to get the lng and lat for the postcode given.

  • git: having 2 push/pull repos in sync (or 1 push/pull and 1 pull in sync)

    - by xavjuan
    Hello, We work on multiple geographically separate sites. Today all of our git clones live on one site, A. Users from site B then have to ssh over to do a git clone or to push in changes. These are bare repos which are updated through pushes. Ideally, for git clone/push performance, I'd like to limit having to go over ssh. I'd like to have a copy of git repo X live on site A and site B, with some syncing mechanism between them, OR to have X live on both sites but only allow pushing to A (and have that set up correctly at clone time on B). I'm worried about the case where someone on site A pushes changes to the repo at site A at the same time that someone on site B pushes a truly conflicting change to the repo at site B. Is there some syncing solution built into git for distributed open repos like this? Or a way to have a clone from X set the origin/parent to the X from the other site? thanks, -John

  • SelfReferenceProperty vs. ListProperty Google App Engine

    - by John
    Hi All, I am experimenting with the Google App Engine and have a question. For the sake of simplicity, let's say my app is modeling a computer network (a fairly large corporate network with 10,000 nodes). I am trying to model my Node class as follows:

        class Node(db.Model):
            name = db.StringProperty()
            neighbors = db.SelfReferenceProperty()

    Let's suppose, for a minute, that I cannot use a ListProperty(). Based on my experiments to date, I can assign only a single entity to 'neighbors', and I cannot use the "virtual" collection (node_set) to access the list of Node neighbors. So my questions are: Does SelfReferenceProperty limit you to a single entity that you can reference? If I instead use a ListProperty, I believe I am limited to 5,000 keys, which I need to exceed. Thoughts? Thanks, John

  • Mysql query different group by

    - by solomongaby
    Hello, I have a products table that contains normal products and configurable products. It has a basic structure of:

        id
        name
        price
        configurable ('yes', 'no')
        id_configuration

    Normal products have configurable set to 'no' and 0 as id_configuration; configurable products have it set to 'yes' and share the same id_configuration value. The current query is:

        SELECT `products`.*
        FROM `products`, `categories`, `product_categories`
        WHERE `categories`.`id` = 23
          AND `products`.`id` = `product_categories`.`id_product`
          AND `categories`.`id` = `product_categories`.`id_category`
          AND `products`.`active` = 'yes'
        ORDER BY `pos_new` ASC, `created` DESC
        LIMIT 0, 20

    I was wondering if there is a way to group by id_configuration, but only for the configurable products. The reason is that I want only one of the configuration products to show in search results. I was thinking I could do a join, but was wondering if there is a way to do some kind of special GROUP BY: for example, for configurable = 'yes' the grouping field should be id_configuration, otherwise it should be the id field. Thanks a lot for any suggestions.
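    One way to express that kind of conditional grouping, as a sketch: group on an expression that evaluates to id_configuration for configurable rows and to the product id otherwise. The 'p'/'c' prefixes just keep the two id spaces from colliding, and MySQL returns one arbitrary row per group for the non-grouped columns, which is exactly the "only one of them shows up" behaviour wanted here:

        SELECT `products`.*
        FROM `products`
        JOIN `product_categories` ON `product_categories`.`id_product` = `products`.`id`
        JOIN `categories` ON `categories`.`id` = `product_categories`.`id_category`
        WHERE `categories`.`id` = 23
          AND `products`.`active` = 'yes'
        GROUP BY IF(`products`.`configurable` = 'yes',
                    CONCAT('c', `products`.`id_configuration`),
                    CONCAT('p', `products`.`id`))
        ORDER BY `pos_new` ASC, `created` DESC
        LIMIT 0, 20;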

  • Fetch main model and translations in one query with globalize2

    - by J. Pablo Fernández
    Is there a way to fetch the model and the translations in one query when using globalize2? For example, with a model called Language which has two fields, code and name, of which the second is translatable, I do the following:

        en = Language.find_by_code("en")

    and it runs this query:

        SELECT SQL_NO_CACHE * FROM `languages` WHERE (`languages`.`code` = 'en') LIMIT 1

    Then when I do:

        en.name

    it runs:

        SELECT SQL_NO_CACHE * FROM `language_translations` WHERE (`language_translations`.language_id = 123 AND (`language_translations`.`locale` IN ('en','root')))

    and if I do it again it will re-run the query. Is there a way to fetch all the translated data in the first query? I've tried:

        en = Language.find_by_code("en", :joins => "JOIN language_translations ON language_translations.language_id = languages.id")

    but it made no difference. UPDATE: this is being discussed as an issue in globalize2: http://github.com/joshmh/globalize2/issues/#issue/33
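    At the SQL level, the single round trip being asked for looks roughly like the query below; this is only a sketch of the join using the table names from the queries above, and whether globalize2 can be made to generate it (for example via an :include) is the open question. Because locale IN ('en','root') can match two translation rows, the application still has to decide which locale wins:

        SELECT `languages`.*,
               `language_translations`.`locale`,
               `language_translations`.`name` AS translated_name
        FROM `languages`
        LEFT JOIN `language_translations`
               ON `language_translations`.`language_id` = `languages`.`id`
              AND `language_translations`.`locale` IN ('en', 'root')
        WHERE `languages`.`code` = 'en';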

  • most popular j2ee based websites

    - by krishna
    J2EE, as I understand it, is used to build enterprise applications. My question is: what are the most popular public-facing (internet) sites using the J2EE stack? The ones that I know of are linkedin.com, evite.com, and the Sun, IBM and Oracle sites (obviously). Eclipse.org uses PHP; I wonder why? If you have worked on or know any other sites, can you share your experience and also the technologies used (if that's not an issue)? EDIT: It doesn't have to use the full stack. EDIT: There are quite a few ecommerce websites, like bestbuy.com. I know this because I worked with the ATG (atg.com) ecommerce suite and their website lists their clients. I am looking for those kinds of examples, and also your experience working on them. Please limit this to internet sites only.

  • Double-tap or two single-taps?

    - by Jaka Jancar
    What is the time limit for two taps to be considered a double-tap on iPhone OS? Edit: Why is this important? In order to handle single-tap and double-tap differently, Apple's guide says to do performSelector...afterDelay with some 'reasonable' interval on the first tap (and cancel it later if the second tap is detected). The problem is that if the interval is too short (0.1), the single-tap action will be performed even when double-tapping (if relying only on tapCount, that is). If it's too long (0.8), the user will be waiting unnecessarily for the single-tap to be recognized when there is no possibility of a double-tap. It has to be exactly the right number in order to work optimally, but definitely not smaller, or there's a chance of bugs (simultaneous single-tap and double-tap).

  • Are bit operations quick?

    - by flashnik
    I'm dealing with a problem which needs to work with a lot of data. Currently its values are represented as unsigned int. I know that the real values do not exceed some limit, say 1000. That means that I can use unsigned short to store them. One benefit is that it will use less space. Do I have to pay for it by losing performance? Another scenario: I decide to store the data as short, but all calling functions use int, so I need to convert between these datatypes when storing/extracting values. Will the performance loss be dramatic? Third scenario: out of a great wish to economize on memory, I decide to use not short but just 10 bits packed into an array of unsigned int. What will happen in this case compared with the previous ones?
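    For a rough sense of the space side of the trade-off (assuming a 4-byte unsigned int and a 2-byte unsigned short, which is typical but platform-dependent):

        unsigned int              4 bytes per value
        unsigned short            2 bytes per value; conversions to and from int are ordinary widening/narrowing moves
        10 bits packed into int   3 values per 32-bit word, roughly 1.33 bytes per value, but every read and write
                                  now needs extra shift-and-mask work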

  • Storing n-grams in database in < n number of tables.

    - by kurige
    If I were writing a piece of software that attempted to predict what word a user is going to type next, using the two previous words the user had typed, I would create two tables. Like so:

        == 1-gram table ==
        Token | NextWord | Frequency
        ------+----------+-----------
        "I"   | "like"   | 15
        "I"   | "hate"   | 20

        == 2-gram table ==
        Token    | NextWord   | Frequency
        ---------+------------+-----------
        "I like" | "apples"   | 8
        "I like" | "tomatoes" | 12
        "I hate" | "tomatoes" | 20
        "I hate" | "apples"   | 2

    Following this example implementation, the user types "I" and the software, using the above database, predicts that the next word the user is going to type is "hate". If the user does type "hate", then the software will predict that the next word the user is going to type is "tomatoes". However, this implementation would require a table for each additional n-gram that I choose to take into account. If I decided that I wanted to take the 5 or 6 preceding words into account when predicting the next word, then I would need 5-6 tables, and an exponential increase in space per n-gram. What would be the best way to represent this in only one or two tables, with no upper limit on the number of n-grams I can support?
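    One common way to collapse this into a single table is to store the n-gram order as a column next to the token, so every size shares one schema and adding longer contexts never needs a new table. A sketch (table and column names are only illustrative):

        CREATE TABLE ngram (
            n         INT          NOT NULL,  -- 1 for "I", 2 for "I like", and so on
            token     VARCHAR(255) NOT NULL,  -- the n preceding words, space-separated
            next_word VARCHAR(64)  NOT NULL,
            frequency INT          NOT NULL DEFAULT 0,
            PRIMARY KEY (n, token, next_word)
        );

        -- predict the word after "I like"; the application can retry with n = 1 if no 2-gram matches
        SELECT next_word
        FROM ngram
        WHERE n = 2 AND token = 'I like'
        ORDER BY frequency DESC
        LIMIT 1;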

  • What is the cost in bytes for the overhead of a sql_variant column in SQL Server?

    - by Elan
    I have a table which contains many columns of float data type with 15-digit precision. Each column consumes 8 bytes of storage. Most of the time the data does not require this amount of precision and could be stored as a real data type. In many cases the value can be 0, in which case I could get away with storing a single byte. My goal here is to optimize storage space requirements, which is an issue I am facing working with SQL Express's 4GB database size limit. If byte, real and float data types are stored in a sql_variant column, there is obviously some overhead involved in storing these values. What is the cost of this overhead? I would then need to evaluate whether I would actually end up with significant space savings (or not) by switching to sql_variant column data types. Thanks, Elan

  • How do I record streams in chunks on Flash Media Server.

    - by Vasil
    I want to record a stream which is published with Flash Live Encoder to FMS 3.5, but split the recording into files of predefined length. For example, if a stream 'webcam' is published I want to record it in chunks of 10 minutes: 'webcam1.flv', 'webcam2.flv', ... From what I can tell there's no facility to work with timers. The only solution I could think of was using stream.record() with a time limit parameter, but that seems like a hack because it triggers NetStream.Record.DiskQuotaExceeded on the stream when the recording should stop and start recording another chunk. Has anyone done something similar?

  • Perl Math::Business::EMA help

    - by Dustin
    The script pulls data from MySQL:

        $DBI::result = $db->prepare(qq{
            SELECT close FROM $table
            WHERE day <= '$DATE'
            ORDER BY day DESC
            LIMIT $EMA
        });
        $DBI::result->execute();
        while ($row = $DBI::result->fetchrow) {
            print "$row\n";
        };

    with the following example results:

        1.560
        1.560
        1.550...

    But I need to work out the EMA using Math::Business::EMA, and I'm not sure how to calculate this while maintaining accuracy. EMA is weighted, and my lack of Perl knowledge is not helping.
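    For reference, the EMA itself is just a small recurrence, so whichever module or hand-rolled loop computes it should agree, provided the prices are fed in oldest-first (the query above returns them newest-first because of ORDER BY day DESC, so they need reversing). A common definition for an N-day EMA is:

        k     = 2 / (N + 1)
        EMA_1 = price_1            (some implementations seed with the simple average of the first N prices instead)
        EMA_t = k * price_t + (1 - k) * EMA_(t-1)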

  • Javascript Input type

    - by Phoenix
    Hi All, In JavaScript we use input type="file" to open up a file-browser pop-up. Is there a way to limit access to folders? I want to select a folder and then FTP all the files in the folder, so I need access only up to the folder level and not the file level. I guess it would be tedious to go and manually select every file from the folder and then FTP it; is there a way to do that? Also, how can I set the file-browser pop-up window path to a default one?

  • What adapter to use for ExpandableListView with non-TextView views?

    - by David
    I have an ExpandableListView in which I'd like to have controls other than TextView. Apparently, SimpleExpandableListAdapter assumes all the controls are TextViews; a cast exception is generated if they are not. What is the recommended solution? Options I can think of include:
    - Use some other included adapter. But I can't tell if they all have the same assumption.
    - Create my own adapter. Is there a doc which describes the contract, i.e. the sequence of method calls an adapter will encounter?
    I expected the existing adapters to require the views to conform to some interface, allowing any conforming view to be used, rather than hardcoding TextView and limiting where they can be used.

  • Get a specific entry by group in SQL

    - by Jensen
    Hi, I have a database that contains some data in this form:

        icon(name, size, tag)

        (myicon.png,      16,  'twitter')
        (myicon.png,      32,  'twitter')
        (myicon.png,      128, 'twitter')
        (myicon.png,      256, 'twitter')
        (anothericon.png, 32,  'facebook')
        (anothericon.png, 128, 'facebook')
        (anothericon.png, 256, 'facebook')

    So as you can see, the name field is not unique: I can have multiple icons with the same name, and they are distinguished by the size field. Now in PHP I have a query that gets ONE icon set, for example:

        mysql_query("SELECT * FROM icon WHERE tag='".$tag."' ORDER BY size LIMIT 0, 10");

    With this example, if $tag contains 'twitter' it will show ONLY the first entry with the tag 'twitter', so it will be:

        (myicon.png, 16, 'twitter')

    This is what I want, but I would prefer the 128 size by default. Is it possible to tell SQL to send me only the 128 size when it exists, and otherwise another size? Thanks!
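    Since the query already filters on a single tag, one way to sketch the "prefer 128, otherwise anything" behaviour is to sort so that the 128 row comes first and take one row. In MySQL the comparison (size = 128) evaluates to 1 or 0, so ordering it descending floats the 128 entry to the top when it exists:

        SELECT *
        FROM icon
        WHERE tag = 'twitter'
        ORDER BY (size = 128) DESC, size ASC
        LIMIT 1;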

  • Advanced MySQL Search Help

    - by Brandon
    I've been trying to come up with something for a while now, to no avail. My MySQL knowledge is rudimentary at best, so I could use some guidance on the following: I have two tables ('bible' and 'books') that I need to search. Right now I am just searching 'bible' with the following query:

        SELECT * FROM bible WHERE text LIKE '%" . $query . "%' ORDER BY likes DESC LIMIT $start, 10

    Now, I need to add another part that searches for some pretty advanced stuff. Here is what I want to do, in pseudocode which I am aware doesn't work:

        SELECT * FROM bible WHERE books.book+' '+bible.chapter+':'+bible.verse = '$query'

    $query would equal something like "Genesis 1:2": "Genesis" coming from books.book, 1 coming from bible.chapter and 2 coming from bible.verse. Any help/guidance on this is much appreciated =)
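    In MySQL the string glue is CONCAT() rather than +, so the pseudocode can be turned into something like the sketch below. The join condition is an assumption: the question doesn't say how 'bible' rows point at 'books', so bible.book_id = books.id is a made-up linking column to be replaced with the real one:

        SELECT bible.*
        FROM bible
        JOIN books ON bible.book_id = books.id   -- hypothetical link column
        WHERE CONCAT(books.book, ' ', bible.chapter, ':', bible.verse) = 'Genesis 1:2';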

  • GQL: I'm storing JSON in the DataStore. All json is getting converted to html entities, how to avoid

    - by fmsf
    The title says most of it: I'm storing JSON in the datastore, and all the JSON is getting converted to HTML entities. How can I avoid this? Originally I had

        myJson = db.StringProperty()

    but it complained that the JSON I had was too long, and that StringProperty has a limit of around 500 chars, suggesting TextProperty be used instead. That inserted without problems, but now myJson looks like this when I fetch it from the database:

        { &quot;timeUnit&quot;: &quot;14&quot;, &quot;taskCounter&quot;: &quot;0&quot;, &quot;dependencyCounter&quot;: &quot;0&quot;, &quot;tasks&quot;: [], &quot;dependencies&quot;: []}

    Any suggestions?

  • Guidelines for good webcrawler 'Etiquette'

    - by Harry
    I'm building a search engine (for fun) and it has just struck me that my little project might potentially wreak havoc by clicking on ads and causing all sorts of problems. So what are the guidelines for good webcrawler etiquette? Things that spring to mind:
    - Observe robots.txt instructions
    - Limit the number of simultaneous requests to the same domain
    - Don't follow ad links?
    Stopping the crawler from clicking on ads is the one particularly on my mind at the moment... how do I stop my bot from 'clicking' on ads? If it goes straight to the URL in the ad, is that counted as a click?

  • Building 'flat' rather than 'tree' LINQ expressions

    - by Ian Gregory
    I'm using some code (available here on MSDN) to dynamically build LINQ expressions containing multiple OR 'clauses'. The relevant code is:

        var equals = values.Select(value =>
            (Expression)Expression.Equal(valueSelector.Body,
                                         Expression.Constant(value, typeof(TValue))));
        var body = equals.Aggregate<Expression>((accumulate, equal) => Expression.Or(accumulate, equal));

    This generates a LINQ expression that looks something like this:

        (((((ID = 5) OR (ID = 4)) OR (ID = 3)) OR (ID = 2)) OR (ID = 1))

    I'm hitting the recursion limit (100) when using this expression, so I'd like to generate an expression that looks like this instead:

        (ID = 5) OR (ID = 4) OR (ID = 3) OR (ID = 2) OR (ID = 1)

    How would I modify the expression-building code to do this?
