Search Results

Search found 5021 results on 201 pages for 'limit'.

Page 89 of 201

  • System call time out?

    - by Arnold
    Hi, I'm using Unix system() calls to gunzip and gzip files. With very large files these sometimes get aborted (e.g. on the cluster compute nodes), while other times (e.g. on the login nodes) they go through. Is there some soft limit on the time a system call may take? What else could it be?
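
    A hedged guess and a way to test it: batch systems frequently impose a CPU-time ulimit on compute nodes that login nodes don't have, and the child process spawned by system() inherits it. A minimal C sketch to print the limit on each node type (running `ulimit -t` in the job's shell tells you the same thing):

        /* print the CPU-time soft limit; this checks an assumption about
           the cause rather than diagnosing it definitively */
        #include <stdio.h>
        #include <sys/resource.h>

        int main(void) {
            struct rlimit rl;
            if (getrlimit(RLIMIT_CPU, &rl) == 0) {
                if (rl.rlim_cur == RLIM_INFINITY)
                    printf("CPU time: unlimited\n");
                else
                    printf("CPU time soft limit: %llu seconds\n",
                           (unsigned long long)rl.rlim_cur);
            }
            return 0;
        }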

    Read the article

  • SEC_TO_TIME() convert to java.sql.Time error

    - by chun
    Hi, I have an aggregate column holding milliseconds, and a report (with Jasper) has to show this indicator as HH:mm:ss. What I did was use SEC_TO_TIME(sum(col)/1000), but when mapping to java.sql.Time it doesn't work when the hour value in the result passes 24 (e.g. 36:33:33). Then I thought of another way: skip SEC_TO_TIME and just map the value as a BigDecimal, but I don't know what Java class I should use to format it, since the default hh:mm:ss formatting is limited to 24 hours...?
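
    One hedged workaround: since java.sql.Time models a clock-of-day and cannot represent 36:33:33, fetch the total as a plain number of seconds and format it by hand. A minimal sketch (the variable names are illustrative):

        long totalSeconds = 131613;        // e.g. sum(col) / 1000 fetched as a number
        long h = totalSeconds / 3600;      // hours may exceed 24 here
        long m = (totalSeconds % 3600) / 60;
        long s = totalSeconds % 60;
        String hhmmss = String.format("%d:%02d:%02d", h, m, s); // "36:33:33"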

    Read the article

  • apc_delete() not working in background script

    - by Jared
    I have a shell background converter on my video website, and I can't seem to get APC to delete a key when a file is uploaded and its visibility is updated. The script is structured like so:

        if (file_exists($output_file)) {
            $conn->query("UPDATE `foo` SET `bar` = 1 WHERE `id` = " . $id . " LIMIT 1");
            apc_delete('feed:' . $id);
        }

    Everything works fine except for the APC call, and this is the only script on the site that has had this problem. I'm stumped.
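
    A likely cause, offered as a hypothesis: PHP run from the shell is a different SAPI than the web server, and APC keeps a separate cache per SAPI (see apc.enable_cli), so apc_delete() in a CLI script never touches the web server's cache. A common workaround is a small, locally restricted web endpoint that performs the delete, which the CLI script then calls; purge.php below is a hypothetical name:

        // purge.php — served by the web SAPI, so it shares the web
        // server's APC cache, unlike the CLI process
        if ($_SERVER['REMOTE_ADDR'] === '127.0.0.1') {
            apc_delete('feed:' . (int)$_GET['id']);
        }

        // ...and in the CLI converter, replace the direct apc_delete() with:
        file_get_contents('http://localhost/purge.php?id=' . $id);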

    Read the article

  • Question regarding MySQL indices and their functionality

    - by user281434
    Hi, say I have an ordinary table in my db like so:

        ----------------------------
        | id | username | password |
        ----------------------------
        | 24 | blah     | blah     |
        ----------------------------

    A primary key is assigned to the id column. Now when I run a MySQL query like this:

        SELECT id FROM table WHERE username = 'blah' LIMIT 1

    Does that primary key index even help? If I am telling it to match usernames, shouldn't the username column be indexed instead? Thanks for your time.
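
    The likely answer, sketched in MySQL syntax: the primary key on id does not help a lookup by username; the column in the WHERE clause is the one that needs an index:

        -- lets the username = 'blah' lookup use an index instead of a full scan
        ALTER TABLE `table` ADD INDEX idx_username (username);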

    Read the article

  • mysql query optimization

    - by vamsivanka
    I need some help optimizing this query:

        SELECT * FROM transaction WHERE id < 7500001 ORDER BY id DESC LIMIT 16

    When I do an explain plan on this, the type is "range" and rows is "7500000". According to some online references, this means the query scanned 7,500,000 rows to get the data. Is there any way I can optimize it so it scans fewer rows? Also, id is the primary key column.
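
    Worth noting as a hedged aside: EXPLAIN's rows column is the optimizer's estimate of the range size, not the number of rows actually read. Since id is the primary key and the ORDER BY matches it, the engine can walk the index backwards from the id < 7500001 boundary and stop after 16 rows:

        -- "rows" here reports the estimated range, not actual reads
        EXPLAIN SELECT * FROM transaction
        WHERE id < 7500001
        ORDER BY id DESC
        LIMIT 16;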

    Read the article

  • Huge Graph Structure

    - by Harph
    I'm developing an application in which I need a structure to represent a huge graph (between 1000000 and 6000000 nodes and 100 or 600 edges) in memory. The edge representation will contain some attributes of the relation. I have tried a memory-map representation, arrays, dictionaries and strings to represent that structure in memory, but it always crashes because of the memory limit. I would like some advice on how I can represent this, or something similar. By the way, I'm using Python.
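
    One direction worth sketching (names and layout are illustrative, and it assumes node ids 0..N-1 added in order): a CSR-style compressed adjacency built from flat typed arrays, which avoids the per-edge overhead of Python objects and dicts:

        from array import array

        offsets = array('q', [0])  # offsets[i]:offsets[i+1] slices into targets/weights
        targets = array('q')       # neighbour node ids, concatenated node by node
        weights = array('d')       # one float attribute per edge

        def add_node(edges):
            """edges: iterable of (target_id, weight) for the next node."""
            for t, w in edges:
                targets.append(t)
                weights.append(w)
            offsets.append(len(targets))

        def neighbours(i):
            return targets[offsets[i]:offsets[i + 1]]

        add_node([(1, 0.5), (2, 2.0)])  # node 0
        add_node([(0, 0.5)])            # node 1
        print(list(neighbours(0)))      # [1, 2]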

    Read the article

  • How do I add a filter button to this pagination?

    - by ClarkSKent
    Hey, I want to add a button (link) that, when clicked, will filter the pagination results. I'm new to PHP (and programming in general). I would like to add a button like 'Automotive' so that when it is clicked, it updates the two MySQL queries in my pagination script, seen here. As you can see, the category 'automotive' is hardcoded in; I want it to be dynamic, so when a link is clicked it places whatever the id or class is into the category part of the query.

        1: $record_count = mysql_num_rows(mysql_query("SELECT * FROM explore WHERE category='automotive'"));
        2: $get = mysql_query("SELECT * FROM explore WHERE category='automotive' LIMIT $start, $per_page");

    This is the entire current PHP pagination script that I am using:

        <?php
        // connecting to the database
        $error = "Could not connect to the database";
        mysql_connect('localhost', 'root', 'root') or die($error);
        mysql_select_db('ajax_demo') or die($error);

        // max displayed per page
        $per_page = 2;

        // get start variable
        $start = $_GET['start'];

        // count records
        $record_count = mysql_num_rows(mysql_query("SELECT * FROM explore WHERE category='automotive'"));

        // count max pages
        $max_pages = $record_count / $per_page; // may come out as decimal

        if (!$start) $start = 0;

        // display data
        $get = mysql_query("SELECT * FROM explore WHERE category='automotive' LIMIT $start, $per_page");
        while ($row = mysql_fetch_assoc($get)) {
            // get data
            $name = $row['id'];
            $age = $row['site_name'];
            echo $name." (".$age.")<br />";
        }

        // set up prev and next variables
        $prev = $start - $per_page;
        $next = $start + $per_page;

        // show prev button
        if (!($start<=0)) echo "<a href='pagi_test.php?start=$prev'>Prev</a> ";

        // show page numbers; set variable for first page
        $i=1;
        for ($x=0; $x<$record_count; $x=$x+$per_page) {
            if ($start!=$x)
                echo " <a href='pagi_test.php?start=$x'>$i</a> ";
            else
                echo " <a href='pagi_test.php?start=$x'><b>$i</b></a> ";
            $i++;
        }

        // show next button
        if (!($start>=$record_count-$per_page)) echo " <a href='pagi_test.php?start=$next'>Next</a>";
        ?>
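
    A hedged sketch of the dynamic version: pass the category in the query string alongside start, and whitelist it before it goes anywhere near SQL (the category list below is made up):

        // links such as: <a href='pagi_test.php?category=automotive&start=0'>Automotive</a>
        $allowed = array('automotive', 'sports', 'electronics'); // hypothetical categories
        $category = (isset($_GET['category']) && in_array($_GET['category'], $allowed, true))
            ? $_GET['category'] : 'automotive';
        $safe = mysql_real_escape_string($category);

        $record_count = mysql_num_rows(mysql_query(
            "SELECT * FROM explore WHERE category='$safe'"));
        $get = mysql_query(
            "SELECT * FROM explore WHERE category='$safe' LIMIT $start, $per_page");

        // and carry the category through the prev/next/page links, e.g.:
        // echo "<a href='pagi_test.php?category=$category&start=$prev'>Prev</a> ";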

    Read the article

  • Using Time datatype in MySQL without seconds

    - by Alex
    I'm trying to store a 12/24 hr (i.e. 00:00) clock time in a MySQL database. At the moment I am using the TIME datatype. This works OK, but it insists on adding the seconds to the column, so you enter 09:20 and it is stored as 09:20:00. Is there any way I can limit it in MySQL to just 00:00?
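
    The TIME type always stores seconds, so (hedged) the usual route is to drop them on the way out with TIME_FORMAT; the column and table names below are placeholders:

        SELECT TIME_FORMAT(start_time, '%H:%i') FROM my_table;  -- '09:20:00' -> '09:20'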

    Read the article

  • Committed JDO writes do not apply on local GAE HRD, or possibly reused transaction

    - by eeeeaaii
    I'm using JDO 2.3 on App Engine. I was using the Master/Slave datastore for local testing and recently switched over to using the HRD datastore for local testing, and parts of my app are breaking (which is to be expected).

    One part of the app that's breaking is where it sends a lot of writes quickly - that is because of the 1-second limit thing, it's failing with a concurrent modification exception. Okay, so that's also to be expected, so I have the browser retry the writes again later when they fail (maybe not the best hack but I'm just trying to get it working quickly).

    But a weird thing is happening. Some of the writes which should be succeeding (the ones that DON'T get the concurrent modification exception) are also failing, even though the commit phase completes and the request returns my success code. I can see from the log that the retried requests are working okay, but these other requests that seem to have committed on the first try are, I guess, never "applied." But from what I read about the apply phase, writing again to that same entity should force the apply... but it doesn't. Code follows. Some things to note:

    - I am attempting to use automatic JDO caching, where JDO uses memcache under the covers. This doesn't actually work unless you wrap everything in a transaction.
    - All the requests are doing is reading a string out of an entity, modifying part of the string, and saving that string back to the entity. If these requests weren't in transactions, you'd of course have the "dirty read" problem. But with transactions, isolation is supposed to be at the level of "serializable", so I don't see what's happening here.
    - The entity being modified is a root entity (not in a group).
    - I have cross-group transactions enabled.

    Another weird thing is happening. If the concurrent modification thing happens, and I subsequently edit more than 5 more entities (this is the max for cross-group transactions), then nothing happens right away, but when I stop and restart the server I get "IllegalArgumentException: operating on too many entity groups in a single transaction". Could it be possible that the PMF is returning the same PersistenceManager every time, or the PM is reusing the same transaction every time? I don't see how I could possibly get the above error otherwise. The code inside the transaction just edits one root entity. I can't think of any other way that GAE would give me the "too many entity groups" error.
    The relevant code (this is a simplified version):

        PersistenceManager pm = PMF.getManager();
        Transaction tx = pm.currentTransaction();
        String responsetext = "";
        try {
            tx.begin();
            // I have extra calls to "makePersistent" because I found that relying
            // on pm.close didn't always write the objects to cache, maybe that
            // was only a DataNucleus 1.x issue though
            Key userkey = obtainUserKeyFromCookie();
            User u = pm.getObjectById(User.class, userkey);
            pm.makePersistent(u); // to make sure it gets cached for next time
            Key mapkey = obtainMapKeyFromQueryString();
            // this is NOT a java.util.Map, just FYI
            Map currentmap = pm.getObjectById(Map.class, mapkey);
            Text mapData = currentmap.getMapData(); // mapData is JSON stored in the entity
            Text newMapData = parseModifyAndReturn(mapData); // transform the map
            currentmap.setMapData(newMapData); // mutate the Map object
            pm.makePersistent(currentmap); // make sure to persist so there is a cache hit
            tx.commit();
            responsetext = "OK";
        } catch (JDOCanRetryException jdoe) {
            // log jdoe
            responsetext = "RETRY";
        } catch (Exception e) {
            // log e
            responsetext = "ERROR";
        } finally {
            if (tx.isActive()) {
                tx.rollback();
            }
            pm.close();
        }
        resp.getWriter().println(responsetext);

    EDIT: So I have verified that it fails after exactly five transactions. Here's what I do: I create a Foo (root entity), do a bunch of concurrent operations on that Foo, and some fail and get retried, and some commit but don't apply (as described above). Then I start creating more Foos and do a few operations on those new Foos. If I only create four Foos, stopping and restarting App Engine does NOT give me the IllegalArgumentException. However, if I create five Foos (which is the limit for cross-group transactions), then when I stop and restart App Engine, I do get the exception. So it seems that somehow these new Foos I am creating are counting toward the limit of 5 max entities per transaction, even though they are supposed to be handled by separate transactions. It's as if a transaction is still open and is being reused by the servlet when it handles the new requests for the 2nd through 5th Foos.

    EDIT2: It looks like the IllegalArgument thing is independent of the other bug. In other words, it always happens when I create five Foos, even if I don't get the concurrent modification exception. I don't know if it's a symptom of the same problem or if it's unrelated.

    EDIT3: I found out what was causing the (unrelated) IllegalArgumentException; it was a dumb mistake on my part. But the other issue is still happening.

    EDIT4: Added pseudocode for the datastore access.

    EDIT5: I am pretty sure I know why this is happening, but I will still award the bounty to anyone who can confirm it. Basically, I think the problem is that transactions are not really implemented in the local version of the datastore. References:

    https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/gVMS1dFSpcU
    https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/deGasFdIO-M
    https://groups.google.com/forum/?hl=en&fromgroups=#!msg/google-appengine-java/4YuNb6TVD6I/gSttMmHYwo0J

    Because transactions are not implemented, rollback is essentially a no-op. Therefore, I get a dirty read when two transactions try to modify the record at the same time. In other words, A reads the data and B reads the data at the same time. A attempts to modify the data, and B attempts to modify a different part of the data. A writes to the datastore, then B writes, obliterating A's changes.
Then B is "rolled back" by app engine, but since rollbacks are a no-op when running on the local datastore, B's changes stay, and A's do not. Meanwhile, since B is the thread that threw the exception, the client retries B, but does not retry A (since A was supposedly the transaction that succeeded).

    Read the article

  • Get size of max possible result set

    - by wheresrhys
    For my application, most of my SQL queries return a specified number of rows. I'd also like to get the maximum possible number of results, i.e. how many rows would be returned if I weren't setting a LIMIT. Is there a more efficient way to do this (using just SQL) than returning all the results, getting the size of the result set, and then splicing the set to return just the first N rows?
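
    Assuming MySQL (the question doesn't say which database), SQL_CALC_FOUND_ROWS plus FOUND_ROWS() does exactly this; the table and condition below are placeholders:

        SELECT SQL_CALC_FOUND_ROWS * FROM my_table WHERE some_condition LIMIT 10;
        SELECT FOUND_ROWS();  -- rows the previous query would have returned without LIMIT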

    Read the article

  • Designing Business Objects to indicate constraints such as Max Length

    - by JR
    Is there a standard convention when designing business objects for providing consumers with a way to discover constraints such as a property's maximum length? It could be used up in the UI layer to, for example, set a Textbox's MaxLength property according to the maximum length limit back in the business object. Is there a standard design approach for this?
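
    One common pattern, sketched here rather than prescribed: expose the constraint as metadata that the UI layer can read back. In Java, Bean Validation's @Size annotation is the standardized form of this idea; a minimal hand-rolled equivalent (all names hypothetical) looks like:

        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;

        public class ConstraintDemo {
            // hypothetical annotation carrying the constraint on the business object
            @Retention(RetentionPolicy.RUNTIME)
            @interface MaxLength { int value(); }

            static class Customer {
                @MaxLength(50)
                String name;
            }

            public static void main(String[] args) throws Exception {
                // the UI layer discovers the constraint via reflection
                int max = Customer.class.getDeclaredField("name")
                        .getAnnotation(MaxLength.class).value();
                System.out.println(max); // e.g. feed this into a Textbox's MaxLength
            }
        }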

    Read the article

  • Question related to UITextView

    - by user217572
    How do I adjust the height of a UITextView programmatically as text is typed? The text view should grow upwards with no limit. Scrolling inside the UITextView is disabled in our case; we give the entire view a scroll so the upward contents of the text view can be seen.
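
    A rough sketch of the usual approach (written in modern Swift/UIKit, so purely illustrative for this question's era): measure the text with sizeThatFits in the delegate's textViewDidChange and move the top edge up while keeping the bottom edge fixed:

        import UIKit

        class GrowingTextViewDelegate: NSObject, UITextViewDelegate {
            func textViewDidChange(_ textView: UITextView) {
                let fitting = textView.sizeThatFits(
                    CGSize(width: textView.frame.width, height: .greatestFiniteMagnitude))
                let bottom = textView.frame.maxY
                textView.isScrollEnabled = false   // scrolling disabled, as in the question
                // pin the bottom edge and grow upward
                textView.frame = CGRect(x: textView.frame.minX,
                                        y: bottom - fitting.height,
                                        width: textView.frame.width,
                                        height: fitting.height)
            }
        }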

    Read the article

  • What is your favorite NumPy feature?

    - by Gökhan Sever
    Share your favourite NumPy features / tips & tricks. Please try to limit it to one feature per line. The question is posted in parallel at ask.scipy.org; we welcome you to join the conversation there, with the main idea of collecting the Scientific-Python-related questions under one roof. Feel free to dual-post, or post at your favourite site...

    Read the article

  • innerJoin query show error

    - by Chithri Ajay
    I just want to print data from two tables, so I am using an inner join:

        SELECT sd.GameName
        FROM LottoryTickets AS sd
        JOIN group AS p ON sd.Group = p.groupname
        WHERE p.groupname = 11

    Now I get:

        #1064 - You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use near
        'group AS p ON sd.Group = p.groupname WHERE p.groupname = 11 LIMIT 0, 30' at line 3

    Please guide me. Thanks in advance.
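
    The likely fix: GROUP is a reserved word in MySQL, so a table named group has to be quoted with backticks:

        SELECT sd.GameName
        FROM LottoryTickets AS sd
        JOIN `group` AS p ON sd.Group = p.groupname
        WHERE p.groupname = 11;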

    Read the article

  • mysql problem left join and from_unixtime

    - by moustafa
    I have this:

        SELECT COUNT(1) cnt, a.auther_id
        FROM `posts` a
        LEFT JOIN users u ON a.auther_id = u.id
        GROUP BY a.auther_id
        ORDER BY cnt DESC
        LIMIT 20

    It works fine, but now I want to select only the posts added within the last day. I tried to use WHERE from_unixtime(post_time) >= SUBDATE(NOW(),1) but it didn't work. Does anyone have an idea?
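
    A sketch of one way this is usually written (assuming post_time holds a unix timestamp, as the FROM_UNIXTIME attempt suggests): filter before grouping, and compare against a computed boundary so the column itself isn't wrapped in a function:

        SELECT COUNT(1) cnt, a.auther_id
        FROM `posts` a
        LEFT JOIN users u ON a.auther_id = u.id
        WHERE a.post_time >= UNIX_TIMESTAMP(NOW() - INTERVAL 1 DAY)
        GROUP BY a.auther_id
        ORDER BY cnt DESC
        LIMIT 20;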

    Read the article

  • Image upload storage strategies

    - by MatW
    When a user uploads an image to my site, the image goes through this process:

    - user uploads pic
    - store pic metadata in db, giving the image a unique id
    - async image processing (thumbnail creation, cropping, etc)
    - all images are stored in the same uploads folder

    So far the site is pretty small, and there are only ~200,000 images in the uploads directory. I realise I'm nowhere near the physical limit of files within a directory, but this approach clearly won't scale, so I was wondering if anyone had any advice on upload / storage strategies for handling large volumes of image uploads.
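
    One widely used strategy, sketched with made-up names and layout: derive a couple of subdirectory levels from the image id, so no single directory grows without bound and any file's path stays computable from its id:

        // e.g. id 123456 -> uploads/12/34/123456.jpg (hypothetical layout)
        $id = 123456;
        $padded = str_pad((string)$id, 6, '0', STR_PAD_LEFT);
        $dir = sprintf('uploads/%s/%s', substr($padded, 0, 2), substr($padded, 2, 2));
        if (!is_dir($dir)) {
            mkdir($dir, 0755, true); // create the shard directories on demand
        }
        $path = $dir . '/' . $id . '.jpg';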

    Read the article

  • Resize image on upload php

    - by blasteralfred
    Hi, I have a PHP script for image upload, as below:

        <?php
        $LibID = $_POST['name'];
        define("MAX_SIZE", "10000");

        function getExtension($str) {
            $i = strrpos($str, ".");
            if (!$i) { return ""; }
            $l = strlen($str) - $i;
            $ext = substr($str, $i + 1, $l);
            return $ext;
        }

        $errors = 0;
        $image = $_FILES['image']['name'];
        if ($image) {
            $filename = stripslashes($_FILES['image']['name']);
            $extension = getExtension($filename);
            $extension = strtolower($extension);
            if (($extension != "jpg") && ($extension != "jpeg")) {
                echo '<h1>Unknown extension!</h1>';
                $errors = 1;
                exit();
            } else {
                $size = filesize($_FILES['image']['tmp_name']);
                if ($size > MAX_SIZE * 1024) {
                    echo '<h1>You have exceeded the size limit!</h1>';
                    $errors = 1;
                    exit();
                }
                $image_name = $LibID . '.' . $extension;
                $newname = "uimages/" . $image_name;
                $copied = copy($_FILES['image']['tmp_name'], $newname);
                if (!$copied) {
                    echo '<h1>Image upload unsuccessful!</h1>';
                    $errors = 1;
                    exit();
                }
            }
        }
        ?>

    It uploads the image file to a folder "uimages" in the root. I have made changes in the HTML file for a compact display of the image by defining max-height and max-width, but I want to resize the image file itself on upload. The image file may have a maximum width of 100px and a maximum height of 150px. The image proportions must be constrained; that is, the image may be smaller than the above dimensions, but it should not exceed the limit. How can I make this possible? Thanks in advance :) blasteralfred..
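
    A sketch of the resize step using the GD extension (assuming GD is enabled; the quality value and variable names are arbitrary) that could run right after copy() succeeds, keeping the result within 100x150 without distorting the proportions:

        list($w, $h) = getimagesize($newname);
        $scale = min(100 / $w, 150 / $h, 1); // fit within 100x150, never upscale
        $nw = max(1, (int)($w * $scale));
        $nh = max(1, (int)($h * $scale));
        $src = imagecreatefromjpeg($newname);
        $dst = imagecreatetruecolor($nw, $nh);
        imagecopyresampled($dst, $src, 0, 0, 0, 0, $nw, $nh, $w, $h);
        imagejpeg($dst, $newname, 90); // overwrite the upload with the resized copy
        imagedestroy($src);
        imagedestroy($dst);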

    Read the article

  • Can I customize the Magento app/code/core folder without affecting future upgrades?

    - by mck89
    I found a guide on how to add new attributes to users; it explains that for this operation I must modify some files in the app/code/core/Mage directory (the directory that contains Magento's modules). But if I make some changes in that folder, will it affect future upgrades? Will an upgrade delete my changes? Should I limit the changes to my own modules to avoid problems with updates?
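
    For what it's worth (hedged, but this is Magento's documented convention): upgrades do overwrite app/code/core, so edits there are at risk. Magento's autoloader checks the local code pool before core, which gives an upgrade-safe way to override a core class; the file below is only an example path:

        app/code/core/Mage/Customer/Model/Customer.php   <- shipped file, leave untouched
        app/code/local/Mage/Customer/Model/Customer.php  <- copy here and edit; loaded first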

    Read the article
