Search Results

Search found 16731 results on 670 pages for 'memory limit'.


  • Can I use a MySQL PREPARE statement in a function to create a query with a variable table name

    - by aHunter
    I want to create a function containing a SELECT query that can be run against multiple database tables, but I cannot use a variable as the table name. Can I get around this by using a PREPARE statement in the function? An example: FUNCTION `TESTFUNC`(dbTable VARCHAR(25)) RETURNS bigint(20) BEGIN DECLARE datereg DATETIME; SET @stmt := CONCAT('SELECT dateT FROM ', dbTable, ' ORDER BY dateT DESC LIMIT 1'); PREPARE stmt FROM @stmt; EXECUTE stmt; RETURN dateT; END $$ Thanks in advance for any input.
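
    Note that MySQL rejects PREPARE/EXECUTE inside stored functions; dynamic SQL is only allowed in stored procedures. A minimal sketch of that workaround, assuming the caller can use a procedure with an OUT parameter (the names here are illustrative):

        DELIMITER $$
        CREATE PROCEDURE latest_date(IN dbTable VARCHAR(25), OUT latest DATETIME)
        BEGIN
          -- Build the statement text, with explicit spaces around the table name
          SET @sql := CONCAT('SELECT dateT INTO @latest FROM ', dbTable,
                             ' ORDER BY dateT DESC LIMIT 1');
          PREPARE stmt FROM @sql;
          EXECUTE stmt;
          DEALLOCATE PREPARE stmt;
          SET latest := @latest;
        END $$
        DELIMITER ;

    It would be called as CALL latest_date('myTable', @d); followed by SELECT @d; to read the result.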

  • mysql join default value

    - by andy
    I've been trying to use the IsNull() function to ensure that there is a value for a field. $result = mysql_query(" SELECT crawled.id,IsNull(sranking.score,0) as Score,crawled.url,crawled.title,crawled.blurb FROM crawled LEFT JOIN sranking ON crawled.id = sranking.sid WHERE crawled.body LIKE '%".$term."%' ORDER BY Score DESC LIMIT " . $start . "," . $c . " ") or die(mysql_error()); But I get the error message: Incorrect parameter count in the call to native function 'IsNull'. Does anybody have any ideas? I'm pretty new to MySQL.
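
    In MySQL, ISNULL() takes a single argument; the two-argument form familiar from SQL Server corresponds to IFNULL() (or the standard COALESCE()). A sketch of the query rewritten that way, with placeholder values standing in for the PHP variables:

        SELECT crawled.id,
               IFNULL(sranking.score, 0) AS Score,   -- or COALESCE(sranking.score, 0)
               crawled.url, crawled.title, crawled.blurb
        FROM crawled
        LEFT JOIN sranking ON crawled.id = sranking.sid
        WHERE crawled.body LIKE '%term%'              -- $term
        ORDER BY Score DESC
        LIMIT 0, 10;                                  -- $start, $c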

  • Do I need to include all fields in my entity framework model

    - by Jim B
    Quick question for everyone: do I need to include all the database table fields in my EF model? For example, I've created a sub-model that only deals with tblPayment and associated tables. Now I need to write a LINQ query to get some information about items. I would typically get this by joining tblPayment to tblInvoice to tblInvoiceItem and finally to tblOrderItem. I'm wondering: when I add in those other tables, do I need to include all the fields for tblInvoice and tblInvoiceItem? Ideally, I'd just like to keep the fields I need to join on, as that would limit the possibility of my sub-model breaking if other fields on those tables are modified or deleted. Can I do this?

  • Integrate ClickOnce update in a setup

    - by Erick
    Following a recent decision in my team, we are going to use ClickOnce to deploy a piece of software we have been developing for some time. Previously it was deployed with a merge module in a setup project. We had in mind to ship the ClickOnce setup/updater as part of the web server setup. The problem is not exactly integrating it, but limiting the manual work needed to integrate it. Normally, I guess, it would be necessary to use "Add - File" and select the folder containing setup.exe + publish.htm + the application files + the .application file, but I fear that each time we publish a new version to the hard drive we would have to update the setup project as well. Would anyone have some insight to help with that? (Especially how to avoid having to add the program_[version] folder, created inside Application Files each time a new publish is done.)

  • DB function failed with error number 1 in joomla admin panel

    - by sabuj
    When I access the Joomla article manager or module manager, I get the output below: 500 - An error has occurred! DB function failed with error number 1 Can't create/write to file '/tmp/#sql_57c0_0.MYD' (Errcode: 17) SQL=SELECT c.*, g.name AS groupname, cc.title AS name, u.name AS editor, f.content_id AS frontpage, s.title AS section_name, v.name AS author FROM jos_content AS c LEFT JOIN jos_categories AS cc ON cc.id = c.catid LEFT JOIN jos_sections AS s ON s.id = c.sectionid LEFT JOIN jos_groups AS g ON g.id = c.access LEFT JOIN jos_users AS u ON u.id = c.checked_out LEFT JOIN jos_users AS v ON v.id = c.created_by LEFT JOIN jos_content_frontpage AS f ON f.content_id = c.id WHERE c.state != -2 ORDER BY section_name , section_name, cc.title, c.ordering LIMIT 0, 20

  • RSS Reader php (have already read related articles)

    - by lightingwrist
    Hey there. I've read all the related articles on here and can't find one that is specific to what I am looking for. I am new to RSS and am looking for the following reader, if anyone knows the right direction to throw me in:
    - An RSS reader that I can put on my page and that does NOT require a MySQL database
    - A fairly light chunk of code to which I can just add as many .xml / rss.php links/addresses as needed
    - Something I can wrap divs around, so I can style each segment specifically
    - Something that lets me manually limit the number of feed items that are read, to match the page's content output
    Thanks in advance!

  • Migrating from mssql to firebird: pros and cons

    - by user193655
    I am considering the migration for four reasons:
    1) SQL Server installation is a nightmare, especially for single-user software. The software installs in 10 seconds, SQL Server in 1 hour. Firebird installation is much easier.
    2) SQL Server runs on Windows Server only.
    3) My customers all have the Express edition.
    4) I am not using any advanced features. I am now starting to use FILESTREAM, but the main reason for that is the Express edition's 4/10 GB database size limit.
    So these are all pros of moving to Firebird. What are the cons? I could also plan to support both platforms, but I fear this will backfire.

  • Fastest way to find sum of digits on big numbers

    - by dada
    I have some big numbers (again) and I need to find whether the sum of the digits is an even number. I tried this: finding the sum of the digits with a while loop and then checking whether that sum % 2 equals 0. It works, but it's too slow for big numbers, because I am given intervals of numbers; if the input is 1999999 19999999999 then my program fails, as I cannot complete within the time limit, which is 0.1 sec. What to do? Is there any other, faster way to do this? EDIT: The input 1999999 19999999999 means it will start with 1999999 and check all the numbers as described above up to 19999999999, and because we are talking about big numbers (< 2^30) my program is not up to it.

  • Serving large generated files using Google App Engine?

    - by John Carter
    Hiya, Presently I have a GAE app that does some offline processing (it backs up a user's data) and generates a file that's somewhere in the neighbourhood of 10 - 100 MB. I'm not sure of the best way to serve this file to the user. The two options I'm considering are:
    1. Adding some code to the offline processing that 'spoofs' the file as a form upload to the blobstore, then going through the normal blobstore process to serve it.
    2. Having the offline processing code store the file somewhere off of GAE, and serving it from there.
    Is there a much better approach I'm overlooking? I'm guessing this is functionality that isn't well suited to GAE. I had thought of storing it in the datastore as db.Text or db.Blob, but there I run into the 1 MB limit. Any input would be appreciated.

  • Back Button gets Disabled on IE 7,8 for an ASP.NET site

    - by timeitquery
    In an ASP.NET 3.5 website we are noticing that the back button is not working properly. If the user does several postbacks (say 10 times) and then starts pressing the back button, the back button gets disabled before the user gets through all the pages. The site does not use AJAX.NET. I can reproduce the issue on IE 7 and 8 almost always. The problem seems to be some sort of limit IE has on the history cache for a given tab/instance. In the tests I did, the POST requests to the server are large, around 83k, and the responses are around 300k. It seems that with these request sizes the history does not hold more than 4 items. The moment I get to the 5th post, the first item I had selected is dropped.

  • Retrieve multiple values from single dimension value / key JSON

    - by jonnypixel
    I'm busting my head trying to work this out. "ContentBlock1":["2","22"] I have been trying to get the 2 and the 22 into a comma-separated string so I can use it within a MySQL IN(2,22) query. I have tried several ways, but none seem to work for me. $ContentBlock = my json data; $cid = json_decode($ContentBlock,true); foreach ($cid as $key){ $jsoncid = "$key ,"; } And then: SELECT * FROM content WHERE featured=1 AND state=1 AND catid IN($jsoncid) ORDER BY ordering ASC LIMIT 4

  • Need help optimizing MYSQL query with join

    - by makeee
    I'm doing a join between the "favorites" table (3 million rows) and the "items" table (600k rows). The query is taking anywhere from .3 seconds to 2 seconds, and I'm hoping I can optimize it some. Favorites.faver_profile_id and Items.id are indexed. Instead of using the faver_profile_id index I created a new index on (faver_profile_id,id), which eliminated the filesort needed when sorting by id. Unfortunately this index doesn't help at all and I'll probably remove it (yay, 3 more hours of downtime to drop the index...). Any ideas on how I can optimize this query? In case it helps: Favorite.removed and Item.removed are "0" 98% of the time. Favorite.collection_id is NULL about 80% of the time. SELECT `Item`.`id`, `Item`.`source_image`, `Item`.`cached_image`, `Item`.`source_title`, `Item`.`source_url`, `Item`.`width`, `Item`.`height`, `Item`.`fave_count`, `Item`.`created` FROM `favorites` AS `Favorite` LEFT JOIN `items` AS `Item` ON (`Item`.`removed` = 0 AND `Favorite`.`notice_id` = `Item`.`id`) WHERE ((`faver_profile_id` = 1) AND (`collection_id` IS NULL) AND (`Favorite`.`removed` = 0) AND (`Item`.`removed` = '0')) ORDER BY `Favorite`.`id` desc LIMIT 50;
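
    One avenue worth trying, sketched purely as an assumption about this schema: since the query filters favorites on faver_profile_id, removed, and collection_id (equality and IS NULL tests) and then sorts on id, a composite index that places the filter columns before the sort column may let MySQL satisfy both the lookup and the ORDER BY:

        -- Hypothetical index; the column order matters: the equality/IS NULL
        -- columns come first, the ORDER BY column last.
        ALTER TABLE favorites
            ADD INDEX idx_profile_removed_collection_id
            (faver_profile_id, removed, collection_id, id);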

  • Searching custom nodes by field in Drupal?

    - by Airjoe
    Hello. I'm using Drupal on a project in which I made custom node types using CCK. I want to be able to search a specific node type based on a custom field the node has. So let's say I have a node type Article which has a field "myfield"; I want to be able to search for Articles based on the myfield field. I understand the default search module allows searching by node type using type:MyNodeType in the search, but I did not see any way to limit which fields are searched. Any tips? Is this something that is going to get crazy? Appreciate the help.

  • When does it make sense to use a map?

    - by kiwicptn
    I am trying to round up cases when it makes sense to use a map (a set of key-value entries). So far I have two categories (see below). Assuming more exist, what are they? Please limit each answer to one unique category and put up an example.
    Property values (like a bean):
    age -> 30
    sex -> male
    loc -> calgary
    Presence, with O(1) performance:
    peter -> 1
    john -> 1
    paul -> 1

  • MSSQL - Select one random record not showing duplicates

    - by Lukes123
    I have two tables, events and photos, which are related via the 'Event_ID' column. I wish to select ONE random photo from each event and display them. How can I do this? I have the following, which displays all the associated photos. How can I limit it to one per event? SELECT Photos.Photo_Id, Photos.Photo_Path, Photos.Event_Id, Events.Event_Title, Events.Event_StartDate, Events.Event_EndDate FROM Photos, Events WHERE Photos.Event_Id = Events.Event_Id AND Events.Event_EndDate < GETDATE() AND Events.Event_EndDate IS NOT NULL AND Events.Event_StartDate IS NOT NULL ORDER BY NEWID() Thanks Luke Stratton
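
    A sketch using a ranking function (available from SQL Server 2005 onwards), assuming the goal is exactly one random photo per event; the table and column names are taken from the question:

        SELECT Photo_Id, Photo_Path, Event_Id,
               Event_Title, Event_StartDate, Event_EndDate
        FROM (
            SELECT p.Photo_Id, p.Photo_Path, p.Event_Id,
                   e.Event_Title, e.Event_StartDate, e.Event_EndDate,
                   -- Number the photos within each event in random order
                   ROW_NUMBER() OVER (PARTITION BY p.Event_Id
                                      ORDER BY NEWID()) AS rn
            FROM Photos p
            JOIN Events e ON p.Event_Id = e.Event_Id
            WHERE e.Event_EndDate < GETDATE()
              AND e.Event_EndDate IS NOT NULL
              AND e.Event_StartDate IS NOT NULL
        ) AS ranked
        WHERE rn = 1;  -- keep only the first (random) photo of each event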

  • How to reduce the number of points in (x,y) data

    - by Gowtham
    I have a set of data points: (x1, y1) (x2, y2) (x3, y3) ... (xn, yn) The number of sample points can be in the thousands. I want to represent the same curve as accurately as possible with a minimal set of points (let's suppose 30). I want to capture as many inflection points as possible. However, I have a hard limit on the number of points allowed to represent the data. What is the best algorithm to achieve this? Is there any free software library that can help? PS: I have tried to implement point elimination based on relative slope difference, but this does not always result in the best possible representation of the data. Thanks for your time. -Gowtham

  • Single logical SQL Server possible from multiple physical servers?

    - by TuffyIsHere
    Hi, With Microsoft SQL Server 2005, is it possible to combine the processing power of multiple physical servers into a single logical SQL server? Is it possible in SQL Server 2008? I'm thinking that if the database files were located on a SAN, and one of the SQL servers somehow acted as a kind of master, then processing could be spread out over multiple physical servers: for instance, even allowing simultaneous updates where there was no overlap, and with no limit in the case of read-only queries on unlocked tables. We have an application that is limited by the speed of our SQL server, and we are probably stuck with 2005 for now. Is the only option to get a single, more powerful physical server? Sorry, I'm not an expert; I'm not sure if the question is a stupid one. TIA

  • Efficient way to combine results of two database queries.

    - by ensnare
    I have two tables on different servers, and I'd like some help finding an efficient way to combine and match the datasets. Here's an example: From server 1, which holds our stories, I perform a query like: query = """SELECT author_id, title, text FROM stories ORDER BY timestamp_created DESC LIMIT 10 """ results = DB.getAll(query) for i in range(len(results)): #Build a string of author_ids, e.g. '1314,4134,2624,2342' But, I'd like to fetch some info about each author_id from server 2: query = """SELECT id, avatar_url FROM members WHERE id IN (%s) """ values = (uid_list) results = DB.getAll(query, values) Now I need some way to combine these two queries so I have a dict that has the story as well as avatar_url and member_id. If this data were on one server, it would be a simple join that would look like: SELECT * FROM members, stories WHERE members.id = stories.author_id But since we store the data on multiple servers, this is not possible. What is the most efficient way to do this? Thanks.

  • Best Practices for a Web App Staging Server (on a budget)

    - by fig-gnuton
    I'd like to set up a staging server for a Rails app. I use git & github, Cap, and have a VPS with Apache/Passenger. I'm curious as to the best practices for a staging setup, as far as both the configuration of the staging server as well as the processes for interacting with it. I do know it should be as identical to the production server as possible, but restricting public access to it will limit that, so tips on securing it only for my use would also be great. Another specific question would be whether I could just create a virtual host on the VPS, so that the staging server could reside alongside the production one. I have a feeling there may be reasons to avoid this, though.

  • Do we need Record Level Locking when we already have Transactions for online ordering? (of concert tickets)

    - by Jian Lin
    For online ordering of a concert seat or an airline ticket, do we need record-level locking, or are transactions good enough? For a concert ticket (say, seat number 20B), or an airline ticket (even with overbooking, the limit is 210, for example), I think the website cannot lock any record or begin a transaction when showing the ticket purchase screen. But after the user clicks "Confirm Purchase", the server should begin a transaction, purchase seat number 20B, and try to commit. If another user already bought seat 20B in a previous transaction, is it at the "Commit" step that the current transaction will fail? So... we don't need record-level locking? Do transactions always run serialized (one after another), and is that why we can know for sure there is no "race condition"? In what situation is record-level locking needed, then?
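
    A common pattern here, offered as a sketch rather than the answer: hold no locks while the purchase screen is shown, and let the database's row-level atomicity decide the race by making the purchase a conditional update inside a short transaction. The table and column names below are invented for illustration:

        START TRANSACTION;
        -- Claim the seat only if it is still unsold; the matched row is
        -- locked implicitly for the remainder of this transaction.
        UPDATE seats
           SET buyer_id = 42, sold = 1
         WHERE seat_no = '20B' AND sold = 0;
        -- If the UPDATE reports 0 affected rows, another transaction won
        -- the race for seat 20B: roll back and tell the user. Otherwise:
        COMMIT;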

  • Are default _id fields for MongoDB documents always 24 characters?

    - by ottobar
    As part of my application requirements, I have a limit of 30 characters for an ID field. This is out of my control and I am wondering if the MongoDB default _id fields will work for me. It appears as though the default _id field is 24 characters long. That works for me, but I am wondering if this is likely to change in the future. I am well aware that things can always change, but, for the next year or two, can I expect there to be 24 character default _id fields?

  • How to query range of data in DB2 with highest performance?

    - by Fuangwith S.
    Usually I need to retrieve data from a table in some range; for example, a separate page for each set of search results. In MySQL I would use the LIMIT keyword, but in DB2 I don't know the equivalent. For now I use this query to retrieve a range of data: SELECT * FROM( SELECT SMALLINT(RANK() OVER(ORDER BY NAME DESC)) AS RUNNING_NO , DATA_KEY_VALUE , SHOW_PRIORITY FROM EMPLOYEE WHERE NAME LIKE 'DEL%' ORDER BY NAME DESC FETCH FIRST 20 ROWS ONLY ) AS TMP ORDER BY TMP.RUNNING_NO ASC FETCH FIRST 10 ROWS ONLY but I know it's bad style. So, how should I query for the best performance?
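
    A common DB2 pagination pattern, sketched here with the question's table and columns, is to number the rows once with ROW_NUMBER() and then filter on the desired range, so only a single ordering pass is needed:

        SELECT RN, DATA_KEY_VALUE, SHOW_PRIORITY
        FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY NAME DESC) AS RN,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
        ) AS T
        WHERE RN BETWEEN 11 AND 20   -- rows 11..20, i.e. the second page of 10
        ORDER BY RN;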

  • MySQL command-line tool: How to find out number of rows affected by a DELETE?

    - by ambivalence
    I'm trying to run a script that deletes a bunch of rows in a MySQL (InnoDB) table in batches, by executing the following in a loop: mysql --user=MyUser --password=MyPassword MyDatabase < SQL_FILE where SQL_FILE contains a DELETE FROM ... LIMIT X command. I need to keep running this loop until there are no more matching rows. But unlike running in the mysql shell, the above command does not return the number of rows affected. I've tried -v and -t but neither works. How can I find out how many rows the batch script affected? Thanks!
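
    One approach, assuming the calling loop can read the client's output: add a ROW_COUNT() call to the same file, since it reports the rows affected by the immediately preceding statement in the same session. A sketch, with an invented table and condition standing in for the elided DELETE:

        DELETE FROM my_table WHERE created < '2010-01-01' LIMIT 1000;
        -- ROW_COUNT() returns the number of rows changed by the previous
        -- statement in this session; the mysql client prints the value,
        -- and the loop can stop once it reaches 0.
        SELECT ROW_COUNT();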

  • ODBC Storage Size

    - by dcp3450
    I'm pulling a lot of text (which includes some HTML) from a MS SQL Server database, and I'm not getting all of it. The text is stored perfectly in the database; however, when I run the query to get the data, it only pulls part of the text. I pull the data using odbc_exec and store it using $variable = odbc_result($runquery,"body"). If I display the content with odbc_result_all($runquery), I get part of the content. If I use echo $body; I get part of the content, then some garbage, and then part of the text from the beginning: a very strange response. Is there a size limit? Any ideas what I'm missing here?

  • Using AND/OR mysql commands with FROM_UNIXTIME

    - by scatteredbomb
    Trying to write a query in PHP/MySQL to get "Upcoming Items" in a calendar. We store the dates in the DB as unix time. Here's what my query looks like right now: SELECT * FROM `calendar` WHERE (`eventDate` > '$yesterday') OR (FROM_UNIXTIME(eventDate, '%m') > '$current_month' AND `$yearly` = '1') ORDER BY `eventDate` LIMIT 4 This is giving me the error "Unknown column '' in 'where clause'". I'm sure it has to do with my use of parentheses (which I've never used before in a query) and the FROM_UNIXTIME command. Can someone help me out and let me know how I've screwed this up? Thanks!
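
    The error points at the backticks around $yearly: if that PHP variable is unset, the SQL ends up containing `` = '1', a reference to a column with an empty name, which is exactly what "Unknown column ''" complains about. Assuming the intent was a column named yearly, a sketch of the corrected query with concrete placeholder values in place of the PHP variables:

        SELECT *
        FROM `calendar`
        WHERE (`eventDate` > 1273000000)                -- $yesterday, a unix timestamp
           OR (FROM_UNIXTIME(eventDate, '%m') > '05'    -- $current_month, zero-padded
               AND `yearly` = '1')                      -- a column name, not a PHP variable
        ORDER BY `eventDate`
        LIMIT 4;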
