Search Results

Search found 16731 results on 670 pages for 'memory limit'.


  • I am having trouble with my first project in Ruby on Rails

    - by Sebastian
    Here's my index action in the books controller: http://pastebin.com/XdtGRQKV
    Here's the view for that action: http://pastebin.com/nQFy400m
    Here's the result without being logged in: http://i.imgur.com/rQoiw.jpg
    Here's the result when I'm logged in as the 'admin' user: http://i.imgur.com/E1CUr.jpg
    The problem is that, in the view, the 'user' variable seems to be empty (or not loaded) before line 25, but after line 25 it has the expected values. I have tried initializing a variable in the index method of the books controller, but I get exactly the same results. Thanks in advance! (I had to post the links as plain text because of the Stack Overflow link limit.)

    Read the article

  • Does the order of conditions in a WHERE clause affect MySQL performance?

    - by Greg
    Say that I have a long, expensive query, packed with conditions, searching a large number of rows. I also have one particular condition, like a company id, that will limit the number of rows that need to be searched considerably, narrowing it down to dozens from hundreds of thousands. Does it make any difference to MySQL performance whether I do this:

        SELECT * FROM clients
        WHERE (firstname LIKE :foo OR lastname LIKE :foo OR phone LIKE :foo)
          AND (firstname LIKE :bar OR lastname LIKE :bar OR phone LIKE :bar)
          AND company = :ugh

    or this:

        SELECT * FROM clients
        WHERE company = :ugh
          AND (firstname LIKE :foo OR lastname LIKE :foo OR phone LIKE :foo)
          AND (firstname LIKE :bar OR lastname LIKE :bar OR phone LIKE :bar)
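    As a quick sketch of how one might check this, EXPLAIN shows whether the optimizer produces the same plan for both orderings; identical output would suggest the condition order does not matter. The literal values below are placeholders standing in for the bound parameters.

        -- Sketch: compare the plans for both orderings (the literals stand in
        -- for :foo, :bar and :ugh; identical plans suggest MySQL reorders the
        -- conditions itself).
        EXPLAIN SELECT * FROM clients
        WHERE company = 42
          AND (firstname LIKE '%foo%' OR lastname LIKE '%foo%' OR phone LIKE '%foo%')
          AND (firstname LIKE '%bar%' OR lastname LIKE '%bar%' OR phone LIKE '%bar%');

        EXPLAIN SELECT * FROM clients
        WHERE (firstname LIKE '%foo%' OR lastname LIKE '%foo%' OR phone LIKE '%foo%')
          AND (firstname LIKE '%bar%' OR lastname LIKE '%bar%' OR phone LIKE '%bar%')
          AND company = 42;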

    Read the article

  • Echo-ing Only Available Database Result

    - by Robert Hanson
    I have this associative array:

        $Fields = array("row0" => "Yahoo ID", "row1" => "MSN ID", "row2" => "Gtalk ID");

    On the other side, I have this SQL query:

        SELECT YahooID, MSNID, GTalkID FROM UserTable WHERE Username = '$Username' LIMIT 1;

    The results may vary, because some users only have a Yahoo ID and some have others. For example, if I have this result:

        $row[0] = NONE //means YahooID = NONE
        $row[1] = [email protected]
        $row[2] = [email protected]

    then how do I produce this output (echo):

        MSN ID = [email protected]
        Gtalk ID = [email protected]

    Since the Yahoo ID does not exist, the result should be MSN and Gtalk only. 'MSN ID' and 'Gtalk ID' come from the associative array, while '[email protected]' and '[email protected]' come from the SQL result. Thanks!

    Read the article

  • How can this be done with (N)Hibernate?

    - by Vilx-
    I'm creating a Windows Forms application with NHibernate. It's an MDI application, so there is no limit to how many forms the user can have open at the same time (probably many). For most forms I want to have an "OK" and a "Cancel" button. Both close the form, but "OK" also saves the modified data to the DB. The forms can be pretty complex, and the modifications are likely to touch a whole graph of objects, adding some, deleting some, and changing some more. It would be good if the changes could be automatically detected and persisted as needed, without the need to manually keep track of each of them. What would be a good way to do this?

    Extra information: I can make whatever DB schema I want. I'm using MSSQL 2008 and have currently decided on GUID primary keys (with the guid.comb generator) and a TIMESTAMP column for optimistic concurrency. I tried simply setting the FlushMode of an NHibernate ISession to Never, doing all modifications as needed, and then calling Flush() if the user clicked OK. But that didn't work.

    Read the article

  • Counting matches in a query on a large table is very slow

    - by Roy Roes
    I have a MySQL table "items" with 2 integer fields: seid and tiid. The table has about 35,000,000 records, so it's very large.

        seid  tiid
        ----  ----
           1     1
           2     2
           2     3
           2     4
           3     4
           4     1
           4     2

    The table has a composite primary key on both fields, an index on seid and an index on tiid. Someone types in 1 or more tiid values, and I would like to get the seid with the most results. For example, when someone types 1,2,3, I would like to get seid 2 and 4 as the result. They both have 2 matches on the tiid values. My query so far:

        SELECT COUNT(*) AS c, seid
        FROM items
        WHERE tiid IN (1,2,3)
        GROUP BY seid
        HAVING c = (SELECT COUNT(*) AS c
                    FROM items
                    WHERE tiid IN (1,2,3)
                    GROUP BY seid
                    ORDER BY c DESC
                    LIMIT 1)

    But this query is extremely slow because of the large table. Does anyone know how to construct a better query for this purpose?
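    As a hedged sketch (not from the original post): since the aggregation has to scan every matching row either way, a covering index that contains both columns usually helps most here, and the tie-keeping query can reuse it unchanged.

        -- Sketch, assuming the items(seid, tiid) table described above.
        -- A covering index on (tiid, seid) lets both the outer query and the
        -- subquery be answered from the index alone, without touching rows.
        CREATE INDEX idx_items_tiid_seid ON items (tiid, seid);

        -- Same logic as the original, kept for ties, now index-only:
        SELECT seid, COUNT(*) AS c
        FROM items
        WHERE tiid IN (1, 2, 3)
        GROUP BY seid
        HAVING c = (SELECT COUNT(*) AS c2
                    FROM items
                    WHERE tiid IN (1, 2, 3)
                    GROUP BY seid
                    ORDER BY c2 DESC
                    LIMIT 1);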

    Read the article

  • Strip H1 tag and its contents

    - by Andy
    How can I remove <h1>and its contents</h1> from the following line:

        strip_tags(substr($article->content(),0,255)

    from this complete code:

        <?php $last_articles = $this->children(array('limit'=>5, 'order'=>'page.created_on DESC')); ?>
        <?php foreach ($last_articles as $article): ?>
        <div class="entry">
          <h3><?php echo $article->link($article->title); ?></h3>
          <?php echo strip_tags(substr($article->content(),0,255).'...', '<p><a>'); ?>
        </div>
        <?php endforeach; ?>

    Any help would be appreciated.

    Read the article

  • Android font out of view on small screen

    - by user581949
    Hi everyone. I have several text views that take up the majority of the screen in landscape view in a RelativeLayout, and the font size I have set is quite big (150dp). The text views are all timers, and the furthest to the right is the "seconds" text view. My problem is that when testing on a phone with a small screen resolution, the seconds are way outside the limit of the screen and can't be seen. They are in perfect place on normal to large screen resolutions, just not on a small screen. Is there any way I can force the "seconds" text view to stay on screen, without adjusting the font size or the margins between each text view? Even if it means looking cramped on a small screen, I can live with that. Any help is greatly appreciated. Thanks. This is the corresponding code:

    Read the article

  • PHP/MySQL search in 2 columns in 2 tables

    - by andrew fishwick
    Hey, I have two tables in one DB, one called Cottages and one called Hotels. Both tables have the same named fields. I basically have a search bar that I want to search both of the fields in both of the tables (the two fields being called "Name" and "Location"). So far I have:

        $sql = mysql_query("SELECT * FROM Cottages WHERE Name LIKE '%$term%' or Location LIKE '%$term%' LIMIT 0, 30");

    But this only searches the Cottages table. How can I make it search both the Cottages and Hotels tables? Andy
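    One hedged sketch (assuming both tables really do share the Name and Location columns as described): combine the two searches with UNION ALL so a single statement covers both tables.

        -- Sketch: search Cottages and Hotels in one statement; the literal
        -- 'term' stands in for the escaped $term value from the search bar,
        -- and the source column labels which table each row came from.
        SELECT 'Cottages' AS source, Name, Location
        FROM Cottages
        WHERE Name LIKE '%term%' OR Location LIKE '%term%'
        UNION ALL
        SELECT 'Hotels' AS source, Name, Location
        FROM Hotels
        WHERE Name LIKE '%term%' OR Location LIKE '%term%'
        LIMIT 0, 30;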

    Read the article

  • Get the sum by comparing two tables

    - by Ismail Gunes
    I have two tables, ProdBiscuit as tb and StockData as sd. I have to get the sum of the quantity in StockData (quantite) under the condition (sd.status > 0 AND sd.prodid = tb.id AND sd.matcuisine = 3). Here is my SQL query:

        SELECT tb.id, tb.nom, tb.proddate, tb.qty, tb.stockrecno
        FROM ProdBiscuit AS tb
        JOIN (SELECT id, prodid, matcuisine, status, SUM(quantite) AS rq
              FROM StockData) AS sd
          ON (tb.id = sd.prodid AND sd.status > 0 AND sd.matcuisine = 3)
        LIMIT 25 OFFSET @Myid

    This gives me no rows at all. There are only 3 rows in ProdBiscuit and 11 rows in StockData, and only 2 rows in StockData satisfy the condition. As shown in the picture, only two rows meet the condition. What is wrong with my query? PS: The green lines on the image show the condition in my query.
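    A hedged sketch of one likely fix (based only on the schema visible above): the derived table aggregates the whole of StockData into a single row, so filtering and grouping inside it, then joining on prodid, is the usual shape.

        -- Sketch: filter and group inside the derived table, then join.
        -- Column names are taken from the query above; @Myid is the caller's
        -- paging parameter as in the original.
        SELECT tb.id, tb.nom, tb.proddate, tb.qty, tb.stockrecno, sd.rq
        FROM ProdBiscuit AS tb
        JOIN (SELECT prodid, SUM(quantite) AS rq
              FROM StockData
              WHERE status > 0 AND matcuisine = 3
              GROUP BY prodid) AS sd
          ON sd.prodid = tb.id
        LIMIT 25 OFFSET @Myid;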

    Read the article

  • How big can a SQL Server row be before it's a problem?

    - by John Leidegren
    Occasionally I run into this limitation using SQL Server 2000: a row size cannot exceed 8K bytes. SQL Server 2000 isn't really state of the art, but it's still in production code, and because some tables are denormalized that's a problem. However, this seems to be a non-issue with SQL Server 2005. At least, it won't complain that row sizes are bigger than 8K, but what happens instead, and why was this a problem in SQL Server 2000? Do I need to care about my rows growing? Should I try to avoid large rows? Are varchar(max) and varbinary(max) a solution, or are they expensive in terms of size in the database and/or CPU time? Why do I care at all about specifying the length of a particular column, when it seems like it's just a matter of time before someone's going to hit that upper limit?
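    As a hedged illustration (not part of the original question): SQL Server 2005 added row-overflow storage, so large variable-length values can be pushed off the 8,060-byte data page, and the MAX types behave the same way. The table below is purely illustrative.

        -- Sketch (T-SQL): a wide, denormalized table whose rows could exceed
        -- the 8,060-byte limit; in SQL Server 2005+ the oversized
        -- variable-length values are moved to row-overflow/LOB pages instead
        -- of failing the insert.
        CREATE TABLE dbo.WideExample (
            Id         INT IDENTITY(1,1) PRIMARY KEY,
            Title      VARCHAR(4000)  NULL,
            Summary    VARCHAR(4000)  NULL,   -- together these can exceed one page
            Body       VARCHAR(MAX)   NULL,   -- stored off-row when it grows large
            Attachment VARBINARY(MAX) NULL
        );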

    Read the article

  • Is this SQL select code following good practice?

    - by acidzombie24
    I am using SQLite and will port to MySQL (5) later. I wanted to know if I am doing something I shouldn't be doing. I purposely tried to design it so I'd compare to 0 instead of 1 (I changed hasApproved to NotApproved to do this; not a big deal, and I haven't written any code). I was told I should never need to write a subquery, but I do here. My Votes table is just id, ip, postid (I don't think I can write that subquery as a join instead?), and that's pretty much all that is on my mind. Naming conventions I don't really care about, since the tables are created via reflection and it's all over the place.

        SELECT id, name, body, upvotes, downvotes,
               (SELECT 1 FROM UpVotes WHERE IPAddr=? AND post=Post.id) AS myup,
               (SELECT 1 FROM DownVotes WHERE IPAddr=@0 AND post=Post.id) AS mydown
        FROM Post
        WHERE flag = '0'
        LIMIT ?, ?
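    For what it's worth, here is a hedged sketch of the same lookup written with joins instead of correlated subqueries, assuming UpVotes and DownVotes hold at most one row per (IPAddr, post) pair; it is equivalent only under that assumption.

        -- Sketch: LEFT JOIN formulation of the vote flags (parameter markers
        -- kept in the style of the original query).
        SELECT p.id, p.name, p.body, p.upvotes, p.downvotes,
               CASE WHEN u.post IS NULL THEN NULL ELSE 1 END AS myup,
               CASE WHEN d.post IS NULL THEN NULL ELSE 1 END AS mydown
        FROM Post p
        LEFT JOIN UpVotes   u ON u.post = p.id AND u.IPAddr = ?
        LEFT JOIN DownVotes d ON d.post = p.id AND d.IPAddr = ?
        WHERE p.flag = '0'
        LIMIT ?, ?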

    Read the article

  • Alternatives to FastDateFormat for efficient date parsing?

    - by Tom Tucker
    Well aware of the performance and thread-safety issues with SimpleDateFormat, I decided to go with FastDateFormat, until I realized that FastDateFormat is for formatting only, no parsing! Is there an alternative to FastDateFormat that is ready to use out of the box and much faster than SimpleDateFormat? I believe FastDateFormat is one of the faster ones, so anything that is about as fast would do. Just curious: any idea why FastDateFormat does not support parsing? Doesn't that seriously limit its use? Thanks!

    EDIT: Holy crap, I just left a comment and that literally REMOVED a good answer! This appears to be a serious bug on Stack Overflow!

    Read the article

  • Ferret multiple-model search - undefined method `aaf_index' for #<Class:>

    - by jissy
    I have 2 models, A and B. I want to perform a text search using 3 fields: title and description (part of A) and comment (part of B), where I want to include the comment field in the Ferret search. What other changes are needed?

        class A < ActiveRecord::Base
          has_one :b
          acts_as_ferret :fields => [:title, :description], :additional_fields => [:comment_text]

          def comment_text
            return b.comment
          end
        end

    In a_controller, I wrote:

        @search = A.find_with_ferret(
          params[:st][:text_search],
          :limit => :all,
          :multi => [B]
        ).paginate :per_page => 10, :page => params[:page]

    The second model is given below:

        class B < ActiveRecord::Base
          belongs_to :a
        end

    While using the :multi => [B] option with find_with_ferret, I get the following error:

        undefined method `aaf_index' for #ClassName

    Read the article

  • How to split and join a byte array in C++?

    - by Richard Knop
    I have a byte array like this:

        lzo_bytep out; // my byte array
        size_t uncompressedImageSize = 921600;
        out = (lzo_bytep) malloc((uncompressedImageSize + uncompressedImageSize / 16 + 64 + 3));
        wrkmem = (lzo_voidp) malloc(LZO1X_1_MEM_COMPRESS);
        // Now the byte array has 802270 bytes
        r = lzo1x_1_compress(imageData, uncompressedImageSize, out, &out_len, wrkmem);

    How can I split it into smaller parts under 65,535 bytes (the byte array is one large packet which I want to send over UDP, which has an upper limit of 65,535 bytes) and then join those small chunks back into a continuous array?

    Read the article

  • Take advantage of multiple cores executing SQL statements

    - by willvv
    I have a small application that reads XML files and inserts the information into a SQL DB. There are ~300,000 files to import, each one with ~1,000 records. I started the application on 20% of the files and it has been running for 18 hours now; I hope I can improve this time for the rest of the files. I'm not using a multi-threaded approach, but since the computer I'm running the process on has 4 cores, I was thinking of doing so to get some improvement in performance (although I guess the main problem is the I/O and not only the processing). I was thinking of using the BeginExecuteNonQuery() method on the SqlCommand object I create for each insertion, but I don't know if I should limit the maximum number of simultaneous threads (nor do I know how to do it). What's your advice for getting the best CPU utilization? Thanks
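    As a hedged aside (the table and column names below are invented for illustration, not taken from the post): with insert-heavy loads, batching rows into set-based statements or a staging table often buys more than adding threads, because it cuts per-command round trips and log flushes.

        -- Sketch (T-SQL): load parsed records into a staging table in bulk
        -- (for example via SqlBulkCopy from the application), then move them
        -- into the destination in one set-based statement inside a transaction.
        BEGIN TRANSACTION;

        INSERT INTO dbo.Records (FileName, RecordXml)
        SELECT FileName, RecordXml
        FROM dbo.StagingRecords;

        TRUNCATE TABLE dbo.StagingRecords;

        COMMIT TRANSACTION;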

    Read the article

  • How can I write faster JavaScript?

    - by a paid nerd
    I'm writing an HTML5 canvas visualization. According to the Chrome Developer Tools profiler, 90% of the work is being done in (program), which I assume is the V8 interpreter at work calling functions and switching contexts and whatnot. Other than logic optimizations (e.g., only redrawing parts of the visualization that have changed), what can I do to optimize the CPU usage of my JavaScript? I'm willing to sacrifice some amount of readability and extensibility for performance. Is there a big list I'm missing because my Google skills suck? I have some ideas but I'm not sure if they're worth it:

    - Limit function calls
    - When possible, use arrays instead of objects and properties
    - Use variables for math operation results as much as possible
    - Cache common math operations such as Math.PI / 180
    - Use sin and cos approximation functions instead of Math.sin() and Math.cos()
    - Reuse objects when passing around data instead of creating new ones
    - Replace Math.abs() with ~~
    - Study jsperf.com until my eyes bleed
    - Use a preprocessor on my JavaScript to do some of the above operations

    Read the article

  • Need to change page location for a WordPress site

    - by PhilipK
    UPDATED: I'm building a WP blog around an existing website: http://uk2canadapensiontransfers.com/news.php. When I use the following default code...

        <?php wp_get_archives('title_li=&type=postbypost&limit=10'); ?>

    ...or previous and next page links, WordPress attaches strings like ?p=%post_id% to index.php. But I need them to attach to news.php, e.g. news.php?p=%post_id%. How can I change the WordPress settings so that news.php can be my index and index.php remains outside of the WP system?

    Read the article

  • Loop on enumeration values

    - by Rachel
    How awful is it - or is it perfectly acceptable - to index a loop on an enumeration? I have an enumeration defined. The values of the literals are default values. The assigned values do not have any significance, will not have any significance, and the values of any literals added in the future will also not have any significance. It's just defined to limit the allowed values and to make things easier to follow. Therefore the values will always start at 0 and increase by 1. Can I set up a loop like so:

        enum MyEnum { value1, value2, value3, maxValue };

        for (MyEnum i = value1; i < maxValue; i = static_cast<MyEnum>(i + 1)) {}

    Read the article

  • MySQL COUNT() multiple columns

    - by liam
    Hello, I'm trying to fetch the most popular tags from all videos in my database (ignoring blank tags). I also need the 'flv' for each tag. I have this working as I want if each video has one tag:

        SELECT tag_1, flv, COUNT(tag_1) AS tagcount
        FROM videos
        WHERE NOT tag_1=''
        GROUP BY tag_1
        ORDER BY tagcount DESC
        LIMIT 0, 10

    However in my database, each video is allowed three tags - tag_1, tag_2 and tag_3. Is there a way to get the most popular tags reading from multiple columns? The record structure is:

        +-----------------+--------------+------+-----+---------+----------------+
        | Field           | Type         | Null | Key | Default | Extra          |
        +-----------------+--------------+------+-----+---------+----------------+
        | id              | int(11)      | NO   | PRI | NULL    | auto_increment |
        | flv             | varchar(150) | YES  |     | NULL    |                |
        | tag_1           | varchar(75)  | YES  |     | NULL    |                |
        | tag_2           | varchar(75)  | YES  |     | NULL    |                |
        | tag_3           | varchar(75)  | YES  |     | NULL    |                |
        +-----------------+--------------+------+-----+---------+----------------+
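    A hedged sketch of one common approach (using only the columns shown above): unpivot the three tag columns with UNION ALL, then count the combined list. How to pick a representative flv per tag is left open, since a tag can appear in many videos.

        -- Sketch: count tags across tag_1, tag_2 and tag_3 together.
        SELECT tag, COUNT(*) AS tagcount
        FROM (
            SELECT tag_1 AS tag FROM videos WHERE tag_1 <> ''
            UNION ALL
            SELECT tag_2 FROM videos WHERE tag_2 <> ''
            UNION ALL
            SELECT tag_3 FROM videos WHERE tag_3 <> ''
        ) AS all_tags
        GROUP BY tag
        ORDER BY tagcount DESC
        LIMIT 0, 10;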

    Read the article

  • Ensuring uniqueness on a varchar greater than 255 in MySQL/InnoDB

    - by Vijay Boyapati
    I have a table which contains HTML entries for news pages. When I initially designed it, I used the URL as the primary key. I've learned the error of my ways, because left-joining on it is super slow. So I want to redesign the table with an integer (id) primary key, but still keep the rows unique based on the URL. The problem is that I've found URLs longer than 255 characters, and MySQL isn't letting me create a key on the URL. I'm using an InnoDB/UTF8 table. From what I understand, it's using multiple bytes per character, with a limit of 767 bytes for the key (in InnoDB). I would really love suggestions on an elegant way of keeping the rows unique based on URL, while using an integer primary key. Thanks!
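    A hedged sketch of one common workaround (all names below are illustrative, not from the post): store the URL in a TEXT column and enforce uniqueness on a fixed-length hash of it, which fits comfortably inside the InnoDB key-size limit.

        -- Sketch: integer surrogate key plus a unique index on a hash of the URL.
        CREATE TABLE news_pages (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            url      TEXT NOT NULL,
            url_hash CHAR(32) NOT NULL,            -- MD5 of the URL, always 32 chars
            html     MEDIUMTEXT,
            UNIQUE KEY uniq_url_hash (url_hash)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        -- Duplicate URLs are rejected by the unique index on the hash:
        INSERT INTO news_pages (url, url_hash, html)
        VALUES ('http://example.com/a/very/long/url',
                MD5('http://example.com/a/very/long/url'),
                '<html>...</html>');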

    Read the article

  • Reading long lines from text file

    - by sonofdelphi
    I am using the following code for reading lines from a text file. What is the best method for handling the case where the line is greater than the limit SIZE_MAX_LINE?

        void TextFileReader::read(string inFilename)
        {
            ifstream xInFile(inFilename.c_str());
            if (!xInFile) {
                return;
            }

            char acLine[SIZE_MAX_LINE + 1];
            while (xInFile) {
                xInFile.getline(acLine, SIZE_MAX_LINE);
                if (xInFile) {
                    m_sStream.append(acLine); // Appending read line to string
                }
            }
            xInFile.close();
        }

    Read the article

  • Advanced find in Rails

    - by jriff
    Hi all, I really suck at Rails' finders beyond the most obvious. I always resort to SQL when things get more advanced than:

        Model.find(:all, :conditions => ['field>? and field<? and id in (select id from table)', 1, 2])

    I have this method:

        def self.get_first_validation_answer(id)
          a=find_by_sql("
            select answers.*, answers_registrations.answer_text
            from answers_registrations
            left join answers on answers_registrations.answer_id=answers.id
            where (answers_registrations.question_id in (select id from questions where validation_question=true))
              and (sale_registration_id=#{id})
            limit 1
          ").first
          a.answer_text || a.text if a
        end

    Can someone create a find method that gets me what I want? Regards, Jacob

    Read the article

  • Selecting the first row out of many SQL joins

    - by IcedDante
    Alright, so I'm putting together a query to select a revision of a particular novel:

        SELECT Catalog.WbsId, Catalog.Revision, NovelRevision.Revision
        FROM Catalog, BookInCatalog
        INNER JOIN NovelMaster
        INNER JOIN HasNovelRevision
        INNER JOIN NovelRevision ON HasNovelRevision.right = NovelRevision.obid
        ON HasNovelRevision.Left = NovelMaster.obid
        ON NovelMaster.obid = BookInCatalog.Right
        WHERE Catalog.obid = BookInCatalog.Left;

    This returns all revisions that are in the Novel Master, for each Novel Master that is in the catalog. The problem is, I only want the FIRST revision of each Novel Master in the catalog. How do I go about doing that? Oh, and by the way: my flavor of SQL is hobbled, as many others are, in that it does not support the LIMIT function.
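    A hedged sketch of one portable approach, assuming NovelRevision.Revision is what orders revisions and obid identifies rows: keep only the row whose revision equals the minimum revision for that novel, which needs no LIMIT support.

        -- Sketch: one row per novel in the catalog, the lowest-numbered revision.
        SELECT c.WbsId, c.Revision AS CatalogRevision, nr.Revision AS NovelRevisionNo
        FROM Catalog c
        INNER JOIN BookInCatalog bic ON bic.Left = c.obid
        INNER JOIN NovelMaster nm ON nm.obid = bic.Right
        INNER JOIN HasNovelRevision hnr ON hnr.Left = nm.obid
        INNER JOIN NovelRevision nr ON nr.obid = hnr.Right
        WHERE nr.Revision = (SELECT MIN(nr2.Revision)
                             FROM HasNovelRevision hnr2
                             INNER JOIN NovelRevision nr2 ON nr2.obid = hnr2.Right
                             WHERE hnr2.Left = nm.obid);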

    Read the article

  • Is WinForms ListView in VirtualMode limited to 100,000,000 rows?

    - by damageboy
    I have a grid scenario with 500,000,000 rows I would like to display in a ListView. If I artificially limit my ListView to display 100,000,000:

        _listView.VirtualListSize = _data.Count;
        if (_listView.VirtualListSize > 100000000)
            _listView.VirtualListSize = 100000000;

    everything works fine (in VirtualMode, naturally). When I change my code to:

        _listView.VirtualListSize = _data.Count;
        if (_listView.VirtualListSize > 100000001)
            _listView.VirtualListSize = 100000001;

    the ListView displays an empty grid... Is this a Microsoft bug? Where is this coming from? Is this a Win32 ListView limitation? Most importantly, why is this not documented?

    Read the article

  • Cache bandwidth per tick for modern CPUs

    - by osgx
    Hello. What is the speed of cache access for modern CPUs? How many bytes can be read or written from memory every processor clock tick by an Intel P4, Core2, Core i7, or AMD chip? Please answer with both theoretical numbers (width of the ld/st unit with its throughput in uops/tick) and practical numbers (even memcpy speed tests, or the STREAM benchmark), if any. PS: This is a question about the maximal rate of load/store instructions in assembler. There is a theoretical rate of loading (all instructions per tick being the widest loads), but the processor can deliver only part of that, which is the practical limit on loading.

    Read the article
