Search Results

Search found 5021 results on 201 pages for 'limit'.


  • Duplicate display of posts from controller - conditions and joins problem - Ruby on Rails

    - by bgadoci
    I have built a blog application using Ruby on Rails. In the application I have posts and tags: Post has_many :tags and Tag belongs_to :post. In the /views/posts/index.html view I want to display two things. The first is a listing of all posts ordered by 'created_at DESC'; then, in the sidebar, I want to reference my Tags table, group its records, and display each group as a link that shows all posts with that tag. With the code below I am able to display the tag groups and successfully display all posts with a given tag. There are two problems with it, though:

    1. In the /posts view, the code seems to be referencing the Tag table and displaying a post multiple times, directly correlated to how many tags that post has (i.e. if the post has 3 tags it will display the post 3 times).
    2. The /posts view only displays posts that have tags. If a post doesn't have a tag, it is not displayed at all.

    /views/posts/index.html.erb

        <%= render :partial => @posts %>

    /views/posts/_post.html.erb

        <% div_for post do %>
          <h2><%= link_to_unless_current h(post.title), post %></h2>
          <i>Posted <%= time_ago_in_words(post.created_at) %></i> ago
          <%= simple_format h truncate(post.body, :length => 300) %>
          <%= link_to "Read More", post %> |
          <%= link_to "View & Add Comments (#{post.comments.count})", post %>
          <hr/>
        <% end %>

    /models/post.rb

        class Post < ActiveRecord::Base
          validates_presence_of :body, :title
          has_many :comments, :dependent => :destroy
          has_many :tags, :dependent => :destroy
          cattr_reader :per_page
          @@per_page = 10
        end

    posts_controller.rb

        def index
          @tag_counts = Tag.count(:group => :tag_name, :order => 'updated_at DESC', :limit => 10)
          @posts = Post.all(:joins => :tags,
                            :conditions => (params[:tag_name] ? { :tags => { :tag_name => params[:tag_name] }} : {})
                           ).paginate :page => params[:page], :per_page => 5
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end

    Read the article

  • Python - calculate multinomial probability density functions on large dataset?

    - by Seafoid
    Hi, I originally intended to use MATLAB to tackle this problem, but the built-in functions have limitations that do not suit my goal. The same limitation occurs in NumPy. I have two tab-delimited files. The first shows amino acid residue, frequency and count for an in-house database of protein structures, i.e.

        A 0.25 1
        S 0.25 1
        T 0.25 1
        P 0.25 1

    The second file consists of quadruplets of amino acids and the number of times they occur, i.e.

        ASTP 1

    Note, there are 8,000 such quadruplets. Based on the background frequency of occurrence of each amino acid and the count of quadruplets, I aim to calculate the multinomial probability density function for each quadruplet and subsequently use it as the expected value in a maximum likelihood calculation. The multinomial distribution is as follows:

        f(x|n, p) = n!/(x1!*x2!*...*xk!)*((p1^x1)*(p2^x2)*...*(pk^xk))

    where x is the number of each of k outcomes in n trials with fixed probabilities p. n is 4 in all cases in my calculation. I have created three functions to calculate this distribution.

        # functions for multinomial distribution
        def expected_quadruplets(x, y):
            expected = x*y
            return expected

        # calculates the probabilities of occurrence raised to the number of occurrences
        def prod_prob(p1, a, p2, b, p3, c, p4, d):
            prob_prod = (pow(p1, a))*(pow(p2, b))*(pow(p3, c))*(pow(p4, d))
            return prob_prod

        # factorial() and multinomial_coefficient() work in tandem to calculate C, the multinomial coefficient
        def factorial(n):
            if n <= 1:
                return 1
            return n*factorial(n-1)

        def multinomial_coefficient(a, b, c, d):
            n = 24.0
            multi_coeff = (n/(factorial(a) * factorial(b) * factorial(c) * factorial(d)))
            return multi_coeff

    The problem is how best to structure the data in order to tackle the calculation most efficiently, in a manner that I can read (you guys write some cryptic code :-)) and that will not create an overflow or runtime error. To date my data is represented as nested lists.

        amino_acids = [['A', '0.25', '1'], ['S', '0.25', '1'], ['T', '0.25', '1'], ['P', '0.25', '1']]
        quadruplets = [['ASTP', '1']]

    I initially intended calling these functions within a nested for loop, but this resulted in runtime errors or overflow errors. I know that I can reset the recursion limit, but I would rather do this more elegantly. I had the following:

        for i in quadruplets:
            quad = i[0].split(' ')
            for j in amino_acids:
                for k in quadruplets:
                    for v in k:
                        if j[0] == v:
                            multinomial_coefficient(int(j[2]), int(j[2]), int(j[2]), int(j[2]))

    I haven't really gotten to how to incorporate the other functions yet. I think that my current nested list arrangement is suboptimal. I wish to compare each letter within the string 'ASTP' with the first component of each sub-list in amino_acids. Where a match exists, I wish to pass the appropriate numeric values to the functions using indices. Is there a better way? Can I append the appropriate numbers for each amino acid and quadruplet to a temporary data structure within a loop, pass this to the functions and clear it for the next iteration? Thanks, S :-)
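    To make the calculation concrete, here is a minimal, non-recursive sketch (not the poster's code) of the same multinomial probability in Python. It assumes the background frequencies have already been parsed into a dict; math.factorial avoids the hand-rolled recursion, and collections.Counter handles quadruplets with repeated residues:

        from math import factorial
        from collections import Counter

        def multinomial_pmf(quad, freqs):
            # P(observing the residues in quad) for n = len(quad) trials,
            # given background frequencies freqs: {residue: probability}.
            counts = Counter(quad)            # 'AASP' -> {'A': 2, 'S': 1, 'P': 1}
            coeff = factorial(len(quad))
            prob = 1.0
            for residue, x in counts.items():
                coeff //= factorial(x)        # build n!/(x1!*x2!*...*xk!) step by step
                prob *= freqs[residue] ** x
            return coeff * prob

        freqs = {'A': 0.25, 'S': 0.25, 'T': 0.25, 'P': 0.25}
        print(multinomial_pmf('ASTP', freqs))   # 4!/(1!*1!*1!*1!) * 0.25**4 = 0.09375

    Looping this over 8,000 quadruplets is cheap, and nothing here can overflow or hit the recursion limit.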

    Read the article

  • getting rid of filesort on WordPress MySQL query

    - by Hans
    An instance of WordPress that I manage goes down about once a day due to this monster MySQL query taking far too long: SELECT SQL_CALC_FOUND_ROWS distinct wp_posts.* FROM wp_posts LEFT JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id) LEFT JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id LEFT JOIN wp_ec3_schedule ec3_sch ON ec3_sch.post_id=id WHERE 1=1 AND wp_posts.ID NOT IN ( SELECT tr.object_id FROM wp_term_relationships AS tr INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id WHERE tt.taxonomy = 'category' AND tt.term_id IN ('1050') ) AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish') AND NOT EXISTS (SELECT * FROM wp_term_relationships JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id WHERE wp_term_relationships.object_id = wp_posts.ID AND wp_term_taxonomy.taxonomy = 'category' AND wp_term_taxonomy.term_id IN (533,3567) ) AND ec3_sch.post_id IS NULL GROUP BY wp_posts.ID ORDER BY wp_posts.post_date DESC LIMIT 0, 10; What do I have to do to get rid of the very slow filesort? I would think that the multicolumn type_status_date index would be fast enough. The EXPLAIN EXTENDED output is below. +----+--------------------+-----------------------+--------+-----------------------------------+------------------+---------+---------------------------------------------------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+--------------------+-----------------------+--------+-----------------------------------+------------------+---------+---------------------------------------------------------------------------------+------+----------------------------------------------+ | 1 | PRIMARY | wp_posts | ref | type_status_date | type_status_date | 124 | const,const | 7034 | Using where; Using temporary; Using filesort | | 1 | PRIMARY | wp_term_relationships | ref | PRIMARY | PRIMARY | 8 | bwog_wordpress_w.wp_posts.ID | 373 | Using index | | 1 | PRIMARY | wp_term_taxonomy | eq_ref | PRIMARY | PRIMARY | 8 | bwog_wordpress_w.wp_term_relationships.term_taxonomy_id | 1 | Using index | | 1 | PRIMARY | ec3_sch | ref | post_id_index | post_id_index | 9 | bwog_wordpress_w.wp_posts.ID | 1 | Using where; Using index | | 3 | DEPENDENT SUBQUERY | wp_term_taxonomy | range | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106 | NULL | 2 | Using where | | 3 | DEPENDENT SUBQUERY | wp_term_relationships | eq_ref | PRIMARY,term_taxonomy_id | PRIMARY | 16 | bwog_wordpress_w.wp_posts.ID,bwog_wordpress_w.wp_term_taxonomy.term_taxonomy_id | 1 | Using index | | 2 | DEPENDENT SUBQUERY | tt | const | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106 | const,const | 1 | | | 2 | DEPENDENT SUBQUERY | tr | eq_ref | PRIMARY,term_taxonomy_id | PRIMARY | 16 | func,const | 1 | Using index | +----+--------------------+-----------------------+--------+-----------------------------------+------------------+---------+---------------------------------------------------------------------------------+------+----------------------------------------------+ 8 rows in set, 2 warnings (0.05 sec) And CREATE TABLE: CREATE TABLE `wp_posts` ( `ID` bigint(20) unsigned NOT NULL auto_increment, `post_author` bigint(20) unsigned NOT NULL default '0', `post_date` datetime NOT NULL default '0000-00-00 00:00:00', `post_date_gmt` datetime 
NOT NULL default '0000-00-00 00:00:00', `post_content` longtext NOT NULL, `post_title` text NOT NULL, `post_excerpt` text NOT NULL, `post_status` varchar(20) NOT NULL default 'publish', `comment_status` varchar(20) NOT NULL default 'open', `ping_status` varchar(20) NOT NULL default 'open', `post_password` varchar(20) NOT NULL default '', `post_name` varchar(200) NOT NULL default '', `to_ping` text NOT NULL, `pinged` text NOT NULL, `post_modified` datetime NOT NULL default '0000-00-00 00:00:00', `post_modified_gmt` datetime NOT NULL default '0000-00-00 00:00:00', `post_content_filtered` text NOT NULL, `post_parent` bigint(20) unsigned NOT NULL default '0', `guid` varchar(255) NOT NULL default '', `menu_order` int(11) NOT NULL default '0', `post_type` varchar(20) NOT NULL default 'post', `post_mime_type` varchar(100) NOT NULL default '', `comment_count` bigint(20) NOT NULL default '0', `robotsmeta` varchar(64) default NULL, PRIMARY KEY (`ID`), KEY `post_name` (`post_name`), KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`), KEY `post_parent` (`post_parent`), KEY `post_date` (`post_date`), FULLTEXT KEY `post_related` (`post_title`,`post_content`) )

    Read the article

  • link_to passing paramater and display problem - tag feature - Ruby on Rails

    - by bgadoci
    I have gotten a great deal of help from KandadaBoggu on my last question and am very thankful for that. As we were getting buried in the comments, I wanted to break this part out. I am attempting to create a tag feature on the Rails blog I am developing. The relationship is Post has_many :tags and Tag belongs_to :post. Adding and deleting tags on posts works great. In my /view/posts/index.html.erb I have a section called tags where I successfully query the Tags table, group the rows and display the count next to the tag_name (as a side note, I mistakenly called the column containing the tag name 'tag_name' instead of just 'name' as I should have). In addition, each of these groups is displayed as a link that references the index method in the PostsController. That is where the problem is. When you navigate to /posts you get an error, because there is no parameter being passed (without clicking a tag group link). I have the .empty? in there, so I am not sure what is going wrong. Here is the error and the code:

    Error:

        You have a nil object when you didn't expect it!
        You might have expected an instance of Array.
        The error occurred while evaluating nil.empty?

    /views/posts/index.html.erb

        <% @tag_counts.each do |tag_name, tag_count| %>
          <tr>
            <td><%= link_to(tag_name, posts_path(:tag_name => tag_name)) %></td>
            <td>(<%=tag_count%>)</td>
          </tr>
        <% end %>

    PostsController

        def index
          @tag_counts = Tag.count(:group => :tag_name, :order => 'updated_at DESC', :limit => 10)
          @posts = Post.all(:joins => :tags,
                            :conditions => (params[:tag_name].empty? ? {} : { :tags => { :tag_name => params[:tag_name] }} ) )
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end

    Read the article

  • iPhone SDK: Audio Queue control

    - by codemercenary
    Hi all, I am new to Audio Queue Services, so I have taken an example from a book called iPhone Cool Projects that describes how to stream audio. I want to extend this to play a continuous playlist of links to MP3 files, like an internet radio. The problem with the example code is that it does not detect when a stream ends and never calls AudioQueueStop, so I added a counter for the number of buffers added to the queue and decrement it each time audioQueueOutputCallback is called by the queue. This works fine, except that when the buffer count goes to 0 and I then call AudioQueueFlush(audioQueue) followed by AudioQueueStop(audioQueue, false), I get an error. If I only call AudioQueueReset, it loads the buffers again but plays them out faster than it loads them, getting stuck in a loop and then crashing.

        2010-04-14 13:56:29.745 AudioPlayer[2269:207] init player with URL
        2010-04-14 13:56:29.941 AudioPlayer[2269:207] did recieve data
        2010-04-14 13:56:29.942 AudioPlayer[2269:207] audio request didReceiveData
        2010-04-14 13:56:29.944 AudioPlayer[2269:207] >>> start audio queue
        2010-04-14 13:56:29.960 AudioPlayer[2269:207] packetCallback count 2
        2010-04-14 13:56:29.961 AudioPlayer[2269:207] add buffer: 1
        2010-04-14 13:56:29.962 AudioPlayer[2269:207] did recieve data
        2010-04-14 13:56:29.963 AudioPlayer[2269:207] audio request didReceiveData
        2010-04-14 13:56:29.963 AudioPlayer[2269:207] packetCallback count 1
        2010-04-14 13:56:29.964 AudioPlayer[2269:207] add buffer: 2
        2010-04-14 13:56:29.965 AudioPlayer[2269:207] packetCallback count 13
        2010-04-14 13:56:29.967 AudioPlayer[2269:207] add buffer: 3
        2010-04-14 13:56:29.968 AudioPlayer[2269:207] done with buffer: 3
        2010-04-14 13:56:29.969 AudioPlayer[2269:207] done with buffer: 2
        2010-04-14 13:56:29.974 AudioPlayer[2269:207] done with buffer: 1

    This loop continues some 20 - 30 times and then it crashes. The first time it plays an audio file it queues up the buffers and then plays sound, but doesn't call back to delete them until some 100 or more have been played. Can anyone explain this behavior? I read that there was a limit of 1 audio queue for MP3 playback on the iPhone. Is that still true? If not, then I suppose I should use another audio queue for the next MP3 stream. I've had a look through the Apple docs but they don't explain this in any particular detail. A better insight into this would be great. TIA.

    Read the article

  • The Skyline Problem.

    - by zeroDivisible
    I just came across this little problem on UVA's Online Judge and thought it might be a good candidate for a little code golf. The problem:

    You are to design a program to assist an architect in drawing the skyline of a city given the locations of the buildings in the city. To make the problem tractable, all buildings are rectangular in shape and they share a common bottom (the city they are built in is very flat). The city is also viewed as two-dimensional. A building is specified by an ordered triple (Li, Hi, Ri) where Li and Ri are left and right coordinates, respectively, of building i and Hi is the height of the building. In the diagram below buildings are shown on the left with triples (1,11,5), (2,6,7), (3,13,9), (12,7,16), (14,3,25), (19,18,22), (23,13,29), (24,4,28) and the skyline, shown on the right, is represented by the sequence: 1, 11, 3, 13, 9, 0, 12, 7, 16, 3, 19, 18, 22, 3, 23, 13, 29, 0

    The output should consist of the vector that describes the skyline as shown in the example above. In the skyline vector (v1, v2, v3, ... vn), the vi such that i is an even number represent a horizontal line (height). The vi such that i is an odd number represent a vertical line (x-coordinate). The skyline vector should represent the "path" taken, for example, by a bug starting at the minimum x-coordinate and traveling horizontally and vertically over all the lines that define the skyline. Thus the last entry in the skyline vector will be a 0. The coordinates must be separated by a blank space.

    If I don't count the declaration of the provided (test) buildings, and do count all spaces and tab characters, my solution in Python is 223 characters long. Here is the condensed version:

        B=[[1,11,5],[2,6,7],[3,13,9],[12,7,16],[14,3,25],[19,18,22],[23,13,29],[24,4,28]]
        # Solution.
        R=range
        v=[0 for e in R(max([y[2] for y in B])+1)]
        for b in B:
            for x in R(b[0], b[2]):
                if b[1]>v[x]: v[x]=b[1]
        p=1
        k=0
        for x in R(len(v)):
            V=v[x]
            if p and V==0: continue
            elif V!=k:
                p=0
                print "%s %s" % (str(x), str(V)),
                k=V

    I think I didn't make any mistake, but if so, feel free to criticize me.

    EDIT: I don't have much reputation, so I will pay only 100 for a bounty - I am curious if anyone could try to solve this in less than, let's say, 80 characters. The solution posted by cobbal is 101 characters long and is currently the best one.

    ANOTHER EDIT: I thought 80 characters was a sick limit for this kind of problem. cobbal, with his 46-character solution, totally amazed me - though I must admit that I spent some time reading his explanation before I partially understood what he had written.
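    For readers more interested in the algorithm than the golfing, here is a deliberately un-golfed Python sketch of the same height-map idea used above (an illustration only); it reproduces the sample skyline:

        def skyline(buildings):
            # Height-map approach: remember the tallest building over every x unit.
            width = max(r for _, _, r in buildings)
            height = [0] * (width + 1)
            for left, h, right in buildings:
                for x in range(left, right):
                    height[x] = max(height[x], h)
            # Walk the map and emit "x h" every time the height changes.
            out, prev = [], 0
            for x, h in enumerate(height):
                if h != prev:
                    out += [x, h]
                    prev = h
            return out

        print(skyline([(1, 11, 5), (2, 6, 7), (3, 13, 9), (12, 7, 16), (14, 3, 25),
                       (19, 18, 22), (23, 13, 29), (24, 4, 28)]))
        # [1, 11, 3, 13, 9, 0, 12, 7, 16, 3, 19, 18, 22, 3, 23, 13, 29, 0]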

    Read the article

  • Java language features which have no equivalent in C#

    - by jthg
    Having mostly worked with C#, I tend to think in terms of C# features which aren't available in Java. After working extensively with Java over the last year, I've started to discover Java features that I wish were in C#. Below is a list of the ones that I'm aware of. Can anyone think of other Java language features which a person with a C# background may not realize exist? The articles http://www.25hoursaday.com/CsharpVsJava.html and http://en.wikipedia.org/wiki/Comparison_of_Java_and_C_Sharp give a very extensive list of differences between Java and C#, but I wonder whether I missed anything in the (very) long articles. I can also think of one feature (covariant return type) which I didn't see mentioned in either article. Please limit answers to language or core library features which can't be effectively implemented by your own custom code or third party libraries.

    - Covariant return type - a method can be overridden by a method which returns a more specific type. Useful when implementing an interface or extending a class and you want a method to override a base method, but return a type more specific to your class.
    - Enums are classes - an enum is a full class in Java, rather than a wrapper around a primitive like in .Net. Java allows you to define fields and methods on an enum.
    - Anonymous inner classes - define an anonymous class which implements a method. Although most of the use cases for this in Java are covered by delegates in .Net, there are some cases in which you really need to pass multiple callbacks as a group. It would be nice to have the choice of using an anonymous inner class.
    - Checked exceptions - I can see how this is useful in the context of common designs used with Java applications, but my experience with .Net has put me in the habit of using exceptions only for unrecoverable conditions, i.e. exceptions indicate a bug in the application and are only caught for the purpose of logging. I haven't quite come around to the idea of using exceptions for normal program flow.
    - strictfp - ensures strict floating point arithmetic. I'm not sure what kind of applications would find this useful.
    - Fields in interfaces - it's possible to declare fields in interfaces. I've never used this.
    - Static imports - allows one to use the static methods of a class without qualifying them with the class name. I just realized today that this feature exists. It sounds like a nice convenience.

    Read the article

  • ASP.Net Web Farm Monitoring

    - by cisellis
    I am looking for suggestions on doing some simple monitoring of an ASP.Net web farm, as close to real-time as possible. The objectives of this question are to:

    - Identify the best way to monitor several Windows Server production boxes during short (minutes-long) periods of ridiculous load.
    - Receive near-real-time feedback on a few key metrics about each box. These are simple metrics available via WMI such as CPU, Memory and Disk Paging. I am defining my time constraint as "as soon as possible", with a 120-second delay being the absolute upper limit.
    - Monitor whether any given box is up (with "up" being defined as responding to web requests in a reasonable amount of time).

    Here are more details and things I've tried. I am not interested in logging; we have logging solutions in place. I have looked at solutions such as ELMAH, which don't provide much in the way of hardware monitoring and are not visible across an entire web farm. ASP.Net Health Monitoring is too broad, focuses too much on logging and is not acceptable for deep analysis. We are on Amazon Web Services and we have looked into CloudWatch. It looks great, but messages in the forum indicate that the metrics are often a few minutes behind, with one thread citing 2 minutes as the absolute soonest you could expect to receive the feedback. This would be good to have for later analysis but does not help us in real time. Stuff like the JetBrains profiler is good for testing but, again, not helpful during real-time monitoring.

    The closest out-of-box solution I've seen is Nagios, which is free and appears to measure key indicators on any kind of box, including Windows. However, it appears to require a Linux box to run on and a good deal of manual configuration. I'd prefer not to spend my time mining config files and then be up a creek when it fails in production, since Linux is not my main (or even secondary) environment. Are there any out-of-box solutions that I am missing? Obviously a Windows-based solution that is easy to set up is ideal. I don't require many bells and whistles.

    In the absence of an out-of-box solution, it seems easy for me to write something simple to handle what I need. I've been thinking of a simple client-server setup where the server requests a few WMI metrics from each client over HTTP and sticks them in a database (see the sketch below). We could then monitor the metrics via a query or a dashboard or something. If a client doesn't respond, it's effectively down. Any problems with this, best practices, or other ideas? Thanks for any help/feedback.
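    Purely as an illustration of the home-grown poller described in the last paragraph (not a recommendation over an off-the-shelf tool), here is a rough Python sketch; the /metrics agent URLs, the field names and the schema are invented placeholders:

        import json, sqlite3, time, urllib.request

        BOXES = ["http://web01:8080/metrics", "http://web02:8080/metrics"]   # hypothetical agents
        TIMEOUT_SECONDS = 5

        db = sqlite3.connect("farm_metrics.db")
        db.execute("""CREATE TABLE IF NOT EXISTS samples
                      (box TEXT, taken_at REAL, cpu REAL, memory_mb REAL,
                       pages_per_sec REAL, is_up INTEGER)""")

        def poll_once():
            for box in BOXES:
                row = (box, time.time(), None, None, None, 0)
                try:
                    with urllib.request.urlopen(box, timeout=TIMEOUT_SECONDS) as resp:
                        # assume the agent returns {"cpu": ..., "memory_mb": ..., "pages_per_sec": ...}
                        m = json.load(resp)
                        row = (box, time.time(), m["cpu"], m["memory_mb"], m["pages_per_sec"], 1)
                except Exception:
                    pass    # no answer within the timeout counts as "down"
                db.execute("INSERT INTO samples VALUES (?, ?, ?, ?, ?, ?)", row)
            db.commit()

        while True:
            poll_once()
            time.sleep(15)   # well inside the 120-second ceiling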

    Read the article

  • Oracle 9i Session Disconnections

    - by mlaverd
    [Cross-posting from ServerFault] I am in a development environment, and our test Oracle 9i server has been misbehaving for a few days now. What happens is that our JDBC connections disconnect after a few successful connections. We got this box set up by our IT department and handed over to us. It is 'our problem', so options like 'ask your DBA' aren't going to help me. :( Our server is set up with 3 plain databases (one is the main dev db, another is the 'experimental' dev db). We use the Oracle 10 ojdbc14.jar thin JDBC driver (because of some bug in version 9 of the driver). We're using Hibernate to talk to the DB. The only thing that I can see that has changed is that we now have more users connecting to the server: instead of one developer, we now have 3. With the Hibernate connection pools, I'm thinking that maybe we're hitting some limit? Does anyone have any idea what's going on? Here's the stack trace on the client:

        Caused by: org.hibernate.exception.GenericJDBCException: could not execute query
            at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:126) [hibernate3.jar:na]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:114) [hibernate3.jar:na]
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doList(Loader.java:2235) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2129) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.list(Loader.java:2124) [hibernate3.jar:na]
            at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:401) [hibernate3.jar:na]
            at org.hibernate.hql.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:363) [hibernate3.jar:na]
            at org.hibernate.engine.query.HQLQueryPlan.performList(HQLQueryPlan.java:196) [hibernate3.jar:na]
            at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1149) [hibernate3.jar:na]
            at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102) [hibernate3.jar:na]
            ...
        Caused by: java.sql.SQLException: Io exception: Connection reset
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:829) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1049) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:854) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1154) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3370) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3415) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.getResultSet(Loader.java:1812) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doQuery(Loader.java:697) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doList(Loader.java:2232) [hibernate3.jar:na]

    Read the article

  • PHP throws 'Allowed memory exhausted' errors while migrating data in Drupal.

    - by Stan
    I'm trying to set up a tiny sandbox on a local machine to play around with Drupal. I created a few CCK types; in order to create a few nodes I wrote the following script:

        chdir('C:\..\drupal');
        require_once '.\includes\bootstrap.inc';
        drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);
        module_load_include('inc', 'node', 'node.pages');

        $node = array('type' => 'my_type');

        $link = mysql_connect(..);
        mysql_select_db('my_db');

        $query_bldg = '
            SELECT stuff
            FROM table
            LIMIT 10
        ';
        $result = mysql_query($query_bldg);

        while ($row = mysql_fetch_object($result)) {
            $form_state = array();
            $form_state['values']['name'] = 'admin';
            $form_state['values']['status'] = 1;
            $form_state['values']['op'] = t('Save');
            $form_state['values']['title'] = $row->val_a;
            $form_state['values']['my_field'][0]['value'] = $row->val_b;
            ## About another dozen or so of similar assignments...
            drupal_execute('node_form', $form_state, (object)$node);
        }

    Here are a few relevant lines from php_errors.log:

        [12-Jun-2010 05:02:47] PHP Notice: Undefined index: REMOTE_ADDR in C:\..\drupal\includes\bootstrap.inc on line 1299
        [12-Jun-2010 05:02:47] PHP Notice: Undefined index: REMOTE_ADDR in C:\..\drupal\includes\bootstrap.inc on line 1299
        [12-Jun-2010 05:02:47] PHP Warning: session_start(): Cannot send session cookie - headers already sent by (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 1143
        [12-Jun-2010 05:02:47] PHP Warning: session_start(): Cannot send session cache limiter - headers already sent (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 1143
        [12-Jun-2010 05:02:47] PHP Warning: Cannot modify header information - headers already sent by (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 709
        [12-Jun-2010 05:02:47] PHP Warning: Cannot modify header information - headers already sent by (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 710
        [12-Jun-2010 05:02:47] PHP Warning: Cannot modify header information - headers already sent by (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 711
        [12-Jun-2010 05:02:47] PHP Warning: Cannot modify header information - headers already sent by (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 712
        [12-Jun-2010 05:02:47] PHP Notice: Undefined index: REMOTE_ADDR in C:\..\drupal\includes\bootstrap.inc on line 1299
        [12-Jun-2010 05:02:48] PHP Fatal error: Allowed memory size of 239075328 bytes exhausted (tried to allocate 261904 bytes) in C:\..\drupal\includes\form.inc on line 488
        [12-Jun-2010 05:03:22] PHP Fatal error: Allowed memory size of 239075328 bytes exhausted (tried to allocate 261904 bytes) in C:\..\drupal\includes\form.inc on line 488
        [12-Jun-2010 05:04:34] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0

    At this point, any action PHP takes results in the last error shown above. I tried increasing the value of memory_limit in php.ini before the final fatal error, which obviously didn't help. How can the error be eliminated? Am I on the right path for migrating data into Drupal, or should the CCK tables be operated on directly? Windows XP, PHP 5.3.2 VC6, Apache 2.2.

    Read the article

  • Best practices for sending automated daily emails from web service

    - by Tauren
    I am running a web service that currently sends confirmation emails out to new users via the gmail smtp servers. As I'm only getting a few new users each day, this hasn't been a problem. I've recently added new features to the webapp that will require a customized message to be sent out to each user every day. Think of this as similar to the regular messages LinkedIn sends out that give you a status report on the activity in your network. Every user's message will be different. With thousands of users, this means thousands of unique messages will be sent each day. Edit: I've since found that these types of email are called "transactional or relationship messages". Spamtacular has a good article on differentiating between marketing and transactional email. I don't think using gmail's smtp servers will cut it anymore, but I don't know that for sure. I don't know what gmail's maximum outgoing messages per account is (it might be 100/day), but they limit outgoing mail to 500 recipients per message. I'm not sending a single message to 500 recipients, but I'm going to be sending 1000's of customized messages with each recipient getting one per day. I'm interested to learn any best practices for doing this (especially for Java-based webapps). Here are some of my thoughts and concerns on it: Should I set up my own outgoing mail server? If I do this, it seems like I'll have all sorts of other issues to worry about, such as preventing mail server abuse, monitoring bounces, allowing ways to opt-out of emails, etc. Are there any tools or services to help with this? Maybe something like OpenEMM or a services like MailChimp? But those seem focused more toward email marketing campaigns. I don't think I should have the webapp itself handle sending emails as it currently is for new user signups. I'm thinking I should setup a separate messaging server that can access the same backend/datastore as the webapp. Thoughts on this? Should I consider setting up some sort of message queueing service to help with this, such as JMS, RabbitMQ, ActiveMQ, etc.? Do I need to provide users a way to opt-out? Do I need to flag these as bulk messages? I don't really consider these email marketing messages, but I'm unsure what is considered appropriate or proper netiquette. Any advice is appreciated. I'm also very interested in open source tools or web services that simplify things and could help me to ramp up as quickly as possible. Thanks!

    Read the article

  • Problem with bootstrap loader and kernel

    - by dboarman-FissureStudios
    We are working on a project to learn how to write a kernel and learn the ins and outs. We have a bootstrap loader written and it appears to work. However, we are having a problem with the kernel loading. I'll start with the first part, bootloader.asm:

        [BITS 16]
        [ORG 0x0000]
        ;
        ; all the stuff in between
        ;
        ; the bottom of the bootstrap loader
        datasector  dw 0x0000
        cluster     dw 0x0000
        ImageName   db "KERNEL SYS"
        msgLoading  db 0x0D, 0x0A, "Loading Kernel Shell", 0x0D, 0x0A, 0x00
        msgCRLF     db 0x0D, 0x0A, 0x00
        msgProgress db ".", 0x00
        msgFailure  db 0x0D, 0x0A, "ERROR : Press key to reboot", 0x00

        TIMES 510-($-$$) DB 0
        DW 0xAA55
        ;*************************************************************************

    The full bootloader.asm is too long to post without making the editor chug and choke. In addition, the bootloader and kernel do work within bochs, as we do get the message "Welcome to our OS". Anyway, the following is what we have for a kernel at this point, kernel.asm:

        [BITS 16]
        [ORG 0x0000]

        [SEGMENT .text]              ; code segment
            mov ax, 0x0100           ; location where kernel is loaded
            mov ds, ax
            mov es, ax

            cli
            mov ss, ax               ; stack segment
            mov sp, 0xFFFF           ; stack pointer at 64k limit
            sti

            mov si, strWelcomeMsg    ; load message
            call _disp_str

            mov ah, 0x00
            int 0x16                 ; interrupt: await keypress
            int 0x19                 ; interrupt: reboot

        _disp_str:
            lodsb                    ; load next character
            or al, al                ; test for NUL character
            jz .DONE
            mov ah, 0x0E             ; BIOS teletype
            mov bh, 0x00             ; display page 0
            mov bl, 0x07             ; text attribute
            int 0x10                 ; interrupt: invoke BIOS
            jmp _disp_str
        .DONE:
            ret

        [SEGMENT .data]              ; initialized data segment
        strWelcomeMsg db "Welcome to our OS", 0x00

        [SEGMENT .bss]               ; uninitialized data segment

    Using nasm 2.06rc2 I compile as such:

        nasm bootloader.asm -o bootloader.bin -f bin
        nasm kernel.asm -o kernel.sys -f bin

    We write bootloader.bin to the floppy as such:

        dd if=bootloader.bin bs=512 count=1 of=/dev/fd0

    We write kernel.sys to the floppy as such:

        cp kernel.sys /dev/fd0

    As I stated, this works in bochs. But booting from the floppy we get output like so:

        Loading Kernel Shell
        ...........
        ERROR : Press key to reboot

    Other specifics: OpenSUSE 11.2, GNOME desktop, AMD x64. If there is any other information I may have missed, feel free to ask; I tried to get everything in here that would be needed. If I need to, I can find a way to get the entire bootloader.asm posted somewhere. We are not really interested in using GRUB either, for several reasons. This could change, but we want to see this boot successfully before we really consider GRUB.

    Read the article

  • mysqli query - show results if present, for no results do something else

    - by Yousef Altaf
    I'm running a mysqli query. This is my code:

        <?php
        $SM_pro_info="SELECT * FROM special_marketing_ads ORDER BY id desc LIMIT 4";
        $QSM_pro_info = $db->query($SM_pro_info) or die($db->error);
        if($QSM_pro_info->num_rows ==1){
            while($SM_pro=$QSM_pro_info->fetch_object()){
        ?>
        <table width="208" border="0">
          <tr>
            <td width="129" height="35" align="right"><span style="color:#361800; font-size:14px; font-weight:bold;"><?php echo $SM_pro->pro_title; ?></span></td>
            <td width="69" rowspan="2" align="center"><a rel="lightbox" href="includes/Cpanel/projectImages//images.jpg" ><img src="<?php echo $SM_pro->image_1; ?>" alt="" width="60" height="60" border="0" /></a></td>
          </tr>
          <tr align="right">
            <td><span style="color:#361800; font-size:14px; font-weight:bold;">?????</span> : <span style="color:#da6e19; font-size:15px; font-weight:bold;"><?php echo $SM_pro->pro_purpose; ?></span></td>
          </tr>
        </table>
        <?php
            }
        }else{
            echo"Your Ads here";
        }
        ?>

    All I want to do is: if there are any results coming back from MySQL, echo them; if there are no results, echo an image that says "add your ads here". The point is that I have 4 ad slots, so if there is only 1 ad row in MySQL and the others are empty, I want to echo that single ad and show the "add your Ads here" image for the other 3 slots. Any help on this, please?

    Read the article

  • Filter chain halted as [:login_required] rendered_or_redirected

    - by Magicked
    Hopefully I can explain this well enough, but please let me know if more information is needed! I'm building a form where a user can create an "incident". This incident has the following relationships: belongs_to: customer (customer has_many incidents) belongs_to: user (user has_many incidents) has_one: incident_status (incident_status belongs to incident) The form allows the user to assign the incident to a user (select form) and then select an incident status. The incident is nested in customer. However, I'm getting the following in the server logs: Processing IncidentsController#create (for 127.0.0.1 at 2010-04-26 10:41:33) [POST] Parameters: {"commit"=>"Create", "action"=>"create", "authenticity_token"=>"YhW++vd/dnLoNV/DSl1DULcaWq/RwP7jvLOVx9jQblA=", "customer_id"=>"4", "controller"=>"incidents", "incident"=>{"title"=>"Some Bad Incident", "incident_status_id"=>"1", "user_id"=>"2", "other_name"=>"SS01-042310-001"}} User Load (0.3ms) SELECT * FROM "users" WHERE ("users"."id" = 2) LIMIT 1 Redirected to http://localhost:3000/session/new Filter chain halted as [:login_required] rendered_or_redirected. Completed in 55ms (DB: 0) | 302 Found [http://localhost/customers/4/incidents] It looks to me like it's trying to gather information about the user, even though it already has the id (which is all it needs to create the incident), and the user may not have permission to do a select statement like that? I'm rather confused. Here is the relevant (I think) information in the Incident controller. before_filter :login_required, :get_customer def new @incident = @customer.incidents.build @users = @customer.users @statuses = IncidentStatus.find(:all) respond_to do |format| format.html # new.html.erb format.xml { render :xml => @incident } end end def create @incident = @customer.incidents.build(params[:incident]) respond_to do |format| if @incident.save flash[:notice] = 'Incident was successfully created.' format.html { redirect_to(@incident) } format.xml { render :xml => @incident, :status => :created, :location => @incident } else format.html { render :action => "new" } format.xml { render :xml => @incident.errors, :status => :unprocessable_entity } end end end Just as an FYI, I am using the restful_authentication plugin. So in summary, when I submit the incident creation form, it does not save the incident because it halts. I'm still very new to rails, so my skill at diagnosing problems like this is still very bad. I'm going in circles. :) Thanks in advance for any help. Please let me know if more information is needed and I'll edit it in!

    Read the article

  • Modeling distribution of performance measurements

    - by peterchen
    How would you mathematically model the distribution of repeated real-life performance measurements - "real life" meaning you are not just looping over the code in question, but it is just a short snippet within a large application running in a typical user scenario?

    My experience shows that you usually have a peak around the average execution time that can be modeled adequately with a Gaussian distribution. In addition, there's a "long tail" containing outliers - often with a multiple of the average time. (The behavior is understandable considering the factors contributing to first-execution penalty.)

    My goal is to model aggregate values that reasonably reflect this, and can be calculated from aggregate values (like for the Gaussian, calculate mu and sigma from N, the sum of values and the sum of squares). In other words, the number of repetitions is unlimited, but memory and calculation requirements should be minimized. A normal Gaussian distribution can't model the long tail appropriately, and its average will be biased strongly even by a very small percentage of outliers. I am looking for ideas, especially if this has been attempted/analysed before. I've checked various distribution models, and I think I could work out something, but my statistics is rusty and I might end up with an overblown solution. Oh, a complete shrink-wrapped solution would be fine, too ;)

    Other aspects / ideas:

    - Sometimes you get "two humps" distributions, which would be acceptable in my scenario with a single mu/sigma covering both, but ideally would be identified separately.
    - Extrapolating this, another approach would be a "floating probability density calculation" that uses only a limited buffer and adjusts automatically to the range (due to the long tail, bins may not be spaced evenly) - I haven't found anything, but with some assumptions about the distribution it should be possible in principle.

    Why (since it was asked): for a complex process we need to make guarantees such as "only 0.1% of runs exceed a limit of 3 seconds, and the average processing time is 2.8 seconds". The performance of an isolated piece of code can be very different from a normal run-time environment involving varying levels of disk and network access, background services, scheduled events that occur within a day, etc. This can be solved trivially by accumulating all data. However, to accumulate this data in production, the data produced needs to be limited. For analysis of isolated pieces of code, a Gaussian deviation plus a first-run penalty is ok. That doesn't work anymore for the distributions found above.

    [edit] I've already got very good answers (and finally - maybe - some time to work on this). I'm starting a bounty to look for more input / ideas.
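    One concrete way to sketch the "aggregate values only" idea (an illustration, not the poster's requirement): fit a lognormal, which tolerates the long tail better than a Gaussian, and keep only N, the sum of log-times and the sum of squared log-times. A rough Python sketch, assuming all measurements are positive:

        import math
        from statistics import NormalDist   # Python 3.8+

        class LogNormalAccumulator:
            """Streaming lognormal fit: only three numbers are stored,
            so memory stays constant no matter how many samples arrive."""
            def __init__(self):
                self.n = 0
                self.sum_log = 0.0
                self.sum_log_sq = 0.0

            def add(self, seconds):
                x = math.log(seconds)
                self.n += 1
                self.sum_log += x
                self.sum_log_sq += x * x

            def params(self):
                mu = self.sum_log / self.n
                var = self.sum_log_sq / self.n - mu * mu
                return mu, math.sqrt(max(var, 0.0))

            def quantile(self, p):
                # e.g. quantile(0.999) estimates the time only 0.1% of runs exceed
                mu, sigma = self.params()
                return math.exp(NormalDist(mu, sigma).inv_cdf(p))

        acc = LogNormalAccumulator()
        for t in (2.7, 2.8, 2.9, 2.8, 11.4):   # made-up timings with one outlier
            acc.add(t)
        print(acc.params(), acc.quantile(0.999))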

    Read the article

  • SQL Server 2000 timeout from C# .NET

    - by Johnny Egeland
    I have run into a strange problem using SQL Server 2000 and two linked servers. For two years our solution has run without a hitch, but suddenly yesterday a query synchronizing data from one of the databases to the other started timing out. I connect to a server in the production network, which is linked to a server containing orders I need data from. The query contains a few joins, but this basically summarizes what is done:

        INSERT INTO ProductionDataCache (column1, column2, ...)
        SELECT tab1.column1, tab1.column2, tab2.column1, tab3.column1 ...
        FROM linkedserver.database.dbo.Table1 AS tab1
        JOIN linkedserver.database.dbo.Table2 AS tab2 ON (...)
        JOIN linkedserver.database.dbo.Tabl32 AS tab3 ON (...)
        ...
        WHERE tab1.productionOrderId = @id
        ORDER BY ...

    Obviously my first attempt to fix the problem was to increase the timeout limit from the original 5 minutes. But when I arrived at 30 minutes and still got a timeout, I started to suspect something else was going on. A query just does not go from executing in less than 5 minutes to over 30 minutes overnight. I output the SQL query (which was originally in the C# code) to my logs, and decided to execute the query in Query Analyzer directly on the database server. To my big surprise, the query executed correctly in less than 10 seconds.

    So I isolated the SQL execution in a simple test program, and observed the same query time out both on the server originally running this solution AND when running it locally on the database server. I have also tried to create a stored procedure and execute this from the program, but this also times out. Running it in Query Analyzer works fine in less than a few seconds. It seems that the problem only occurs when I execute this query from the C# program. Has anyone seen such behavior before, and found a solution for it?

    UPDATE: I have now used SQL Profiler on the server. The obvious difference is that when executing the query from the .NET program, it shows up in the log as "exec sp_executesql N'INSERT INTO ...'", but when executing from Query Analyzer it occurs as a normal query in the log. Further, I tried to connect SQL Query Analyzer using the same SQL user as the program, and this triggered the problem in Query Analyzer as well. So it seems the problem only occurs when connecting via TCP/IP using a SQL user.

    Read the article

  • C# or C++ game: many 16 color images loaded into RAM. Efficient solution?

    - by user560639
    I am in the planning stages of creating a fighting game and am unsure how to handle one issue relating to memory. Background info:

    - Still debating whether to use C# (XNA) or C++. We do not want to commit to either until we have explored how to solve this problem in both languages.
    - Using a max of 256MB RAM would be great if possible.
    - Two characters will be present at a time, and these characters can only change between battles. There is time to load/free memory between battles, but the game needs to run at a constant 60 drawn frames per second during combat. Each frame is 16.67ms.
    - The total number of images per character is in the low hundreds. Each image is roughly 200x400 pixels. Only one image from each character will be displayed at any given moment.

    Uncompressed, each image takes roughly 300kb by my calculations; upwards of 100MB for a whole character. This is pushing too close to the 256MB limit, given that memory will be needed for some other resources as well. Since each image can be made with a total of 16 colors, theoretically I should be able to use 1/8th the space if I can take advantage of this. I've looked around but haven't found any word of native support for paletted images (storing each pixel using fewer bits that each map to a 32-bit RGBA color).

    I was considering making my own file format with 4 bits per pixel (and some extra palette info), loading all the images of this new format into RAM before battle, and then, when drawing any specific image, decompressing only that image into a raw image so it can be rendered properly. I don't know if it's realistic to perform so many assignment operations (appx 200x400 for each character = 160k) each frame. It sounds very hacky to me. Does anyone have advice on whether my solution sounds reasonable, and if there is perhaps a better one available? Thanks so much!

    (I also attempted to use an image with only 1 channel, then use a shader to perform a series of if statements to translate various values into other colors. Unfortunately, there were too many lines of code for the shader. It is also rather hacky and does not scale well.)
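    As a concrete illustration of the 4-bits-per-pixel idea (a sketch only - the packing layout and palette here are invented for the example, and the real implementation would be in the chosen C#/C++ engine), two palette indices fit in each byte and are expanded through a 16-entry palette at decompress time:

        def pack4bpp(indices):
            # Pack a list of palette indices (0-15) two per byte.
            if len(indices) % 2:
                indices = indices + [0]            # pad to an even count
            return bytes((indices[i] << 4) | indices[i + 1]
                         for i in range(0, len(indices), 2))

        def unpack4bpp(data, palette):
            # Expand packed data back to 32-bit RGBA tuples via a 16-entry palette.
            out = []
            for byte in data:
                out.append(palette[byte >> 4])
                out.append(palette[byte & 0x0F])
            return out

        palette = [(i * 17, i * 17, i * 17, 255) for i in range(16)]  # placeholder greys
        packed = pack4bpp([0, 15, 7, 7, 3, 12])
        print(unpack4bpp(packed, palette)[:2])

    Storage drops to 1/8th of raw 32-bit pixels, at the cost of one unpack pass for the single frame image actually being drawn.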

    Read the article

  • Data denormalization and C# objects DB serialization

    - by Robert Koritnik
    I'm using a DB table to hold various different entities, which means I can't have an arbitrary number of fields in it to save all kinds of different entities. Instead I want to save just the most important fields (dates, reference IDs - kind of foreign keys to various other tables, the most important text fields, etc.) and an additional text field where I'd like to store more complete object data.

    The most obvious solution would be to use XML strings and store those. The second most obvious choice would be JSON, which is usually shorter and probably also faster to serialize/deserialize. But is it really? My objects also wouldn't need to be strictly serializable, because JsonSerializer is usually able to serialize anything - even anonymous objects, which may well be used here. What would be the best solution to this problem?

    Additional info: my DB is highly normalised and I'm using Entity Framework, but for the purpose of having external, super-fast full-text search functionality I'm denormalising it a bit. Just for the info, I'm using SphinxSE on top of MySQL. Sphinx would return row IDs that I would use to quickly query my index-optimised conglomerate table and get the most important data from it much, much faster than querying multiple tables all over my DB. My table would have columns like:

    - RowID (auto increment)
    - EntityID (of the actual entity - but not directly related, because this would have to point to different tables)
    - EntityType (so I would be able to get the actual entity if needed)
    - DateAdded (record timestamp when it's been added into this table)
    - Title
    - Metadata (serialized data related to the particular entity type)

    This table would be indexed with the Sphinx indexer. When I search for data using this indexer I would provide a series of EntityIDs and a limit date. The indexer would have to return a very limited, paged amount of RowIDs ordered by DateAdded (descending). I would then just join these RowIDs to my table and get relevant results. So this won't actually be full-text search but a filtering search. Getting RowIDs would be very fast this way, and getting results back from the table would be much faster than comparing EntityIDs and DateAdded, even though they would be properly indexed.
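    Whether JSON actually comes out smaller and faster is easy to measure before committing. Purely as an illustration of that kind of micro-benchmark (sketched in Python with a made-up record; a real test would use the .NET serializers actually under consideration):

        import json, time
        import xml.etree.ElementTree as ET

        record = {"Title": "Some entity", "DateAdded": "2010-06-12T05:02:47",
                  "Tags": ["a", "b", "c"], "Score": 42}

        def to_xml(d):
            # naive XML rendering of the same record, one child element per field
            root = ET.Element("record")
            for key, value in d.items():
                ET.SubElement(root, key).text = json.dumps(value)
            return ET.tostring(root)

        def measure(serialize, runs=100_000):
            start = time.perf_counter()
            for _ in range(runs):
                payload = serialize(record)
            return len(payload), time.perf_counter() - start

        print("json bytes/seconds:", measure(lambda r: json.dumps(r).encode()))
        print("xml  bytes/seconds:", measure(to_xml))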

    Read the article

  • Flex+JPA/Hibernate+BlazeDS+MySQL how to debug this monster?!

    - by Zenzen
    Ok so I'm making a "simple" web app using the technologies from the topic, recently I found http://www.adobe.com/devnet/flex/articles/flex_hibernate.html so I'm following it and I try to apply it to my app, the only difference being I'm working on a Mac and I'm using MAMP for the database (so no command line for me). The thing is I'm having some trouble with retrieving/connecting to the database. I have the remoting-config.xml, persistance.xml, a News.java class (my Entity), a NewsService.java class, a News.as class - all just like in the tutorial. I have of course this line in one of my .mxmls: <mx:RemoteObject id="loaderService" destination="newsService" result="handleLoadResult(event)" fault="handleFault(event)" showBusyCursor="true" /> And my remoting-config.xml looks like this (well part of it): <destination id="newsService"> <properties><source>com.gamelist.news.NewsService</source></properties> </destination> NewsService has a method: public List<News> getLatestNews() { EntityManagerFactory emf = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT); EntityManager em = emf.createEntityManager(); Query findLatestQuery = em.createNamedQuery("news.findLatest"); List<News> news = findLatestQuery.getResultList(); return news; } And the named query is in the News class: @Entity @Table(name="GLT_NEWS") @NamedQueries({ @NamedQuery(name="news.findLatest", query="from GLT_NEWS order by new_date_added limit 5 ") }) The handledLoadResult looks like this: private function handleLoadResult(ev:ResultEvent):void { newsList = ev.result as ArrayCollection; newsRecords = newsList.length; } Where: [Bindable] private var newsList:ArrayCollection = new ArrayCollection(); But when I try to trigger: loaderService.getLatestNews(); nothing happens, newsList is empty. Few things I need to point out: 1) as I said I didn't install mysql manually, but I'm using MAMP (yes, the server's running), could this cause some trouble? 2) I already have a "gladm" database and I have a "GLT_NEWS" table with all the fields, is this bad? Basically the question is how am I suppose to debug this thing so I can find the mistake I'm making? I know that loadData() is executed (did a trace()), but I have no idea what happens with loaderService.getLatestNews()... @EDIT: ok so I see I'm getting an error in the "fault handler" which says "Error: Client.Error.MessageSend - Channel.Connect.Failed error NetConnection.Call.Failed: HTTP: Status 404: url: 'http://localhost:8080/WebContent/messagebroker/amf' - "

    Read the article

  • PHP: How to find connections between users so I can create a closed friend circle?

    - by CuSS
    Hi all. First of all, I'm not trying to create a social network - facebook is big enough! (comic) I've chosen this question as an example because it fits exactly what I'm trying to do. Imagine that I have in MySQL a users table and a user_connections table with 'friend requests'. If so, it would be something like this:

    Users table:

        userid  username
        1       John
        2       Amalia
        3       Stewie
        4       Stuart
        5       Ron
        6       Harry
        7       Joseph
        8       Tiago
        9       Anselmo
        10      Maria

    User connections table:

        userid_request  userid_accepted
        2               3
        7               2
        3               4
        7               8
        5               6
        4               5
        8               9
        4               7
        9               10
        6               1
        10              7
        1               2

    Now I want to find circles between friends, create a structure array for each circle and put that circle in the database (no circle can include friends that another circle already has). Return example:

        // First Circle of Friends
        Circleid => 1
        CircleStructure => Array(
            1 => 2,
            2 => 3,
            3 => 4,
            4 => 5,
            5 => 6,
            6 => 1,
        )

        // Second Circle of Friends
        Circleid => 2
        CircleStructure => Array(
            7 => 8,
            8 => 9,
            9 => 10,
            10 => 7,
        )

    I'm trying to think of an algorithm to do that, but I think it will take a lot of processing time because it would randomly search the database until it 'closes' a circle.

    PS: The minimum structure length of a circle is 3 connections and the limit is 100 (so the daemon doesn't search the entire database).

    EDIT: I've thought of something like this:

        function browse_user($userget='random', $users_history=array()){
            $user = user::get($userget);
            $users_history[] = $user['userid'];
            $connections = user::connection::getByUser($user['userid']);
            foreach($connections as $connection){
                $userid = ($connection['userid_request']!=$user['userid']) ? $connection['userid_request'] : $connection['userid_accepted'];
                // Start the circle array
                if(in_array($userid,$users_history)) return array($user['userid'] => $userid);
                $res = browse_user($userid, $users_history);
                if($res!==false){
                    // Continue the circle array
                    return $res + array($user['userid'] => $userid);
                }
            }
            return false;
        }

        while(true){
            $res = browse_user();
            // Yuppy, friend circle found!
            if($res!==false){
                user::circle::create($res);
            }
            // Start from scratch again!
        }

    The problem with this function is that it could search the entire database without finding the biggest circle, or the best match.
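    Since the task boils down to "find disjoint cycles in a directed graph", here is a sketch of one deterministic way to walk it (in Python rather than PHP, using the example data above; like the function in the EDIT, the greedy walk finds a circle but not necessarily the biggest one):

        def find_circles(connections, min_len=3, max_len=100):
            # connections: (userid_request, userid_accepted) pairs, treated as directed edges
            graph = {}
            for a, b in connections:
                graph.setdefault(a, []).append(b)

            used, circles = set(), []

            def walk(start):
                path, node = [], start
                while node is not None and node not in used:
                    if node in path:                      # the walk closed a loop
                        cycle = path[path.index(node):]
                        if min_len <= len(cycle) <= max_len:
                            circles.append(dict(zip(cycle, cycle[1:] + [node])))
                            used.update(cycle)            # members can't join another circle
                        return
                    path.append(node)
                    options = [n for n in graph.get(node, []) if n not in used]
                    node = options[0] if options else None

            for user in list(graph):
                if user not in used:
                    walk(user)
            return circles

        edges = [(2, 3), (7, 2), (3, 4), (7, 8), (5, 6), (4, 5),
                 (8, 9), (4, 7), (9, 10), (6, 1), (10, 7), (1, 2)]
        print(find_circles(edges))
        # [{2: 3, 3: 4, 4: 5, 5: 6, 6: 1, 1: 2}, {7: 8, 8: 9, 9: 10, 10: 7}]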

    Read the article

  • Determine the 'Overtype' mode using Javascript

    - by Snorkpete
    We are creating a web app to replace an old-school green-screen application. In the green-screen app, as the user presses the Insert key to switch between overtype and insert modes, the cursor changes to indicate which input mode the user is currently in. In IE (which is the official browser of the company), overtype mode also works, but there's no visual indication as to whether overtype mode is on or not, until the user starts typing and possibly overwrites existing information unexpectedly. I'd like to put some sort of visual indicator on the screen if the user is in overtype mode.

    How can you determine from Javascript whether the browser is in 'overtype mode'? Is there some property or function I can query? Even an IE-specific solution would be helpful, since our corporate policy dictates the browser to use as IE7 (pure torture, btw).

    (I do know that one solution is to check for presses of the Insert key. However, it's a solution that I'd prefer to avoid, since that method seems a bit flaky and error-prone: I can't guarantee what mode the user will be in BEFORE he/she hits my page.)

    The reasoning behind this question: the functionality of this portion of the green-screen app is such that the user can select from a list of 'preformatted bodies of text'. A crude example:

        The excess for this policy is: $xxxxxx and max limit is:$xxxxxx
        Date of policy is: xx/xx/xxxx and expires : xx/xx/xxxx
        Some other irrelevant text

    After selecting this 'preformatted text', the user would then use overtype to replace the x's with actual values without disturbing the alignment of the rest of the text. (To be clear, they can still edit any part of the 'preformatted text' if they so wish; it's just that usually they only want to replace specific portions of the text. Keeping the alignment is important, since these sections of text can end up on printed documents.) Of course, the same effect can be achieved by just selecting the x's to replace first, but it would be helpful (with respect to easing the transition to the web app) to allow old methods of doing things to continue to work, while still allowing 'web methods' to be used by the more tech-savvy users. Essentially, we're trying to make the initial transition from the green-screen app to the web app as seamless as possible to minimise the resistance from the long-time green-screeners.

    Read the article

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third party web sites. This is its structure: id, site_id, unixtime, unixtime_last, ip_address, uid There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid There are many different types of ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as determining if this is a new visitor or a returning visitor. Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient? I don't really understand B-trees besides some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? Because I considered having the site_id being the second column of the index for both ip_address and uid but I think that would make the index less efficient since the IP and UID are going to vary more than the site ID will, because we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis. I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine if this is a new visitor to this site_id or not. The query would be something like: select id from sessions where uid = 'value' and site_id = 123 limit 1 ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't be super fast necessarily, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID. Any insight on making this as efficient as possible would be appreciated :) Update - this is a MyISAM table with MySQL 5.0. My concerns are both with performance as well as storage space. This table is both read and write heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.

    Read the article

  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes and inserts that over time (a few days) sees significant query degradation. If we delete and recreate the index, query performance is restored.
    We are using out-of-the-box settings. The table in our test starts out empty and grows to half a million rows. It has fairly large rows (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process. We are using PostgreSQL on Windows with a Java client.
    I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work, and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our queries to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code.
    Additional information: table and index
    CREATE TABLE icl_contacts
    (
      id bigint NOT NULL,
      campaignfqname character varying(255) NOT NULL,
      currentstate character(16) NOT NULL,
      xmlscheduledtime character(23) NOT NULL,
      ... 25 or so other fields, most of them fixed or varying character fields ...
      CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
    )
    WITH (OIDS=FALSE);
    ALTER TABLE icl_contacts OWNER TO postgres;
    CREATE INDEX icl_contacts_idx ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname);
    Analyze:
    Limit  (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
      ->  Index Scan using icl_contacts_idx on icl_contacts  (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
            Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))
    And, yes, I am aware there are a variety of things we could do to normalize and improve the design of this table, and some of those options may be available to us. My focus in this question is understanding how PostgreSQL manages the index and query over time (understanding why, not just fixing it). If it were to be done over or significantly refactored, there would be a lot of changes.
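    As a point of reference (not a diagnosis - whether it applies here is an assumption), the pattern described is commonly associated with table and index bloat from dead row versions, which VACUUM/autovacuum is meant to keep in check. The standard maintenance commands below are one way to check whether that is what is happening:
    VACUUM ANALYZE icl_contacts;    -- reclaims dead row versions and refreshes planner statistics
    REINDEX INDEX icl_contacts_idx; -- rebuilds the index; blocks writes to the table while it runs
    -- Compare physical size against the number of live rows to spot bloat:
    SELECT pg_size_pretty(pg_total_relation_size('icl_contacts'));
    SELECT pg_size_pretty(pg_relation_size('icl_contacts_idx'));
    If REINDEX restores performance the same way dropping and recreating the index does, index bloat is the likely culprit and tuning autovacuum for this table would be the next thing to look at.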

    Read the article

  • J: Self-reference in bubble sort tacit implementation

    - by Yasir Arsanukaev
    Hello people! Since I'm a beginner in J, I've decided to solve a simple task using this language, in particular implementing the bubblesort algorithm. I know it's not idiomatic to solve this kind of problem in functional languages, because it's naturally solved using array element transposition in imperative languages like C, rather than by constructing a modified list as in declarative languages. However, this is the code I've written:
    (((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: #
    Let's apply it to an array:
    (((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: # 5 3 8 7 2
    2 3 5 7 8
    The thing that confuses me is $: referring to the statement within the outermost parentheses. Help says that: $: denotes the longest verb that contains it. The other book (~ 300 KiB) says:
    3+4
    7
    5*20
    100
    Symbols like + and * for plus and times in the above phrases are called verbs and represent functions. You may have more than one verb in a J phrase, in which case it is constructed like a sentence in simple English by reading from left to right; that is, 4+6%2 means 4 added to whatever follows, namely 6 divided by 2.
    Let's rewrite my code snippet omitting the outermost ()s:
    ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) ^: # 5 3 8 7 2
    2 3 5 7 8
    Results are the same. I couldn't explain to myself why this works: why only ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) is treated as the longest verb for $:, and not the whole expression ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) ^: #, and not just (<./@(2&{.)), $:@((>./@(2&{.)),2&}.). After all, if ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) is a verb, it should also form another verb after conjunction with #, i.e. one might treat the whole sentence (first snippet) as a verb. Probably there's some limit on verb length imposed by one conjunction. Look at the following code (from here):
    factorial =: (* factorial@<:) ^: (1&<)
    factorial 4
    24
    factorial within the expression refers to the whole function, i.e. (* factorial@<:) ^: (1&<). Following this example, I've used a function name instead of $::
    bubblesort =: (((<./@(2&{.)), bubblesort@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: #
    bubblesort 5 3 8 7 2
    2 3 5 7 8
    I expected bubblesort to refer to the whole function, but that doesn't seem to be the case, since the result is correct. Also I'd like to see other implementations if you have them, even slightly refactored ones. Thanks.

    Read the article

  • How to store wiki sites (vcs)

    - by Eugen
    Hello, as a personal project I am trying to write a wiki with the help of Django. I'm a beginner when it comes to web development. I am at the (early) point where I need to decide how to store the wiki sites, and I have three approaches in mind. I would like to know your suggestions.
    Flat files: I considered a flat-file approach with a version control system like git or mercurial. Firstly, I would have some example wikis to look at, like http://hatta.sheep.art.pl/. Secondly, the vcs would probably deal with editing conflicts and keep the edit history, so I would not have to reinvent the wheel. And thirdly, I could probably easily clone the wiki repository, so I (or for that matter others) could have an offline copy of the wiki. On the other hand, as far as I know, I cannot use Django models with flat files. Then, if I wanted to add fields to a wiki site, like a category, I would need to somehow keep a reference to that flat file in order to associate the fields in the database with the file. Besides, I don't know if it is a good idea to have all the wiki sites in one repository; I imagine it is more natural to have something like one repository per wiki site, i.e. per file. Last but not least, I'm not sure, but I think using flat files would limit my deployment options, because some web hosts don't allow creating files (I'm thinking, for example, of Google App Engine).
    Storing in a database: by storing the wiki sites in the database I can use Django models and associate arbitrary fields with a wiki site. I would probably also have an easier life deploying the wiki. But I would not get vcs features like history and conflict resolution for free. I searched for Django extensions to help me and found django-reversion. However, I do not fully understand whether it fits my needs: does it track changes to the model definitions (for example, if I change the Django model file), or does it track the content of the model instances (which is what I need)? Also, I can't tell whether django-reversion would help me with edit conflicts.
    Storing a vcs repository in a database field: this would be my ideal solution. It would combine the advantages of both previous approaches without the disadvantages; that is, I would have vcs features but would save the wiki sites in a database. The problem is: I have no idea how feasible that is. I just imagine saving a wiki site/source together with a git/mercurial repository in a database field, yet I somehow doubt database fields work like that.
    So, I'm open to other approaches, but this is what I came up with. Also, if you're interested, you can find the crappy early test I'm working on here: http://github.com/eugenkiss/instantwiki-test
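    In case it helps with weighing the second option, here is a minimal sketch of storing sites plus their history directly in the database with plain Django models (model and field names are made up for illustration; this is hand-rolled history, not django-reversion):
    from django.db import models

    class Page(models.Model):
        title = models.CharField(max_length=200, unique=True)
        category = models.CharField(max_length=100, blank=True)   # arbitrary extra field

        def current_revision(self):
            # The newest revision is the page's current content.
            return self.revisions.order_by('-created_at')[0]

    class Revision(models.Model):
        page = models.ForeignKey(Page, related_name='revisions')
        content = models.TextField()
        comment = models.CharField(max_length=200, blank=True)
        created_at = models.DateTimeField(auto_now_add=True)
    Edit conflicts could then be detected at save time by checking whether the revision the editor started from is still the newest one, though merging the two versions would still be up to you.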

    Read the article
