Search Results

Search found 8 results on 1 page for 'hogan'.

Page 1/1

  • How do I set up a custom Gem.path using JRuby::Rack?

    - by Ben Hogan
    Hi Nick et al, I've been having some fun looking at the source code of JRuby-Rack and RubyGems to try to figure out how to solve an org.jruby.rack.RackInitializationException: no such file to load -- compass in my rackup script, caused by require 'compass'. I'm passing in a custom 'gem.path' as a servlet init parameter, and as far as I can tell by debugging in my rackup script it is being picked up correctly by jruby-rack: ENV['GEM_PATH'] => '/foo/lib/.jruby/gems' (expected). But RubyGems seems to be broken: Gem.path => file:/foo/lib/jruby-complete-1.4.0.jar!/META-INF/jruby.home/lib/ruby/gems/1.8. I'm not sure why RubyGems has not adjusted its gem path or the LOAD_PATH, thus breaking require. Thanks again; I'm still a newbie at Ruby, JRuby, Rack and Sinatra. Any pointers in the right direction appreciated! Ben
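
    A possible workaround (editor's sketch, not a confirmed fix): RubyGems caches its search paths the first time it loads, so if ENV['GEM_PATH'] is set after RubyGems has already initialized, the stale cached value wins. RubyGems' Gem.clear_paths discards that cache and forces a re-read of GEM_HOME/GEM_PATH from the environment on the next access. The rackup content below is illustrative:

        # config.ru -- speculative sketch; assumes jruby-rack has already
        # copied the 'gem.path' servlet init parameter into ENV['GEM_PATH'].
        require 'rubygems'

        # RubyGems memoizes Gem.path on first use; clear_paths makes it
        # re-read GEM_HOME/GEM_PATH from ENV on the next access.
        Gem.clear_paths

        puts "Gem.path: #{Gem.path.inspect}"  # should now include /foo/lib/.jruby/gems

        require 'compass'   # resolved against the refreshed gem path
        require 'sinatra'

        run Sinatra::Application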

    Read the article

  • How should jruby-jars and jruby-rack be added to the classpath using warbler?

    - by Ben Hogan
    Hi again, I've been reading through the warbler source code, and I can't figure out how the jruby-jars and jruby-rack jars are meant to end up on the servlet classpath? It seems warbler is copying them into web-inf/gems/gems/<gemname>/lib/<jarname>.jar but they are not on the classpath. I'm guessing that if I put them in my ruby apps lib/ folder they would be copied to web-inf/lib and all would be well, however, it seems odd to have 2 copies of the jar in the war file, is that what I am meant to do? Ben

    Read the article

  • Which jsPerf test should I consider the standard for checking the performance of JavaScript template engines?

    - by bhargav
    I am searching for a JavaScript template engine that performs well in large JS applications and is also suitable for mobile applications, so I have gone through the various jsPerf tests for these. There seem to be a lot of them, showing different results, and it is confusing to work out which is the standard test. Can someone point me to a standard jsPerf test to refer to, one that includes the following engines: dust, underscore, hogan, mustache, handlebars? From what I have observed, doT.js is a consistent performer with good rendering speed, but is it mature enough for larger applications? Also, what are the "with" and "no with" variants shown in the jsPerf tests? Can someone explain? In all the tests I have seen, popular engines like mustache, handlebars, dust, hogan, etc. seem to perform worse than others, so why do people use them rather than the top performers? Is it because of the maturity of these template engines? Thanks in advance
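
    Some context on the "with" / "no with" wording (editor's note): most of these engines compile a template into a JavaScript function, and some generated functions wrap their body in a with(data) block so bare identifiers resolve against the data object. The alternative emits explicit property access, which is faster and works in strict mode. A hand-written illustration of the two styles (not any engine's actual output):

        var data = { name: "Hogan" };

        // "with" style: bare identifiers resolve against the data object.
        // Convenient for generated code, but slower and disallowed in strict mode.
        function renderWith(obj) {
          with (obj) {
            return "Hello, " + name + "!";
          }
        }

        // "no with" style: explicit property access, strict-mode friendly.
        function renderNoWith(obj) {
          "use strict";
          return "Hello, " + obj.name + "!";
        }

        console.log(renderWith(data));    // Hello, Hogan!
        console.log(renderNoWith(data));  // Hello, Hogan!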

    Read the article

  • An Interesting Perspective on Oracle's Mobile Strategy

    - by Carlos Chang
    Oracle's well known for being an acquisitive company. On average, I think we acquire about one company a month (don't quote me, I didn't run the numbers). With all the excitement around mobile, mobile and, wait for it... mobile, well, you know... what's up with that? Well, just to be clear, and to quote Schultz from Hogan's Heroes: "I know nothing! Nothing!" But I did recently run across this blog by Kevin Benedict over at mobileenterprisestrategies.com covering this very topic, Oracle Mobility Emerges Prepared for the Future. A little (fair use) snippet here: "History, however, may reward Oracle's patience. While veteran mobile platform vendors (including SAP) have struggled to keep up with the fast changing market, R&D investment requirements, the fickle preferences of mobile developers, and the emergence of cloud-based mobile services, Oracle has kept their focus on supporting mobile developers with integration services and tools that extend their solutions out to mobile apps." It's an interesting read, and I would encourage you to check it out here. BTW, if you're a Twitter user, follow our new account @OracleMobile. To the first ten thousand followers, I bequeath you my sincere virtual thanks and gratitude. :) For the dedicated mobile blog, go to blogs.oracle.com/mobile.

    Read the article

  • SQL Server Bulk insert of CSV file with inconsistent quotes

    - by mattstuehler
    Is it possible to BULK INSERT (SQL Server) a CSV file in which the fields are only OCCASIONALLY surrounded by quotes? Specifically, quotes only surround those fields that contain a ",". In other words, I have data that looks like this (the first row contains headers):

        id, company, rep, employees
        729216,INGRAM MICRO INC.,"Stuart, Becky",523
        729235,"GREAT PLAINS ENERGY, INC.","Nelson, Beena",114
        721177,GEORGE WESTON BAKERIES INC,"Hogan, Meg",253

    Because the quotes aren't consistent, I can't use '","' as a delimiter, and I don't know how to create a format file that accounts for this. I tried using ',' as a delimiter and loading the data into a temporary table where every column is a varchar, then using some kludgy processing to strip out the quotes, but that doesn't work either, because the fields that contain ',' are split into multiple columns. Unfortunately, I don't have the ability to manipulate the CSV file beforehand. Is this hopeless? Many thanks in advance for any advice. By the way, I saw this post, SQL bulk import from csv, but in that case EVERY field was consistently wrapped in quotes. So, in that case, he could use ',' as a delimiter, then strip out the quotes afterwards.
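
    One possible angle, if a newer engine is ever an option (editor's sketch): SQL Server 2017 added a CSV mode to BULK INSERT that understands optional RFC-4180-style quoting, i.e. fields quoted only when they contain the delimiter. The table and file path below are illustrative:

        -- Staging table matching the sample data above.
        CREATE TABLE dbo.CompanyStaging (
            id        INT,
            company   VARCHAR(200),
            rep       VARCHAR(100),
            employees INT
        );

        -- SQL Server 2017+ only: FORMAT = 'CSV' makes BULK INSERT honor
        -- optional double-quote qualifiers, so "GREAT PLAINS ENERGY, INC."
        -- stays one field even though it contains a comma.
        BULK INSERT dbo.CompanyStaging
        FROM 'C:\data\companies.csv'      -- illustrative path
        WITH (
            FORMAT = 'CSV',
            FIELDQUOTE = '"',
            FIRSTROW = 2,                 -- skip the header row
            FIELDTERMINATOR = ',',
            ROWTERMINATOR = '\n'
        );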

    Read the article

  • Paging, sorting and filtering in a stored procedure (SQL Server)

    - by Fruitbat
    I was looking at different ways of writing a stored procedure to return a "page" of data. This was for use with the ASP.NET ObjectDataSource, but it could be considered a more general problem. The requirement is to return a subset of the data based on the usual paging parameters, startRowIndex and maximumRows, but also a sortBy parameter to allow the data to be sorted. There are also some parameters passed in to filter the data on various conditions. One common way to do this seems to be something like this:

    [Method 1]

        ;WITH stuff AS (
            SELECT
                CASE
                    WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
                    WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
                    WHEN @SortBy = ...
                    ELSE ROW_NUMBER() OVER (ORDER BY whatever)
                END AS Row,
                ., ., .,
            FROM Table1
            INNER JOIN Table2 ...
            LEFT JOIN Table3 ...
            WHERE ... (lots of things to check)
        )
        SELECT *
        FROM stuff
        WHERE (Row > @startRowIndex)
          AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
        ORDER BY Row

    One problem with this is that it doesn't give the total count, and generally we need another stored procedure for that. This second stored procedure has to replicate the parameter list and the complex WHERE clause. Not nice. One solution is to append an extra column to the final select list, (SELECT COUNT(*) FROM stuff) AS TotalRows. This gives us the total, but repeats it for every row in the result set, which is not ideal.

    [Method 2]

    An interesting alternative is given here (http://www.4guysfromrolla.com/articles/032206-1.aspx) using dynamic SQL. He reckons that the performance is better because the CASE statement in the first solution drags things down. Fair enough, and this solution makes it easy to get the totalRows and slap it into an output parameter. But I hate coding dynamic SQL. All that 'bit of SQL ' + STR(@parm1) + ' bit more SQL' gubbins.

    [Method 3]

    The only way I can find to get what I want, without repeating code which would have to be synchronised, and keeping things reasonably readable, is to go back to the "old way" of using a table variable:

        DECLARE @stuff TABLE (Row INT, ...)

        INSERT INTO @stuff
        SELECT
            CASE
                WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
                WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
                WHEN @SortBy = ...
                ELSE ROW_NUMBER() OVER (ORDER BY whatever)
            END AS Row,
            ., ., .,
        FROM Table1
        INNER JOIN Table2 ...
        LEFT JOIN Table3 ...
        WHERE ... (lots of things to check)

        SELECT *
        FROM @stuff
        WHERE (Row > @startRowIndex)
          AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
        ORDER BY Row

    (Or a similar method using an IDENTITY column on the table variable.) Here I can just add a SELECT COUNT on the table variable to get the totalRows and put it into an output parameter. I did some tests, and with a fairly simple version of the query (no sortBy and no filter), method 1 seems to come out on top (almost twice as quick as the other two). Then I decided to test with roughly the complexity I would actually need, and with the SQL in stored procedures. With this, method 1 takes nearly twice as long as the other two methods. Which seems strange. Is there any good reason why I shouldn't spurn CTEs and stick with method 3?

    UPDATE - 15 March 2012

    I tried adapting Method 1 to dump the page from the CTE into a temporary table, so that I could extract the TotalRows and then select just the relevant columns for the result set. This seemed to add significantly to the time (more than I expected).
    I should add that I'm running this on a laptop with SQL Server Express 2008 (all that I have available), but the comparison should still be valid. I looked again at the dynamic SQL method. It turns out I wasn't really doing it properly (just concatenating strings together). I set it up as in the documentation for sp_executesql (with a parameter description string and parameter list) and it's much more readable. Also, this method runs fastest in my environment. Why that should be the case still baffles me, but I guess the answer is hinted at in Hogan's comment.
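
    Since the update above hinges on doing dynamic SQL "properly", here is a minimal editor's sketch of the sp_executesql pattern described: the values travel as real parameters, only the ORDER BY fragment is concatenated (whitelisted via a CASE), and the total row count comes back through an OUTPUT parameter. Table, column and parameter names are illustrative:

        -- Assumes this sits inside a procedure exposing @SortBy, @SomeFilter,
        -- @startRowIndex and @maximumRows as parameters (names illustrative).
        DECLARE @sql       NVARCHAR(MAX),
                @params    NVARCHAR(500),
                @totalRows INT;

        SET @sql = N'
            SELECT @totalRowsOut = COUNT(*)
            FROM Table1
            WHERE SomeColumn = @filterValue;

            SELECT *
            FROM (
                SELECT ROW_NUMBER() OVER (ORDER BY '
              + CASE @SortBy WHEN N'Name DESC' THEN N'Name DESC' ELSE N'Name' END
              + N') AS Row, *
                FROM Table1
                WHERE SomeColumn = @filterValue
            ) AS paged
            WHERE Row > @startRowIndex
              AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
            ORDER BY Row;';

        -- Parameter description string: everything except the whitelisted
        -- ORDER BY fragment is passed as a genuine parameter.
        SET @params = N'@filterValue VARCHAR(50), @startRowIndex INT, '
                    + N'@maximumRows INT, @totalRowsOut INT OUTPUT';

        EXEC sp_executesql @sql, @params,
             @filterValue   = @SomeFilter,
             @startRowIndex = @startRowIndex,
             @maximumRows   = @maximumRows,
             @totalRowsOut  = @totalRows OUTPUT;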

    Read the article
