Search Results

Search found 2017 results on 81 pages for 'jason swett'.

Page 4/81

  • Ideal PHP Session Size?

    - by Jason
    Hi, I have a PHP form (a mortgage app) with about 400 fields; traffic on the site will be low. What is the ideal session size for 400 fields going into a MySQL db? What do I set in php.ini? Is there anything I should set that I am missing? -Jason

    Read the article

  • Doing a large number of upserts as fast as possible

    - by Jason Swett
    My app (which uses MySQL) is doing a large number of consecutive upserts. Right now my SQL looks like this:

        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL OR','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123')

    So far I've found INSERT IGNORE to be the fastest way to achieve upserts. Selecting a record to see whether it exists, then either updating it or inserting a new one, is too slow. Even this is not as fast as I'd like, because I need to issue a separate statement for each record. Sometimes I'll have around 50,000 of these statements in a row. Is there a way to take care of all of these in just one statement, without deleting any existing records?
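
    One direction worth sketching (an assumption about the fix, nothing the question itself confirms): MySQL's INSERT IGNORE accepts a multi-row VALUES list, so the 50,000 single-row statements could be collapsed into a handful of large batches:

        -- A sketch, assuming the same customer table as above. Batching rows
        -- (e.g. 1,000 per statement) removes most of the per-statement parsing
        -- and network round trips.
        INSERT IGNORE INTO customer (name, customer_number, social_security_number, phone)
        VALUES
          ('VICTOR H KINDELL', '123', '123', '123'),
          ('VICTOR H KINDELL OR', '123', '123', '123'),
          ('TRACY L WALTER PERSONAL REP FOR', '123', '123', '123');

    INSERT ... ON DUPLICATE KEY UPDATE takes the same multi-row form if existing rows should be updated rather than skipped.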

    Read the article

  • Cap deploy doesn't work all of a sudden; something to do with FactoryGirl and assets

    - by Jason Swett
    I've been cap deploying my app all throughout its development, and this last time I tried to deploy, it didn't work. Here's what happened:

        * executing `deploy:assets:precompile'
        * executing "cd /var/www/oneteam/releases/20121006153136 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile"
          servers: ["electricsasquatch.com"]
          [electricsasquatch.com] executing command
        ** [out :: electricsasquatch.com] rake aborted!
        ** [out :: electricsasquatch.com] uninitialized constant OneTeam::Application::FactoryGirl
        ** [out :: electricsasquatch.com]
        ** [out :: electricsasquatch.com] (See full trace by running task with --trace)

    It looks like it failed on the deploy:assets:precompile command. I don't get why that command would have tried to do anything with FactoryGirl, though. Any ideas?
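
    A likely culprit (an assumption; the question doesn't show the app's code): assets:precompile boots the full Rails environment, so a reference to FactoryGirl in config/application.rb or an initializer will raise exactly this error on a server where the gem, usually confined to the :test group, is never loaded. A minimal sketch of one common guard:

        # config/initializers/factory_girl.rb -- a hypothetical sketch: only
        # touch FactoryGirl when the constant can actually be resolved.
        if defined?(FactoryGirl)
          FactoryGirl.definition_file_paths = [Rails.root.join('spec', 'factories')]
        end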

    Read the article

  • Adding one subquery makes query a little slower, adding another makes it way slower

    - by Jason Swett
    This is fast:

        select ba.name,
        penamt.value penamt
        #, address_line4.value address_line4
        from account a
        join customer c on a.customer_id = c.id
        join branch br on a.branch_id = br.id
        join bank ba on br.bank_id = ba.id
        join account_address aa on aa.account_id = a.id
        join address ad on aa.address_id = ad.id
        join state s on ad.state_id = s.id
        join import i on a.import_id = i.id
        join import_bundle ib on i.import_bundle_id = ib.id
        join (select * from unused where heading_label = 'PENAMT') penamt ON penamt.account_id = a.id
        #join (select * from unused where heading_label = 'Address Line 4') address_line4 ON address_line4.account_id = a.id
        where i.active=1

    And this is fast:

        select ba.name,
        #penamt.value penamt,
        address_line4.value address_line4
        from account a
        join customer c on a.customer_id = c.id
        join branch br on a.branch_id = br.id
        join bank ba on br.bank_id = ba.id
        join account_address aa on aa.account_id = a.id
        join address ad on aa.address_id = ad.id
        join state s on ad.state_id = s.id
        join import i on a.import_id = i.id
        join import_bundle ib on i.import_bundle_id = ib.id
        #join (select * from unused where heading_label = 'PENAMT') penamt ON penamt.account_id = a.id
        join (select * from unused where heading_label = 'Address Line 4') address_line4 ON address_line4.account_id = a.id
        where i.active=1

    But this is slow:

        select ba.name,
        penamt.value penamt,
        address_line4.value address_line4
        from account a
        join customer c on a.customer_id = c.id
        join branch br on a.branch_id = br.id
        join bank ba on br.bank_id = ba.id
        join account_address aa on aa.account_id = a.id
        join address ad on aa.address_id = ad.id
        join state s on ad.state_id = s.id
        join import i on a.import_id = i.id
        join import_bundle ib on i.import_bundle_id = ib.id
        join (select * from unused where heading_label = 'PENAMT') penamt ON penamt.account_id = a.id
        join (select * from unused where heading_label = 'Address Line 4') address_line4 ON address_line4.account_id = a.id
        where i.active=1

    Why is it fast when I include just one of the two subqueries but slow when I include both? I would think it should be twice as slow when I include both, but it takes a really long time. I'm on MySQL.
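
    A plausible explanation and workaround (an assumption, not something the question establishes): each derived table is materialized as an unindexed temporary table, so joining two of them forces a full scan of one per row of the other, which is far worse than twice the single-subquery cost. Joining unused directly lets MySQL use whatever indexes it has:

        -- A sketch: the same columns via plain self-joins on `unused`, so an
        -- index on (account_id, heading_label), if one exists, can be used.
        -- Joins not needed for the selected columns are trimmed for brevity.
        select ba.name, penamt.value penamt, address_line4.value address_line4
        from account a
        join import i on a.import_id = i.id
        join branch br on a.branch_id = br.id
        join bank ba on br.bank_id = ba.id
        join unused penamt
          on penamt.account_id = a.id and penamt.heading_label = 'PENAMT'
        join unused address_line4
          on address_line4.account_id = a.id and address_line4.heading_label = 'Address Line 4'
        where i.active = 1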

    Read the article

  • Saving a form using autocomplete instead of select field

    - by Jason Swett
    I have a form that looks like this:

        <%= form_for(@appointment) do |f| %>
          <% if @appointment.errors.any? %>
            <div id="error_explanation">
              <h2><%= pluralize(@appointment.errors.count, "error") %> prohibited this appointment from being saved:</h2>
              <ul>
              <% @appointment.errors.full_messages.each do |msg| %>
                <li><%= msg %></li>
              <% end %>
              </ul>
            </div>
          <% end %>

          <%= f.fields_for @client do |client_form| %>
            <div class="field">
              <%= client_form.label :name, "Client Name" %><br />
              <%= client_form.text_field :name %>
            </div>
          <% end %>

    As you can see, the field for @client is a text field as opposed to a select field. When I try to save my form, I get this error:

        Client(#23852094658120) expected, got ActiveSupport::HashWithIndifferentAccess(#23852079773520)

    That's not surprising. It seems to me that it was expecting a select field, which it could translate into a Client object, but instead it just got a string. I know I can do

        Client.find( :first, :conditions => { :name => params[:name] } )

    to find a Client with that name, but how do I tell my form that that's what's going on?
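
    A sketch of one common way out (an assumption, not from the question): resolve the typed name to a Client in the controller before assigning the rest of the params, so the model never sees the raw hash:

        # app/controllers/appointments_controller.rb -- a hypothetical sketch;
        # param keys follow the form above, and Appointment is assumed to have
        # a client association.
        def create
          client_params = params[:appointment].delete(:client)
          client = Client.find(:first, :conditions => { :name => client_params[:name] })
          @appointment = Appointment.new(params[:appointment])
          @appointment.client = client
          if @appointment.save
            redirect_to @appointment
          else
            render :action => 'new'
          end
        end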

    Read the article

  • Weird Rails database errors

    - by Jason Swett
    I've had some trouble getting my Rails app to connect to PostgreSQL, so I decided to just say screw it and use SQLite for now. (I'm using the tutorial here: http://guides.rubyonrails.org/getting_started.html) I started a BRAND NEW, fresh Rails app from this tutorial. When I visit my app in the browser after deleting public/index.html, I get this the first time:

        Please install the pg adapter: `gem install activerecord-pg-adapter` (no such file to load -- active_record/connection_adapters/pg_adapter)

    That's odd to me because I'm not mentioning PostgreSQL anywhere. Here's my config/database.yml:

        # SQLite version 3.x
        #   gem install sqlite3-ruby (not necessary on OS X Leopard)
        development:
          adapter: sqlite3
          database: db/development.sqlite3
          pool: 5
          timeout: 5000

        # Warning: The database defined as "test" will be erased and
        # re-generated from your development database when you run "rake".
        # Do not set this db to the same as development or production.
        test:
          adapter: sqlite3
          database: db/test.sqlite3
          pool: 5
          timeout: 5000

        production:
          adapter: sqlite3
          database: db/production.sqlite3
          pool: 5
          timeout: 5000

    To make things more confusing, I only get that "pg adapter" error on the first load. For every subsequent page request, I get this error:

        ActiveRecord::ConnectionNotEstablished

    So even though I removed all mention of PostgreSQL, I'm still getting errors. What could be going on?
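
    One diagnostic worth running (a sketch, an assumption about where to look): the error names a "pg" adapter that appears nowhere in the YAML shown, which suggests the running server is reading a different configuration than the file being edited, or a stale server process is still alive. Asking Rails what it actually loaded narrows that down quickly:

        # A sketch: from the app root, print which app and which database
        # configuration ActiveRecord actually sees, then restart the server.
        $ rails runner 'puts Rails.root; puts ActiveRecord::Base.configurations.inspect'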

    Read the article

  • Rails using plural table names even though I told it to use singular

    - by Jason Swett
    I tried to run rake test:profile and I got this error:

        ...
        Table 'mcif2.accounts' doesn't exist: DELETE FROM `accounts`

    I know accounts doesn't exist. It's called account. I know Rails uses plural table names by default, but here's what my config/environment.rb looks like:

        # Load the rails application
        require File.expand_path('../application', __FILE__)

        # Initialize the rails application
        McifRails::Application.initialize!

        ActiveRecord::Base.pluralize_table_names = false

    And here's what db/schema.rb looks like:

        ActiveRecord::Schema.define(:version => 0) do
          create_table "account", :force => true do |t|
            t.integer "customer_id",     :limit => 8, :null => false
            t.string  "account_number",               :null => false
            t.integer "account_type_id", :limit => 8
            t.date    "open_date",                    :null => false

    So I don't understand why Rails still wants to call it accounts sometimes. Any ideas? If it helps give any clues at all, here are the results of grep -ir 'accounts' *.
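
    One plausible cause (an assumption the question doesn't confirm): the flag is set after McifRails::Application.initialize! runs, so anything initialized during boot has already computed plural names. The conventional placement is inside the application class; a sketch:

        # config/application.rb -- a sketch: set the flag before the app
        # initializes, rather than after initialize! in environment.rb.
        module McifRails
          class Application < Rails::Application
            config.active_record.pluralize_table_names = false
          end
        end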

    Read the article

  • SQL order by within a group by with aggregate

    - by NG
    Say I have Team / Name / Some number:

        Cardinals  Jason   8
        Cardinals  Chris   5
        Yankees    Joba    6
        Cubs       Carlos  6
        Cardinals  Chris   6

    And I want:

        Cardinals  Jason   8
        Cardinals  Chris   11
        Cubs       Carlos  6
        Yankees    Joba    6

    So what I'm doing is grouping by team, grouping by name, and summing the number. However, within Cardinals I want to make sure the names are in a particular order. If I just do an "order by name desc", for example, then the whole grouping gets ignored. So how can I order within a group?
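
    A sketch of one reading of this (assuming a hypothetical table roster with columns team, name, num): ORDER BY doesn't break GROUP BY as long as the outer sort leads with the grouping column:

        -- A sketch: grouping on both columns, then sorting by team first keeps
        -- each team's rows together while ordering names within the team.
        SELECT team, name, SUM(num) AS total
        FROM roster
        GROUP BY team, name
        ORDER BY team, name DESC;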

    Read the article

  • Including email, IMs, configs, etc. in documentation or notes

    - by Jason Antman
    The shop I work in is pretty laid-back. We're on a documentation kick, only because historically we've been very bad at it. We do a lot of our brainstorming in face-to-face meetings, and also communicate heavily via IM in addition to email. While I'm usually pretty good about documentation and keeping copious lab notes, I just finished a build of a host and spent hours searching through IMs, emails, files on my workstation, etc. to pull out anything I missed in my lab notes, which formed a large part of the basis for the internal documentation. Does anyone have any thoughts on, aside from manually saving things to a project directory, managing various data sources (especially email and IM) and tracking them on a per-project basis? Ideally, I'd like an easy way to put copies of emails, IM logs, etc. into a project-specific directory on my workstation and then just have a cron job that syncs it up with a shared folder. This isn't really a candidate for anything more advanced, as the bulk of the data will be copies of configs, code, etc. Here are the big restrictions: email is via a centralized Zimbra install, so nothing can happen server-side, and my workstation is Linux. Aside from writing Pidgin and Thunderbird plugins that let me tag chats and emails as belonging to a project, and then copy them to the appropriate place... any thoughts? Suggestions? Thanks, Jason
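
    The syncing half of that wish list is at least a one-liner; a sketch with invented paths:

        # crontab entry -- a hypothetical sketch: mirror per-project notes to
        # the shared folder every hour. Both paths are made up.
        0 * * * * rsync -a --delete ~/projects/notes/ /mnt/shared/project-notes/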

    Read the article

  • CUDA: cudaMemcpy only works in emulation mode.

    - by Jason
    I am just starting to learn how to use CUDA. I am trying to run some simple example code:

        float *ah, *bh, *ad, *bd;
        ah = (float *)malloc(sizeof(float)*4);
        bh = (float *)malloc(sizeof(float)*4);
        cudaMalloc((void **) &ad, sizeof(float)*4);
        cudaMalloc((void **) &bd, sizeof(float)*4);

        ... initialize ah ...

        /* copy array on device */
        cudaMemcpy(ad,ah,sizeof(float)*N,cudaMemcpyHostToDevice);
        cudaMemcpy(bd,ad,sizeof(float)*N,cudaMemcpyDeviceToDevice);
        cudaMemcpy(bh,bd,sizeof(float)*N,cudaMemcpyDeviceToHost);

    When I run in emulation mode (nvcc -deviceemu) it runs fine (and actually copies the array). But when I run it in regular mode, it runs without error but never copies the data. It's as if the cudaMemcpy lines are just ignored. What am I doing wrong? Thank you very much, Jason
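
    A standard first step (a sketch, not something the question tried): the CUDA runtime fails silently unless return codes are checked, so in device mode the copies may be erroring out, for example because no usable CUDA device or driver is present:

        /* A sketch: check every CUDA call's return code so device-mode
           failures surface immediately instead of being ignored. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <cuda_runtime.h>

        #define CUDA_CHECK(call)                                      \
            do {                                                      \
                cudaError_t err = (call);                             \
                if (err != cudaSuccess) {                             \
                    fprintf(stderr, "%s failed: %s\n", #call,         \
                            cudaGetErrorString(err));                 \
                    exit(EXIT_FAILURE);                               \
                }                                                     \
            } while (0)

        /* usage: */
        CUDA_CHECK(cudaMemcpy(ad, ah, sizeof(float)*N, cudaMemcpyHostToDevice));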

    Read the article

  • Article search engine in php

    - by Jason
    Hello, I am using Sphinx as the search engine on my website. It's working perfectly and I have no complaints about it. The only thing it lacks is that it does not allow me to search with queries longer than 15 words. I know that in reality people don't use more than 3-4 words, but I want to use it for finding duplicate content, so I was wondering whether there is an alternative solution to Sphinx for coping with duplicates. My main articles table is InnoDB, but I am also caching articles into a MyISAM table for full-text searching; when I search an article there, though, it takes ages to perform one search. It's not a query problem; I think MySQL's full-text searching facility is lacking. Thanks Jason
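
    One thing worth verifying first (an assumption; the question doesn't show the schema): a full-text search that takes ages on a MyISAM cache table usually means there is no FULLTEXT index, so MySQL scans every row. A sketch with assumed table and column names:

        -- A sketch, assuming a MyISAM cache table articles_cache(id, body).
        ALTER TABLE articles_cache ADD FULLTEXT INDEX ft_body (body);

        -- Rank stored articles against a long excerpt of a new article; high
        -- scores are duplicate candidates, and MATCH ... AGAINST has no
        -- 15-word query limit.
        SELECT id,
               MATCH(body) AGAINST ('first few hundred words of the new article') AS score
        FROM articles_cache
        WHERE MATCH(body) AGAINST ('first few hundred words of the new article')
        ORDER BY score DESC
        LIMIT 10;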

    Read the article

  • MATLAB: dealing with java.lang.String

    - by Jason S
    I seem to be stuck in Kafka-land, with a java.lang.String that I can't seem to use in MATLAB functions:

        K>> name

        name =

        Jason

        K>> sprintf('%s', name)
        ??? Error using ==> sprintf
        Function is not defined for 'java.lang.String' inputs.

        K>> ['my name is ' name]
        ??? Error using ==> horzcat
        The following error occurred converting from char to opaque:
        Error using ==> horzcat
        Undefined function or method 'opaque' for input arguments of type 'char'.

    How can I get a java.lang.String to convert to a regular MATLAB character array?
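
    For what it's worth, the conversion is a one-liner; a short sketch (char() accepting a java.lang.String is standard MATLAB behavior):

        % A sketch: char() turns a java.lang.String into a regular char array.
        name = java.lang.String('Jason');
        s = char(name);
        sprintf('%s', s)        % now works
        ['my name is ' s]       % concatenation works as well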

    Read the article

  • Sharepoint calendar webpart change views

    - by Jason
    Hi there, I created a calendar list, and I added the calendar web part to another web part page. I noticed that I cannot change the view without going into edit mode and choosing Modify Shared Web Part. But on the calendar's own home page, the same calendar web part has a drop-down menu for changing views, and there is also a small calendar connected to the main calendar web part in the left navigation area, which I don't know how to add to my web part page. How can I make it look the same on my web part page? Thanks, Jason

    Read the article

  • Setting a cookie before Javascript Redirection

    - by Jason
    Hello, I have a Rails app where I set a session variable the moment a user lands on my site, recording the referer and the page they hit. Additionally, I have Google Optimizer sending traffic from my homepage to various landing pages. The problem is that I think Google Optimizer is sending users away before the cookie is set. Is that even possible? I believe the cookie is set from the HTTP header, which must have fully loaded before Google's JavaScript has even loaded. Thanks, Jason

    Read the article

  • Named Blueprints with factory_girl

    - by Jason Nerer
    I am using Factory Girl but like the machinist syntax, so I wonder whether there is any way to create a named blueprint for a class, so that I can have something like this:

        User.blueprint(:no_discount_user) do
          admin false
          hashed_password "226bc1eca359a09f5f1b96e26efeb4bb1aeae383"
          is_trader false
          name "foolish"
          salt "21746899800.223524289203464"
        end

        User.blueprint(:discount_user) do
          admin false
          hashed_password "226bc1eca359a09f5f1b96e26efeb4bb1aeae383"
          is_trader true
          name "deadbeef"
          salt "21746899800.223524289203464"
          discount_rate { DiscountRate.make(:rate => 20.00) }
        end

        DiscountRate.blueprint do
          rate { 10 }
          not_before ...
          not_after ...
        end

    Is there a way to make factory_girl with the machinist syntax act like that? I did not find one. Help appreciated. Thanks in advance, Jason
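
    If the machinist-style shim won't do named blueprints, factory_girl's own classic syntax does; a sketch under that assumption (attributes abbreviated):

        # A sketch: named factories in plain factory_girl (non-machinist) syntax.
        Factory.define :no_discount_user, :class => User do |f|
          f.admin false
          f.is_trader false
          f.name "foolish"
        end

        Factory.define :discount_user, :class => User do |f|
          f.admin false
          f.is_trader true
          f.name "deadbeef"
          f.discount_rate { Factory(:discount_rate, :rate => 20.00) }   # lazy attribute
        end

        Factory.define :discount_rate do |f|
          f.rate 10
        end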

    Read the article

  • Click GEvent.addListener with jquery

    - by Jason
    I created a Google map with GMap2 and put pinpoints on it that open a balloon with the address when the pinpoint is clicked. I would like users to be able to click text on the page itself and have jQuery open the corresponding balloon. However, I can't figure out what ID to use for a jQuery click event. Basically I've got a store listing down the left side, and when a user clicks a store name I want it to open the corresponding balloon.

        GEvent.addListener(marker_500, "click", function () {
          map.openInfoWindowHtml(point, myHtml);
        });

    Any idea what element is tied to this click event? I tried

        $("#marker_500").click();

    and that doesn't work. I also tried alerting $(this).attr('id') inside the click function, and that is undefined. Thanks, jason
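
    One approach that sidesteps the missing ID (a sketch; GEvent.trigger is part of the Maps v2 API, while the class name and rel attribute here are invented): a GMarker is not a DOM element, which is why $("#marker_500") matches nothing. Keeping each marker in a lookup table lets the page's own links fire the balloon:

        // A sketch: store each GMarker under its store id, then trigger the
        // Maps-API click event from a jQuery handler on the store listing.
        var markers = {};
        markers['500'] = marker_500;   // do this as each marker is created

        $('.store-link').click(function () {      // hypothetical class name
          var id = $(this).attr('rel');           // e.g. <a rel="500">
          GEvent.trigger(markers[id], 'click');   // opens that marker's balloon
          return false;
        });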

    Read the article

  • JavaScript array of hrefs

    - by Jason
    Hi, I am trying to create an array with different hrefs to then attach to 5 separate elements. This is my code:

        var link = new Array('link1', 'link2', 'link3', 'link4', 'link5');

        $(document.createElement("li"))
          .attr('class', options.numericId + (i+1))
          .html('<a rel='+ i +' href=\"page.php# + 'link'\">'+ '</a>')
          .appendTo($("."+ options.numericId))

    As you can see, I am trying to append these items from the array to the end of my page so each link will take the user to a different section of the page, but I have not been able to do this. Is there a way to create elements with different links? I am new to JavaScript, so I am sorry if this doesn't make a whole lot of sense; if anyone is confused by what I am asking, I can try to clarify. Any solutions would be greatly appreciated. Thanks, jason
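
    A sketch of what the working loop might look like (assuming the goal is one li per array entry, each linking to a different fragment of page.php; the original's broken string concatenation is the main fix):

        // A sketch: build one <li><a> per entry, each href pointing at a
        // different anchor on page.php.
        var links = ['link1', 'link2', 'link3', 'link4', 'link5'];

        for (var i = 0; i < links.length; i++) {
          $('<li/>')
            .attr('class', options.numericId + (i + 1))
            .html('<a rel="' + i + '" href="page.php#' + links[i] + '">' + links[i] + '</a>')
            .appendTo($('.' + options.numericId));
        }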

    Read the article

  • ASP.NET Web User Control with Javascript used multiple times on a page - How to make javascript func

    - by Jason Summers
    I think I summed up the question in the title. Here is some further elaboration... I have a web user control that is used in multiple places, sometimes more than once on a given page. The web user control has a specific set of JavaScript functions (mostly jQuery code) that are contained in *.js files and automatically inserted into page headers. However, when I use the control more than once on a page, the *.js files are included n times and, rightly so, the browser gets confused as to which control it's meant to be executing which function on. What do I need to do to resolve this problem? I've been staring at this all day and I'm at a loss. All comments greatly appreciated. Jason
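
    One standard remedy (a sketch, assuming the *.js files are registered from the control's code-behind): register the include under a fixed key, so ASP.NET emits it once no matter how many instances of the control the page holds:

        // A sketch (C# code-behind for the user control). The type, key, and
        // script path are hypothetical.
        protected void Page_Load(object sender, EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptInclude(
                typeof(MyUserControl),                        // hypothetical type
                "MyUserControlScripts",                       // shared key
                ResolveClientUrl("~/scripts/mycontrol.js"));  // hypothetical path
        }

    The functions themselves then need to take each instance's ClientID (or a DOM element) as a parameter instead of assuming a single control on the page.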

    Read the article

  • glibc backtrace - can't redirect output to file

    - by Jason Antman
    Hi, I'm in the process of debugging a C program (that I didn't write). I have all of the internal debugging tools (a whole bunch of printf's) enabled, and I wrote a small PHP script that uses proc_open() and just grabs both stdout and stderr, and time-coordinates them in one file. At the moment, the binary is dying with a realloc() error that's caught by glibc, and a glibc backtrace is printed, beginning with:

        *** glibc detected *** /sbin/rsyslogd: realloc(): invalid next size: 0x00002ace626ac910 ***

    Here's the thing I don't understand: I've confirmed that the PHP script is catching both stdout and stderr from the binary's process and writing them to the correct files, but this backtrace is still printed to the console. Where is this coming from? Is there some magical output channel other than stdout and stderr? Any ideas on how to capture this backtrace to a file, or send it out with stderr? Thanks, Jason
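
    There is, in effect, a third channel: by default glibc writes its fatal-error reports to the controlling terminal (/dev/tty) rather than stderr, which is why the wrapper never sees them. Setting LIBC_FATAL_STDERR_=1 in the process's environment sends them to stderr instead; a sketch:

        # A sketch: force glibc's malloc/realloc corruption reports and the
        # accompanying backtrace onto stderr, where proc_open() can catch them.
        LIBC_FATAL_STDERR_=1 ./the-binary 2>> stderr.log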

    Read the article

  • Problem with ajax and posting non-latin characters

    - by jason
    Posting non-Latin-based languages with ajax + jQuery doesn't save the correct text to MySQL. What I have done is this: I am getting multiple translated words from Google's translation API. The ajax request shows the correct translations for all languages, but when I try to insert them into the db, the text shows up garbled in phpMyAdmin. I added AddDefaultCharset UTF-8 to the .htaccess file in the root. I tried setting the header in PHP to UTF-8, and that did not work. I have also tried adding a contentType to the ajax setup, but that didn't work either. Any suggestions appreciated. jason
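
    One frequently missed piece (an assumption, since the question doesn't show the PHP): the MySQL connection itself has to be switched to utf8; headers and .htaccess only affect what the browser sends and sees. A sketch with the classic mysql extension:

        <?php
        // A sketch: without a utf8 connection charset, MySQL reinterprets the
        // UTF-8 bytes PHP sends, producing garbled rows in phpMyAdmin.
        $conn = mysql_connect('localhost', 'user', 'pass');
        mysql_select_db('mydb', $conn);
        mysql_set_charset('utf8', $conn);   // the crucial line
        // ... run the INSERT with the translated text as before ...

    The table's own character set needs to be utf8 as well.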

    Read the article
