Search Results

Search found 9897 results on 396 pages for 'ruby protobuf'.


  • How to triage this MySQL duplicate entry error after running Rails migration?

    - by keruilin
    I get the following error when I try to run this migration:

      == AddUniquenessConstraintOnAwards: migrating ================================
      -- add_index(:awards, [:badge_id, :game_week_id], {:unique=>true, :name=>:game_badge_index})
      rake aborted!
      An error has occurred, all later migrations canceled:
      Mysql::Error: Duplicate entry '35-8192' for key 'game_badge_index': CREATE UNIQUE INDEX `game_badge_index` ON `awards` (`badge_id`, `game_week_id`)

    Has anyone encountered this? What's the error telling me? How did you troubleshoot it and ultimately fix it?
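
    The error means the awards table already holds at least two rows sharing badge_id 35 and game_week_id 8192, so MySQL refuses to build a unique index over them. A hedged sketch of one common fix (the migration name and the keep-the-oldest-row rule are assumptions, not from the post): delete the duplicates first, then add the index.

      # Hypothetical cleanup migration: keep the lowest id per
      # (badge_id, game_week_id) pair, delete the rest, then add the index.
      class DedupeAwardsBeforeUniqueIndex < ActiveRecord::Migration
        def self.up
          execute <<-SQL
            DELETE a1 FROM awards a1
            INNER JOIN awards a2
              ON  a1.badge_id     = a2.badge_id
              AND a1.game_week_id = a2.game_week_id
              AND a1.id > a2.id
          SQL
          add_index :awards, [:badge_id, :game_week_id],
                    :unique => true, :name => :game_badge_index
        end

        def self.down
          remove_index :awards, :name => :game_badge_index
        end
      end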

    Read the article

  • How to restrict text search to a certain subset of the database?

    - by Nikhil Garg
    I have a large central database of around 1 million heavy records. In my app, every user would have a subset of rows from the central table, which would be very small (probably 100 records each). When a particular user has logged in, I want to search over that data set only.

    Example: say I have a central database of all cars in the world, and user profiles for General Motors (GM), Ferrari, etc. When GM is logged in I just want to search (a full-text search, not a SQL query) those cars which are manufactured by GM. GM may also launch or withdraw a model, in which case the central db would be updated and so would the row set associated with GM. In the case of acquisitions, the db of certain profiles may change without any car being launched or removed, so the central db wouldn't change then, but the row sets would.

    What's the best way to implement such a design? These smaller row sets would need to be dynamic, depending on user activity. We are on Rails 2.3.5 and use thinking_sphinx as the connector and Sphinx/MySQL for search and relational associations.
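
    Since thinking_sphinx is already in the stack, one hedged approach (the model and column names below are assumptions) is to index the owning profile's id as a Sphinx attribute and filter on it with :with, which restricts the full-text search without firing a SQL query:

      class Car < ActiveRecord::Base
        belongs_to :manufacturer

        define_index do
          indexes name
          has manufacturer_id   # filterable Sphinx attribute
        end
      end

      # Search only within GM's row set:
      Car.search "roadster", :with => { :manufacturer_id => gm.id }

    Attribute filters are applied inside Sphinx itself, so row-set changes only require the usual index rebuild or delta indexing, not schema changes.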

    Read the article

  • Action Controller: Exception - ID not found

    - by Danny McClelland
    Hi everyone, I am slowly getting the hang of Rails, and thanks to a few people I now have a basic grasp of the database relations and associations etc. You can see my previous questions here: http://stackoverflow.com/questions/2714621/rails-database-relationships I have set up my application's models with all of the necessary has_one and has_many :through etc., but when I go to add a kase and choose a company from the drop-down list, it doesn't seem to be assigning the company ID to the kase. You can see a video of the application and error here: http://screenr.com/BHC You can see a full breakdown of the application and relevant source code at the Git repo here: http://github.com/dannyweb/surveycontrol If anyone could shed some light on my mistake I would appreciate it very much! Thanks, Danny
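
    Without the form code inline this is a guess, but the usual culprit is a select that isn't named under the kase params, so company_id never reaches mass assignment on create. A hedged sketch (field and model names assumed from the question):

      <!-- app/views/kases/_form.html.erb -->
      <% form_for @kase do |f| %>
        <%= f.collection_select :company_id, Company.all, :id, :name %>
        <%= f.submit "Save" %>
      <% end %>

    It is also worth checking that Kase declares belongs_to :company and that :company_id isn't excluded by an attr_accessible list.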

    Read the article

  • RESTful membership

    - by FoxDemon
    I am currently trying to design a RESTful MembershipsController. The controller action update is used only for promoting, banning, approving, ... members. To invoke the update action, the URL must contain a parameter called type with the appropriate value. I am not too sure whether that is really RESTful design. Should I rather introduce separate actions for promoting etc.?

      class MembershipsController < ApplicationController
        def update
          @membership = Membership.find params[:id]
          if Membership.aasm_events.keys.include?(params[:type].to_sym) # [:ban, :promote, ...]
            @membership.send("#{params[:type]}!")
            render :partial => 'update_membership'
          end
        end
      end
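
    One hedged alternative (Rails 2.x routing assumed): expose each state change as its own member action, so the route itself names the intent and update stays reserved for ordinary attribute updates.

      # config/routes.rb
      map.resources :memberships,
                    :member => { :ban => :put, :promote => :put, :approve => :put }

      class MembershipsController < ApplicationController
        def ban
          @membership = Membership.find(params[:id])
          @membership.ban!
          render :partial => 'update_membership'
        end
        # promote/approve follow the same pattern
      end

    This trades a bit of repetition for URLs like PUT /memberships/1/ban, which many consider more honest than overloading update with a type parameter.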

    Read the article

  • Prevent Rails link_to_remote multiple submits w Javascript

    - by Chris
    In a Rails project I need to keep a link_to_remote from getting double-clicked. It looks like :before and :after are my only choices - they get prepended/appended to the onclick Ajax call, respectively. But if I try something like :before => "self.stopObserving()", the Ajax is never run. If I try it for :after, the Ajax is run but the link never stops observing. The solutions I've seen rely on creating a variable and blocking the whole form, but there are multiple link_to_remote rows on this page and it is valid to click more than one of them at a time - just not the same one twice. One variable per row declared outside of link_to_remote seems very kludgey... Instead of using Prototype I originally tried plain Javascript first for this proof of concept - but it fails too:

      <a href="#" onclick="self.onclick = function(){alert('foo');};">click</a>

    just puts up an alert when clicked - the lambda here does nothing? This next one is more like the desired goal and should only alert the first time. But instead it alerts every time:

      <a href="#" onclick="alert('bar'); self.onclick = function(){return false;};">click</a>

    All ideas appreciated!
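
    One hedged trick that stays per-link, with no outside variable: inside an inline handler, `this` is the anchor element itself (while `self` is the window, which may explain the odd behavior above), so a guard flag can live on the element. link_to_remote's :condition option can then test the flag before firing the Ajax call (URL helper and label below are assumptions):

      link_to_remote "Process row",
        :url => process_row_path(row),
        :condition => "!this.fired && (this.fired = true)"

    The generated onclick only issues the request when the condition is truthy, so the first click sets the flag and subsequent clicks on the same link are ignored, while other rows keep their own flags.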

    Read the article

  • Performing AJAX calls on the "new" controller

    - by shmichael
    In my Rails app, I want to have a sortable list as part of an object creation. The Railscast suggested as best practice adds the acts_as_list plugin and then initiates AJAX calls to update item position. However, AJAX calls won't work on an unsaved model, which is the situation with new. One solution would be to save the model immediately on new and redirect to edit. This would have a nice side effect of persisting any change, so the user could resume work should he be interrupted. However, this solution adds the unwanted complexity of saving an invalid model, compromising Rails' validation process. Is there a better way to allow AJAX + validations without too much work?
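
    A hedged alternative that avoids server round-trips entirely on new: keep positions client-side in hidden fields and persist the whole list in the single create request. This sketch assumes Rails 2.3 nested attributes (accepts_nested_attributes_for :tasks on the model) and made-up attribute names:

      <% form_for @project do |f| %>
        <ul id="tasks">
          <% f.fields_for :tasks do |t| %>
            <li class="task">
              <%= t.hidden_field :position, :class => "position" %>
              <%= t.text_field :name %>
            </li>
          <% end %>
        </ul>
        <%= f.submit "Create" %>
      <% end %>

    The sortable's onUpdate callback then only renumbers the hidden position fields in the DOM; nothing touches the server until the model validates and saves in one request.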

    Read the article

  • Rails 3.0 MySQL connection problem

    - by palani
    Hi, I have installed RVM on my Ubuntu Linux box and configured a Rails 3 app in it. I am able to start the app server, but when I invoke http://localhost:3000 I get the following error:

      Mysql::Error (Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2))

    I checked that the mysqld service is running well, and my database.yml file is defined well:

      development:
        adapter: mysql
        encoding: utf8
        reconnect: false
        database: test_development
        username: root
        password: admin
        socket: /var/run/mysqld/mysqld.sock

    My installed mysql gem version is 2.8.1. I really don't know what the problem is here.
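
    Error 2 is "no such file or directory", i.e. nothing is listening at that socket path. A hedged first step is to ask the running server where its socket really lives and point database.yml there, or sidestep sockets and connect over TCP (the settings below are illustrative, not from the post):

      $ mysqladmin variables | grep socket

      # database.yml alternative forcing a TCP connection instead:
      development:
        adapter: mysql
        database: test_development
        username: root
        password: admin
        host: 127.0.0.1
        port: 3306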

    Read the article

  • Heroku powered private restricted beta

    - by Ben Sand
    I'd like to run an app in a restricted private beta on Heroku. We're changing the app regularly and haven't done a security audit. To stop anyone exploiting anything, we'd like to lock down the whole site so you need a password to access anything - ideally similar to using .htaccess and .htpasswd files to lock an entire site on an Apache server. Is there a simple one-shot way to do this for a Heroku-hosted app?
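
    Rack middleware is roughly the Heroku equivalent of that Apache one-liner. A hedged sketch using Rack's built-in basic auth (the realm name and config var are illustrative; the run line assumes a Rails 2.x app):

      # config.ru
      use Rack::Auth::Basic, "Private Beta" do |username, password|
        username == "beta" && password == ENV["BETA_PASSWORD"]
      end
      run ActionController::Dispatcher.new

    Setting the credential with `heroku config:add BETA_PASSWORD=...` keeps it out of the repository, and the browser's password prompt then guards every URL on the site.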

    Read the article

  • Eclipse users: Do you use Aptana too?

    - by Glenn
    This San Mateo development company makes Aptana, a freely downloadable, convenient packaging of many plugins for Eclipse. I was recently in an environment where Aptana came pre-installed. Not only is it a good IDE for RoR, it also does a somewhat decent job (sans debugging) for PHP, Python, HTML, CSS, and Javascript. According to their own web site, their IDE also supports Adobe AIR and the iPhone. If you are currently using Eclipse, do you also use Aptana? What, if any, are the drawbacks to using Aptana?

    Read the article

  • Problems setting up AuthLogic

    - by sscirrus
    Hi all, I'm trying to set up a simple login using AuthLogic against my User table. Every time I try, the login fails and I don't know why. I'm sure this is a simple error but I've been hitting a brick wall with it for a while.

      # user_sessions_controller
      def create
        @user_session = UserSession.new(params[:user_session])
        if @user_session.save
          flash[:notice] = "Login successful!"
        else
          flash[:notice] = "We couldn't log you in. Please try again!"
          redirect_to :controller => "index", :action => "index"
        end
      end

      # _user_login.html.erb (the partial from my index page where users log in)
      <% form_tag user_session_path do %>
        <p><label for="login">Login:</label>
           <%= text_field_tag "login", nil, :class => "inputBox", :id => "login" %></p>
        <p><label for="password">Password:</label>
           <%= password_field_tag "password", nil, :class => "inputBox", :id => "password" %></p>
        <p><%= submit_tag "submit", :class => "submit" %></p>
      <% end %>

    I had Faker generate some data for my User table but I cannot log in! Every time I try it just redirects to index. Where am I going wrong? Thanks, everybody.
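
    A hedged observation: UserSession.new(params[:user_session]) expects nested parameters, but the bare text_field_tag helpers post top-level params[:login] and params[:password], so the session is built from nil and the save always fails. Rewriting the partial with form_for (Authlogic session objects support it) nests the fields, assuming the rendering action sets @user_session = UserSession.new:

      <% form_for @user_session, :url => user_session_path do |f| %>
        <p><%= f.label :login %> <%= f.text_field :login, :class => "inputBox" %></p>
        <p><%= f.label :password %> <%= f.password_field :password, :class => "inputBox" %></p>
        <p><%= f.submit "Submit", :class => "submit" %></p>
      <% end %>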

    Read the article

  • Rails and MongoDB with MongoMapper

    - by FCastellanos
    I'm new to Rails development and I'm starting with MongoDB as well. I have been following this Railscast tutorial about complex forms with Rails, but I'm using MongoDB as my database. I have no problems inserting documents with their children and retrieving the data into the edit form, but when I try to update it I get this error:

      undefined method `assert_valid_keys' for false:FalseClass

    This is my entity class:

      class Project
        include MongoMapper::Document

        key :name, String, :required => true
        key :priority, Integer

        many :tasks

        after_update :save_tasks

        def task_attributes=(task_attributes)
          task_attributes.each do |attributes|
            if attributes[:id].blank?
              tasks.build(attributes)
            else
              task = tasks.detect { |t| t.id.to_s == attributes[:id].to_s }
              task.attributes = attributes
            end
          end
        end

        def save_tasks
          tasks.each do |t|
            t.save(false)
          end
        end
      end

    Does anyone know what's happening here? Thanks
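
    A hedged reading of the error: ActiveRecord's save accepts a boolean to skip validations, but MongoMapper's save takes an options hash, so save(false) ends up calling assert_valid_keys on false. Something like this may be what's wanted (the option name is assumed from MongoMapper's API, so treat it as a sketch):

      def save_tasks
        tasks.each { |t| t.save(:validate => false) }
      end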

    Read the article

  • Mongomapper - unit testing with shoulda on rails 2.3.5

    - by egarcia
    I'm trying to implement shoulda unit tests in a Rails 2.3.5 app using MongoMapper. So far I've:

    - Configured a Rails app that uses MongoMapper (the app works)
    - Added shoulda to my gems and installed it with rake gems:install
    - Added config.frameworks -= [ :active_record, :active_resource ] to config/environment.rb so ActiveRecord isn't used.

    My models look like this:

      class Account
        include MongoMapper::Document

        key :name, String, :required => true
        key :description, String
        key :company_id, ObjectId
        key :_type, String

        belongs_to :company
        many :operations
      end

    My test for that model is this one:

      class AccountTest < Test::Unit::TestCase
        should_belong_to :company
        should_have_many :operations
        should_validate_presence_of :name
      end

    It fails on the first should_belong_to:

      ./test/unit/account_test.rb:3: undefined method `should_belong_to' for AccountTest:Class (NoMethodError)

    Any ideas why this doesn't work? Should I try something different from shoulda? I must point out that this is the first time I've tried to use shoulda, and I'm pretty new to testing itself.
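
    A hedged explanation: shoulda's should_belong_to / should_have_many macros are built against ActiveRecord reflections, so with ActiveRecord unloaded they are never defined. Plain shoulda contexts don't depend on ActiveRecord and still run against MongoMapper documents, e.g.:

      class AccountTest < Test::Unit::TestCase
        should "require a name" do
          account = Account.new(:description => "no name")
          assert !account.valid?
          assert account.errors.on(:name)
        end
      end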

    Read the article

  • When to Store Temporary Values in Hidden Field vs. Session vs. Database?

    - by viatropos
    I am trying to build a simple OpenID login panel similar to how Stack Overflow's works. The goal is:

    1. User clicks OpenID/Oauth provider
    2. OpenID/Oauth stuff happens, we end up with the result (already made that)
    3. Then we want to confirm that the user wants to actually create a new account (vs. associating the account with another OpenID account).

    In Stack Overflow, they keep a hidden field on a form that looks like this:

      <form action="/users/openidconfirm" method="post">
        <p>This is an OpenID we haven't seen on Stack Overflow before:</p>
        <p class="openid-identifier">https://me.yahoo.com/a/some-hash</p>
        <p>Do you want to associate this OpenID with your Stack Overflow account?</p>
        <div>
          <input type="hidden" name="fkey" value="9792ab2zza1q2a4ac414casdfa137eafba7">
          <input type="hidden" name="s" value="c1a3q133-11fa-49r0-a7bz-da19849383218">
          <input type="submit" value="Associate OpenID">
          <input type="button" value="Cancel" onclick="window.location.href = 'http://stackoverflow.com/users/169992/viatropos?s=c1a3q133-11fa-49r0-a7bz-da19849383218'">
        </div>
      </form>

    The initial question is: what are those hashes fkey and s? Not that I really care what these specific hashes are, but what seems to be happening is that they have processed the OpenID response and saved it to the DB in a temporary object or something, and from there they generate these keys, because they don't look like Oauth keys to me. The main situation is: after I have processed the OpenID/Oauth responses, I don't yet want to create a new user/account until the user submits the "confirm" form. Should I store the keys and tokens temporarily in a "Confirm" form like this? Or is there a better way? It seems that using a temp database object would be a lot of work to manage properly. Thanks for the help. Lance
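
    One hedged alternative that avoids both hidden fields and a temp table: park the verified identity URL in the session between the provider callback and the confirm submit, where the user can't tamper with it. The controller and helper names below are made up for illustration:

      # after the OpenID/Oauth dance succeeds
      def callback
        session[:pending_identity_url] = result.identity_url
        redirect_to confirm_signup_path
      end

      # when the user submits the confirm form
      def confirm
        identity_url = session.delete(:pending_identity_url)
        return redirect_to(root_path) if identity_url.blank?
        if current_user
          current_user.identities.create!(:url => identity_url)
        else
          User.create_from_identity!(identity_url)
        end
      end

    Nothing user-editable rides in the form, and the pending value disappears with the session if the user abandons the flow.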

    Read the article

  • pickle on jruby

    - by brad
    Does anyone know if pickle is compatible with JRuby? I just installed cucumber and cucumber-rails. I then tried gem install pickle and it installs, but script/generate pickle yields:

      Couldn't find 'pickle' generator

    I did everything according to the readme - no dice. Anyone have any experience with this?

    Specs: jruby-1.4.0, rails-2.3.5, cucumber 0.7.2, pickle 0.2.10
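
    A hedged first check: JRuby keeps its own gem repository separate from MRI's, so a gem installed with the system `gem` command can be invisible to script/generate when run under JRuby. Installing and listing through JRuby itself rules that out:

      $ jruby -S gem list pickle
      $ jruby -S gem install pickle
      $ jruby script/generate pickle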

    Read the article

  • Should I expect Comet to be this slow?

    - by Chad Johnson
    I have the following in a Rails controller:

      def poll
        records = []
        start_time = Time.now.to_i
        while records.length == 0 do
          records = Something.uncached { Something.find(:all, :conditions => { :some_condition => false }) }
          if records.length > 0
            break
          end
          sleep 1
          if Time.now.to_i - start_time >= 20
            break
          end
        end

        responseData = []
        records.each do |record|
          responseData << { 'something' => record.some_value }
          # Flag message as received.
          record.some_condition = true
          record.save
        end
        render :text => responseData.to_json
      end

    and then I have Javascript performing an AJAX request. The request sits there for 20 seconds or until the controller method finds a record in the database, waiting. That works.

      function poll() {
        $.ajax({
          url: '/my_controller/poll',
          type: 'GET',
          dataType: 'json',
          cache: false,
          data: 'time=' + new Date().getTime(),
          success: function(response) {
            // show response here
          },
          complete: function() {
            poll();
          },
          error: function() {
            alert('error');
            poll();
          }
        });
      }

    When I have 5 - 10 tabs open in my browser, my web application becomes super slow. Is this to be expected? Or is there some obvious improvement(s) I can make?
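
    Hedged but likely: each long-poll pins an entire Rails process for up to 20 seconds (sleep blocks the whole process, and a Rails 2.x process serves one request at a time), so 5 - 10 tabs can exhaust a small Mongrel/Passenger pool and stall everything else. One sketch of a cheaper shape is to answer immediately and move the waiting into the client:

      def poll
        records = Something.uncached do
          Something.find(:all, :conditions => { :some_condition => false })
        end
        payload = records.map { |r| { 'something' => r.some_value } }
        records.each { |r| r.update_attribute(:some_condition, true) }
        render :text => payload.to_json
      end

    On the client, calling poll() from setInterval every couple of seconds replaces the recursive complete callback; each request then finishes in milliseconds instead of occupying a process for 20 seconds.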

    Read the article

  • Would a Centralized Blogging Service Work?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following:

    - Extract out common functionality into reusable code (Rubygems and jQuery plugins mostly)
    - If possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra Web Framework with a few core models).

    My assumption is, if I can remove dependencies on local databases, that will make it easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database/performance). I'm not sure if that's a good or bad assumption yet. What do you think? I've made this assumption because most serious database/model functionality has been built on the internet somewhere. Just to name a few:

    - Social Network API: Facebook
    - Messaging API: Twitter
    - Mailing API: Google
    - Event API: Eventbrite
    - Shopping API: Shopify
    - Comment API: Disqus
    - Form API: Wufoo
    - Image API: Picasa
    - Video API: Youtube
    - ...

    Each of those things is fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have. So if I build an app that shows pictures (Picasa) on an Event page (Eventbrite), and you can see who joined the event (Facebook events), and send them emails (Google Apps API), and have them fill out monthly surveys (Wufoo), and watch a video when they're done (Youtube), all integrated into a custom, easy to use website, and I can do that without ever creating a local database, is that a good thing? I ask because there are two things missing from the puzzle that keep forcing me to create that local database:

    - Post API
    - RESTful/Pretty Url API

    While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it be part of some massive thing. For every app, I have to use code for creating pretty/restful urls, and code that saves posts. But it seems like that should be a service! Question is, is that the main point of a website? Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook?

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), they recommend only considering a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question). What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong):

    - With page caching, the Rails process is hit initially and then generates a static HTML file that is served directly by the Web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired, then Rails is hit again and the static file is regenerated with the updated content, ready for the next request.

    - With an HTTP reverse proxy cache, the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh, then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale, then Rails serves the updated content to the proxy, which caches it and then serves it to the browser.

    If my understanding is correct, then doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine if the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?
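
    For completeness, a hedged sketch of the revalidation side the proxy discussion assumes - Rails 2.3's conditional-GET API (the model name is invented):

      def show
        @post = Post.find(params[:id])
        # stale? sets ETag/Last-Modified and renders only on a mismatch;
        # otherwise Rails answers 304 Not Modified without running the view.
        if stale?(:etag => @post, :last_modified => @post.updated_at.utc)
          respond_to do |format|
            format.html
          end
        end
      end

    A 304 here still hits the Rails process but skips view rendering, which is one reason page caching (no Rails hit at all) and proxy caching sit at different points on the cost curve.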

    Read the article

  • Automating rake doc:app

    - by jerhinesmith
    For you Rails programmers, what's the easiest way to keep your RDoc files up to date? I know I can run rake doc:app manually, but I really don't feel like adding a manual step to the check-in process, and since we're already using cruisecontrolrb to handle deployment and testing automation, it seems like there should be an easy way to regenerate these files on check-in. Is anyone already automating rake doc:app? And, if so, what are your suggestions?
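
    Since cruisecontrolrb is already watching check-ins, one hedged option is to fold doc generation into the build command in the project's cruise_config.rb (CruiseControl.rb reads this file from the repository root; the exact task chain is illustrative):

      Project.configure do |project|
        # run the suite, then regenerate RDoc on every green build
        project.build_command = 'rake test && rake doc:app'
      end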

    Read the article

  • acts_as_xapian jobs table

    - by Grnbeagle
    Hi, can someone explain to me the inner workings of the acts_as_xapian_jobs table? I ran into an issue with the acts_as_xapian plugin recently, where I kept getting the following error when it creates an object with xapian-indexed fields:

      Mysql::Error: Duplicate entry 'String-2147483647' for key 2: INSERT INTO `acts_as_xapian_jobs` (`action`, `model`, `model_id`) VALUES ('update', 'String', 23730251831560)

    It turns out the model_id exceeded the max int value of 2147483647. The workaround was to update model_id to use bigint. Why would the model_id be so huge? By looking at the content of acts_as_xapian_jobs, it seems it creates a row for every field that is being indexed. Understanding how a job gets created in the table would help a great deal. Here's a sampling of the table:

      mysql> select * from acts_as_xapian_jobs limit 5\G
      *************************** 1. row ***************************
            id: 19
         model: String
      model_id: 23804037900560
        action: update
      *************************** 2. row ***************************
            id: 49
         model: String
      model_id: 23804037191200
        action: update
      *************************** 3. row ***************************
            id: 79
         model: String
      model_id: 23804037932180
        action: update
      *************************** 4. row ***************************
            id: 109
         model: String
      model_id: 23804037101700
        action: update
      *************************** 5. row ***************************
            id: 139
         model: String
      model_id: 23804037722160
        action: update

    Thanks in advance, Amie
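
    A hedged observation rather than a confirmed answer: model = 'String' is suspicious, since jobs should record a model class name like 'Post'. Numbers that large are consistent with Ruby object_ids of plain String instances on a 64-bit VM, which would mean strings (whose id falls back to object_id) are being queued instead of saved model records. Worth checking from script/console:

      >> s = "some value"
      >> s.id   # Ruby 1.8: Object#id is object_id for non-ActiveRecord objects
      # => a large integer of the same magnitude as the model_id values above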

    Read the article

  • Do Websites need Local Databases Anymore?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following:

    - Extract out common functionality into reusable code (Rubygems and jQuery plugins mostly)
    - If possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra Web Framework with a few core models).

    My assumption is, if I can remove dependencies on local databases, that will make it easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database/performance). I'm not sure if that's a good or bad assumption yet. What do you think? I've made this assumption because most serious database/model functionality has been built on the internet somewhere. Just to name a few:

    - Social Network API: Facebook
    - Messaging API: Twitter
    - Mailing API: Google
    - Event API: Eventbrite
    - Shopping API: Shopify
    - Comment API: Disqus
    - Form API: Wufoo
    - Image API: Picasa
    - Video API: Youtube
    - ...

    Each of those things is fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have made them. So if I build an app that shows pictures (Picasa) on an Event page (Eventbrite), and you can see who joined the event (Facebook events), and send them emails (Google Apps API), and have them fill out monthly surveys (Wufoo), and watch a video when they're done (Youtube), all integrated into a custom, easy to use website, and I can do that without ever creating a local database, is that a good thing? I ask because there are two things missing from the puzzle that keep forcing me to create that local database:

    - Post API
    - RESTful/Pretty Url API

    While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it be part of some massive thing. For every app, I have to use code for creating pretty/restful urls, and code that saves posts. But it seems like that should be a service! Question is, is that what the website is? ...That place to integrate the world's services for my specific cause... and, sigh, to store posts that only my site has access to. Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook? ...That way I can write apps entirely without a database and know that I'm doing it right. Note: of course at some point you'd need a database, if you were doing something unique or new. But for the case where you're just rewiring information or creating things like videos, events, and products, is it really necessary anymore?

    Read the article

  • ActiveRecord Save Dependent Model

    - by Dmitriy Likhten
    I am trying to save a model along with its dependent models:

      Model1
        has_many :model2, :autosave => true
      Model2
        belongs_to :model1
        has_many :model3, :autosave => true
      Model3
        belongs_to :model2

    I want to save Model1 and have Model2 and Model3 save as well. I tried this both without and with the autosave feature. What winds up happening is: Model1 is saved, Model2 is saved, Model3 is untouched. Is there a way to tell ActiveRecord that for this save I want to save the model and all child models at once? As a side note, all 3 are just created and are not in the database. I cannot use .create on the models because I cannot save them until all validation passes and all business logic succeeds (it has to be a transaction).
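
    One hedged workaround for the behavior described (Model3 left untouched): walk the graph explicitly inside a transaction, so everything validates via save! and any failure rolls the whole lot back. The plural association names below are assumptions:

      Model1.transaction do
        model1.save!
        model1.model2s.each do |m2|
          m2.save!
          m2.model3s.each { |m3| m3.save! }
        end
      end

    save! raises ActiveRecord::RecordInvalid on a validation failure, which aborts the transaction, so the all-or-nothing requirement is preserved without .create.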

    Read the article

  • mongodb read/write performance and mongo hosting in the cloud

    - by z3cko
    We are currently developing a high-traffic Rails application with facebooker (a Facebook game). Since Amazon SimpleDB (aws-sdb) is really slow, we are thinking of using a dedicated MongoDB server as offered by MongoHQ, for example. Questions:

    - What is the read/write peak value for a MongoDB server running on an Amazon EC2 instance?
    - What would be a recommended setup for an EC2-hosted app with MongoDB - a master on Amazon EBS and replicas on the EC2 instances? Any examples or experiences?
    - Is there a company that offers MongoDB hosting in the cloud?

    Thanks, mz

    Read the article

  • image protection in rails

    - by Cezar
    Hello, I am looking for ways to protect my product images, and I don't know if there's anything out there better than what I've already found: disabling right-click, putting a transparent image in front of the picture, and watermarking. Obviously none of them is perfect, but I was curious whether someone has come up with a better solution to this problem. Also, is there any Rails plugin to aid with that? Thanks
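
    Of the three, only watermarking survives a determined user - right-click blocking and transparent overlays don't stop a screenshot or the browser cache. A hedged RMagick sketch of watermarking at upload time (the paths and method are illustrative, not from any particular plugin):

      require 'RMagick'

      # Composite a translucent watermark into the bottom-right corner.
      def watermark(source_path, mark_path, output_path)
        image = Magick::Image.read(source_path).first
        mark  = Magick::Image.read(mark_path).first
        image.composite!(mark, Magick::SouthEastGravity, Magick::OverCompositeOp)
        image.write(output_path)
      end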

    Read the article
