Search Results

Search found 9469 results on 379 pages for 'live migration'.

Page 123 of 379

  • Error when pushing to Heroku - ...appear in group - Ruby on Rails

    - by bgadoci
    I am trying to deploy my first Rails app to Heroku and seem to be having a problem. After git push heroku master and heroku rake db:migrate, I get an error on the following query:

        SELECT posts.*, count(*) as vote_total FROM "posts" INNER JOIN "votes" ON votes.post_id = posts.id GROUP BY votes.post_id ORDER BY created_at DESC LIMIT 5 OFFSET 0

    I have included the full error below, as well as PostsController#index (since that seems to be where the grouping happens) and my routes.rb file. I am new to Ruby, Rails, and Heroku, so I am sorry for any simple/obvious questions.

        Processing PostsController#index (for 99.7.50.140 at 2010-04-21 12:50:47) [GET]

        ActiveRecord::StatementInvalid (PGError: ERROR: column "posts.id" must appear in the GROUP BY clause or be used in an aggregate function
        : SELECT posts.*, count(*) as vote_total FROM "posts" INNER JOIN "votes" ON votes.post_id = posts.id GROUP BY votes.post_id ORDER BY created_at DESC LIMIT 5 OFFSET 0):
          vendor/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:82:in `send'
          vendor/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:82:in `paginate'
          vendor/gems/will_paginate-2.3.12/lib/will_paginate/collection.rb:87:in `create'
          vendor/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:76:in `paginate'
          app/controllers/posts_controller.rb:28:in `index'
          /home/heroku_rack/lib/static_assets.rb:9:in `call'
          /home/heroku_rack/lib/last_access.rb:25:in `call'
          /home/heroku_rack/lib/date_header.rb:14:in `call'
          thin (1.0.1) lib/thin/connection.rb:80:in `pre_process'
          thin (1.0.1) lib/thin/connection.rb:78:in `catch'
          thin (1.0.1) lib/thin/connection.rb:78:in `pre_process'
          thin (1.0.1) lib/thin/connection.rb:57:in `process'
          thin (1.0.1) lib/thin/connection.rb:42:in `receive_data'
          eventmachine (0.12.6) lib/eventmachine.rb:240:in `run_machine'
          eventmachine (0.12.6) lib/eventmachine.rb:240:in `run'
          thin (1.0.1) lib/thin/backends/base.rb:57:in `start'
          thin (1.0.1) lib/thin/server.rb:150:in `start'
          thin (1.0.1) lib/thin/controllers/controller.rb:80:in `start'
          thin (1.0.1) lib/thin/runner.rb:173:in `send'
          thin (1.0.1) lib/thin/runner.rb:173:in `run_command'
          thin (1.0.1) lib/thin/runner.rb:139:in `run!'
          thin (1.0.1) bin/thin:6
          /usr/local/bin/thin:20:in `load'
          /usr/local/bin/thin:20

    PostsController:

        def index
          @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 20)
          conditions, joins = {}, :votes
          @ugtag_counts = Ugtag.count(:group => :ugctag_name, :order => 'count_all DESC', :limit => 20)
          conditions, joins = {}, :votes
          @vote_counts = Vote.count(:group => :post_title, :order => 'count_all DESC', :limit => 20)
          conditions, joins = {}, :votes
          unless (params[:tag_name] || "").empty?
            conditions = ["tags.tag_name = ?", params[:tag_name]]
            joins = [:tags, :votes]
          end
          @posts = Post.paginate(
            :select => "posts.*, count(*) as vote_total",
            :joins => joins,
            :conditions => conditions,
            :group => "votes.post_id",
            :order => "created_at DESC",
            :page => params[:page],
            :per_page => 5)
          @popular_posts = Post.paginate(
            :select => "posts.*, count(*) as vote_total",
            :joins => joins,
            :conditions => conditions,
            :group => "votes.post_id",
            :order => "vote_total DESC",
            :page => params[:page],
            :per_page => 3)
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end

    routes.rb:

        ActionController::Routing::Routes.draw do |map|
          map.resources :ugtags
          map.resources :wysihat_files
          map.resources :users
          map.resources :votes
          map.resources :votes, :belongs_to => :user
          map.resources :tags, :belongs_to => :user
          map.resources :ugtags, :belongs_to => :user
          map.resources :posts, :collection => { :auto_complete_for_tag_tag_name => :get }
          map.resources :posts, :sessions
          map.resources :posts, :has_many => :comments
          map.resources :posts, :has_many => :tags
          map.resources :posts, :has_many => :ugtags
          map.resources :posts, :has_many => :votes
          map.resources :posts, :belongs_to => :user
          map.resources :tags, :collection => { :auto_complete_for_tag_tag_name => :get }
          map.resources :ugtags, :collection => { :auto_complete_for_ugtag_ugctag_name => :get }
          map.login 'login', :controller => 'sessions', :action => 'new'
          map.logout 'logout', :controller => 'sessions', :action => 'destroy'
          map.root :controller => "posts"
          map.connect ':controller/:action/:id'
          map.connect ':controller/:action/:id.:format'
        end

    UPDATE (to show the model and migrations for Post):

        class Post < ActiveRecord::Base
          has_attached_file :photo
          validates_presence_of :body, :title
          has_many :comments, :dependent => :destroy
          has_many :tags, :dependent => :destroy
          has_many :ugtags, :dependent => :destroy
          has_many :votes, :dependent => :destroy
          belongs_to :user
          after_create :self_vote
          def self_vote
            # I am assuming you have a user_id field in `posts` and `votes` table.
            self.votes.create(:user => self.user)
          end
          cattr_reader :per_page
          @@per_page = 10
        end

        class CreatePosts < ActiveRecord::Migration
          def self.up
            create_table :posts do |t|
              t.string :title
              t.text :body
              t.timestamps
            end
          end
          def self.down
            drop_table :posts
          end
        end

        class AddUserIdToPost < ActiveRecord::Migration
          def self.up
            add_column :posts, :user_id, :string
          end
          def self.down
            remove_column :posts, :user_id
          end
        end

    Read the article

  • marshal data too short!!!

    - by Nitin Garg
    My application requires keeping large data objects in the session. There are 3-4 data objects, each created by parsing a CSV of about 150 x 20 cells containing strings of 3-4 characters. My application shows this error: "marshal data too short". I tried this:

    1. Deleting the old session table.
    2. Deleting the old migration for the session table.
    3. Creating a new migration using rake db:sessions:create.
    4. Editing the migration manually, changing the data column type from "text" to "longtext".
    5. Running the migration using rake db:migrate.

    Now when I open my application, I see this page: link text. Please someone help me; this is getting on my nerves!!

    Other details of the application: in the view "index.html.erb" there is a link that makes an Ajax call to an action in the controller; that action parses a large CSV file and makes an object out of it. This object is stored in the session.

    ERROR LOG

        ArgumentError in Scoring#index

        Showing app/views/scoring/index.html.erb where line #4 raised:

        marshal data too short

        Extracted source (around line #4):

        1:
        2:
        3:
        4: <%= link_to_remote "get csv file",
        5:       :url => { :action => 'show_static_1' },
        6:       :update => "static_score",
        7:       :complete => "$('static_score').update(request.responseText)" %>

    Full Trace (the error page's Application Trace and Framework Trace repeat subsets of these same frames):

        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/session_store.rb:71:in `load'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/session_store.rb:71:in `unmarshal'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/session_store.rb:110:in `data'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/session_store.rb:292:in `get_session'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1448:in `silence'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/session_store.rb:288:in `get_session'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:168:in `load_session'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:62:in `send'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:62:in `load!'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:70:in `stale_session_check!'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:61:in `load!'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:28:in `[]'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/request_forgery_protection.rb:106:in `form_authenticity_token'
        (eval):2:in `send'
        (eval):2:in `form_authenticity_token'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_view/helpers/prototype_helper.rb:1065:in `options_for_ajax'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_view/helpers/prototype_helper.rb:449:in `remote_function'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_view/helpers/prototype_helper.rb:256:in `link_to_remote'
        /app/views/scoring/index.html.erb:4:in `_run_erb_app47views47scoring47index46html46erb'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_view/renderable.rb:34:in `send'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_view/renderable.rb:34:in `render'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_view/base.rb:306:in `with_template'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_view/renderable.rb:30:in `render'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_view/template.rb:205:in `render_template'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_view/base.rb:265:in `render'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_view/base.rb:348:in `_render_with_layout'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_view/base.rb:262:in `render'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:1250:in `render_for_file'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:945:in `render_without_benchmark'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:51:in `render'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `ms'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `ms'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:51:in `render'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:1326:in `default_render'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:1338:in `perform_action_without_filters'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:617:in `call_filters'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:610:in `perform_action_without_benchmark'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `ms'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `ms'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/rescue.rb:160:in `perform_action_without_flash'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/flash.rb:146:in `perform_action'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:532:in `send'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:532:in `process_without_filters'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:606:in `process'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:391:in `process'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:386:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/routing/route_set.rb:437:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:87:in `dispatch'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:121:in `_call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:130:in `build_middleware_stack'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:25:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:25:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/head.rb:9:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:24:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/params_parser.rb:15:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:122:in `call'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:29:in `call'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:9:in `cache'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:28:in `call'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/failsafe.rb:26:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:114:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:34:in `run'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:108:in `call'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/rack/static.rb:31:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:46:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `each'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `call'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/rack/log_tailer.rb:17:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/chunked.rb:15:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/mongrel.rb:64:in `process'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:159:in `process_client'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `each'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `process_client'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `run'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `initialize'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `new'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `run'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `initialize'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `new'
        /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `run'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/mongrel.rb:34:in `run'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/commands/server.rb:111
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        script/server:3

    Request Parameters: None

    Response Headers: {"Content-Type" => "text/html", "Cache-Control" => "no-cache"}
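
    Editor's note: "marshal data too short" from the ActiveRecord session store usually means the serialized session blob was truncated when it was written; a MySQL TEXT column holds only 64 KB, so a large marshalled object gets cut off, and the truncated rows can never be unmarshalled again. Below is a minimal sketch of a migration for this situation (the class name is an assumption; the 4 GB :limit is what makes the Rails MySQL adapter choose LONGTEXT), assuming the session store's backend is MySQL:

        # Hypothetical migration: widen the sessions.data column and clear rows
        # that were already truncated under the old 64 KB TEXT limit.
        class WidenSessionData < ActiveRecord::Migration
          def self.up
            # A :limit of 4 GB makes the MySQL adapter emit LONGTEXT for this column.
            change_column :sessions, :data, :text, :limit => 4294967295
            # Rows truncated earlier can never unmarshal, so discard them rather
            # than let every request that touches them raise this error.
            execute "TRUNCATE TABLE sessions"
          end

          def self.down
            change_column :sessions, :data, :text
          end
        end

    Even with a wider column, marshalling multi-megabyte parsed CSVs on every request is costly; storing the parsed object in its own table (or a cache) and keeping only its id in the session would sidestep the problem entirely.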

    Read the article

  • What a Performance! MySQL 5.5 and InnoDB 1.1 running on Oracle Linux

    - by zeynep.koch(at)oracle.com
    The MySQL performance team at Oracle has recently completed a series of benchmarks comparing the read/write and read-only performance of MySQL 5.5 with the InnoDB and MyISAM storage engines. Compared to MyISAM, InnoDB delivered 35x higher throughput on the read/write test and 5x higher throughput on the read-only test, with 90% scalability across 36 CPU cores. A full analysis of the results and the MySQL configuration parameters used is documented in a new whitepaper. In addition to the benchmark, the whitepaper also includes:

    - A discussion of the use cases for each storage engine
    - Best practices for users considering the migration of existing applications from MyISAM to InnoDB
    - A summary of the performance and scalability enhancements introduced with MySQL 5.5 and InnoDB 1.1

    The benchmark itself was based on Sysbench, running on AMD Opteron "Magny-Cours" processors and Oracle Linux with the Unbreakable Enterprise Kernel. You can learn more about MySQL 5.5 and InnoDB 1.1 from here and download it from here to test whether you see similar performance gains in your real-world applications.

    By Mat Keep

    Read the article

  • Developer Training – Various Options for Maximum Benefit – Part 4

    - by pinaldave
    Developer Training - Importance and Significance - Part 1
    Developer Training – Employee Morals and Ethics – Part 2
    Developer Training – Difficult Questions and Alternative Perspective - Part 3
    Developer Training – Various Options for Developer Training – Part 4
    Developer Training – A Conclusive Summary - Part 5

    If you have been reading this series, by now you are aware of all the pros and cons that can come along with training. We've asked and answered hard questions and investigated the "whys" and "hows" of training. Now it is time to talk about all the different kinds of training that are out there!

    On-the-Job Training

    The most common type of training is on-the-job training. Everyone receives this kind of education - even experts who come in to consult have to be taught where the printer, pens, and copy machines are. If you are thinking about more concrete topics, though, on-the-job training can be some of the easiest to come across. Picture this: someone in the company whom you really admire is hard at work on a project. You come up to them and ask to help them out - if they are a busy developer, the odds are that they will say "yes, please!" If you phrase your question as an offer of help, you can receive training without ever putting someone in the awkward position of acting as a mentor. However, some people may want the task of being a mentor, and it can never hurt to ask. Most people will be more than willing to pass their knowledge along.

    Extreme Programming

    If your company and coworkers are willing, you can even investigate Extreme Programming. This is a type of programming that allows small teams to quickly develop code and products that are released with almost immediate user feedback. You can find more information at http://www.extremeprogramming.org/. If this is something your company could use, suggest it to your supervisor. Even if they say no, it will make it clear that you are a go-getter who is interested in new and exciting projects. If the answer is yes, then you have the opportunity to get some of the best on-the-job training around.

    In-Person Training

    When you say the word "training," most people's minds go back to the classroom, an image they are familiar with. While training doesn't always have to be in a traditional setting, that familiarity can also make it the most valuable type of training. There are many ways to get training through a live instructor. Some companies may be willing to send a representative to you, where employees will get training, sometimes food and coffee, and a live instructor who can answer questions immediately. Sometimes these trainers are also able to do consultations at the same time, which can be invaluable to a company. If you are the one to ask your supervisor for a training session that can also be turned into a consultation, you may stick in their mind as an incredibly dedicated employee. If you can't find a representative, local colleges can also be a good resource for free or cheap classes - or they may have representatives coming who are willing to take on a few more students.

    Benefits of On Demand Developer Training

    Of course, you can often get the best of all these types of training with online or On Demand training. You get the benefit of a live instructor who is willing to answer questions (although in this case, usually through e-mail or other online venues); there are often real-world examples to follow along with, like on-the-job training; and, best of all, you can learn whenever you have the time or the need. Did a problem with your server come up at midnight when all your supervisors are safe at home and probably in bed? No problem! On Demand training is especially useful if you need to slow down, pause, or rewind a training session. Not even a real-life instructor can do that!

    When I was writing this blog post, I felt that each of the subjects I have covered could be a blog post of its own. However, I wanted to keep the post concise, so I only touched on three major training options: 1) on-the-job training, 2) in-person training, and 3) online training. Here is the question for you: are there any other training methods available which are effective and which one should consider? If yes, what are they? I may write a follow-up blog post on the same subject next week.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Best WordPress Video Themes for a Video Blog

    - by Matt
    WordPress has made blogging so easy and fun that there are plenty of video blog themes to pick from. However, quality is always rare. We at JustSkins have gathered a list of high-quality, tried-and-tested video themes. When we set out to find WordPress themes for vloggers, we knew all along that there are very few, yet some of them are brilliant premium WordPress themes. More on that later; here are some themes you can install on your vlog right now.

    On Demand 2.0
    A fully featured video WordPress premium theme from Press75. Includes a theme options panel for personal customization and content management, post thumbnails, a drop-down navigation menu, custom widgets, and lots more.
    Demo | Price: $75 | DOWNLOAD

    VideoZoom
    An outstanding premium WordPress video theme from WPZoom featuring standard video integration; additionally, it lets you play video from all the popular video websites. The VideoZoom theme also includes a featured video slider on the homepage, multiple post layout options, a theme options panel, WordPress 3.0 menus, backgrounds, etc.
    Demo | Price Single: $69, Developer: $149 | DOWNLOAD

    Vidley
    Press75's easy-to-use premium WordPress video theme. This theme is full of great features and can be a perfect choice if you intend to make your site a portal someday; it is scalable enough to be shaped into a news portal or portfolio. The theme is widget-ready and can place Featured Content and Featured Category sections on the homepage. The drop-down menus on this theme are nifty!
    Demo | Price $75 | DOWNLOAD

    Live
    A video premium WordPress theme designed for streaming video and live event broadcasting. You can embed live video broadcasts from third-party services like Ustream, and features such as a prominent timer counting down to the next broadcast, rotating bumper images, Facebook and Twitter integration for viewer interaction, and a theme admin options panel make this theme one of its kind.
    Demo | Price: $99, Support License: $149 | DOWNLOAD

    Groovy Video
    WooThemes is a pioneer in making beautiful WordPress themes, and this one is built with the video blogger in mind. The Groovy theme is a very colourful video blog premium WordPress theme. Creating video posts is quick and easy with just a copy/paste of the video's embed code. The theme enables automatic video resizing and plenty of widgets, and also allows you to pick a color of your choice.
    Price: Single Use $70, Developer Price: $150 | DOWNLOAD

    Video Flick
    Another exciting video blogging theme by Press75 is the Video Flick theme. Video Flick is compatible with any video service that provides embed code, or, if you want to host your own videos, with FLV (Flash Video) and QuickTime formats. This theme allows you to keep standard blog posts and/or video posts, and you can pick a light or dark color option.
    Demo | Price: $75 | DOWNLOAD

    WooTube
    An excellent video premium WordPress theme from WooThemes. The WooTube theme is a very easy video blog platform, as it comes with automatic video resizing, a completely widgetised sidebar, and 7 different colour schemes to choose from. The theme can also be used as a normal blog or a gallery. A very wise choice!
    Price: Single Use $70, Developer Price: $150 | DOWNLOAD

    eVid Theme
    One of the nicest WordPress themes designed specifically for video bloggers. Simple to integrate videos from video hosts such as YouTube, Vimeo, Veoh, MetaCafe, etc.
    Demo | Price: $19 | DOWNLOAD

    Tubular
    A video premium WordPress theme from StudioPress which can also be used as a simple website or blog. The theme is also available in a light color version.
    Demo | Price: $59.95 | DOWNLOAD

    Video Elements 2.0
    Another beautiful video premium WordPress theme from Press75. Video Elements 2.0 has been re-designed to include the features you need to easily run and maintain a video blog on WordPress.
    Demo | Price: $75 | DOWNLOAD

    TV Elements 3.0
    The theme includes a featured video carousel on the homepage which can display any number of videos, a featured category section which displays up to 12 channels, automatic thumbnail creation, and lots more…
    Demo | Price: $75 | DOWNLOAD

    Wave
    A beautiful premium video WordPress theme: flexible and super cool looking. The design has a very earthy feel to it. The theme has a featured video area and latest listings on the homepage. All in all, a simple design with no fancy features.
    Demo | Price: $35 | Download

    Read the article

  • Feedback on "market manipulation", a peripheral game mechanic for a satirical MMO

    - by BerndBrot
    This question asks for feedback on a specific game mechanic. Since there is no one right piece of feedback on a game mechanic, I tried to provide enough context and guidelines to still make it possible for users to rate answers and to accept an answer as the best answer (following these criteria from Writer.SE's meta website). Please comment if you have any suggestions on how I could improve the question in that regard. So, let's begin with the game itself and some of its elements which are relevant for this question.

    Context

    I'm working on a satirical, text-based multiplayer adventure and role-playing game set in modern-day London. The game revolves around the concept of sin and features a myriad of (venomous) allusions to all the things that go wrong in this world. Players can choose between character classes like bullshit artist (consultant), bankster, lawyer, mobster, celebrity, politician, etc. In order to complete the game, the player has to live so sinfully with regard to any of the seven deadly sins that a demon is willing to offer them a contract of sponsorship. On their quest to live a sinful life, characters explore more and more locations of modern-day London (on a GoogleMap), fight "monsters" like insurance sales agents or Jehovah's Witnesses, and complete quests, like building a PowerPoint presentation out of marketing buzzwords or keeping up a number of substance-abuse effects in order to progress on the gluttony path. Battles are turn-based, with both combatants having a deck of cards with which they try to make their enemy give in to temptations of all sorts. Tempted enemies sometimes become contacts (an item-drop mechanic), which can be exploited for various benefits, depending on their area of influence (finance, underworld, bureaucracy, etc.), level of influence, and the kind of sway the player has over them (bribed, seduced, threatened, etc.). Once a contact has been exploited, the player loses that contact. Most actions require turns. Turns are limited, but refill each day.

    Criteria

    A number of peripheral game mechanics are supposed to:

    - represent real-world abuses and mischief in a humorous way
    - integrate real-world data and events to strengthen the feeling that the game's humor is relevant to real-world problems
    - add fun ways of interacting with other players
    - add ways for players to express themselves through game-play

    Market manipulation is one such peripheral game mechanic and should fulfill all of these goals.

    Market manipulation

    This is my initial design of the mechanic:

    - Players can enter the London Stock Exchange (LSE) (without paying a turn).
    - The LSE displays the stock prices of a number of companies in industries like weapons or tobacco, as well as some derivatives based on wheat and corn. The stock prices are calculated based on the actual stock prices of these companies and derivatives (in real time), any market manipulations that were conducted by the players, and any market corrections of the system.
    - Players can buy and sell shares with cash, a resource in the game, at the current in-game market value (without paying a turn).
    - Players can manipulate the market, i.e. make the price of a share either rise or fall, by some amount, over a certain period of time. Manipulating the market requires one turn and a contact in the financial sector (see above). The higher the level of influence of the contact, the stronger the effect of the manipulation on the stock price, and/or the shorter it takes for the manipulation to manifest itself.
    - Market manipulation also adds a crime to the player's record. (There are a multitude of ways to take care of that, but it is still another "cost" of market manipulation.)
    - The system continuously corrects market manipulations by letting the in-game prices converge towards their real-world counterparts at a rate of 2% of the difference between the two per hour; a sketch of this correction step follows below. Because of this market-correction mechanism, pushing prices up (or screwing them down) becomes increasingly difficult the higher (or lower) the price already is.
    - Whenever food prices reach a certain level, in-game stories are posted about hunger catastrophes happening somewhere far, far away (maybe with links to real-world news stories).
    - Whenever a player sells a certain number of shares with a sufficiently high margin, they are mentioned in that day's in-game financial news.

    Since the number of stock options is very limited, players will inevitably collide in their efforts to manipulate the market in their favor. Hopefully, it will also be a fun side arena for guilds and covenants to fight each other.

    Question(s)

    What do you think of this mechanic, given the criteria for peripheral game mechanics that I specified for my game? Do you have any ideas how the mechanic could be improved with regard to these criteria (or otherwise)? Could it be improved to allow for more expressive game-play, or to involve an allusion to some other real-world madness (like short selling, leveraging, or some other banking magic)? Are there any game-theoretic problems with this mechanic, like certain dominant individual strategies that, collectively, lead to every player profiting and thus eliminating the idea of market-manipulation PVP?

    Also, if you like (or dislike) this question, feel free to participate in the discussion on GDSE meta: "Should we be more lax with regard to SE's question/answer format to make game design questions possible?"
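
    For concreteness, here is a minimal Ruby sketch of the hourly correction step described above (the Stock class and its attribute names are illustrative assumptions, not part of the actual game):

        # Each in-game price moves 2% of the way toward its real-world
        # counterpart every hour, so manipulations decay gradually instead
        # of being snapped back all at once.
        CORRECTION_RATE = 0.02

        def correct_market(stocks)
          stocks.each do |stock|
            gap = stock.real_world_price - stock.ingame_price
            stock.ingame_price += CORRECTION_RATE * gap
          end
        end

    One consequence worth noting: because each step removes 2% of the remaining gap, a manipulation decays geometrically. After h hours, a fraction of 0.98^h survives, giving a half-life of roughly 34 hours: long enough to profit from, but short enough that prices cannot drift arbitrarily far from reality.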

    Read the article

  • Dual Boot Oracle Solaris 11/11 and Linux (Ubuntu 11.10/grub2)

    - by HartmutStreppel
    After having worked with OpenSolaris on my laptop first, then with an upgrade to Oracle Solaris 11 Express, I finally did a fresh install of Oracle Solaris 11/11 when it became available. I am not a big fan of upgrades, as I know that I am not the perfect administrator and my system gets spoiled with unclean configurations, outdated packages, and wrong settings that cannot be reversed. So I prefer to start from scratch. Especially with Oracle Solaris 11 I wanted to have a system just like a customer would have it in production. The installation was smooth - more or less, if I had only read the documentation a bit better in advance.

    For a number of reasons I prefer a dual boot system. The most important one is that, especially with mobile devices, you often run into network problems, and you have a hard time figuring out where the problem is: in your laptop hardware, in the OS you are running, or really within the network. If you have an alternate OS to boot, you can exclude the OS and your hardware. This makes you feel better. The second OS should be a Linux variant, and for some not so obvious reason I decided to go with the latest Ubuntu release (11.10). It replaced a very old openSUSE installation that had not been booted for a while.

    I knew that it was probably best to install Ubuntu first and then Oracle Solaris 11, as this would put the right boot information for Oracle Solaris into the MBR and onto the root partition. But then, how to enable dual boot with the two OSes? Searching the web, one mainly finds information about dual booting:

    - Linux and Linux
    - Linux and Windows

    I do not want to explain all the wrong configurations I worked through; I prefer to explain the final setup, which is extremely simple, and I am wondering why this is not covered as the easiest solution for most dual boot setups. I use chainloader from and to both OSes, with the only disadvantage that I have to confirm two grub menus each time I want to boot the "other" OS. Still, there were some hurdles to jump over:

    - Ubuntu did not like having its boot blocks placed on the partition instead of the disk; I must admit that I do not fully understand why. But using the --force option you can get that done.
    - Ubuntu needs an active partition; that was easy to achieve.
    - grub2 uses a different numbering scheme for the partitions. That is in the docs, if you read them.

    BTW: The usual disclaimer is valid. There is no guarantee that what I describe works or works well. Please back up your data carefully before trying any of this.

    So, Oracle Solaris 11 is installed on the first partition and Ubuntu on the third. With Ubuntu things initially were a bit more complicated, as I did not know how to boot it, and the live CD did not offer the capability to boot the on-disk image (at least I did not find it). So I booted the live CD, mounted the Ubuntu installation at /mnt, and wrote the boot blocks into the partition. This is something that does not seem to be recommended; at least grub-install refrained from doing what I intended. After a bit more research I was bold enough to use the --force option and wrote the boot blocks to /dev/sda3 using:

        grub-install --boot-directory=/mnt/boot --force --no-floppy /dev/sda3

    So, I now had a system with the Solaris boot loader in the MBR, Solaris-specific boot blocks on the Solaris root partition, and Ubuntu-specific boot blocks in the Ubuntu partition. I just had to chain them together and I was done.

    Oracle Solaris 11: I have added the following lines to /rpool/boot/grub/menu.lst (be aware of the /rpool!!!!):

        title Ubuntu 11.10
        root (hd0,2)
        makeactive
        chainloader +1
        boot

    The Ubuntu root file system sits on the third partition (/dev/sda3).

    Ubuntu: I have added the following lines to /etc/grub.d/40_custom:

        menuentry "Solaris 11/11" {
            set root=(hd0,1)
            chainloader +1
        }

    Two things need to be mentioned: a) grub2 starts numbering partitions with 1, so my /dev/sda1 is partition 1; b) Oracle Solaris boots without the partition being made active (btw: the command to make a partition active with grub2 is "parttool (hd0,1) boot+", which currently does not work for me).

    As debugging grub is a bit complicated, I used the grub CLI to perform some tests and also used a tool that I found on sourceforge.net that was able to prepare a list of all boot loaders on all partitions. This told me that the basic setup was correct. Unfortunately I lost it in the live CD environment.

    I hope this is helpful for some of the readers.

    Hartmut

    Read the article

  • UKOUG Application Server & Middleware SIG Meeting

    - by JuergenKress
    Date: Wednesday 10th Oct 2012
    Time: 09:00 - 16:00
    Location: Reading
    Venue: Oracle, Thames Valley Park, Reading

    Agenda:

    09:00  Registration and Coffee
    10:00  Welcome - Application Server & Middleware Committee
    10:10  Oracle Support Updates - Nick Pounder, Oracle Customer Services
    10:30  OpenWorld 2012 - News Round-up for Middleware Admins - Simon Haslam, Veriton Limited
    11:00  Coffee break
    11:20  Oracle Single Sign-On to Oracle Access Manager Migration - Rob Otto, Oracle Consulting Services UK
    12:05  Supporting Fusion Middleware through First Failure Capture (theory) - Greg Cook, Oracle
    12:50  Lunch and Networking
    13:35  Deputy Chair Elections - UKOUG
    13:45  Supporting Fusion Middleware through First Failure Capture (demos) - Greg Cook, Oracle
    14:15  Networking session including tea/coffee
    14:45  Real Life WebLogic Performance Tuning: Tales and Techniques from the Field - Steve Millidge, C2B2 Consulting Limited
    15:30  WLST: WebLogic's Swiss Army Knife - Simon Haslam, Veriton Limited
    15:45  AOB and Close

    For details please visit the registration page.

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: UK user group, Simon Haslam, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 2)

    - by Sadequl Hussain
    Continuing from Part 1, our Migration Checklist continues:

    Step 5: Update statistics

    It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database:

        sp_updatestats

    The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database. However, a word of caution: running sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistic of every table in the database. You may therefore want to change the compatibility mode before you run the command.

    Another thing you should remember to do is to ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from SSMS.

    Step 6: Set database options

    You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or Management Studio, since you can change the properties. However, it is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after restore. You may want to check the database's status programmatically in such cases.

    Another important option you may want to set for the newly restored / attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (the default for SQL Server 2005 and later databases), TORN_PAGE_DETECTION (the default when restoring a pre-SQL Server 2005 database), or NONE. Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates a checksum for a page's contents and writes it to the page header before storing it on disk. When the page is read from disk, the checksum is computed again and compared with the checksum stored in the header. Torn page detection works in much the same way, in that it stores a bit in the page header for every 512-byte sector. When data is read from the page, the torn page bits stored in the header are compared with the respective sector contents. When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header. This is not something you would want to set unless there is a very specific reason. Microsoft suggests using the CHECKSUM page verify option as it offers more protection.

    Step 7: Map database users to logins

    A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters. Depending on what you want to do, you may be using fewer than four, though.

    The first parameter, @Action, can take three values. When you specify @Action = 'Report', the system will provide you with a list of database users which are not mapped to any login. If you want to map a database user to an existing SQL Server login, the value for @Action will be 'Update_One'. In this case, you will only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called "bob", there is already a SQL Server login with the same name, and you want to map the user to the login, you will execute a query like the following:

        sp_change_users_login
            @Action = 'Update_One',
            @UserNamePattern = 'bob',
            @LoginName = 'bob'

    If the login does not exist, you can instruct SQL Server to create the login with the same name. In this case you will need to provide a password for the login, and the value of the @Action parameter will be 'Auto_Fix'. If the login already exists, it will be automatically mapped to the user account. Unfortunately, the sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server; you will need to follow a manual process to re-map those database user accounts.

    Continues…

    Read the article

  • Local Events | Azure Bootcamp

    - by Jeff Julian
    Coming to Kansas City April 8th and 9th is the Microsoft Azure Bootcamp. This event looks very promising for those developers who are looking into Azure for themselves or their companies. It covers the wide range of topics required to understand what Azure really is and is not. Space is limited, so if you are considering Azure, register for this event today.

    Agenda:

    Module 1: Introduction to cloud computing and Azure
    - How it works
    - Key scenarios
    - The development environment and SDK

    Module 2: Using Web Roles
    - Basic ASP.NET
    - Basic configuration

    Module 3: Blobs: file storage in the cloud

    Module 4: Tables: scalable hierarchical storage

    Module 5: Queues: decoupling your systems

    Module 6: Basic Worker Roles
    - Executing backend processes
    - Consuming a queue
    - Leveraging local storage

    Module 7: Advanced Worker Roles
    - External endpoints
    - Inter-role communication

    Module 8: Building a business with Azure
    - Using Azure as an ISV or a partner
    - Advantages to delivering value
    - BPOS
    - Pricing

    Module 9: SQL Azure
    - Setting it up
    - SQL Azure firewall
    - Remote management
    - Migrating data

    Module 10: AppFabric
    - Service Bus
    - Access Control System
    - Identity in the cloud

    Module 11: Cloud scenarios
    - App migration strategies
    - Disposable computing
    - Dynamic scale
    - Shunting
    - Prototyping
    - Multitenant applications

    (This is my second attempt at this post after MacJournal decided to crash and not save my work. Authoring tools all need auto-save features by now; that is a requirement set in stone by Microsoft Word 97.)

    Related Tags: Azure, Microsoft, Kansas City

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future, so I'm re-posting it here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel, which is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server), and in particular by Siegel's notion of us all having a personal online data vault in the future.]

    My one-time colleague Paul Dawson recently wrote an article called The Future of Search, and in it he proposed some interesting ideas. Some choice quotes:

    - The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences.
    - This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google.
    - Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords.
    - Search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs' Pivot has given us a taste of what we can expect to see in the future.

    This really got me thinking about where search might go in the future, and as my mind wandered I realised that as the amount of data that we collect about ourselves increases, so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing, and in the near future I fully expect that we are going to be able to store personal data such as:

    - A history of our location (in fact Google Latitude already offers this facility)
    - Recordings of all our phone conversations
    - Health information history (weight, blood pressure, etc.)
    - Energy usage
    - Spending history
    - What films we watch, what radio stations we listen to
    - Voting history

    Of course, most of this stuff is already stored somewhere, but crucially we don't have easy access to it. My utilities supplier knows how much electricity I'm using, but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly, my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV, and my mobile phone supplier probably knows exactly where I am and where I've been for the past few years. Strange, then, that none of this electronic information is available to me in a way that I can really make use of it; after all, it's MY information. It's MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant, they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it.

    As a result, the amount of data that we store about ourselves is going to increase exponentially, and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the "personal datastore", and we will want, and need, to search through it in order to make sense of it all. It's interesting, then, that today when we think of search we think of search engines, and yet in these personal datastores we're referring to data that search engines can't touch, because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn't even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit; and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end; it will enable us to make sense of the wealth of information that we will collect day in, day out. The future of search is personal; why would we be interested in anything else?

    @Jamiet

    Read the article

  • SQL SERVER – PHP on Windows and SQL Server Training Kit

    - by pinaldave
    The PHP on Windows and SQL Server Training Kit includes a comprehensive set of technical content, including demos and hands-on labs, to help you understand how to build PHP applications using Windows, IIS 7.5 and SQL Server 2008 R2. This release includes the following:
    PHP & SQL Server Demos:
    - Integrating SQL Server Geo-Spatial with PHP
    - SQL Server Reporting Services and PHP
    PHP & SQL Server Hands On Labs:
    - Introduction to Using SQL Server with PHP
    - Using SQL Server Full-Text Search and FILESTREAM Storage with PHP
    - New: Getting Started with SQL Server Migration Assistant for MySQL
    Download SQL Server PHP on Windows and SQL Server Training Kit
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • New Exadata Book Available Soon

    - by Rob Reynolds
    Oracle Press is set to release the first book on data warehouse performance and Exadata on March 14th. Achieving Extreme Performance with Oracle Exadata, by my colleagues Rick Greenwald, Robert Stackowiak, Maqsood Alam, and Mans Bhuller, will be available at your favorite booksellers next week. I've seen a sneak peek of the content in this book and it's a great way to fully grasp the power of Exadata and how to best apply it to achieve extreme data warehouse performance. From the publisher's description:
    Achieving Extreme Performance with Oracle Exadata and the Sun Oracle Database Machine is filled with best practices for deployments, hardware sizing, architecting the database machine environments for maximum availability, and backup and recovery. Oracle Database 11gR2 features used within these offerings, as well as migration options and paths for Oracle and non-Oracle databases to Oracle Exadata, are covered. This Oracle Press guide also discusses architecture, administration, maintenance, monitoring, and tuning of Oracle Exadata Storage Servers and the Sun Oracle Database Machine.
    If your company is considering Exadata, or if you need more horsepower out of your data warehouse, I highly recommend grabbing a copy of this book next week.

    Read the article

  • Sun2Oracle: Hub City Media Webcast Reminder - Thursday, September 13, 2012

    - by Darin Pendergraft
    Our Sun2Oracle webcast featuring Steve Giovanetti from Hub City Media is this Thursday, September 13th at 10:00 am PST. If you haven't registered yet, there is still time: Register Here. Scott Bonell, Sr. Director of Product Management, will be talking to Steve about their recent project to upgrade a large university from Sun DSEE Directory to Oracle Unified Directory. Scott and Steve will talk through details of the project, from planning through implementation. In addition to this webcast, Steve Giovanetti will also be participating in two sessions at Oracle OpenWorld 2012:
    CON9465 - Next-Generation Directory: Oracle Unified Directory
    Etienne Remillon, Principal Product Manager, Oracle; Steve Giovanetti, CTO, Hub City Media; Warren Leung, Sr. Architect, UCLA
    Tuesday, Oct 2, 5:00 PM – 6:00 PM, Moscone West – 3008
    CON5749 - Solutions for Migration of Oracle Waveset to Oracle Identity Manager
    Steve Giovanetti, CTO, Hub City Media; Kevin Moulton, Senior Sales Consulting Manager, Oracle
    Thursday, Oct 4, 11:15 AM - 12:15 PM, Moscone West - 3008

    Read the article

  • New Whitepaper: Oracle E-Business Suite on Exadata

    - by Steven Chan
    Our Maximum Availability Architecture (MAA) team has quietly been amassing a formidable set of whitepapers about the Oracle Exadata Database Machine. They're available here: MAA Best Practices - Exadata Database Machine. If you're one of the lucky ones with access to this hardware platform, you'll be pleased to hear that the MAA team has just published a new whitepaper with best practices for EBS environments: Oracle E-Business Suite on Exadata. This whitepaper covers the following topics:
    - Getting to Exadata -- a high-level overview of fresh installation on, and migration to, Exadata Database Machine, with pointers to more detailed documentation
    - High Availability and Disaster Recovery -- an overview of our MAA best practices, with pointers to our detailed MAA Best Practices documentation
    - Performance and Scalability -- best practices for running Oracle E-Business Suite on Exadata Database Machine, based on our internal testing

    Read the article

  • How to UEFI install Ubuntu 12.10?

    - by Geezanansa
    Running a newer FM1 motherboard with an AMD 3870K APU and a new 1TB HDD. Following the advice in the motherboard manual and https://help.ubuntu.com/community/UEFI I have now got to the grub option screen for a UEFI install (see http://imgur.com/VW5vz). The dvd .iso being used is Ubuntu 12.10 desktop amd64 from ubuntu.com. The hdd had a gpt partition table made for it using gparted in a live desktop session booted in bios mode. (*edit/update: Although the old cd updates on running, it is an old kernel, and it did make a gpt, but that version of gparted uses fdisk whereas gdisk is required to make a gpt. Think I am going to have to spend more time here http://www.dedoimedo.com/computers/gparted.html lol. Using the gparted from the 12.10 live session to make partitions, following the guidance at https://help.ubuntu.com/community/UEFI#Creating_an_EFI_partition, but can only boot to the grub option screen http://imgur.com/VW5vz; when the 12.10 options to "try ubuntu" or "install ubuntu" are selected, they give the errors described below.*) After making the gpt I decided to leave it as unformatted/unallocated space, with the intention of using the installer to set up partitions. gparted now sees the hdd as http://imgur.com/hFIvm, as described above. *Booting the live dvd in EFI mode gives "Secure Boot not installed" just before the grub kernel option list with the option to "install ubuntu", but I get "can not read cd/0" and "the kernel must be loaded first" errors when that option is selected. Any pointers on how to get the installer going for a UEFI install would be good. Thanks in advance. update: Hopefully these screenshots can better highlight where I am going wrong, or whether there is something else going on: http://imgur.com/g30RB, http://imgur.com/VW5vz, http://imgur.com/31E0q, http://imgur.com/bnuaG, http://imgur.com/y4KGu, http://imgur.com/3u2QE, http://imgur.com/n9lN3, http://imgur.com/FEKvz, http://imgur.com/hFIvm. update: Thank you fernando garcia for pointing me in the right direction to start the process of elimination. What I have done since asking the question is a little homework, starting here http://askubuntu.com/faq#bounty and here http://askubuntu.com/questions/how-to-ask. Looking at other similar questions was good fun, and I found this 12.10 UEFI Secure Boot install question the most relevant in helping to get ubuntu to UEFI install on my system. In response to wolverine's question, this article was referred to: http://web.dodds.net/~vorlon/wiki/blog/SecureBoot_in_Ubuntu_12.10/. This article in its first sentence gives a link to http://www.ubuntu.com/download, which is where I downloaded the 12.10 desktop amd64 .iso (and others), but I have been unable to do an EFI install of ubuntu on this system, and as this is a new system I have ended up just going with the bios installer, which at least puts my mind at ease that I have not bricked my new mobo (had to do a clrcmos and flash to the latest bios version). So it could possibly be the bios settings or the bios version being used that is the problem. To try and eliminate the bios version, I cannot get to the post screen in order to identify the bios version being used. Pressing Tab to show post instead of the logo, and trying Pause/Break to catch post, is proving difficult. If the logo screen in the bios is disabled, I just get a black screen with no post shown, and pressing Tab does not show post.
    I appreciate that using appropriate bios settings and the latest 12.10 release should simply get the UEFI installer running when selected from the grub list (nice graphic details in the "Identifying if the computer boots the CD in EFI mode" section at https://help.ubuntu.com/community/UEFI#Identifying_if_the_computer_boots_the_CD_in_EFI_mode). And to confirm whether the hdd is booting in EFI mode (https://help.ubuntu.com/community/UEFI#Identifying_if_the_computer_boots_the_HDD_in_EFI_mode), running the command [ -d /sys/firmware/efi ] && echo "EFI boot on HDD" || echo "Legacy boot on HDD" gave "Legacy boot on HDD". This is as expected, because I allowed the bios installer (which was 12.04 desktop amd64, after trying 12.10 desktop amd64 in EFI mode) to run to get a working installation. That is not what was intended or wished for, but I wanted a working OS to bench-test the new mobo, i.e. prove it is working. There are other options, as in installing other boot managers/loaders, but I do not wish to do so, as shim should get grub2 going once secure boot has been signed. (I now have a rough idea of what should happen; it just ain't happening. Is it possible ahci drivers are required?) Will post the boot info script url of the updated config/setup. The original question asked seems irrelevant to what is being said in this update, but as the problem is not resolved I will keep on trying to EFI install! i.e. the problem is the same as when the question was asked; I am just trying to update. Have tried to edit and update the best I can!
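    For anyone following the wiki's "Creating an EFI partition" step from the 12.10 live session, here is a minimal sketch of doing it from a terminal instead of gparted. It assumes the new, empty 1TB disk shows up as /dev/sda, which is a placeholder device name; confirm yours with sudo parted -l before running anything, as these commands destroy whatever is on the disk.
        # Create a GPT table and an EFI System Partition from the live session.
        # /dev/sda is a placeholder for the new, empty disk.
        sudo parted /dev/sda mklabel gpt                    # fresh GPT partition table
        sudo parted /dev/sda mkpart ESP fat32 1MiB 201MiB   # ~200 MiB EFI partition
        sudo parted /dev/sda set 1 boot on                  # on a GPT disk this flags the ESP
        sudo mkfs.vfat -F 32 /dev/sda1                      # UEFI requires FAT32 here
        # Confirm the live session itself booted via EFI before running the installer:
        [ -d /sys/firmware/efi ] && echo "EFI boot" || echo "Legacy boot"
    The rest of the disk can be left unallocated for the installer to partition, as described above.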

    Read the article

  • Ubuntu Server 14.04 Apache 2.4 TLSv1.2/TLSv1.1 support

    - by Jan
    Recently I upgraded my server from Ubuntu Server 12.04 to Ubuntu Server 14.04 by means of a new installation from scratch. Only one problem remains. In 12.04, Apache 2.2 with mod_ssl supported TLS versions 1.0, 1.1 and 1.2. After upgrading to 14.04 and Apache 2.4, Apache only supports TLS version 1.0; support for 1.1 and 1.2 is missing. I followed both the 2.2 to 2.4 migration guide (no changes to the mod_ssl settings) and the mod_ssl documentation for the SSLProtocol configuration option. Neither SSLProtocol TLSv1.2 TLSv1.1 TLSv1 nor SSLProtocol TLSv1.2 works. Is there any way to convince Apache to support the new TLS versions as well? Update: the problem seems to be solved now without any change on my side. Apparently libssl was initially compiled without TLSv1.1/TLSv1.2 support.
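    For reference, one way to check from a client which TLS versions a server will actually negotiate is openssl s_client, probing one protocol version at a time. A minimal sketch, assuming an OpenSSL 1.0.1 or later client (older builds lack the -tls1_1/-tls1_2 flags) and example.com:443 as a placeholder for your own host:
        # A successful handshake prints the certificate chain and a
        # "Protocol : TLSv1.2" (or TLSv1.1) line; a version the server
        # refuses fails almost immediately with a handshake alert.
        openssl s_client -connect example.com:443 -tls1_2 < /dev/null
        openssl s_client -connect example.com:443 -tls1_1 < /dev/null
        openssl s_client -connect example.com:443 -tls1   < /dev/null
        # Also worth confirming which OpenSSL build the system provides:
        openssl version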

    Read the article

  • Migrating Databases from SQL Server 2008 to SqlAzure

    - by kaleidoscope
    1. Connect to SQL Azure through SSMS. (It will connect only if you have port 1433 open.)
    2. Create the required databases on SQL Azure.
    3. Create and execute schema scripts for the databases. (Make sure you have written scripts which are 100% compatible with SQL Azure; a tool called SQL Azure Migration Tool, available on CodePlex, helps with doing this.)
    4. Create and execute insert scripts for the databases. (One can use SSMS for generating insert scripts.)
    For further details refer to the Windows Azure Platform Kit Nov 2009.
    Anish
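    For reference, the port-1433 connectivity in step 1 can also be verified from the command line with sqlcmd. A minimal sketch, where myserver, myuser and MyP@ssw0rd are placeholders for your own values (SQL Azure expects logins in the user@server form):
        # Returns the server version string if port 1433 is open and the
        # SQL Azure firewall allows your client IP; otherwise it times out.
        sqlcmd -S tcp:myserver.database.windows.net,1433 -U myuser@myserver -P "MyP@ssw0rd" -d master -Q "SELECT @@VERSION;"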

    Read the article

  • SUPINFO International University in Mauritius

    For a while now I have been considering picking up my activities as a student, and I'd like to get a degree in Computer Science.
    Personal motivation
    I mean, after all these years as a professional software (and database) developer I have the personal urge to complete this part of my education. Having various certifications by Microsoft and being awarded as a Microsoft Most Valuable Professional (MVP) twice looks pretty awesome on a resume, but having a "proper" degree would just complete my package. During the last couple of years I already got in touch with C-SAC (a local business school with degree courses), the University of Mauritius and BCS, the Chartered Institute for IT, to check the options to enroll as an experienced software developer. Quite frankly, it was kind of alienating to receive that feedback: Start from scratch! No, seriously? Spending x amount of years to sit for courses that might be outdated and form part of your daily routine? Probably being in the awkward situation in which your professional expertise might exceed the lecturer's knowledge? I don't know... but if that's the path to walk... well, then I might have to go for it.
    SUPINFO International University
    Some weeks ago I was contacted by the General Manager, Education Recruitment and Development of Medine Education Village, Yamal Matabudul, to have a chat on how the local IT scene, namely the Mauritius Software Craftsmanship Community (MSCC), could assist in their plans to promote their upcoming campus. Medine went into partnership with the French-based SUPINFO International University, and Mauritius will be the 36th location world-wide for SUPINFO. Actually, the concept of SUPINFO is very similar to the common understanding of an apprenticeship in Germany: not only does a student enroll into the programme, but they will also be placed into various internships as part of the curriculum. It's a big advantage in my opinion, as the person stays in touch with the daily procedures and workflows in the real world of IT. Statements like "We just received a 'crash course' of information and learned new technology which is equivalent to 1.5 months of lectures at the university" wouldn't form part of the experience of such an education.
    Open Day at the Medine Education Village
    Last Saturday, Medine organised their Open Day, and it was the official inauguration of the SUPINFO campus in Mauritius. It's now listed on their website, too - but be warned, the site is mainly in French, although the courses are all done in English. Not only was it a big opportunity to "hang out" on the campus of Medine, but it was great to see the first professional partners for their internship programme, too. Oh, just for the record, IOS Indian Ocean Software Ltd. will also be among the future employers for SUPINFO students. More about that in an upcoming blog entry.
    [Photo: Open Day at Medine Education Village - SUPINFO International University in Mauritius]
    Mr Alick Mouriesse, President of SUPINFO, arrived the previous day, and he gave all attendees a great overview of the roots of SUPINFO, the general development of the educational syllabus, and their high emphasis on partnerships with local IT companies in order to assist their students in getting future jobs but also in feeling the heartbeat of technology live - something which is completely missing in classic institutions of tertiary education in Computer Science. And, since I was on tour with my children as usual during weekends, he also talked about the outlook of having a SUPINFO campus in Mauritius.
    Apart from the close connection to IT companies and providing internships to students, SUPINFO clearly works on an international level, meaning students of SUPINFO can move around the globe and continue their studies seamlessly. For example, you might enroll for your first year in France, then continue your 2nd and 3rd years in Canada or any other country with a SUPINFO campus to earn your bachelor degree, and then live and study in Mauritius for the next 2 years to achieve a Master degree.
    [Photo: Having a chat with Dale Smith, Expand Technologies, after his interesting session on Technological Entrepreneurship - TechPreneur]
    [Photo: More questions by other craftsmen of the Mauritius Software Craftsmanship Community]
    And of course, this concept works in any direction, giving Mauritian students a huge (!) opportunity to live, study and work abroad. And thanks to this, Medine has already announced that there will be new facilities near Cascavelle to provide dormitories and other facilities to international students coming to our island. Awesome!
    Okay, but why SUPINFO?
    Well, coming back to my original statement - I'd like to get a degree in Computer Science - SUPINFO has a process called Validation of Acquired Experience (VAE) which is tailor-made for employees in the field of IT, and allows you to enroll in their course programme. I already got in touch with their online support chat but was only redirected to some FAQs on their website, unfortunately. So, during the Open Day I seized the opportunity to have a one-on-one conversation with Alick Mouriesse, and he clearly encouraged me to gather my certifications and working experience. SUPINFO does an individual evaluation prior to the assignment of a course level, and hopefully my chances of getting some modules ahead of studies are looking better compared to the other institutes. Don't get me wrong, I don't want to go down the easy route, but why should someone sit for "Database 101" or "Principles of OOP" when applying and preaching database normalisation and practicing Clean Code Developer are like flesh and blood? Anyway, I'll be off to get my transcripts of certificates together with my course assignments from the old days at the university. Yes, I studied Applied Chemistry for a couple of years before crossing over into IT and software development in particular... ;-)

    Read the article

  • How to install iodbc in Ubuntu 12.10

    - by Hanynowsky
    I need to install the library iodbc (which depends on libodbc2) on my Quantal machine, yet there is a creepy dependency problem: iodbc was somehow replaced by unixodbc, which I don't have installed. Here is what I get when I try to install:
    $ sudo apt-get install libiodbc2-dev
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
    libiodbc2-dev : Depends: libiodbc2 (= 3.52.7-2build2) but it is not going to be installed
                    Depends: iodbc (= 3.52.7-2build2) but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    I can tell that iodbc conflicts with odbcinst, yet I cannot remove odbcinst due to the following:
    $ sudo apt-get remove odbcinst
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages were automatically installed and are no longer required: akonadi-backend-mysql calligra-l10n-engb kde-l10n-engb kdevelop-l10n kdevelop-php-docs-l10n kdevelop-php-l10n libakonadi-kabc4 libakonadi-notes4 libakonadiprotocolinternals1 libboost-thread1.49.0 libdmtx0a libgpgme++2 libkcalcore4 libkdgantt2 libkholidays4 libkimap4 libkldap4 libkmbox4 libkmime4 libkolabxml0 libkpgp4 libkresources4 libksieve4 libprison0 libqgpgme1 libqrencode3 libxerces-c3.1
    Use 'apt-get autoremove' to remove them.
    The following packages will be REMOVED: akonadi-server k3b k3b-i18n katepart kde-runtime kdelibs-bin kdelibs5-plugins kdepim-runtime kdepim-strigi-plugins kdepimlibs-kio-plugins kdoctools kget kmag kmail kmix kmousetool ksystemlog kubuntu-debug-installer language-pack-kde-ar language-pack-kde-en libakonadi-calendar4 libakonadi-contact4 libakonadi-kcal4 libakonadi-kde4 libakonadi-kmime4 libcalendarsupport4 libincidenceeditorsng4 libk3b6 libkabc4 libkactivities-bin libkactivities6 libkalarmcal2 libkatepartinterfaces4 libkcal4 libkcalutils4 libkcddb4 libkde3support4 libkdepim4 libkdepimdbusinterfaces4 libkdewebkit5 libkemoticons4 libkfile4 libkgapi0 libkhtml5 libkio5 libkleo4 libkmanagesieve4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkolab0 libkonq-common libkonq5abi1 libkontactinterface4 libkparts4 libkpimidentities4 libkpimtextedit4 libkpimutils4 libkprintutils4 libksieveui4 libktexteditor4 libktnef4 libktorrent4 libkworkspace4abi2 libkxmlrpcclient4 libmailcommon4 libmailimporter4 libmailtransport4 libmessagecomposer4 libmessagecore4 libmessagelist4 libmessageviewer4 libmicroblog4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomuksync4 libnepomukutils4 libplasma3 libsoprano4 libtemplateparser4 libvirtodbc0 nepomuk-core odbcinst odbcinst1debian2 plasma-scriptengine-javascript qapt-batch soprano-daemon virtuoso-minimal
    0 upgraded, 0 newly installed, 89 to remove and 0 not upgraded. After this operation, 126 MB disk space will be freed. Do you want to continue [Y/n]?
    Other info: why do the "iodbc" and "libmyodbc" packages conflict with each other? The root problem is that I need to use a new feature introduced in the latest MySQL Workbench builds (DB Migration), which uses ODBC. Here is what the MySQL doc says about it:
    "Linux: The Migration Wizard uses iODBC as a driver manager for all of its ODBC connections in Linux. This may give you some trouble because most Linux distributions provide ODBC drivers compiled against unixODBC. This is another driver manager, not supported by MySQL Workbench, so you won't be able to use those drivers unless you compile them against iODBC. Here's what you should do. Make sure that you have iODBC installed. If you are using Debian, Ubuntu or another Debian-based distro, type this command in your terminal:
    $ sudo apt-get install iodbc libiodbc2-dev libpq-dev libssl-dev
    For RPM-based distros (RedHat, Fedora, etc.) type this command instead of the previous one:
    $ sudo yum install iodbc iodbc-dev libpqxx-devel openssl-devel
    Now we need to install the PostgreSQL ODBC drivers. Download the psqlODBC source tarball from here. Use the latest version available for download. As of this writing the latest version corresponds to the file psqlodbc-09.01.0200.tar.gz. Extract this tarball to a directory on your hard drive, open a terminal and cd into that directory. Type this in the terminal window:
    $ ./configure --with-iodbc --enable-pthreads
    $ make
    $ sudo make install
    Verify that you have the file psqlodbcw.so in the /usr/local/lib directory." (See the ldd check after this post.)
    This package probably poses the problem:
    $ dpkg -s odbcinst1debian2
    Package: odbcinst1debian2
    Status: install ok installed
    Priority: optional
    Section: libs
    Installed-Size: 241
    Maintainer: Ubuntu Developers <[email protected]>
    Architecture: amd64
    Multi-Arch: same
    Source: unixodbc
    Version: 2.2.14p2-5ubuntu4
    Replaces: unixodbc (<< 2.1.1-2)
    Depends: libc6 (>= 2.14), libltdl7 (>= 2.4.2), odbcinst
    Pre-Depends: multiarch-support
    Breaks: libiodbc2, libmyodbc (<< 5.1.6-2), odbc-postgresql (<< 1:09.00.0310-1.1), tdsodbc (<< 0.82-8)
    Conflicts: odbcinst1, odbcinst1debian1
    Description: Support library for accessing odbc ini files
     This package contains the libodbcinst library from unixodbc, a library used by ODBC drivers for reading their configuration settings from /etc/odbc.ini and ~/.odbc.ini.
     Also contained in this package are the driver setup plugins, which describe the features supported by individual ODBC drivers.
    Homepage: http://www.unixodbc.org/
    Original-Maintainer: Steve Langasek <[email protected]>
    I have no broken packages:
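    For reference, a quick way to confirm which driver manager a compiled ODBC driver actually links against is ldd. A minimal sketch, using the /usr/local/lib/psqlodbcw.so path from the build steps quoted above:
        # List the driver's shared-library dependencies and filter for ODBC libs.
        ldd /usr/local/lib/psqlodbcw.so | grep -i odbc
        # A build configured with --with-iodbc typically pulls in libiodbc/libiodbcinst;
        # a unixODBC build shows libodbc/libodbcinst instead.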

    Read the article

  • Oracle Developer Day: Provisioning und Patching mit Cloud Control

    - by Ralf Durben (DBA Community)
    With Oracle Enterprise Manager 12c Cloud Control and the Lifecycle Management Pack you can significantly reduce your effort in creating and maintaining Oracle databases, and devote your valuable time to other tasks again. This half-day Oracle Developer Day shows how you can use the provisioning and patching solutions in Cloud Control and save a great deal of time. Their use is explained with practical examples. The topics of this event are:
    - Provisioning basics in Cloud Control
    - Database provisioning
    - Patching and migration of databases
    - The security model around deployment procedures
    - Provisioning of other software
    - Further uses of deployment procedures
    Schedule: 12:00 networking lunch; 13:00 presentations begin; 17:00 end of the event.
    Dates: 08.10.2012 München; 10.10.2012 Frankfurt; 25.10.2012 Hamburg.
    Attendance is free of charge. You can register with a click on the event location.

    Read the article

  • Tab Sweep - More OSGi, Coherence, Oracle Java moves, JMS 2.0 and more

    - by alexismp
    Recent Tips and News on Java, Java EE 6, GlassFish & more:
    • Why I will use Java EE (JEE, and not J2EE) instead of Spring in new Enterprise Java Projects (Kai)
    • What is Happening vs. What is Interesting (Geertjan)
    • Oracle Coherence & Oracle Service Bus: REST API Integration (Nino)
    • Oracle's Top 10 Java Moves of 2011 (eWeek)
    • JEP 122: Remove the Permanent Generation (OpenJDK.org)
    • JEE6 – Glassfish 3.1, Clustering & Failover (Xebia.fr)
    • Testing LAZY mechanism in EJB 3 (e-blog-java)
    • Discoing with Vorpal (Chuk)
    • Devoxx: the evolutions of JMS 2.0 (Ippon.fr)
    • More OSGi... (Jarda)
    • Practical Migration to Java 7 - Small Code Examples (FOSSLC)
    • Coherence Part III: Filters (Zenika.com)

    Read the article

  • SPRoleAssignment Crazy Caveats

    - by MOSSLover
    I’m not sure if this bug exists in any other environment, but here are a few issues I ran into when trying to use SPRoleAssignment and SPGroup:
    - When trying to use Web.Groups[“GroupName”] it basically told me the group did not exist, so I had to change the code to use Web.SiteGroups[“GroupName”].
    - I could not add the role assignment to the Web and run a Web.Update() without also setting Web.AllowUnsafeUpdates = true; however, on my virtual machine I could do a Web.Update() without the extra piece of code. Without it I kept receiving an error in my browser stating that I should hit the back button and update my permissions.
    So after fixing those two issues I was able to copy the permissions from a page item into a site for my migration. Hopefully one of you can learn from my error messages if you have any issues in the future. Technorati Tags: SPRoleAssignment, MOSS, SPGroup

    Read the article

  • Partner Webcast – Weblogic for Developers - 12 July 2012

    - by Thanos
    Oracle WebLogic Server is the industry’s leading application server for deploying Java EE applications, with support for new features for lowering cost of operations, improving performance and enhancing scalability. But it’s also a great choice for Java developers because of the differentiating capabilities that facilitate integration with other tools and frameworks, and promote reusability and rapid redeployment of your applications. During the webinar we’re going to explore these differentiating features in more detail.
    Agenda:
    - Java EE standards support in different WebLogic Server versions
    - WebLogic classloading: how WebLogic loads classes; the filtering classloader; shared libraries; the Classloader Analysis Tool
    - Spring support in WebLogic
    - WebLogic integration with Apache Maven
    - Advanced deployment features: FastSwap; side-by-side deployment
    - Q&A session
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! For any questions please contact us at [email protected]. Visit our ISV Migration Center blog regularly, or follow us @oracleimc to learn more about Oracle technologies as well as upcoming partner webcasts and events.

    Read the article

< Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >