Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • Refactoring a Single Rails Model with large methods & long join queries trying to do everything

    - by Kelseydh
    I have a working Ruby on Rails Model that I suspect is inefficient, hard to maintain, and full of unnecessary SQL join queries. I want to optimize and refactor this Model (Quiz.rb) to comply with Rails best practices, but I'm not sure how I should do it. The Rails app is a game that has Missions with many Stages. Users complete Stages by answering Questions that have correct or incorrect Answers. When a User tries to complete a stage by answering questions, the User gets a Quiz entry with many Attempts. Each Attempt records an Answer submitted for that Question within the Stage. A user completes a stage or mission by getting every Attempt correct, and their progress is tracked by adding a new entry to the UserMission & UserStage join tables. All of these features work, but unfortunately the Quiz.rb Model has been twisted to handle almost all of it exclusively. The callbacks began at 'Quiz.rb', and because I wasn't sure how to leave the Quiz Model during a multi-model update, I resorted to using Rails Console to have the @quiz instance variable via self.some_method do all the heavy lifting to retrieve every data value for the game's business logic; resulting in large extended join queries that "dance" all around the Database schema. The Quiz.rb Model that Smells: class Quiz < ActiveRecord::Base belongs_to :user has_many :attempts, dependent: :destroy before_save :check_answer before_save :update_user_mission_and_stage accepts_nested_attributes_for :attempts, :reject_if => lambda { |a| a[:answer_id].blank? }, :allow_destroy => true #Checks every answer within each quiz, adding +1 for each correct answer #within a stage quiz, and -1 for each incorrect answer def check_answer stage_score = 0 self.attempts.each do |attempt| if attempt.answer.correct? == true stage_score += 1 elsif attempt.answer.correct == false stage_score - 1 end end stage_score end def winner return true end def update_user_mission_and_stage ####### #Step 1: Checks if UserMission exists, finds or creates one. #if no UserMission for the current mission exists, creates a new UserMission if self.user_has_mission? == false @user_mission = UserMission.new(user_id: self.user.id, mission_id: self.current_stage.mission_id, available: true) @user_mission.save else @user_mission = self.find_user_mission end ####### #Step 2: Checks if current UserStage exists, stops if true to prevent duplicate entry if self.user_has_stage? @user_mission.save return true else ####### ##Step 3: if step 2 returns false: ##Initiates UserStage creation instructions #checks for winner (winner actions need to be defined) if they complete last stage of last mission for a given orientation if self.passed? && self.is_last_stage? && self.is_last_mission? create_user_stage_and_update_user_mission self.winner #NOTE: The rest are the same, but specify conditions that are available to add badges or other actions upon those conditions occurring: ##if user completes first stage of a mission elsif self.passed? && self.is_first_stage? && self.is_first_mission? create_user_stage_and_update_user_mission #creates user badge for finishing first stage of first mission self.user.add_badge(5) self.user.activity_logs.create(description: "granted first-stage badge", type_event: "badge", value: "first-stage") #If user completes last stage of a given mission, creates a new UserMission elsif self.passed? && self.is_last_stage? && self.is_first_mission? 
create_user_stage_and_update_user_mission #creates user badge for finishing first mission self.user.add_badge(6) self.user.activity_logs.create(description: "granted first-mission badge", type_event: "badge", value: "first-mission") elsif self.passed? create_user_stage_and_update_user_mission else self.passed? == false return true end end end #Creates a new UserStage record in the database for a successful Quiz question passing def create_user_stage_and_update_user_mission @nu_stage = @user_mission.user_stages.new(user_id: self.user.id, stage_id: self.current_stage.id) @nu_stage.save @user_mission.save self.user.add_points(50) end #Boolean that defines passing a stage as answering every question in that stage correct def passed? self.check_answer >= self.number_of_questions end #Returns the number of questions asked for that stage's quiz def number_of_questions self.attempts.first.answer.question.stage.questions.count end #Returns the current_stage for the Quiz, routing through 1st attempt in that Quiz def current_stage self.attempts.first.answer.question.stage end #Gives back the position of the stage relative to its mission. def stage_position self.attempts.first.answer.question.stage.position end #will find the user_mission for the current user and stage if it exists def find_user_mission self.user.user_missions.find_by_mission_id(self.current_stage.mission_id) end #Returns true if quiz was for the last stage within that mission #helpful for triggering actions related to a user completing a mission def is_last_stage? self.stage_position == self.current_stage.mission.stages.last.position end #Returns true if quiz was for the first stage within that mission #helpful for triggering actions related to a user completing a mission def is_first_stage? self.stage_position == self.current_stage.mission.stages_ordered.first.position end #Returns true if current user has a UserMission for the current stage def user_has_mission? self.user.missions.ids.include?(self.current_stage.mission.id) end #Returns true if current user has a UserStage for the current stage def user_has_stage? self.user.stages.include?(self.current_stage) end #Returns true if current user is on the last mission based on position within a given orientation def is_first_mission? self.user.missions.first.orientation.missions.by_position.first.position == self.current_stage.mission.position end #Returns true if current user is on the first stage & mission of a given orientation def is_last_mission? self.user.missions.first.orientation.missions.by_position.last.position == self.current_stage.mission.position end end My Question Currently my Rails server takes roughly 500ms to 1 sec to process single @quiz.save action. I am confident that the slowness here is due to sloppy code, not bad Database ERD design. What does a better solution look like? And specifically: Should I use join queries to retrieve values like I did here, or is it better to instantiate new objects within the model instead? Or am I missing a better solution? How should update_user_mission_and_stage be refactored to follow best practices? Relevant Code for Reference: quizzes_controller.rb w/ Controller Route Initiating Callback: class QuizzesController < ApplicationController before_action :find_stage_and_mission before_action :find_orientation before_action :find_question def show end def create @user = current_user @quiz = current_user.quizzes.new(quiz_params) if @quiz.save if @quiz.passed? if @mission.next_mission.nil? && @stage.next_stage.nil? 
redirect_to root_path, notice: "Congratulations, you have finished the last mission!" elsif @stage.next_stage.nil? redirect_to [@mission.next_mission, @mission.first_stage], notice: "Correct! Time for Mission #{@mission.next_mission.position}", info: "Starting next mission" else redirect_to [@mission, @stage.next_stage], notice: "Answer Correct! You passed the stage!" end else redirect_to [@mission, @stage], alert: "You didn't get every question right, please try again." end else redirect_to [@mission, @stage], alert: "Sorry. We were unable to save your answer. Please contact the admministrator." end @questions = @stage.questions.all end private def find_stage_and_mission @stage = Stage.find(params[:stage_id]) @mission = @stage.mission end def find_question @question = @stage.questions.find_by_id params[:id] end def quiz_params params.require(:quiz).permit(:user_id, :attempt_id, {attempts_attributes: [:id, :quiz_id, :answer_id]}) end def find_orientation @orientation = @mission.orientation @missions = @orientation.missions.by_position end end Overview of Relevant ERD Database Relationships: Mission - Stage - Question - Answer - Attempt <- Quiz <- User Mission - UserMission <- User Stage - UserStage <- User Other Models: Mission.rb class Mission < ActiveRecord::Base belongs_to :orientation has_many :stages has_many :user_missions, dependent: :destroy has_many :users, through: :user_missions #SCOPES scope :by_position, -> {order(position: :asc)} def stages_ordered stages.order(:position) end def next_mission self.orientation.missions.find_by_position(self.position.next) end def first_stage next_mission.stages_ordered.first end end Stage.rb: class Stage < ActiveRecord::Base belongs_to :mission has_many :questions, dependent: :destroy has_many :user_stages, dependent: :destroy has_many :users, through: :user_stages accepts_nested_attributes_for :questions, reject_if: :all_blank, allow_destroy: true def next_stage self.mission.stages.find_by_position(self.position.next) end end Question.rb class Question < ActiveRecord::Base belongs_to :stage has_many :answers, dependent: :destroy accepts_nested_attributes_for :answers, :reject_if => lambda { |a| a[:body].blank? }, :allow_destroy => true end Answer.rb: class Answer < ActiveRecord::Base belongs_to :question has_many :attempts, dependent: :destroy end Attempt.rb: class Attempt < ActiveRecord::Base belongs_to :answer belongs_to :quiz end User.rb: class User < ActiveRecord::Base belongs_to :school has_many :activity_logs has_many :user_missions, dependent: :destroy has_many :missions, through: :user_missions has_many :user_stages, dependent: :destroy has_many :stages, through: :user_stages has_many :orientations, through: :school has_many :quizzes, dependent: :destroy has_many :attempts, through: :quizzes def latest_stage_position self.user_missions.last.user_stages.last.stage.position end end UserMission.rb class UserMission < ActiveRecord::Base belongs_to :user belongs_to :mission has_many :user_stages, dependent: :destroy end UserStage.rb class UserStage < ActiveRecord::Base belongs_to :user belongs_to :stage belongs_to :user_mission end
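
    One direction for the refactor, sketched below as a hedged illustration rather than the answer: collapse the per-attempt loop into a single join query, memoize the repeated association walks, and move the multi-model progression side effects out of before_save callbacks into a plain service object invoked from the controller. The sketch assumes a boolean answers.correct column (implied by attempt.answer.correct?) and Rails 4's find_or_create_by. Note in passing that the original check_answer's elsif branch computes stage_score - 1 and discards the result, so wrong answers are never actually subtracted.

        # Hedged sketch, not a drop-in replacement.
        class Quiz < ActiveRecord::Base
          belongs_to :user
          has_many :attempts, dependent: :destroy

          # One query instead of one per attempt.
          def correct_count
            attempts.joins(:answer).where(answers: { correct: true }).count
          end

          def passed?
            correct_count >= current_stage.questions.count
          end

          # Memoize instead of re-walking attempt -> answer -> question -> stage.
          def current_stage
            @current_stage ||= attempts.first.answer.question.stage
          end
        end

        # Progression side effects live outside the model; the controller calls
        # StageProgression.new(@quiz).record! after a successful save.
        class StageProgression
          def initialize(quiz)
            @quiz = quiz
          end

          def record!
            return unless @quiz.passed?
            stage   = @quiz.current_stage
            user    = @quiz.user
            mission = user.user_missions.find_or_create_by(mission_id: stage.mission_id)
            mission.user_stages.find_or_create_by(user_id: user.id, stage_id: stage.id)
            user.add_points(50)  # badge/activity-log hooks would attach here
          end
        end

    Whether the badge conditions stay in the service object or move elsewhere is a design choice, but either way Quiz stops owning the whole game's business logic.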

  • Free Team Foundation Service a Boon for Play Projects

    - by Ken Cox [MVP]
    My ‘jump in and sink or swim’ learning style leads me to create dozens of unbillable ‘play’ projects in Visual Studio 2012.  For example, I’m currently learning to customize the powerful auction/fixed price/classifieds software called AuctionWorx Enterprise.  I make stupid mistakes as I grasp a new API and configure projects. It’s frustrating to go down a rat hole and discover the VS IDE’s Undo doesn’t reach back to a working build from the previous weekend. Enter Visual Studio’s Free Team Foundation Service.  Put your play projects into the free source control and check in (or shelve) a snapshot before embarking on something risky. (I already use Discount ASP.NET’s Team Foundation Server hosting for client projects so there’s zero TFS/VS learning curve for me and first class GUI support.) TFS is free right now for teams of five or fewer. I’m not naive enough to expect ‘free’ to last forever. So, it’ll be interesting to see how much Microsoft intends to charge in 2013 for TFS. In the meantime, why not grab some free source code storage? BTW, it’s weird to realize that Microsoft is backing up my junky little playtime code on three physically-distinct servers every day - and taking incremental backups every hour.

  • Ubuntu 14.04, only wallpaper and a mouse cursor is showing

    - by Abhishek Verma
    I upgraded to Ubuntu 14.04 a few weeks ago; it was a fresh install. As usual I installed Unity Tweak Tool, some themes, Chrome, Eclipse, etc. Today, while playing with Unity Tweak Tool, I mistakenly switched Window Spread to off; bang, no problem yet. After realizing what I had done I switched it back on, and then the title bar of all windows, the status bar (guessing that's what it's called), and the launcher disappeared. I was left with a mouse cursor and the Unity Tweak Tool window, minus its title bar. I thought a restart was needed, but sadly there was no shutdown option to click on; moreover, the keyboard didn't work, and neither did the hardware shutdown button. So I pressed the reset button. After a mundane restart, I was welcomed by the wallpaper and the mouse cursor (which moves), and that's it. I searched everywhere for a working solution and found some on Ask Ubuntu itself, but they were all old and required the terminal, and since my keyboard isn't working I have no access to a terminal. So of course this question isn't a duplicate, at least IMHO. Also, I was working on an Android project and have no backups, so I won't be doing a clean install, or else I will start crying watching weeks of work being deleted. Any solutions, please?
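
    For what it's worth, the usual suggestion for a trashed Unity/Compiz configuration on 14.04 is to reset it from a console. The sketch below assumes you can reach one at all (a TTY via Ctrl+Alt+F1, SSH from another machine, or a recovery-mode root shell), which the dead keyboard described above may rule out; the username is a placeholder.

        # Hypothetical recovery, run as the desktop user from a TTY or SSH session:
        dconf reset -f /org/compiz/             # wipe the Compiz/Unity settings tree
        setsid unity --replace &> /dev/null &   # restart Unity, detached from this shell

        # From a root recovery shell instead, run it on the user's behalf:
        sudo -u <username> dbus-launch dconf reset -f /org/compiz/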

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. See, I am of the opinion SharePoint allows for smaller deployments to be made, and with that said, I am talking about SharePoint Foundation 2010 being used for the most part. But truly the point here is not to discuss whether a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right or not. The fact is they do take place and happen. And information will reside there. Now, the point of this post is to raise awareness of options available for companies that have implemented it and maybe are a bit "iffy" on how to protect the information being placed in libraries and lists. In many cases I have found SharePoint comes first and business continuity becomes an afterthought. The documentation piece from TechNet states: "You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended for use only with single-server deployments." Even in the event of single-server deployments you will have options to safeguard your data. The process requires that, after you have executed the stsadm command above, you use Windows Server Backup to do a Full Server Backup. Then, when the restore operation is needed, you will be able to select specifically the section that holds the SharePoint technologies backup. Hope you find this to be a helpful post. I have found it to be especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm. Credits: Sean McDonough for passing along the information available on TechNet.
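
    In practice the registration plus the full backup can be scripted. A minimal sketch, with paths and the target drive letter as placeholders (SharePoint 2010's bin directory sits under the "14" hive):

        REM Register SharePoint's VSS writer, then take a full server backup.
        cd /d "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN"
        stsadm -o registerwsswriter

        REM Full backup to a dedicated disk (E: is a placeholder).
        wbadmin start backup -backupTarget:E: -allCritical -vssFull -quiet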

  • Story of success: MySQL Enterprise Backup (MEB) was successfully integrated with IBM Tivoli Storage Manager (TSM) via System Backup to Tape (SBT) interface.

    - by user13334359
    Since version 3.6, MEB has supported backups to tape through the SBT interface. The officially supported tool for such backups to tape is Oracle Secure Backup (OSB), but there are a lot of other storage managers, and MEB allows you to use them through the SBT interface. Since version 3.7 it also has the option --sbt-environment, which allows you to pass environment variables that OSB does not need on to third-party managers. At the same time, MEB cannot guarantee it will work with all of them. This month we were contacted by a customer who wanted to use IBM Tivoli Storage Manager (TSM) with MEB. We could only tell them the same thing I wrote in the previous paragraph: this solution is supposed to work, but you have to be pioneers of this technology. And they agreed. They agreed to be the pioneers, and so the story begins. MEB requires the following options to be specified by those who want to connect it to an SBT interface:
    --sbt-database-name: a name to hand over to the SBT interface. This can be any name; the default, MySQL, works for most cases, so the user is not required to specify this option.
    --sbt-lib-path: path to the SBT library. For TSM this library comes with "Data Protection for Oracle", which in its turn interfaces with Oracle Recovery Manager (RMAN), which uses the SBT interface. So you need to install it even if you don't use Oracle.
    --sbt-environment: environment for the third-party manager. This option is not needed when you use OSB, but it is almost always necessary for third-party SBT managers. TSM requires the variable TDPO_OPTFILE to be set and to point to the TSM configuration file.
    --backup-image=sbt:<name>: path to the image. The prefix "sbt:" indicates that the image should be sent through the SBT interface.
    So the full command in our case would look like: ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user --password=foobar \ --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \ --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir backup-to-image. And this command results in the following output log: MySQL Enterprise Backup version 3.7.1 [2012/02/16] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ...  ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user         --password=foobar --backup-image=sbt:my-first-backup         --sbt-lib-path=/usr/lib/libobk.so         --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt"         --backup-dir=/path/to/my/dir backup-to-image sbt-environment: 'TDPO_OPTFILE=/path/to/my/tdpo.opt' INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.             At the end of a successful 'backup-to-image' run mysqlbackup             prints "mysqlbackup completed OK!".
--------------------------------------------------------------------                        Server Repository Options: --------------------------------------------------------------------   datadir                          =  /path/to/data   innodb_data_home_dir             =  /path/to/data   innodb_data_file_path            =  ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M   innodb_log_group_home_dir        =  /path/to/data   innodb_log_files_in_group        =  2   innodb_log_file_size             =  268435456 --------------------------------------------------------------------                        Backup Config Options: --------------------------------------------------------------------   datadir                          =  /path/to/my/dir/datadir   innodb_data_home_dir             =  /path/to/my/dir/datadir   innodb_data_file_path            =  ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M   innodb_log_group_home_dir        =  /path/to/my/dir/datadir   innodb_log_files_in_group        =  2   innodb_log_file_size             =  268435456 Backup Image Path= sbt:my-first-backup mysqlbackup: INFO: Unique generated backup id for this is 13297406400663200 120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS is 'Data Protection for Oracle: version 5.5.1.0' 120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS version '5.5.1.0' mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 31668381. mysqlbackup: INFO: Starting log scan from lsn 31668224. 120220  8:54:00 mysqlbackup: INFO: Copying log... 120220  8:54:00 mysqlbackup: INFO: Log copied, lsn 31668381.           We wait 1 second before starting copying the data files... 120220  8:54:01 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata1 (Antelope file format). mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000 120220  8:55:30 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata2 (Antelope file format). mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000 120220  8:57:18 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata3 (Antelope file format). mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server. 120220 08:57:22 mysqlbackup: INFO: Starting to lock all the tables.... 120220 08:57:22 mysqlbackup: INFO: All tables are locked and flushed to disk mysqlbackup: INFO: Opening backup source directory '/path/to/data/' 120220 08:57:22 mysqlbackup: INFO: Starting to backup all files in subdirectories of '/path/to/data/' mysqlbackup: INFO: Backing up the database directory 'mysql' mysqlbackup: INFO: Backing up the database directory 'test' mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 31668381.           (This is the highest lsn found on page)           Scanned log up to lsn 31670396.           Was able to parse the log up to lsn 31670396.           Maximum page number for a log record 328 120220 08:57:23 mysqlbackup: INFO: All tables unlocked mysqlbackup: INFO: All MySQL tables were locked for 0.000 seconds 120220 08:59:01 mysqlbackup: INFO: meb_sbt_backup_close: blocks: 4162  size: 1048576  bytes: 4363985063 120220  8:59:01 mysqlbackup: INFO: Full backup completed! 
    mysqlbackup: INFO: MySQL binlog position: filename bin_mysql.001453, position 2105 mysqlbackup: WARNING: backup-image already closed mysqlbackup: INFO: Backup image created successfully.:            Image Path: 'sbt:my-first-backup' -------------------------------------------------------------    Parameters Summary -------------------------------------------------------------    Start LSN                  : 31668224    End LSN                    : 31670396 ------------------------------------------------------------- mysqlbackup completed OK! Backup successfully completed. To restore it you should use the same commands as you do for any other MEB image, but you need to provide the sbt* options as well: $./mysqlbackup --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \ --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir image-to-backup-dir Then apply the log as usual: $./mysqlbackup --backup-dir=/path/to/my/dir apply-log Then stop mysqld and finally copy-back: $./mysqlbackup --defaults-file=path/to/my.cnf --backup-dir=/path/to/my/dir copy-back  Disclaimer: this is only the story of one success, which may be useful for someone else. MEB is not regularly tested with, and is not guaranteed to work with, IBM TSM or any other third-party storage manager.

  • Why is the root partition on my disk full?

    - by Agmenor
    I installed Ubuntu 12.04 as a fresh install where Ubuntu 11.10 previously was. My computer now warns me that my disk is nearly full. After having run apt-get purge and apt-get autoremove and emptied the Trash can, I still have this problem, as shown by this screenshot of GParted: The disk /dev/sda7 is indeed full. I ran the Disk Usage Analyzer (Baobab) and I am still not sure what is happening. One of my hypotheses is that when installing Ubuntu 12.04 I didn't configure my disks well, and the disk /dev/sda6 is not properly mounted as /home. Is this indeed the reason? What should I do to verify this and then get things fixed? Here are a few additional details to answer the questions I received (thank you, everybody): My home directory is not encrypted. The Backup utility (Déjà Dup) is not set for automatic backups. (I do it myself, manually.) After I mount /dev/sda6, the command df -h gives: Filesystem Size Used Avail Use% Mounted on /dev/sda7 244G 221G 12G 96% / udev 3,9G 4,0K 3,9G 1% /dev tmpfs 1,6G 904K 1,6G 1% /run none 5,0M 0 5,0M 0% /run/lock none 3,9G 164K 3,9G 1% /run/shm /dev/sda6 653G 189G 433G 31% /media/8ec2fa69-039b-4c52-ab1b-034d785132a1 (sorry, but formatting this as code does not work, for an unknown reason) Thanks to izx's post, I realized /dev/sda6 was not even mounted before. It contains all the documents I used to have when I was running Ubuntu 11.10.
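
    To test the hypothesis, checking what (if anything) is mounted at /home takes one command, and the likely fix is an /etc/fstab entry. A sketch, with the UUID as a placeholder; note that any files currently sitting under /home on the root partition would still need to be moved onto /dev/sda6 afterwards:

        findmnt /home                # is a separate filesystem mounted here?
        lsblk -f                     # where are sda6 and sda7 actually mounted?

        sudo blkid /dev/sda6         # note the UUID it prints, then:
        echo 'UUID=<uuid-from-blkid>  /home  ext4  defaults  0  2' | sudo tee -a /etc/fstab
        sudo mount -a                # mount it without rebooting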

  • Disk failure is imminent Laptop Hard drive ~5 months old

    - by Drew
    There's another post about this, but I don't have enough 'points' to say anything on that thread. So I'll start my own ... with more details! My computer still boots, but GNOME reports problems with the drive's SMART status. This has been confirmed in the BIOS, as it makes me press F1 to boot up now. I tried running the HDD check in the BIOS, but it fails running the tests. As in, running the tests failed, not that the tests themselves indicated a failed drive. Here is what Disk Utility is reporting as failing: Reallocated Sector Count FAILING Normalized: 132 Worst: 132 Threshold: 140 Value: 544 Current Pending Sector Count WARNING Normalized: 200 Worst: 1 Threshold: 0 Value: 2 Is this related to the insane number of DRDY errors on the drive? kernel: [51345.233069] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 kernel: [51345.233076] ata1.00: BMDMA stat 0x4 kernel: [51345.233081] ata1.00: failed command: READ DMA kernel: [51345.233090] ata1.00: cmd c8/00:00:00:8b:4a/00:00:00:00:00/e0 tag 0 dma 131072 in kernel: [51345.233092] res 51/40:00:a8:8b:4a/10:04:00:00:00/e0 Emask 0x9 (media error) kernel: [51345.233097] ata1.00: status: { DRDY ERR } kernel: [51345.233103] ata1.00: error: { UNC } kernel: [51345.291929] ata1.00: configured for UDMA/100 kernel: [51345.291944] ata1: EH complete kernel: [51347.682748] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 kernel: [51347.682754] ata1.00: BMDMA stat 0x4 kernel: [51347.682759] ata1.00: failed command: READ DMA kernel: [51347.682768] ata1.00: cmd c8/00:00:00:8b:4a/00:00:00:00:00/e0 tag 0 dma 131072 in kernel: [51347.682770] res 51/40:00:a8:8b:4a/10:04:00:00:00/e0 Emask 0x9 (media error) kernel: [51347.682774] ata1.00: status: { DRDY ERR } kernel: [51347.682777] ata1.00: error: { UNC } Did Ubuntu 10.10 and/or EXT4 eat my work laptop? What steps can I take to back up my important information, which is probably the home folder? Please include steps to recover my data onto the new hard drive as well. It does me little good to have backups I can't use.
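
    With Reallocated Sector Count already past its threshold, the standard first step is to image the whole disk onto a healthy one before reading anything else off it. A hedged sketch using GNU ddrescue; the device names are placeholders, so double-check them with lsblk before running anything:

        sudo apt-get install gddrescue        # Ubuntu package name for GNU ddrescue
        # Fast pass first (-n skips scraping), then retry the bad areas 3 times.
        # The map file lets ddrescue resume where it left off.
        sudo ddrescue -f -n  /dev/sda /dev/sdb rescue.map
        sudo ddrescue -f -r3 /dev/sda /dev/sdb rescue.map

    Run fsck and any file recovery against the clone (/dev/sdb here), never the failing original; mounting the clone read-only is usually enough to copy the home folder off.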

  • Embedding generic sql queries into c# program

    - by Pooja Balkundi
    Okay, referring to the code from my first question: in Main, I want the user to enter an employee name at runtime; I then take the name the user entered and compare it with e_name in my emp table, and if it exists I want to display all of that employee's information. How can I achieve this? using System; using System.Collections.Generic; using System.Linq; using System.Windows.Forms; using MySql.Data.MySqlClient; namespace ConnectCsharppToMySQL { public class DBConnect { private MySqlConnection connection; private string server; private string database; private string uid; private string password; string name; //Constructor public DBConnect() { Initialize(); } //Initialize values private void Initialize() { server = "localhost"; database = "test"; uid = "root"; password = ""; string connectionString; connectionString = "SERVER=" + server + ";" + "DATABASE=" + database + ";" + "UID=" + uid + ";" + "PASSWORD=" + password + ";"; connection = new MySqlConnection(connectionString); } //open connection to database private bool OpenConnection() { try { connection.Open(); return true; } catch (MySqlException ex) { //When handling errors, you can vary your application's response based //on the error number. //The two most common error numbers when connecting are as follows: //0: Cannot connect to server. //1045: Invalid user name and/or password. switch (ex.Number) { case 0: MessageBox.Show("Cannot connect to server. Contact administrator"); break; case 1045: MessageBox.Show("Invalid username/password, please try again"); break; } return false; } } //Close connection private bool CloseConnection() { try { connection.Close(); return true; } catch (MySqlException ex) { MessageBox.Show(ex.Message); return false; } } //Insert statement public void Insert() { string query = "INSERT INTO emp (e_name, age) VALUES('Pooja R', '21')"; //open connection if (this.OpenConnection() == true) { //create command and assign the query and connection from the constructor MySqlCommand cmd = new MySqlCommand(query, connection); //Execute command cmd.ExecuteNonQuery(); //close connection this.CloseConnection(); } } //Update statement public void Update() { string query = "UPDATE emp SET e_name='Peachy', age='22' WHERE e_name='Pooja R'"; //Open connection if (this.OpenConnection() == true) { //create mysql command MySqlCommand cmd = new MySqlCommand(); //Assign the query using CommandText cmd.CommandText = query; //Assign the connection using Connection cmd.Connection = connection; //Execute query cmd.ExecuteNonQuery(); //close connection this.CloseConnection(); } } //Select statement public List<string>[] Select() { string query = "SELECT * FROM emp where e_name=(/*I WANT USER ENTERED NAME TO GET INSERTED HERE*/)"; //Create a list to store the result List<string>[] list = new List<string>[3]; list[0] = new List<string>(); list[1] = new List<string>(); list[2] = new List<string>(); //Open connection if (this.OpenConnection() == true) { //Create Command MySqlCommand cmd = new MySqlCommand(query, connection); //Create a data reader and Execute the command MySqlDataReader dataReader = cmd.ExecuteReader(); //Read the data and store them in the list while (dataReader.Read()) { list[0].Add(dataReader["e_id"] + ""); list[1].Add(dataReader["e_name"] + ""); list[2].Add(dataReader["age"] + ""); } //close Data Reader dataReader.Close(); //close Connection this.CloseConnection(); //return list to be displayed return list; } else { return list; } } public static void Main(String[] args) { DBConnect db1 = new DBConnect();
Console.WriteLine("Initializing"); db1.Initialize(); Console.WriteLine("Search :"); Console.WriteLine("Enter the employee name"); db1.name = Console.ReadLine(); db1.Select(); Console.ReadLine(); } } }
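
    The blank spot in Select() is exactly what query parameters are for; splicing user input into the SQL string directly would invite SQL injection. A sketch of the relevant part of Select(), using the name field the class already declares (MySQL Connector/NET handles the escaping):

        string query = "SELECT * FROM emp WHERE e_name = @name";

        if (this.OpenConnection() == true)
        {
            MySqlCommand cmd = new MySqlCommand(query, connection);
            cmd.Parameters.AddWithValue("@name", this.name);  // bind the runtime value

            MySqlDataReader dataReader = cmd.ExecuteReader();
            while (dataReader.Read())
            {
                list[0].Add(dataReader["e_id"] + "");
                list[1].Add(dataReader["e_name"] + "");
                list[2].Add(dataReader["age"] + "");
            }
            dataReader.Close();
            this.CloseConnection();
        }

    Note that Main() as posted also discards Select()'s return value, so even with the query fixed nothing would print; loop over the returned lists and Console.WriteLine the values.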

  • Too much I/O in the morning ?

    - by steveh99999
    Interesting little improvement on a SQL 2005 system I encountered recently….. Some background - this system had a fairly ‘traditional OLTP’ workload, i.e. heavily used during the day until around 9pm, then a batch window for several hours, then not much activity in the early hours of the day, until normal workload resumed the following morning. Using perfmon, I noticed that every morning we would see a big spike in SQL Server I/O when the application started to be used... As it was 2005 I decided to look at what tables were in cache before and after the overnight batch processing ran… (using the DMV equivalent of dbcc memusage that I posted earlier). Here’s what I saw :-     So, contents of data cache split fairly evenly between my 'important/heavily used' tables.   After this:- some application batch processing, backups, DBCC checks and reindexes were run.  A fairly standard batch, I'd suggest. Cache contents then looked like this :- Hmmmm – most of cache is now being used by a table I’ve described as ‘unimportant’. Why? Well, that table was the last to be reindexed…. purely due to luck, as the reindexing stored procedure performed a loop in alphabetical order through all application tables...  When the application starts to be used again – all this ‘unimportant’ data has to be replaced in cache by data that is heavily used… So, we changed the overnight reindex scripts – the most heavily accessed tables are now the last to be reindexed. Obvious really, but we did see a significant reduction in early-morning I/O after changing the order of our reindexing.
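
    A hedged sketch of that ordering idea for a reindex loop: rank user tables by read activity in sys.dm_db_index_usage_stats and rebuild the busiest ones last, so they are the freshest thing in the buffer pool when the morning load arrives. The DMV's counters reset at instance restart, so treat them as indicative only.

        -- Rebuild least-read tables first, most-read tables last (SQL 2005+).
        DECLARE @t nvarchar(300), @sql nvarchar(400);
        DECLARE c CURSOR FOR
            SELECT QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name)
            FROM sys.objects o
            LEFT JOIN sys.dm_db_index_usage_stats s
                   ON s.object_id = o.object_id AND s.database_id = DB_ID()
            WHERE o.type = 'U'
            GROUP BY o.schema_id, o.name
            ORDER BY SUM(ISNULL(s.user_seeks + s.user_scans + s.user_lookups, 0)) ASC;
        OPEN c;
        FETCH NEXT FROM c INTO @t;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = N'ALTER INDEX ALL ON ' + @t + N' REBUILD';
            EXEC (@sql);
            FETCH NEXT FROM c INTO @t;
        END
        CLOSE c;
        DEALLOCATE c;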

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved by each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don’t already know how it works. When it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I’m relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me. $Server = 'SERVERNAME' $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server # connection and query stuff         $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True" $Query = "EXEC sp_who2" $Connection = new-object system.Data.SQLClient.SQLConnection $Table = new-object "System.Data.DataTable" $Connection.connectionstring = $ConnectionStr try{ $Connection.open() $Command = $Connection.CreateCommand() $Command.commandtext = $Query $result = $Command.ExecuteReader() $Table.Load($result) } catch{ # Show error $error[0] | format-list -Force } $Title = "Data access processes (" + $Table.Rows.Count + ")" $Table | Out-GridView -Title $Title $Connection.close() So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display these results to the PowerShell grid viewer. The query simply gets the results of ‘EXEC sp_who2′ for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me, and I have added a few parameters to control the output and give me more specific details. But then I saw a script that uses the $SMOServer object itself to provide the process information and saves having to define the connection object and query specifications. $Server = 'SERVERNAME' $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server $Processes = $SMOServer.EnumProcesses() $Title = "SMO processes (" + $Processes.Rows.Count + ")" $Processes | Out-GridView -Title $Title Create the SMO object of our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information, so my first thought was to test which runs faster – so that I can get my results more quickly and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function.
    All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows the timings. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-refer the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers, and while the results vary from execution to execution I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second as we see above to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let’s take a look at what we get and where the two methods differ; each entry below is sp_who2 column / EnumProcesses column, followed by a brief description:
    (none) / Urn: what looks like an XML or JSON representation of the server name and the process ID.
    SPID / Spid: the process ID.
    Status / Status: the status of the process.
    Login / Login: the login name of the user executing the command.
    HostName / Host: the name of the computer where the process originated.
    BlkBy / BlockingSpid: the SPID of a process that is blocking this one.
    DBName / Database: the database that this process is connected to.
    Command / Command: the type of command that is executing.
    CPUTime / Cpu: the CPU activity related to this process.
    DiskIO / (none): the disk IO activity related to this process.
    LastBatch / (none): the time the last batch was executed from this process.
    ProgramName / Program: the application that is facilitating the process connection to the SQL Server.
    SPID1 / (none): in my experience this is always the same value as SPID.
    REQUESTID / (none): in my experience this is always 0.
    (none) / Name: in my experience this is always the same value as SPID and so could be seen as analogous to SPID1 from sp_who2.
    (none) / MemUsage: an indication of the memory used by this process, but I don’t know what it is measured in (bytes, KB, MB…).
    (none) / IsSystem: True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data.
    (none) / ExecutionContextID: in my experience this is always 0, so it could be analogous to REQUESTID from sp_who2.
    Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I have used that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn’t: sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault diagnosis too. So which is better?
    On reflection I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method, because Measure-Command has shown me how close it is to the other option and that it really isn’t a large enough margin to matter.
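
    For reference, the timing harness described above is just two wrappers; a sketch (SERVERNAME is a placeholder):

        $Server    = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SqlServer.Management.Smo.Server $Server

        $tQuery = Measure-Command {
            $Connection = New-Object System.Data.SqlClient.SqlConnection("Server=$Server;Database=master;Integrated Security=True")
            $Table      = New-Object System.Data.DataTable
            $Connection.Open()
            $Command = $Connection.CreateCommand()
            $Command.CommandText = 'EXEC sp_who2'
            $Table.Load($Command.ExecuteReader())
            $Connection.Close()
        }
        $tSMO = Measure-Command { $Processes = $SMOServer.EnumProcesses() }

        'sp_who2       : {0:N0} ms' -f $tQuery.TotalMilliseconds
        'EnumProcesses : {0:N0} ms' -f $tSMO.TotalMilliseconds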

  • Oracle Internet Directory 11gR1 11.1.1.6 Certified with Oracle E-Business Suite

    - by B Shashikumar
    We are very pleased to announce that Oracle Internet Directory 11gR1 (11.1.1.6) is now certified with Oracle E-Business Suite Releases 11i, 12.0 and 12.1. With this certification, we are offering several benefits to Oracle E-Business Suite customers:
    · Massive Scale: Oracle Internet Directory (OID) is a proven solution for mission-critical deployments. OID can scale to extremely large deployments on less hardware, as demonstrated by its published Two-Billion-User Benchmark. This reduces the footprint required to deploy enterprise directory services in the data center, resulting in cost savings and a greener enterprise.
    · Enhanced Security: OID is the most secure directory service, providing security at every level from data in transit to storage and backups. In addition to LDAP security, it leverages powerful Oracle database security features like Database Vault and Transparent Data Encryption.
    · Investment Protection: This certification leverages Identity Management’s hot-pluggable capabilities, enabling E-Business Suite customers to store and manage user identities in existing directory servers, thus helping them maximize their investments.
    For a complete matrix of platforms supported by Oracle Internet Directory and its components, refer to the Oracle Identity and Access Management 11gR1 certification matrix. For more information about this certification, check out the Oracle E-Business Suite blog.

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this - my apologies for taking such a long time to write the next Part. Without further ado, here goes. This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in detail about why Data Guard SYNC is a good thing. I will also talk about distance implications for setting up such a configuration. So, Why Good? Why is Data Guard SYNC a good thing? Because, at the end of the day, this gives you the assurance of zero data loss - it doesn't matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously - think about this - it minimizes your data risks. That's a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware / software bugs, physical data corruptions, power failures, lightning that takes out a significant part of your data center, fire that melts your assets, water leakage from the cooling system, or human errors such as accidental deletion of online redo log files - it doesn't matter - you can have that "Om - peace" look on your face and then you can fail over to the standby system, without losing a single bit of data in your Oracle database. You will be a hero, as shown in this not-so-imaginary conversation:
    IT Manager: Well, what's the status?
    You: John is doing the trace analysis on the storage array.
    IT Manager: So? How long is that gonna take?
    You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>.
    IT Manager: So, no root cause yet?
    You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take.
    IT Manager: Darn it - the site is down!
    You: Not really …
    IT Manager: What do you mean?
    You: John is stuck, but Sreeni has already done a failover to the Data Guard standby.
    IT Manager: Whoa, whoa - wait! Failover means we lost some data. Why did you do this without letting the Business group know?
    You: We didn't lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on the production side - we just fail over. No data loss, and we are up and running in minutes. The Business guys don't need to know.
    IT Manager: Wow! Are we great or what!!
    You: I guess …
    Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, "TANSTAAFL", or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA. How Far Can We Go? Someone may say at this point - well, I can't use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go?
Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don’t like to hear - “It depends!” It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and Oracle Database 11.2 High Availability Best Practices Manual talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact less than 5% for a busy OLTP application. Even if you can’t deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located 1000-s of miles away, here’s another nifty way to boost your HA. Have a local standby, configured SYNC. How local is “local”? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can failover. Very fast. In seconds. Wait - did I say “seconds”? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho’ that this time you won’t have to wait for another year for this.
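
    For reference, switching an existing redo destination to SYNC is a two-statement change on the primary. The service and DB_UNIQUE_NAME below are placeholders, and you would verify the protection mode and failover behavior before relying on it:

        -- Ship redo synchronously and wait for the standby's acknowledgment.
        ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=boston SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston' SCOPE=BOTH;

        -- Raise the protection mode so a failover is guaranteed zero data loss.
        ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;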

  • git for personal (one-man) projects. Overkill?

    - by Anto
    I know, and use, two version control systems: Subversion and git. Subversion, as of now, gets used for personal projects where I am the only developer, and git gets used for open-source projects and projects where I believe others will also work on the project. This is mostly because of git's amazing forking and merging capabilities, where everyone may work on their own branch; very handy. Now, I use Subversion for personal projects, as I think git makes little sense there. It seems to be a little bit of overkill. It is OK for me if it is centralized (on my home server, usually) when I am the only developer; I take regular backups anyway. I don't need the ability to make my own branch; the main branch is my branch. Yes, SVN has simple support for branching, but much more powerful support for it makes no sense for me, I think. Merging can be a pain with it, or at least in my limited experience. Is there any good reason for me to use git on personal projects, or is it simply overkill?

  • PHP MVC Framework Structure

    - by bigstylee
    I am sorry about the amount of code here. I have tried to show enough for understanding while avoiding confusion (I hope). I have included a second copy of the code at Pastebin. (The code does execute without error/notice/warning.) I am currently creating a Content Management System while trying to implement the idea of Model View Controller. I have only recently come across the concept of MVC (within the last week) and am trying to implement it in my current project. One of the features of the CMS is dynamic/customisable menu areas, and each feature will be represented by a controller. Therefore there will be multiple versions of the Controller class, each with specific extended functionality. I have looked at a number of tutorials and read some open-source solutions to the MVC framework. I am now trying to create a lightweight solution for my specific requirements. I am not interested in backwards compatibility; I am using PHP 5.3. An advantage of the Base class is not having to use global and being able to directly access any loaded class using $this->Obj['ClassName']->property/function();. Hoping to get some feedback on the basic structure outlined (with performance in mind). Specifically: a) Have I understood/implemented the concept of MVC correctly? b) Have I understood/implemented object-oriented techniques with PHP 5 correctly? c) Should the class properties of Base be static? d) Improvements? Thank you very much in advance! <?php /* A "Super Class" that creates/stores all object instances */ class Base { public static $Obj = array(); // Not sure this is the correct use of the "static" keyword? public static $var; static public function load_class($directory, $class) { echo count(self::$Obj)."\n"; // This does show the array is getting updated and not creating a new array :) if (!isset(self::$Obj[$class]) && !is_object(self::$Obj[$class])) //dont want to load it twice { /* Locate and include the class file based upon name ($class) */ return self::$Obj[$class] = new $class(); } return TRUE; } } /* Loads general configuration objects into the "Super Class" */ class Libraries extends Base { public function __construct(){ $this->load_class('library', 'Database'); $this->load_class('library', 'Session'); self::$var = 'Hello World!'; //testing visibility /* Other general function classes */ } } class Database extends Base { /* Connects to the database and executes all queries */ public function query(){} } class Session extends Base { /* Implements Sessions in database (read/write) */ } /* General functionality of controllers */ abstract class Controller extends Base { protected function load_model($class, $method) { /* Locate and include the model file */ $this->load_class('model', $class); call_user_func(array(self::$Obj[$class], $method)); } protected function load_view($name) { /* Locate and include the view file */ #include('views/'.$name.'.php'); } } abstract class View extends Base { /* ... */ } abstract class Model extends Base { /* ... */ } class News extends Controller { public function index() { /* Displays the 5 most recent News articles and displays with Content Area */ $this->load_model('NewsModel', 'index'); $this->load_view('news', 'index'); echo $this->var; } public function menu() { /* Displays the News Title of the 5 most recent News articles and displays within the Menu Area */ $this->load_model('news/index'); $this->load_view('news/index'); } } class ChatBox extends Controller { /* ...
    */ } /* Lots of different features extending the controller/view/model class depending upon request and layout */ class NewsModel extends Model { public function index() { echo $this->var; self::$Obj['Database']->query(/*SELECT 5 most recent news articles*/); } public function menu() { /* ... */ } } $Libraries = new Libraries; $controller = 'News'; // Would be determined from Query String $method = 'index'; // Would be determined from Query String $Content = $Libraries->load_class('controller', $controller); //create the controller for the specific page if (in_array($method, get_class_methods($Content))) { call_user_func(array($Content, $method)); } else { die('Bad Request'. $method); } $Content::$var = 'Goodbye World'; echo $Libraries::$var . ' - ' . $Content::$var; ?> /* Output */ 0 1 2 3 Goodbye World! - Goodbye World
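
    On question (c) in particular: making the shared objects static on Base means every class in the system inherits one global bag of state, which is what makes the design hard to test and reason about. A hedged alternative sketch (illustrative names, not a drop-in for the code above): hold the services in a registry object and hand it to constructors explicitly.

        <?php
        /* Illustrative sketch: an explicit registry instead of static Base state */
        class Registry
        {
            private $services = array();

            public function set($name, $obj) { $this->services[$name] = $obj; }

            public function get($name)
            {
                if (!isset($this->services[$name])) {
                    throw new Exception("Service not registered: $name");
                }
                return $this->services[$name];
            }
        }

        abstract class BaseController
        {
            protected $registry;

            public function __construct(Registry $registry)
            {
                $this->registry = $registry; // explicit dependency, easy to swap in tests
            }
        }

        class NewsController extends BaseController
        {
            public function index()
            {
                $db = $this->registry->get('Database'); // no global/static lookup
                /* ... load model, run query, load view ... */
            }
        }

        $registry = new Registry();
        $registry->set('Database', new Database()); // Database class from the question
        $news = new NewsController($registry);
        $news->index();
        ?>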

  • D2K to OA Framework Transition

    - by PRajkumar
    What is the difference between a D2K form and OA Framework? It is a very innocent but important question for someone who wants to make the transition from D2K to OA Framework. I hope you have already read and implemented OA Framework Getting Started. I will revisit my own experience of implementing the HelloWorld program in OA Framework. When I implemented HelloWorld a year ago, I had no clue as to what I was doing and why I was doing those steps. I merely copied the steps from Oracle's tutorial without understanding them. Hence in this blog I will try to explain, in a simple manner, the meaning of the OA Framework HelloWorld program and compare the steps to D2K forms [where possible]. To keep things simple, only basics will be discussed. The following key steps were needed for HelloWorld:
    Step 1. Create a new Workspace and a new Project as dictated by Oracle's tutorial. When defining the project, you will specify a default package, which in this case was oracle.apps.ak.hello. This means the following: ak is the short name of the application in Oracle [means fnd_applications.short_name], and hello is the name of your project.
    Step 2. Next, you will create an OA Page within the hello project. Think of the OA Page as the fmx file itself in D2K. I am saying so because this page gets attached to the form function. This page will be created within the hello project, hence the package name oracle.apps.ak.hello.webui. Note the webui; it is a convention to have pages in webui, meaning this page represents the web user interface. You will assign the default AM [OAApplicationModule]. Think of the AM as the "Connection Manager" and "Transaction State Manager" for your page. I can't correlate this to anything in D2K, as there is no concept of connection pooling there and D2K is not stateless: as soon as you kick off a D2K form, it connects to a single session of Oracle and sticks to that single Oracle database session. That is not the case in OAF, hence the AM is needed.
    Step 3. You create a Region within the Page. The Region is what will store your fields. Text input fields will be of type messageTextInput. Think of the Canvas in D2K. You can have nested regions; a Stacked Canvas in D2K comes the closest to this component of OA Framework.
    Step 4. Add a button to one of the nested regions. The itemStyle should be submitButton, in case you want the page to be submitted when this button is clicked. There is no WHEN-BUTTON-PRESSED trigger in OAF. In Framework, you will add controller Java code to handle events like form submit button clicks. JDeveloper generates the default code for you. Primarily two functions [should I call them methods?] will be created: processRequest [for UI rendering handling] and processFormRequest. Think of processRequest as WHEN-NEW-FORM-INSTANCE, though processRequest is very restrictive.
    Note: what is the difference between processRequest and processFormRequest? These two methods are available in the default controller class that gets created. processFormRequest is commonly used to react/respond to the event that has taken place, for example the click of a button. Some examples are:
    if(oapagecontext.getParameter("Cancel") != null) (do your processing for cancellation/rollback)
    if(oapagecontext.getParameter("Submit") != null) (do your validations and commit here)
    if(oapagecontext.getParameter("Update") != null) (do your validations and commit here)
    In the above three examples, you could be calling oapagecontext.forwardImmediately to redirect the page navigation to some other page if needed.
    In processRequest, usually page-rendering-related code is written. Effectively, each GUI component is a bean that gets initialised during processRequest. For those familiar with D2K forms, something like PRE-QUERY may be written in this method.
    Step 5. In the controller, to access the value in the field "HelloName" the command is String userContent = pageContext.getParameter("HelloName");. In D2K, we used :block.field. In OA Framework, at submission of the page, all the field values get passed into the OAPageContext object; use getParameter to access a field value. To set the value of a field, use OAMessageTextInputBean fieldHelloName = (OAMessageTextInputBean)webBean.findChildRecursive("HelloName"); fieldHelloName.setText(pageContext, "Setting the default value");
    Note when setting a field value in the controller:
    Note 1. Do not set the value in processFormRequest.
    Note 2. If the field comes from a View Object, then do not use setText in the controller.
    Note 3. For control fields [that are not based on View Objects], you can use setText to assign values in the processRequest method.
    Let's take some notes to expand beyond the HelloWorld project.
    Note 1. In D2K forms we sort of created a Window, attached it to a Canvas, and then put fields within that Canvas. In OA Framework, think of the Page being the fmx/Window, the Region being a Canvas, and fields being within Regions. This is not a formal/accurate analogy between D2K and Framework, but it is close to being logical.
    Note 2. In D2K, your Forms fmb file was compiled to fmx, and it was the fmx file that was deployed on the mid-tier. In the case of OAF, your OA Page is nothing but an XML file. We call this MDS [metadata]. Whatever name you give to a Page in OAF, an XML file of the same name gets created. This XML file must then be loaded into the database by using the XML Importer command.
    Note 3. Apart from the MDS XML file, almost everything else is merely deployed to your mid-tier. Usually this is underneath $JAVA_TOP/oracle/apps/../.. All Java files will go underneath java top/oracle/apps/../.. etc.
    Note 4. When building the tutorial, ignore the steps for setting "Attribute Sets". These are not mandatory; Oracle might just have developed their tutorials without including them. Think of these like Visual Attributes of D2K forms.
    Note 5. The Controller is where you will write any Java code in OA Framework. You can create a Controller per Page or have a different Controller for each of the Regions within the same Page.
    Note 6. In the method processFormRequest of the Controller, you can access the values of the page by using the notation pageContext.getParameter("<fieldname here>"). This method processFormRequest is executed when the OAF screen/page is submitted by the click of a button.
    Note 7. Inside the controller, all the database-related interactions, for example interaction with View Objects, happen via the Application Module. But why so? Because the Application Module manages the transaction state of the application. OAApplicationModuleImpl oaapplicationmoduleimpl = (OAApplicationModuleImpl)oapagecontext.getApplicationModule(oawebbean); OADBTransaction oadbtransaction = (OADBTransaction)oaapplicationmoduleimpl.getDBTransaction();
    Note 8. In D2K, we have a control block or a block based on a database view. Similarly, in OA Framework, if the field does not have a View Object attached, then it is like a control field. Hence in the HelloWorld example, the field HelloName is a control field [in D2K terminology]. A View Object can be based on a view/table, a synonym, or a SQL statement.
Note 9 I wish to access the fields in multi record block that is based on view Object. Can I do this in Controller? Sure you can. To traverse through those records, do the below ·         Get the reference to the View Object using (OAViewObject)oapagecontext.getApplicationModule(oawebbean).findViewObject("VO Name Here") ·         Loop through the records in View Objects using count returned from oaviewobject.getFetchedRowCount() ·         For each record, fetch the value of the fields within the loop as oracle.jbo.Row row = oaviewobject.getRowAtRangeIndex(loop index here); (String)row.getAttribute("Column name of VO here ");
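    To tie Steps 4 and 5 together, here is a minimal controller sketch that puts the snippets above in one place. This is not the exact tutorial code: the class name HelloWorldMainCO and the button name "Go" are assumptions made for illustration, while the OAF classes and calls are the ones discussed above.

        // A minimal sketch, assuming a page with a "HelloName" control field
        // and a submitButton named "Go"; both names are illustrative.
        package oracle.apps.ak.hello.webui;

        import oracle.apps.fnd.framework.webui.OAControllerImpl;
        import oracle.apps.fnd.framework.webui.OAPageContext;
        import oracle.apps.fnd.framework.webui.beans.OAWebBean;
        import oracle.apps.fnd.framework.webui.beans.message.OAMessageTextInputBean;

        public class HelloWorldMainCO extends OAControllerImpl
        {
            // UI rendering: runs when the page is built (think WHEN-NEW-FORM-INSTANCE)
            public void processRequest(OAPageContext pageContext, OAWebBean webBean)
            {
                super.processRequest(pageContext, webBean);
                // HelloName is a control field (no View Object), so setText is allowed here
                OAMessageTextInputBean fieldHelloName =
                    (OAMessageTextInputBean) webBean.findChildRecursive("HelloName");
                if (fieldHelloName != null)
                    fieldHelloName.setText(pageContext, "Setting the default value");
            }

            // Event handling: runs when the page is submitted (no WHEN-BUTTON-PRESSED in OAF)
            public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
            {
                super.processFormRequest(pageContext, webBean);
                if (pageContext.getParameter("Go") != null)
                {
                    // On submit, all field values arrive in OAPageContext
                    String userContent = pageContext.getParameter("HelloName");
                    // react to the click here, e.g. validate, commit, or forward
                }
            }
        }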

    Read the article

  • Make an object slide around an obstacle

    - by Isaiah
    I have path areas set up in a game I'm making for canvas/html5 and have got it working to keep the player within these areas. I have a function isOut(boundary, x, y) that returns true if the point is outside the boundary. What I do is check the new x and y positions separately, each against the corresponding old position; if either one is out, I assign it the old value from the frame before. The old positions are kept in a variable from a closure I made, like this:
        opos = [x, y]; // old position
        npos = [x, y]; // new position
        if (isOut(bound, npos[0], opos[1])) {
            npos[0] = opos[0]; // keep the old x position
        }
        if (isOut(bound, opos[0], npos[1])) {
            npos[1] = opos[1]; // keep the old y position
        }
    It looks nice and works well at certain angles, but if your boundary has diagonal regions it results in jittery motion. What's happening is that the y position exits the area while x doesn't and keeps pushing the player to the side; once it has moved the player over a bit, the player can move forward, then y exits again, and the whole process repeats. Does anyone know how I might achieve a smoother slide? I have access to the player's velocity vector, the angle, and the speed (when used with the angle). I can move the player with either angle/speed or x/y velocities, as I've built in backups to translate one to the other if either has been altered manually.
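    One smoother alternative that needs only the isOut() test: instead of cancelling x and y separately, keep the desired speed and sweep the movement angle a few degrees at a time away from the desired heading until a legal position is found. The player then follows a diagonal edge instead of jittering against it. The idea is language-agnostic; below is a minimal sketch in Java, where the isOut predicate and the 5-degree step size are assumptions standing in for the question's own function and tuning.

        import java.util.function.BiPredicate;

        public final class SlideAroundWall {
            // Returns the new {x, y}. Tries the desired heading first, then
            // sweeps up to +/-90 degrees in 5-degree steps until a position
            // inside the boundary is found, so motion hugs diagonal walls.
            public static double[] slide(double x, double y, double angle, double speed,
                                         BiPredicate<Double, Double> isOut) {
                for (int step = 0; step <= 18; step++) {
                    for (int sign : new int[] { 1, -1 }) {
                        double a = angle + sign * Math.toRadians(5 * step);
                        double nx = x + Math.cos(a) * speed;
                        double ny = y + Math.sin(a) * speed;
                        if (!isOut.test(nx, ny)) {
                            return new double[] { nx, ny }; // first legal heading wins
                        }
                    }
                }
                return new double[] { x, y }; // fully blocked: stay put
            }
        }

    Because the sweep starts at the original heading, free movement is unaffected; a smaller angular step gives smoother sliding at slightly more CPU cost per frame.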

    Read the article

  • Which is more important in a web application code promotion hierarchy? production environment to repo equivalence or unidirectional propagation?

    - by ghbarratt
    Let's say you have a code promotion hierarchy consisting of several environments, the polar ends of which are development (dev) and production (prod). Let's say you also have a web application where important (but not developer-controlled) files are created (and perhaps altered) in the production environment. Let's say that you (or someone above you) decided that the files which are controlled/created/altered/deleted in the production environment need to go into the repository. Which of the following two practices/approaches do you find best?
    1. Committing these non-developed file modifications made in the production environment, so that the repository reflects the production environment as closely and as often as possible.
    2. Generally ignoring the non-developed production environment alterations, placing confidence in backups to restore the production environment should it be harmed, and keeping a resolution to avoid pushing developments through the promotion hierarchy in the reverse direction (avoiding pushing from prod to dev), only committing the files found in the production environment if they are absolutely necessary in other environments for development.
    So, 1 or 2, and why? PS - I am currently slightly biased toward maintaining production-environment-to-repository equivalence (option 1), but I keep an open mind and would accept an answer supporting either.

    Read the article

  • LUKS no longer accepts my passphrase

    - by Two Spirit
    I created a 4-drive RAID5 setup using mdadm, upgrading from 2TB drives to the new Hitachi 7200RPM 4TB drives. I can initially open my LUKS partition, but later can no longer access it, even though I have the right passphrases. It was working, and then at some unknown point in time I lose access to LUKS. I've used the same procedure for the upgrades from 500GB to 1TB to 1.5TB to 2TB drives. After the first time this happened a week ago, I thought maybe there was some corruption, so I added a second key as a backup. After the second time the LUKS partition became inaccessible, none of the keys worked. I put LUKS on it using:
        cryptsetup -c aes -s 256 -y luksFormat /dev/md0
    and now I get:
        # cryptsetup luksOpen /dev/md0 md0_crypt
        Enter LUKS passphrase:
        Enter LUKS passphrase:
        Enter LUKS passphrase:
        Command failed: No key available with this passphrase.
    The first time this happened, while I was upgrading to the 4TB drives, I thought it was a fluke, and ultimately had to recover from backups. I then used luksAddKey to add a second key as a backup. It happened again, and I tried both passphrases; neither worked. The only thing I'm doing differently this time around is that I've upgraded to 4TB drives, which use GPT instead of fdisk. The last time I even had to reboot the box was over 2 years ago. I'm using ubuntu-8.04-server with kernel 2.6.24-29, and I upgraded to 2.6.24-31, but that didn't fix the problem.

    Read the article

  • Proving/identifying a stolen computer by getting identification info from Launchpad bugs and comparing

    - by Kangarooo
    I sold my old laptop to neighbours, and it was stolen from them. I think I have found the thief, so I want to check his computer's identification and compare it to the IDs in my old Launchpad bugs. How, in Launchpad, can I find from my bugs:
    - the motherboard
    - the HDD
    - something else that can help identify the machine
    Maybe also how to recover or find some overwritten files (because there is Windows on it now). I found that one of my Launchpad bugs has an autogenerated lspci attachment, from bug 682846: https://launchpadlibrarian.net/70611231/Lspci.txt, but I don't see any ID there that can be used to identify my computer specifically; it could identify many machines of the same model. Or did I miss something in there? And what commands should I use to get all the identification info on that computer quickly, in one go? Just lspci? How do I get the same lspci output as in that Launchpad link? Testing lspci on my computer now, I don't get as much info. I'm also doing a search in my external HDD, where I have many backups; maybe I have lspci output there. What keywords would help in searching for the small lspci and full reports I've done? I might have done sudo lshw somefilename.

    Read the article

  • What You Said: How You Sync and Organize Your Bookmarks

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your favorite techniques for synchronizing and organizing your browser bookmarks. Now we're back to highlight the most popular techniques, tricks, and services. Far and away, Xmarks was the most frequently mentioned service. For the unfamiliar, Xmarks is a bookmark-syncing service that is packed with features. Not only does Xmarks sync bookmarks between browsers and/or computers, it also supports iOS, Android, and BlackBerry (mobile integration requires an upgrade to the premium account). In addition to syncing bookmarks, it also integrates with your search results, so you can see how other Xmarks users have ranked sites within your search results. Steve-O-Rama highlights one of the many benefits of Xmarks:
        "Xmarks seems to do the job for me. I've got a handful of machines, each with three or four browsers; over the years, I've accumulated thousands of bookmarks, stretching across many areas of interest. Trying to keep them all straight had been quite a struggle until Xmarks came along. I freaked out when the company was acquired by LastPass, but was subsequently relieved when they continued the free service. Xmarks has a very nice web interface to access, export, search, organize, and do many other things with your bookmarks. In this way, even if I'm on the go, I can access every bookmark I've made. Even so, I still make occasional local backups, directly from the browsers to a network folder."
    Delicious bookmarks, another veteran of the bookmark-syncing services, had a fair number of supporters among the HTG readership.

    Read the article

  • How to get sharing options to stick - Ubuntu 12.04

    - by Devin
    I'm having a really hard time trying to get my sharing set up on my 12.04 system. I've tried both the desktop version and the server. I'm a bit of a Linux n00b, so I need a GUI; the command line is beyond me, and I have no time to learn it (not until after I get the shares set up, at least). My problem is that whenever I try to set permissions in Nautilus, the setting just reverts back to the default, which is "none". Basically, when I choose an option, it doesn't stick. I can create shares, and it asks me if I want to add permissions automatically, but those don't stick either. When I go to look at the shared folders in Windows (or even on my Android phone, or Mac), it gives me permissions errors and doesn't let me log in, despite me clicking "allow guest access". I have no idea what to do or where to go. I've tried searching forums and Google, and I've tried everything I've come across, to no avail. I've even tried Mint builds to see if it's any different; no change there either. Please help! I really want to set up a server to share my media files and do backups in my house. Thanks for your help!

    Read the article

  • Recommended storage scheme for home server? (LVM/JBOD/RAID 5...)

    - by j-g-faustus
    Are there any guidelines for which storage scheme(s) make most sense for a multiple-disk home server? I am assuming a separate boot/OS disk (so bootability is not a concern; this is for data storage only) and 4-6 storage disks of 1-2 TB each, for a total storage capacity in the range of 4-12 TB. The file system is ext4; I expect there will be only one big partition spanning all disks. As far as I can tell, the alternatives are:
    - Individual disks. Pros: works with any combination of disk sizes; losing a disk loses only the data on that disk; no need for volume management. Cons: data management is clumsy when logical units (like a "movies" folder) are larger than the capacity of any single drive.
    - JBOD span. Pros: can merge disks of any size. Cons: losing a disk loses all data on all disks.
    - LVM. Pros: can merge disks of any size; relatively simple to add and remove disks. Cons: losing a disk loses all data on all disks.
    - RAID 0. Pros: speed. Cons: losing one drive loses all data; disks must be the same size.
    - RAID 5. Pros: data survives losing one disk. Cons: gives up one disk's worth of capacity; disks must be the same size.
    - RAID 6. Pros: data survives losing two disks. Cons: gives up two disks' worth of capacity; disks must be the same size.
    I'm primarily considering either LVM or a JBOD span, simply because it will let me reuse older, smaller-capacity disks when I upgrade the system. The runner-up is RAID 0 for speed. I'm planning on having full backups to a separate system, so I expect the extra redundancy of RAID levels 5 or 6 won't be important. Is this a fair representation of the alternatives? Are there other considerations or alternatives I have missed? And what would you recommend?

    Read the article

  • How to make and restore incremental snapshots of a hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro/application testing purposes. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any problems and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've found will create a new image of the complete file system each time. Are there any programs/file systems capable of taking a snapshot of the current file system and saving it to another location, but, instead of making a complete new image every time, creating incremental backups? To describe what I want simply: it should be like dd images of a file system, but instead of only full backups it would also create incremental ones. I am not looking for clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file system. I am also not looking for a "duplicity backup of your whole system excluding some folders" script plus dd to save the MBR; I can do that myself. I'm looking for extra finesse: something I can run before making massive changes to a system, so that if something goes wrong, or I burn my hard disk after spilling coffee on it, I can just boot from a live CD and restore a working snapshot to the hard disk. It does not need to be daily; it doesn't even need a schedule. Just run it once in a while and let it do its job, and preferably raw-based, not file-copy-based.

    Read the article

  • GRUB2 not working after installing xubuntu 14.04

    - by h3bm
    I have a Vaio laptop that used to have Windows 8 and Xubuntu 13.04 installed in dual boot, and everything was working fine. I decided to upgrade to Xubuntu 14.04 LTS, mainly because support for 13.04 has ended and LTS versions have 3 years of support. What I did was format the partition where Xubuntu 13.04 was installed and install 14.04 in that formatted partition. When I restarted my computer, expecting to start using my new system, I got the following message:
        error: symbol 'grub_term_highlight_color' not found
    and I was not able to enter any OS. I tried boot-repair from a live USB more than twice, and it did not fix the problem. I tried to boot my computer using Super GRUB2 Disk; however, it does not appear to work with UEFI active (even though Super GRUB2 Disk says it can); I only get the message "no operating system found". If I boot Super GRUB2 Disk with UEFI disabled, it cannot detect any OS. I also tried the Rescatux distro; however, as with Super GRUB2 Disk, Rescatux cannot boot when UEFI is active. I tried boot-repair with the option "restore EFI backups"; after that I was able to boot into Windows, but no GRUB menu appeared. I ran boot-repair again, with no improvement. Here is the last boot-info report I got: http://paste.ubuntu.com/7609801/ Do you have any idea what is happening? I really appreciate your help. Best regards,

    Read the article

  • Restore Failure from Ubuntu One

    - by Qawi Robinson
    I had to do a reinstall of Ubuntu 12 after 13.10 failed. I lost all my data, but I remembered that I had data backed up to Ubuntu One. It recognized my previous backups, but I got errors and a restore failure when I went to restore the data. This is what I got; can anyone make heads or tails of it? I still don't have my data. Thanks.
        Traceback (most recent call last):
          File "/usr/bin/duplicity", line 1412, in <module>
            with_tempdir(main)
          File "/usr/bin/duplicity", line 1405, in with_tempdir
            fn()
          File "/usr/bin/duplicity", line 1339, in main
            restore(col_stats)
          File "/usr/bin/duplicity", line 630, in restore
            restore_get_patched_rop_iter(col_stats)):
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 522, in Write_ROPaths
            for ropath in rop_iter:
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 495, in integrate_patch_iters
            final_ropath = patch_seq2ropath( normalize_ps( patch_seq ) )
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 475, in patch_seq2ropath
            misc.copyfileobj( current_file, tempfp )
          File "/usr/lib/python2.7/dist-packages/duplicity/misc.py", line 166, in copyfileobj
            buf = infp.read(blocksize)
          File "/usr/lib/python2.7/dist-packages/duplicity/librsync.py", line 80, in read
            self._add_to_outbuf_once()
          File "/usr/lib/python2.7/dist-packages/duplicity/librsync.py", line 94, in _add_to_outbuf_once
            raise librsyncError(str(e))
        librsyncError: librsync error 103 while in patch cycle

    Read the article
