Search Results


  • Refactoring a Single Rails Model with large methods & long join queries trying to do everything

    - by Kelseydh
    I have a working Ruby on Rails model that I suspect is inefficient, hard to maintain, and full of unnecessary SQL join queries. I want to optimize and refactor this model (Quiz.rb) to comply with Rails best practices, but I'm not sure how I should do it.

    The Rails app is a game that has Missions with many Stages. Users complete Stages by answering Questions that have correct or incorrect Answers. When a User tries to complete a stage by answering questions, the User gets a Quiz entry with many Attempts. Each Attempt records an Answer submitted for that Question within the Stage. A user completes a stage or mission by getting every Attempt correct, and their progress is tracked by adding a new entry to the UserMission & UserStage join tables.

    All of these features work, but unfortunately the Quiz.rb model has been twisted to handle almost all of it exclusively. The callbacks began at Quiz.rb, and because I wasn't sure how to leave the Quiz model during a multi-model update, I resorted to using the Rails console to work out self.some_method calls that let the @quiz instance variable do all the heavy lifting to retrieve every data value for the game's business logic, resulting in large extended join queries that "dance" all around the database schema.

    The Quiz.rb model that smells:

        class Quiz < ActiveRecord::Base
          belongs_to :user
          has_many :attempts, dependent: :destroy

          before_save :check_answer
          before_save :update_user_mission_and_stage

          accepts_nested_attributes_for :attempts,
            :reject_if => lambda { |a| a[:answer_id].blank? }, :allow_destroy => true

          # Checks every answer within each quiz, adding +1 for each correct answer
          # within a stage quiz, and -1 for each incorrect answer
          def check_answer
            stage_score = 0
            self.attempts.each do |attempt|
              if attempt.answer.correct? == true
                stage_score += 1
              elsif attempt.answer.correct == false
                stage_score - 1
              end
            end
            stage_score
          end

          def winner
            return true
          end

          def update_user_mission_and_stage
            # Step 1: Checks if UserMission exists, finds or creates one.
            # If no UserMission for the current mission exists, creates a new UserMission.
            if self.user_has_mission? == false
              @user_mission = UserMission.new(user_id: self.user.id,
                                              mission_id: self.current_stage.mission_id,
                                              available: true)
              @user_mission.save
            else
              @user_mission = self.find_user_mission
            end

            # Step 2: Checks if the current UserStage exists, stops if true to prevent a duplicate entry.
            if self.user_has_stage?
              @user_mission.save
              return true
            else
              # Step 3: if step 2 returns false, initiates UserStage creation instructions.
              # Checks for winner (winner actions need to be defined) if they complete
              # the last stage of the last mission for a given orientation.
              if self.passed? && self.is_last_stage? && self.is_last_mission?
                create_user_stage_and_update_user_mission
                self.winner
              # NOTE: The rest are the same, but specify conditions that are available
              # to add badges or other actions upon those conditions occurring:
              # if user completes the first stage of a mission
              elsif self.passed? && self.is_first_stage? && self.is_first_mission?
                create_user_stage_and_update_user_mission
                # creates user badge for finishing first stage of first mission
                self.user.add_badge(5)
                self.user.activity_logs.create(description: "granted first-stage badge",
                                               type_event: "badge", value: "first-stage")
              # If user completes the last stage of a given mission, creates a new UserMission
              elsif self.passed? && self.is_last_stage? && self.is_first_mission?
                create_user_stage_and_update_user_mission
                # creates user badge for finishing first mission
                self.user.add_badge(6)
                self.user.activity_logs.create(description: "granted first-mission badge",
                                               type_event: "badge", value: "first-mission")
              elsif self.passed?
                create_user_stage_and_update_user_mission
              else
                self.passed? == false
                return true
              end
            end
          end

          # Creates a new UserStage record in the database for a successfully passed quiz
          def create_user_stage_and_update_user_mission
            @nu_stage = @user_mission.user_stages.new(user_id: self.user.id,
                                                      stage_id: self.current_stage.id)
            @nu_stage.save
            @user_mission.save
            self.user.add_points(50)
          end

          # Boolean that defines passing a stage as answering every question in that stage correctly
          def passed?
            self.check_answer >= self.number_of_questions
          end

          # Returns the number of questions asked for that stage's quiz
          def number_of_questions
            self.attempts.first.answer.question.stage.questions.count
          end

          # Returns the current_stage for the Quiz, routing through the 1st attempt in that Quiz
          def current_stage
            self.attempts.first.answer.question.stage
          end

          # Gives back the position of the stage relative to its mission
          def stage_position
            self.attempts.first.answer.question.stage.position
          end

          # Finds the user_mission for the current user and stage if it exists
          def find_user_mission
            self.user.user_missions.find_by_mission_id(self.current_stage.mission_id)
          end

          # Returns true if the quiz was for the last stage within that mission;
          # helpful for triggering actions related to a user completing a mission
          def is_last_stage?
            self.stage_position == self.current_stage.mission.stages.last.position
          end

          # Returns true if the quiz was for the first stage within that mission;
          # helpful for triggering actions related to a user completing a mission
          def is_first_stage?
            self.stage_position == self.current_stage.mission.stages_ordered.first.position
          end

          # Returns true if the current user has a UserMission for the current stage
          def user_has_mission?
            self.user.missions.ids.include?(self.current_stage.mission.id)
          end

          # Returns true if the current user has a UserStage for the current stage
          def user_has_stage?
            self.user.stages.include?(self.current_stage)
          end

          # Returns true if the current user is on the first mission, based on position
          # within a given orientation
          def is_first_mission?
            self.user.missions.first.orientation.missions.by_position.first.position ==
              self.current_stage.mission.position
          end

          # Returns true if the current user is on the last mission of a given orientation
          def is_last_mission?
            self.user.missions.first.orientation.missions.by_position.last.position ==
              self.current_stage.mission.position
          end
        end

    My question: currently my Rails server takes roughly 500ms to 1 sec to process a single @quiz.save action. I am confident that the slowness here is due to sloppy code, not bad database ERD design. What does a better solution look like? And specifically:

    - Should I use join queries to retrieve values like I did here, or is it better to instantiate new objects within the model instead? Or am I missing a better solution?
    - How should update_user_mission_and_stage be refactored to follow best practices?

    Relevant code for reference - quizzes_controller.rb, with the controller route initiating the callback:

        class QuizzesController < ApplicationController
          before_action :find_stage_and_mission
          before_action :find_orientation
          before_action :find_question

          def show
          end

          def create
            @user = current_user
            @quiz = current_user.quizzes.new(quiz_params)
            if @quiz.save
              if @quiz.passed?
                if @mission.next_mission.nil? && @stage.next_stage.nil?
                  redirect_to root_path, notice: "Congratulations, you have finished the last mission!"
                elsif @stage.next_stage.nil?
                  redirect_to [@mission.next_mission, @mission.first_stage],
                              notice: "Correct! Time for Mission #{@mission.next_mission.position}",
                              info: "Starting next mission"
                else
                  redirect_to [@mission, @stage.next_stage],
                              notice: "Answer Correct! You passed the stage!"
                end
              else
                redirect_to [@mission, @stage],
                            alert: "You didn't get every question right, please try again."
              end
            else
              redirect_to [@mission, @stage],
                          alert: "Sorry. We were unable to save your answer. Please contact the administrator."
            end
            @questions = @stage.questions.all
          end

          private

          def find_stage_and_mission
            @stage = Stage.find(params[:stage_id])
            @mission = @stage.mission
          end

          def find_question
            @question = @stage.questions.find_by_id params[:id]
          end

          def quiz_params
            params.require(:quiz).permit(:user_id, :attempt_id,
                                         {attempts_attributes: [:id, :quiz_id, :answer_id]})
          end

          def find_orientation
            @orientation = @mission.orientation
            @missions = @orientation.missions.by_position
          end
        end

    Overview of relevant ERD database relationships:

        Mission - Stage - Question - Answer - Attempt <- Quiz <- User
        Mission - UserMission <- User
        Stage - UserStage <- User

    Other models:

    Mission.rb:

        class Mission < ActiveRecord::Base
          belongs_to :orientation
          has_many :stages
          has_many :user_missions, dependent: :destroy
          has_many :users, through: :user_missions

          # SCOPES
          scope :by_position, -> { order(position: :asc) }

          def stages_ordered
            stages.order(:position)
          end

          def next_mission
            self.orientation.missions.find_by_position(self.position.next)
          end

          def first_stage
            next_mission.stages_ordered.first
          end
        end

    Stage.rb:

        class Stage < ActiveRecord::Base
          belongs_to :mission
          has_many :questions, dependent: :destroy
          has_many :user_stages, dependent: :destroy
          has_many :users, through: :user_stages

          accepts_nested_attributes_for :questions, reject_if: :all_blank, allow_destroy: true

          def next_stage
            self.mission.stages.find_by_position(self.position.next)
          end
        end

    Question.rb:

        class Question < ActiveRecord::Base
          belongs_to :stage
          has_many :answers, dependent: :destroy

          accepts_nested_attributes_for :answers,
            :reject_if => lambda { |a| a[:body].blank? }, :allow_destroy => true
        end

    Answer.rb:

        class Answer < ActiveRecord::Base
          belongs_to :question
          has_many :attempts, dependent: :destroy
        end

    Attempt.rb:

        class Attempt < ActiveRecord::Base
          belongs_to :answer
          belongs_to :quiz
        end

    User.rb:

        class User < ActiveRecord::Base
          belongs_to :school
          has_many :activity_logs
          has_many :user_missions, dependent: :destroy
          has_many :missions, through: :user_missions
          has_many :user_stages, dependent: :destroy
          has_many :stages, through: :user_stages
          has_many :orientations, through: :school
          has_many :quizzes, dependent: :destroy
          has_many :attempts, through: :quizzes

          def latest_stage_position
            self.user_missions.last.user_stages.last.stage.position
          end
        end

    UserMission.rb:

        class UserMission < ActiveRecord::Base
          belongs_to :user
          belongs_to :mission
          has_many :user_stages, dependent: :destroy
        end

    UserStage.rb:

        class UserStage < ActiveRecord::Base
          belongs_to :user
          belongs_to :stage
          belongs_to :user_mission
        end
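    A minimal sketch of one direction this refactor could take, assuming the models above; the StageCompletion class name and its shape are hypothetical, not from the question, and the badge/winner branches are elided:

        # Sketch only: a plain service object the controller can call after
        # @quiz.save, so Quiz no longer needs cross-model before_save callbacks.
        class StageCompletion
          def initialize(quiz)
            @quiz = quiz
            @user = quiz.user
          end

          def call
            return false unless passed?
            mission = stage.mission
            # find_or_create_by collapses each "check, then create" step into one call.
            user_mission = @user.user_missions.find_or_create_by(mission_id: mission.id)
            user_mission.user_stages.find_or_create_by(user_id: @user.id, stage_id: stage.id)
            @user.add_points(50)
            true
          end

          private

          # One grouped query instead of instantiating every attempt and answer.
          def correct_count
            @quiz.attempts.joins(:answer).where(answers: { correct: true }).count
          end

          def stage
            @stage ||= @quiz.attempts.first.answer.question.stage
          end

          def passed?
            correct_count >= stage.questions.count
          end
        end

    The controller's create action would then run StageCompletion.new(@quiz).call after a successful save; keeping this logic out of before_save also avoids the surprise that a callback returning false silently rolls back the save.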

    Read the article

  • PHP FastCGI/XML/DOM Configure

    - by James
    Guys, any ideas why when I configure PHP 5.3.1, these options fail? Notice: Following unknown configure options were used: --with-xml --with-dom --enable-fastcgi --enable-discard-path --enable-force-cgi-redirect
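    For context, those switches were removed rather than broken: in PHP 5.3 the XML and DOM extensions are compiled in by default, the CGI SAPI always includes FastCGI support, and the discard-path/force-cgi-redirect behaviours moved into php.ini. A hedged sketch of the 5.3 equivalent, assuming a stock source tree:

        # XML and DOM need no flags (there are --disable-xml / --disable-dom
        # switches to turn them off, but no --with-xml / --with-dom to turn them on).
        ./configure
        # php-cgi is built by default and always speaks FastCGI, so
        # --enable-fastcgi is gone; the old --enable-force-cgi-redirect and
        # --enable-discard-path are now runtime settings in php.ini:
        #   cgi.force_redirect = 1
        #   cgi.discard_path = 1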

    Read the article

  • using tcpdump to display XML API requests without headers or ack packets

    - by Carmageddon
    I need assistance: I am trying to use tcpdump to capture API requests and responses between two servers. So far I have the following command: tcpdump -iany -tpnAXs0 host xxx.xxx.xxx.xxx and port 6666. My problem is that the output is still hard to read, because it includes the headers and the ACK packets. I would like to remove those and only see the XML bodies. I tried to use grep -v, but apparently this is all one request, so it filters the entire thing... Thanks!
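    One approach, assuming installing tcpflow is an option: it reassembles each TCP stream and prints only the payload, so empty ACK segments and packet headers never reach the output (host and port copied from the question's placeholders):

        tcpflow -c -i any host xxx.xxx.xxx.xxx and port 6666

    Any HTTP headers that remain are part of the stream payload itself and, being line-oriented, can then be trimmed with an ordinary grep -v.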

    Read the article

  • Autounattend.xml not being recognized in VirtualBox

    - by beagle
    I am working my way through the steps on this page to prepare an unattended installation of Windows 7 Enterprise x64 for purposes of a college assignment which simply requires the process to be carried out and documented. Both the "technician" and "reference" computers are virtual machines created in VirtualBox 4.3.12, as will be the destination computer. I seem to have successfully completed Step 1, building an Autounattend.xml answer file using Windows System Image Manager, in as far as the answer file validates successfully. The problem arises when I try to install Windows on the reference machine from the DVD image in conjunction with the Autounattend file on a USB drive. I have tried a couple of different USB devices, and the devices themselves seem to be recognized, but the answer file does not, as instead of taking the configuration settings from the file the user interface appears as in a manual installation. Has anyone come across this problem or a solution? The xml created by Windows SIM is below for reference in case the problem is with the file itself.

        <?xml version="1.0" encoding="utf-8"?>
        <unattend xmlns="urn:schemas-microsoft-com:unattend">
          <settings pass="oobeSystem">
            <component name="Microsoft-Windows-Deployment" processorArchitecture="amd64"
                publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"
                xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
              <Reseal>
                <Mode>Audit</Mode>
              </Reseal>
            </component>
            <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
                publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"
                xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
              <OOBE>
                <HideEULAPage>true</HideEULAPage>
                <ProtectYourPC>3</ProtectYourPC>
              </OOBE>
            </component>
          </settings>
          <settings pass="windowsPE">
            <component name="Microsoft-Windows-International-Core-WinPE" processorArchitecture="amd64"
                publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"
                xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
              <SetupUILanguage>
                <UILanguage>en-IE</UILanguage>
              </SetupUILanguage>
              <InputLocale>en-IE</InputLocale>
              <SystemLocale>en-IE</SystemLocale>
              <UILanguage>en-IE</UILanguage>
              <UserLocale>en-IE</UserLocale>
            </component>
            <component name="Microsoft-Windows-Setup" processorArchitecture="amd64"
                publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"
                xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
              <DiskConfiguration>
                <Disk wcm:action="add">
                  <CreatePartitions>
                    <CreatePartition wcm:action="add">
                      <Order>1</Order>
                      <Size>300</Size>
                      <Type>Primary</Type>
                    </CreatePartition>
                    <CreatePartition wcm:action="add">
                      <Order>2</Order>
                      <Extend>true</Extend>
                      <Type>Primary</Type>
                    </CreatePartition>
                  </CreatePartitions>
                  <ModifyPartitions>
                    <ModifyPartition wcm:action="add">
                      <Active>true</Active>
                      <Format>NTFS</Format>
                      <Label>System</Label>
                      <Order>1</Order>
                      <PartitionID>1</PartitionID>
                    </ModifyPartition>
                    <ModifyPartition wcm:action="add">
                      <Format>NTFS</Format>
                      <Label>Windows</Label>
                      <Order>2</Order>
                      <PartitionID>2</PartitionID>
                    </ModifyPartition>
                  </ModifyPartitions>
                  <DiskID>0</DiskID>
                  <WillWipeDisk>true</WillWipeDisk>
                </Disk>
                <WillShowUI>OnError</WillShowUI>
              </DiskConfiguration>
              <ImageInstall>
                <OSImage>
                  <InstallTo>
                    <DiskID>0</DiskID>
                    <PartitionID>2</PartitionID>
                  </InstallTo>
                  <InstallToAvailablePartition>false</InstallToAvailablePartition>
                  <WillShowUI>OnError</WillShowUI>
                </OSImage>
              </ImageInstall>
              <UserData>
                <ProductKey>
                  <WillShowUI>OnError</WillShowUI>
                </ProductKey>
                <AcceptEula>true</AcceptEula>
              </UserData>
            </component>
          </settings>
          <settings pass="specialize">
            <component name="Microsoft-Windows-IE-InternetExplorer" processorArchitecture="amd64"
                publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"
                xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
              <Home_Page>http://www.example.com</Home_Page>
            </component>
          </settings>
          <cpi:offlineImage cpi:source="wim://technician/users/user/desktop/install.wim#Windows 7 ENTERPRISE" xmlns:cpi="urn:schemas-microsoft-com:cpi" />

    Read the article

  • Get the RSS/XML feed for an iTunes U Podcast to use in another podcatcher

    - by matt
    There are numerous ways to get the podcast feed for standard iTunes podcasts like this or this; however, neither of those methods works on the podcast feeds in iTunes U. I don't want to use iTunes, so how can I find the alternative XML podcast feed? Here's one for example: http://deimos3.apple.com/WebObjects/Core.woa/Feed/fora.tv.1901773207.01901773213 How can I subscribe to this feed outside of iTunes? I have tried emailing the publisher (Fora.tv) numerous times, but they never respond.

    Read the article

  • systemstate dump

    - by JaneZhang
    When a database hangs, Oracle Support will usually ask for a systemstate dump to see what every process in the instance is doing and who is blocking whom. A typical example is diagnosing the error "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK". Take the dump several times, one to two minutes apart, so the progression is visible, and note that the trace files can run to many megabytes.

    1. Connect as sysdba and take the systemstate dump:

        $ sqlplus / as sysdba
        -- or, if the instance is too hung to accept a normal login:
        $ sqlplus -prelim / as sysdba

        SQL> oradebug setmypid
        SQL> oradebug unlimit
        SQL> oradebug dump systemstate 266
        -- wait 1-2 minutes
        SQL> oradebug dump systemstate 266
        -- wait 1-2 minutes
        SQL> oradebug dump systemstate 266
        SQL> oradebug tracefile_name   -- shows where the trace was written

    2. Besides the systemstate dump, a hang analysis is normally collected as well:

        SQL> oradebug setmypid
        SQL> oradebug unlimit
        SQL> oradebug dump hanganalyze 3
        -- wait 1-2 minutes
        SQL> oradebug dump hanganalyze 3
        -- wait 1-2 minutes
        SQL> oradebug dump hanganalyze 3
        SQL> oradebug tracefile_name

    On RAC, take the dump across all instances at once (again two or three times, one to two minutes apart):

        SQL> oradebug setmypid
        SQL> oradebug unlimit
        SQL> oradebug -g all dump systemstate 266   -- -g all covers every instance
        SQL> oradebug -g all hanganalyze 3

    The traces are written under background_dump_dest (the diag trace directory on 11g). The common systemstate levels are:

        10:  process state dump
        11:  dump plus the global cache state (RAC)
        256: short stack for each process
        258: 256 + 10, but with less lock element data
        266: 256 + 10 (short stacks plus full dump)
        267: 256 + 11 (short stacks, full dump, plus RAC global cache)

    Levels 11 and 267 dump the global cache and generate very large trace files; they are normally not needed unless Support explicitly requests them, so level 266 is the usual choice because it adds the short stacks. Collecting short stacks is slow when there are very many processes (with around 2,000 processes it can take half an hour), in which case consider level 10 or level 258; level 258 still includes short stacks but records less lock element data than level 10.

    For a sense of the sizes, traces from a test instance with 37 processes:

        rac10g2_ora_31092.trc      72,721 bytes  -> level 256 (short stacks only, roughly 2 KB per process)
        rac10g2_ora_31654.trc   2,724,863 bytes  -> level 10  (roughly 72 KB per process)
        rac10g2_ora_32214.trc   2,731,935 bytes  -> level 266 (dump plus short stacks)

    and on RAC:

        rac10g2_ora_30658.trc  55,873,057 bytes  -> level 11  (dump plus global cache, roughly 1.4 MB per process)
        rac10g2_ora_28615.trc  55,879,249 bytes  -> level 267 (dump, global cache, plus short stacks)

    Read the article

  • Mybatis nested collection doesn't work correctly with column prefix

    - by Shikarn-O
    I need to set a collection for an object inside another collection using MyBatis mappings. It works for me without using columnPrefix, but I need the prefix since there are a lot of repeatable column names.

          <collection property="childs" javaType="ArrayList" ofType="org.example.mybatis.Child"
                      resultMap="ChildMap" columnPrefix="c_"/>
        </resultMap>

        <resultMap id="ChildMap" type="org.example.mybatis.Parent">
          <id column="Id" jdbcType="VARCHAR" property="id" />
          <id column="ParentId" jdbcType="VARCHAR" property="parentId" />
          <id column="Name" jdbcType="VARCHAR" property="name" />
          <id column="SurName" jdbcType="VARCHAR" property="surName" />
          <id column="Age" jdbcType="INTEGER" property="age" />
          <collection property="toys" javaType="ArrayList" ofType="org.example.mybatis.Toy"
                      resultMap="ToyMap" columnPrefix="t_"/>
        </resultMap>

        <resultMap id="ToyMap" type="org.example.mybatis.Toy">
          <id column="Id" jdbcType="VARCHAR" property="id" />
          <id column="ChildId" jdbcType="VARCHAR" property="childId" />
          <id column="Name" jdbcType="VARCHAR" property="name" />
          <id column="Color" jdbcType="VARCHAR" property="color" />
        </resultMap>

        <sql id="Parent_Column_List">
          p.Id, p.Name, p.SurName,
        </sql>
        <sql id="Child_Column_List">
          c.Id as c_Id, c.ParentId as c_ParentId, c.Name as c_Name,
          c.SurName as c_Surname, c.Age as c_Age,
        </sql>
        <sql id="Toy_Column_List">
          t.Id as t_Id, t.Name as t_Name, t.Color as t_Color
        </sql>

        <select id="getParent" parameterType="java.lang.String" resultMap="ParentMap">
          select
          <include refid="Parent_Column_List"/>
          <include refid="Child_Column_List" />
          <include refid="Toy_Column_List" />
          from Parent p
          left outer join Child c on p.Id = c.ParentId
          left outer join Toy t on c.Id = t.ChildId
          where p.id = #{id,jdbcType=VARCHAR}

    With columnPrefix the parent and child mappings work fine, but the nested toys collection is empty. The SQL query itself works correctly against the database and all toys are joined. Maybe I missed something, or is this a bug in MyBatis?
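    One likely explanation, offered as a sketch rather than a confirmed fix: MyBatis composes columnPrefix values when result maps nest, so ToyMap, reached through ChildMap's c_ prefix, looks for columns prefixed c_t_, not t_. Aliasing the toy columns with the compound prefix would look like this:

        <!-- Assumption: nested prefixes accumulate (c_ + t_), so the aliases
             must carry both parts for the toys collection to populate. -->
        <sql id="Toy_Column_List">
          t.Id as c_t_Id, t.ChildId as c_t_ChildId,
          t.Name as c_t_Name, t.Color as c_t_Color
        </sql>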

    Read the article

  • Guides for PostgreSQL query tuning?

    - by Joe
    I've found a number of resources that talk about tuning the database server, but I haven't found much on the tuning of individual queries. For instance, in Oracle, I might try adding hints to ignore indexes or to use sort-merge vs. correlated joins, but I can't find much on tuning Postgres other than using explicit joins and recommendations for bulk-loading tables. Do any such guides exist so I can focus on tuning the most-run and/or underperforming queries, hopefully without adversely affecting the currently well-performing queries? I'd even be happy to find something that compared how certain types of queries perform relative to other databases, so I had a better clue of what sort of things to avoid.

    Update: I should've mentioned, I took all of the Oracle DBA classes along with their data modeling and SQL tuning classes back in the 8i days... so I know about EXPLAIN, but that's more to tell you what's going wrong with the query, not necessarily how to make it better. (E.g., are 'where var=1 or var=2' and 'where var in (1,2)' considered the same when generating an execution plan? What if I'm doing it with 10 permutations? When are multi-column indexes used? Are there ways to get the planner to optimize for fastest start vs. fastest finish? What sort of 'gotchas' might I run into when moving from mySQL, Oracle or some other RDBMS?) I could write any complex query dozens if not hundreds of ways, and I'm hoping not to have to try them all and find which one works best through trial and error. I've already found that 'SELECT count(*)' won't use an index, but 'SELECT count(primary_key)' will... maybe a 'PostgreSQL for experienced SQL users' sort of document that explained the sorts of queries to avoid, how best to re-write them, or how to get the planner to handle them better.

    Update 2: I found a Comparison of different SQL Implementations which covers PostgreSQL, DB2, MS-SQL, mySQL, Oracle and Informix, and explains if, how, and gotchas on things you might try to do, and his references section linked to Oracle / SQL Server / DB2 / Mckoi / MySQL Database Equivalents (which is what its title suggests) and to the wikibook SQL Dialects Reference which covers whatever people contribute (includes some DB2, SQLite, mySQL, PostgreSQL, Firebird, Virtuoso, Oracle, MS-SQL, Ingres, and Linter).
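    For the query-level side, the usual starting point is EXPLAIN ANALYZE, which prints the planner's estimates next to the actual row counts and timings so two phrasings of a predicate can be compared empirically; a minimal sketch, with placeholder table and column names:

        -- The BUFFERS option needs PostgreSQL 9.0+; plain EXPLAIN ANALYZE
        -- works on older releases.
        EXPLAIN (ANALYZE, BUFFERS)
        SELECT count(primary_key)
        FROM some_table
        WHERE var IN (1, 2);

    Running the same statement with "WHERE var = 1 OR var = 2" and diffing the two plans answers the "are they considered the same?" question directly for your own data and version.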

    Read the article

  • dynamically drawing polylines on googlemaps using php/mysql

    - by arc
    Hi. I am new to the Google Maps API. I have written a small app for my mobile phone that periodically updates its location to an SQL database. I would like to display this information on a Google Map in my browser. Ideally I'd like to then poll the database periodically and, if any new co-ords have arrived, add them to the line. The best way of describing it is this: http://tiny.cc/HEIa0

    In a quest to get there, I've started from the documents on Google and modified them to try to achieve what I want. It doesn't work - and I don't know enough to know why. I would love some advice as to why, and any pointers towards my ultimate goal would be very much welcomed.

    Google Maps AJAX + MySQL/PHP Example:

        <script type="text/javascript">
        //<![CDATA[
        function load() {
          if (GBrowserIsCompatible()) {
            var map = new GMap2(document.getElementById("map"));
            map.addControl(new GSmallMapControl());
            map.addControl(new GMapTypeControl());
            map.setCenter(new GLatLng(47.614495, -122.341861), 13);

            GDownloadUrl("phpsqlajax_genxml.php", function(data) {
              var xml = GXml.parse(data);
              var line = [];
              var markers = xml.documentElement.getElementsByTagName("points");
              for (var i = 0; i < points.length; i++) {
                var point = points.item(i);
                var lat = point.getAttribute("lat");
                var lng = point.getAttribute("lng");
                var latlng = new GLatLng(lat, lng);
                line.push(latlng);
                if (point.firstChild) {
                  var station = point.firstChild.nodeValue;
                  var marker = createMarker(latlng, station);
                  map.addOverlay(marker);
                }
              }
              var polyline = new GPolyline(line, "#ff0000", 3, 1);
              map.addOverlay(polyline);
            });
          }
        //]]>

    My PHP file is generating the following XML:

        <?xml version="1.0" encoding="UTF-8" ?>
        <points>
          <point lng="-122.340141" lat="47.608940"/>
          <point lng="-122.344391" lat="47.613590"/>
          <point lng="-122.356445" lat="47.624561"/>
          <point lng="-122.337654" lat="47.606365"/>
          <point lng="-122.345673" lat="47.612823"/>
          <point lng="-122.340363" lat="47.605961"/>
          <point lng="-122.345467" lat="47.613976"/>
          <point lng="-122.326584" lat="47.617214"/>
          <point lng="-122.342834" lat="47.610126"/>
        </points>

    I have successfully worked through http://code.google.com/apis/maps/articles/phpsqlajax.html before attempting to customise the code. Any pointers? Where am I going wrong?
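    Two things stand out in the posted loop, offered as a guess at the immediate failure: the node list is stored in a variable named markers but the loop iterates over an undefined points, and the elements being fetched should be the <point> children, not the <points> root. A corrected fragment might read:

        // Fetch the <point> elements (not the <points> root) and loop over
        // the same variable the list was stored in.
        var points = xml.documentElement.getElementsByTagName("point");
        for (var i = 0; i < points.length; i++) {
          var point = points.item(i);
          // ... rest of the loop as in the original
        }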

    Read the article

  • Unable to execute stored Procedure using Java and JDBC

    - by jwmajors81
    I have been trying to execute a MS SQL Server stored procedure via JDBC today and have been unsuccessful thus far. The stored procedure has 1 input and 1 output parameter. With every combination I use when setting up the stored procedure call in code, I get an error stating that the stored procedure couldn't be found. I have provided the stored procedure I'm executing below (NOTE: this is vendor code, so I cannot change it).

        set ANSI_NULLS ON
        set QUOTED_IDENTIFIER ON
        GO

        ALTER PROC [dbo].[spWCoTaskIdGen]
            @OutIdentifier int OUTPUT
        AS
        BEGIN
            DECLARE @HoldPolicyId int
            DECLARE @PolicyId char(14)

            IF NOT EXISTS ( SELECT * FROM UniqueIdentifierGen (UPDLOCK) )
                INSERT INTO UniqueIdentifierGen VALUES (0)

            UPDATE UniqueIdentifierGen SET CurIdentifier = CurIdentifier + 1

            SELECT @OutIdentifier = (SELECT CurIdentifier FROM UniqueIdentifierGen)
        END

    The code looks like:

        CallableStatement statement = connection.prepareCall("{call dbo.spWCoTaskIdGen(?)}");
        statement.setInt(1, 0);
        ResultSet result = statement.executeQuery();

    I get the following error: SEVERE: Could not find stored procedure 'dbo.spWCoTaskIdGen'.

    I have also tried:

        CallableStatement statement = connection.prepareCall("{? = call dbo.spWCoTaskIdGen(?)}");
        statement.registerOutParameter(1, java.sql.Types.INTEGER);
        statement.registerOutParameter(2, java.sql.Types.INTEGER);
        statement.executeQuery();

    The above results in: SEVERE: Could not find stored procedure 'dbo.spWCoTaskIdGen'.

    I have also tried:

        CallableStatement statement = connection.prepareCall("{? = call spWCoTaskIdGen(?)}");
        statement.registerOutParameter(1, java.sql.Types.INTEGER);
        statement.registerOutParameter(2, java.sql.Types.INTEGER);
        statement.executeQuery();

    The code above resulted in the following error: Could not find stored procedure 'spWCoTaskIdGen'.

    Finally, I should also point out the following:

    1. I have used the MS SQL Server Management Studio tool and have been able to successfully run the stored procedure. The SQL generated to execute the stored procedure is provided below:

        GO
        DECLARE @return_value int,
                @OutIdentifier int

        EXEC @return_value = [dbo].[spWCoTaskIdGen]
             @OutIdentifier = @OutIdentifier OUTPUT

        SELECT @OutIdentifier as N'@OutIdentifier '
        SELECT 'Return Value' = @return_value
        GO

    2. The code being executed runs with the same user id that was used in point #1 above.
    3. In the code that creates the Connection object I log which database I'm connecting to, and the code is connecting to the correct database.

    Any ideas? Thank you very much in advance.
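    A hedged sketch of the call shape that usually works for a procedure with a single OUTPUT parameter; qualifying the name with the database (MyDb here is a placeholder) guards against the connection defaulting to a different database than the one holding the proc:

        // MyDb is a placeholder database name; the procedure is from the question.
        CallableStatement cs =
            connection.prepareCall("{call MyDb.dbo.spWCoTaskIdGen(?)}");
        cs.registerOutParameter(1, java.sql.Types.INTEGER); // the OUTPUT parameter
        cs.execute();                                       // no result set expected
        int taskId = cs.getInt(1);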

    Read the article

  • Flex XMLListCollection sort on nested tags

    - by gauravgr8
    Hi all, I have a requirement to sort the <ename> elements in the XML within each branch. The XML goes like this:

        <company>
          <branch>
            <name>finance</name>
            <emp>
              <ename>rahul</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>sunil</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>akash</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>alok</ename>
              <phno>123456</phno>
            </emp>
          </branch>
          <branch>
            <name>finance</name>
            <emp>
              <ename>sameer</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>rahul</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>anand</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>sandeep</ename>
              <phno>123456</phno>
            </emp>
          </branch>
        </company>

    I tried taking the XML into an XMLList:

        var xl:XMLList = new XMLList(branch.ename);
        var xlc:XMLListCollection = new XMLListCollection(xl);

    then applied a sort to the <ename>. I am able to get the XMLListCollection sorted, but the problem is that I only have the <ename> collection sorted; I need the sorted <ename> elements back in the XML. I tried deleting the items in the collection and then adding the sorted list, but in that case the <name> is lost. Please help me out with sorting <ename>, or is there any way to specify nested tags in the SortField name? Thanks in advance.

    Read the article

  • XForms and multiple inputs for same model tag

    - by iHeartGreek
    Hi! I apologize ahead of time if I am not asking this properly; it is hard to put into words what I am asking. I have an XForms model such as:

        <file>
          <criteria>
            <criterion></criterion>
          </criteria>
        </file>

    I want to have multiple input text boxes that each create a new criterion tag, with a user interface such as:

        <xf:input ref="/file/criteria/criterion" model="select_data">
          <xf:label>Select</xf:label>
        </xf:input>
        <xf:input ref="/file/criteria/criterion" model="select_data">
          <xf:label>Select</xf:label>
        </xf:input>
        <xf:input ref="/file/criteria/criterion" model="select_data">
          <xf:label>Select</xf:label>
        </xf:input>

    And I would like the XML output to look like this (once the user has entered the info):

        <file>
          <criteria>
            <criterion>AAA</criterion>
            <criterion>BBB</criterion>
            <criterion>CCC</criterion>
          </criteria>
        </file>

    The way I have it doesn't work, as it treats the 3 input fields as all referring to the same criterion tag. How do I differentiate? Thanks! I hope that made some sense!

    BEGIN FIRST EDIT

    Thanks for the responses for the basic text box! However, I now need to do this with a listbox, and for the life of me I can't figure out how. I read somewhere to use the xforms:select and deselect events, but I didn't know where to place them, and the places I tried gave me very weird behaviour. I am currently implementing the following:

        <xf:select ref="instance('criteria_data')/criteria/criterion" selection="" appearance="compact">
          <xf:label>Choose criteria</xf:label>
          <xf:itemset nodeset="instance('criteria_choices')/choice">
            <xf:label ref="@label"></xf:label>
            <xf:value ref="."></xf:value>
          </xf:itemset>
        </xf:select>

    However, when multiple choices are submitted, all selection values are inserted into the same node, separated by spaces. For example, if AAA, BBB and FFF were selected from the listbox, it would result in the following XML:

        <criterion>AAA BBB FFF</criterion>

    How do I change my code to have each selection be in a separate node? I.e. I want it to look like this:

        <criterion>AAA</criterion>
        <criterion>BBB</criterion>
        <criterion>FFF</criterion>

    Thanks!

    END FIRST EDIT

    BEGIN SECOND EDIT:

    For the listboxes (i.e. xf:select appearance="compact") I ended up allowing the spaces to occur in the same node and then transformed that XML using XSL to generate a properly formatted new XML doc (with separate individual nodes). Unfortunately, I did not find a less cumbersome solution that inserts them into separate nodes in the first place. The selected answer works very well for text boxes, however, hence why I selected it as the answer.

    END SECOND EDIT

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this as my previous question was being treated as if I am "Shopping or seeking Product Recommendations" even though I was NOT - BTW they have deleted my comments too which were not offensive in nature. anyway - I have re-phrased some parts of my question and I hope SF Admins "Do Not Modify / Edit" this one - will be most grateful for that. I have a lot of respect for the People who visit this SITE and help others ! Just To clarify : Just to go by SF rules - I am not seeking someone to Design this solution, I am simply seeking real world examples, experiences, technical expert opinions / suggestions, any tips or tricks they may have or any problems they may have faced while doing something similar above with these products. I am also not asking for Capacity Planning for Storage, We have done some research and I am seeking Expert Assurance / Suggestions. We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000 - 4000 workstations (Windows7 32 and 64) with a few 100s with Windows XP 32/64 Bit. I have read the implementation guide for SEP and have read tech-notes for Desktop Recovery 2011. Our team have planned to deploy this as follows : 1 x dedicated SQL 2008R2 for Symantec Endpoint Protection (Instead of using the Embedded Database) 1 x Dedicated SQL 2008R2 for Symantec Desktop Recovery 2011 (Instead of using the Embedded Database) 1 x Dedicated W2K8 R2 Box for the SEPM (Symantec Endpoint Protection Manager - Mgmt. APP) 1 x Dedicated W2K8 R2 Box for the Symantec Desktop Recovery 2011 Management Application Agent Deployment : As per Symantec Documentation for both of the above, an agent can be pushed via the Mgmt. Application (provided no firewalls are blocking ports required etc. - we have Windows firewall disabled already). Server Hardware : Per SQL Server : 16GB RAM + SAS DISKS + Dual XEON, RAID-10 for the SQL DB or I can always mount a LUN from our existing Hitachi or EMC SAN. SEPM Server : 16GB RAM + SAS DISKS + DUAL XEON System Recovery MGMT SERVER : 16GB RAM + SAS DISKS + DUAL XEON Above is the initial plan we have for 3000 - 4000 client workstation (Windows) Now my Questions :-) a) If we had these users distributed amongst two sites with AD DC / GC in each site, How would I restrict SEPM and Desktop Mgmt. solution to only check for users in their respective site ? b) At present all users are under one building but we are going to move some dept. to a new location (with dedicated connectivity), How would we control which SEPM / MGMT Server is responsible for which site ? c) We have netbackup in our environment backing up other servers, I am planning to protect these 4 (2 x SQL, 1 x SEPM, 1 x System Recovery Mgmt. Server) via netbackup or I can use System recovery 2011 server edition on all 4 of these boxes as well. (License is not an issue as we have the complete symantec portfolio included in our license). d) Now - Saving Desktop backups - What strategies have you implemented ? Any best practice recommendation for a large user base ? I was thinking to either mount a LUN from our Hitachi SAN on the Symantec Recovery Server itself or backup to the users hard drive locally and then copy it over to a network location ? Suggestions welcome :-) If you have anything to add / correct - that will be really helpful before diving into the actual implementation phase. Will be most grateful with your suggestions, recommendations and corrections with above - Many Thanks !

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data Daily and/or Weekly, which we process & merge into their existing database. During business hours, load in the servers are pretty minimal as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support over-head to basically "schedule" the ETL such that they don't over-lap, and move clients/schedules around if they out-grow the resources on a particular server or allocated time-slot. As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling, and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows. Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if instead of running DTExec, SQL, and SSAS same machine (with every machine running that configuration), we run in pairs of three machines with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETL we were able to process on the machine independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently chose a server based off how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know if any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term). 
    Ultimately VMWare's Distributed Resource Scheduler addresses this, as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter's #sqlhelp hash tag: "Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn't referenced in the query?" Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven't asked for. In general, that's quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column...

    Example Table

    Here's a T-SQL script to create that table and populate it with 1,000 rows:

        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER IDENTITY NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk]
                PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO

        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1, master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX)
            (some_value, lob_data)
        SELECT TOP (1000)
            N.n, @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Test 1: A Simple Update

    Let's run a query to subtract one from every value in the some_value column:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    As you might expect, modifying this integer column in 1,000 rows doesn't take very long, or use many resources. The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time. The query plan is also very simple.

    Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan. The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed. The some_value column is used by the Compute Scalar to calculate the new value. (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT).

    Test 2: Simple Update with an Index

    Now let's create a nonclustered index keyed on the some_value column, with lob_data as an included column:

        CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE (lob_data)
        WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);

    This is not a useful index for our simple update query; imagine that someone else created it for a different purpose. Let's run our update query again:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms. The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index.

    The query plan is very similar to before. The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update. SQL Server may be able to skip maintaining the nonclustered index if the value hasn't changed (see my previous post on non-updating updates for details). Our simple query does change the value of some_data in every row, so this optimization doesn't add any value in this specific case.

    The output list of columns from the Clustered Index Scan hasn't changed from the one shown previously: SQL Server still just reads the pk and some_data columns. Cool.

    Overall then, adding the nonclustered index hasn't had any startling effects, and the LOB column data still isn't being read from the table. Let's see what happens if we make the nonclustered index unique.

    Test 3: Simple Update with a Unique Index

    Here's the script to create a new unique index, and drop the old one:

        CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE (lob_data)
        WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);
        GO
        DROP INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest;

    Remember that SQL Server only enforces uniqueness on index keys (the some_data column). The lob_data column is simply stored at the leaf-level of the non-clustered index. With that in mind, we might expect this change to make very little difference. Let's see:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    Whoa! Now look at the elapsed time and logical reads:

        Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0,
        lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

        CPU time = 172 ms, elapsed time = 16172 ms.

    Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

    The Query Plan

    The query plan for test 3 looks a bit more complex than before. In fact, the bottom level is exactly the same as we saw with the non-unique index. The top level has heaps of new stuff though, which I'll come to in a moment.

    You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason). After all, we need to explain where all the LOB logical reads are coming from. Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns - so what's doing the LOB reads?

    Updates that Sneakily Read Data

    We have to go as far as the Clustered Index Update operator before we see LOB data in the output list. [Expr1020] is a bit flag added by an earlier Compute Scalar. It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier).

    The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD. The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column. At this point, the clustered index has already been updated with the new value, but we haven't touched the nonclustered index yet.

    An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation. SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update. The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then. Sneaky!

    Why the LOB Data Is Needed

    This is all very interesting (I hope), but why is SQL Server reading the LOB data? For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update?

    The answer relates to the top row of the query plan for test 3. Notice that this is a wide (per-index) update plan. SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too. I'll talk more about this difference shortly.

    The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient. It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index.

    Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3. If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4. The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4.

    By applying net changes, SQL Server can also avoid false unique-key violations. If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail. You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down. That's fine, but it's not possible to generalize this logic to work with every possible update query.

    SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations. It's worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits. As a result, you will often receive a wide update plan, even when you can see that no violations are possible.

    Another benefit of this optimization is that it includes a sort on the index key as part of its work. Processing the index changes in index key order promotes sequential I/O against the nonclustered index.

    A side-effect of all this is that the net changes might include one or more inserts. In order to insert a new row in the index, SQL Server obviously needs all the columns - the key column and the included LOB column. This is the reason SQL Server reads the LOB data as part of the Clustered Index Update.

    In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs). In order to generate the correct index key delete operation, it needs the old key value.

    The irony is that in this case the Split/Sort/Collapse optimization is anything but. Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it.

    Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates.

    Beating the Set-Based Update with a Cursor

    One situation where SQL Server can see that false unique-key violations aren't possible is where it can guarantee that only one row is being updated. Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

        SET NOCOUNT ON;
        SET STATISTICS XML, IO, TIME OFF;

        DECLARE @PK INTEGER, @StartTime DATETIME;
        SET @StartTime = GETUTCDATE();

        DECLARE curUpdate CURSOR
            LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS
        FOR
            SELECT L.pk
            FROM LOBtest L
            ORDER BY L.pk ASC;

        OPEN curUpdate;

        WHILE (1 = 1)
        BEGIN
            FETCH NEXT FROM curUpdate INTO @PK;

            IF @@FETCH_STATUS = -1 BREAK;
            IF @@FETCH_STATUS = -2 CONTINUE;

            UPDATE dbo.LOBtest
            SET some_value = some_value - 1
            WHERE CURRENT OF curUpdate;
        END;

        CLOSE curUpdate;
        DEALLOCATE curUpdate;

        SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());

    That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it. One could just as well use a WHERE clause that specified the primary key value instead.

    Clustered Indexes

    A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index. Let's re-create the test table and data with an updatable primary key, and without any non-clustered indexes:

        IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
            DROP TABLE dbo.LOBtest;
        GO
        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk]
                PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO
        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1, master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX)
            (pk, some_value, lob_data)
        SELECT TOP (1000)
            N.n, N.n, @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Now here's a query to modify the cluster keys:

        UPDATE dbo.LOBtest
        SET pk = pk + 1;

    In the query plan, as you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection. In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan. The performance is not great, as you might expect:

        Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0,
        read-ahead reads 0, lob logical reads 36015, lob physical reads 0,
        lob read-ahead reads 15992.

        Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0,
        read-ahead reads 0, lob logical reads 34000, lob physical reads 0,
        lob read-ahead reads 8000.

        SQL Server Execution Times:
        CPU time = 483 ms, elapsed time = 17884 ms.

    Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool.

    If you try the same test with a non-unique clustered index (rather than a primary key), you'll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns). A unique non-clustered index (on a heap) works well too. Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads. (In fact the heap is rather more efficient.) There are lots more fun combinations to try that I don't have space for here.

    Final Thoughts

    The behaviour shown in this post is not limited to LOB data by any means. If the conditions are met, any unique index that has included columns can produce similar behaviour - something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

    Read the article

  • Jquery Datepicker with XML file

    - by matt
    An extension of my last question, http://stackoverflow.com/questions/2562986/getdate-with-jquery-datepicker: I am trying to use the jQuery datepicker to load specific info from an XML file depending on the date selected by the user. The code is similar, but this time I am loading and parsing an XML file to read its contents for the particular date. In a perfect world the user would tap a date and, below the datepicker, the HTML output would give the user specific times for the selected date, instead of the image from my last project. My problem is that nothing loads, so my question is: what am I doing wrong? My code is as follows:

        <!DOCTYPE html>
        <link type="text/css" href="css/ui-darkness/jquery-ui-1.8.custom.css" rel="stylesheet" />
        <script type="text/javascript" src="js/jquery-1.4.2.min.js"></script>
        <script type="text/javascript" src="js/jquery-ui-1.8.custom.min.js"></script>
        <script type="text/javascript">
        $(function(){
            // Datepicker
            $('#datepicker').datepicker({
                dateFormat: 'yy-mm-dd',
                inline: true,
                minDate: new Date(2010, 1 - 1, 1),
                maxDate: new Date(2010, 12 - 1, 31),
                altField: '#datepicker_value',
                onSelect: function(){
                    var day1 = $("#datepicker").datepicker('getDate').getDate();
                    var month1 = $("#datepicker").datepicker('getDate').getMonth() + 1;
                    var year1 = $("#datepicker").datepicker('getDate').getFullYear();
                    var fullDate = year1 + "" + month1 + "" + day1;
                    //var str_output = "<img src=\"http://69.89.20.27/images/a" + fullDate + ".png\" width=\"100%\"/>";
                    //"<h1>" + fullDate + "</h1>";
                    //"<img src=\"http://69.*.*.*/images/a" + fullDate + ".png\"/>";
                    //$('#page_output').html(str_output);
                    var doc = loadXMLDoc('date.xml'); // load the XML file
                    var el = doc.getElementsByTagName('_' + date); // elements corresponding to a date, e.g. _20100103
                    var page_output = document.getElementById('page_output');
                    if (el.length >= 1) {
                        // matched XML data found for the specified date
                        var dt = el[0].getElementsByTagName('date');
                        var great_times = el[0].getElementsByTagName('great_times');
                        var good_times = el[0].getElementsByTagName('good_times');
                        var str_output = "<h1><center>" + dt[0].childNodes[0].nodeValue + "</center></h1><br/><br>";
                        str_output += "<b>Excellent Times:</b><br> " + great_times[0].childNodes[0].nodeValue + "<br/><br>";
                        str_output += "<b>Good Times:</b><br> " + good_times[0].childNodes[0].nodeValue + "<br/><br>";
                        $('#page_output').html(str_output); // write the results to the div element (page_output)
                    } else {
                        // no XML data found for the selected date
                        alert("Sorry","Action not allowed on this page");
                        page_output.innerHTML = '';
                        reloadmainwDate();
                        return false;
                    }
                    return true;
                }
            });

            // hover states on the static widgets
            $('#dialog_link, ul#icons li').hover(
                function() { $(this).addClass('ui-state-hover'); },
                function() { $(this).removeClass('ui-state-hover'); }
            );
        });

        //var img_date = .datepicker('getDate');
        //var day1 = $("#datepicker").datepicker('getDate').getDate();
        //var month1 = $("#datepicker").datepicker('getDate').getMonth() + 1;
        //var year1 = $("#datepicker").datepicker('getDate').getFullYear();
        //var fullDate = year1 + "-" + month1 + "-" + day1;
        //var date = $('#datepicker').datepicker({ dateFormat: 'dd-mm-yy' });
        //var str_output = "<h1><center><p>" + date + "</p></center></h1>";
        //$('#page_output')[0].innerHTML = str_output; // write the results to the div element (page_output)
        </script>
        <script>
        function loadXMLDoc(dname)
        {
            var xmlDoc;
            if (typeof ActiveXObject != 'undefined') // IE 5 and IE 6
            {
                xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
                xmlDoc.async = false;
                xmlDoc.load(dname);
                return xmlDoc;
            }
            else if (window.XMLHttpRequest) // Firefox
            {
                xmlDoc = new window.XMLHttpRequest();
                xmlDoc.open("GET", dname, false);
                xmlDoc.send("");
                return xmlDoc.responseXML;
            }
            alert("Error loading document");
            return null;
        }
        </script>
        <!-- Datepicker -->
        <div id="datepicker"></div>
        <!-- Highlight / Error -->
        <div id="page_output"></div>
        </body>
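
    A likely fix (a sketch, untested): inside onSelect the lookup reads getElementsByTagName('_' + date), but date is never defined there; the value actually built is fullDate. The month and day are also not zero-padded, so 3 January 2010 produces _201013 rather than a tag like <_20100103>. Assuming that zero-padded naming scheme in date.xml:

        // Sketch: build the zero-padded tag name the XML presumably uses.
        var picked = $("#datepicker").datepicker('getDate');
        var month1 = picked.getMonth() + 1;
        var day1 = picked.getDate();
        var fullDate = picked.getFullYear() +
                       (month1 < 10 ? "0" : "") + month1 +
                       (day1 < 10 ? "0" : "") + day1;
        var doc = loadXMLDoc('date.xml');
        var el = doc.getElementsByTagName('_' + fullDate); // e.g. _20100103

    Unrelated but worth fixing: alert() takes a single string, so the two-argument call in the else branch silently drops "Action not allowed on this page".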

    Read the article

  • XML Reading org.xml.sax.SAXParseException: Expecting end of file.

    - by vivekbirdi
    Hi, I am getting a problem while parsing an XML file using JDE 4.6:

        FileConnection fconn = (FileConnection) Connector.open("file:///SDCard/Dictionary.xml", Connector.READ_WRITE);
        InputStream din = fconn.openInputStream();
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document document = builder.parse(din);

    Here I get an exception at Document document = builder.parse(din);:

        org.xml.sax.SAXParseException: Expecting end of file.

    Please give me some solution. Thanks
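
    One workaround worth trying (a sketch, not verified against JDE 4.6): buffer the whole file and parse from memory, so the parser sees a clean, bounded document. Stray bytes after the root element, or a stream still open for writing, are common triggers for "Expecting end of file". IOUtilities.streamToBytes is the RIM helper in net.rim.device.api.io; the rest is your code unchanged:

        // Read the entire file first, then hand the parser an in-memory copy.
        byte[] bytes = IOUtilities.streamToBytes(din);
        din.close();
        fconn.close();
        Document document = builder.parse(new ByteArrayInputStream(bytes));

    Opening the file with Connector.READ instead of READ_WRITE may also help, since the parser only needs to read.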

    Read the article

  • NetBeans behaves differently if project is run via "Run Project" or build.xml>run

    - by Rogach
    I slightly modified the build-impl.xml file of my NetBeans project. (Specifically, I made it insert the build time into the program code.) If I run the project via the build.xml "run" target, I get the behavior I expect: the program displays the build time and date. But if I run the project using the standard (and most obvious, the one I have always used) "Run Main Project" button, I get a completely different result (no build date). Moreover, whatever code I insert into build.xml, I still get the result if I run the target explicitly, and no result if it is run simply by NetBeans. This leads me to the conclusion that the button uses another method to run my application. My question is: what does that button do? What method does it call? And can it be configured to run the needed target of the build file?
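
    A likely explanation, worth verifying for your NetBeans version: when "Compile on Save" is enabled (Project Properties > Build > Compiling), the Run Main Project button deploys the classes the IDE compiled itself and skips the Ant build entirely, which is why edits to build-impl.xml never fire. Disabling Compile on Save should route the button back through the Ant "run" target. Separately, customizations survive better in the empty hook targets that the generated build.xml reserves (e.g. -pre-compile, -post-jar) than in build-impl.xml, which the IDE may regenerate. A sketch; the property name build.time is my own invention:

        <!-- In build.xml, not build-impl.xml -->
        <target name="-pre-compile">
            <tstamp>
                <format property="build.time" pattern="yyyy-MM-dd HH:mm:ss"/>
            </tstamp>
            <echo message="Build started at ${build.time}"/>
        </target>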

    Read the article

  • How can I turn the structure of an XML file into a folder structure using ANT

    - by 1ndivisible
    I would like to be able to pass an XML file to an Ant build script and have it create a folder structure mimicking the nodal structure of the XML, using the build file's parent directory as the root. For example, using:

        <root>
            <folder1>
                <folder1-1/>
            </folder1>
            <folder2/>
            <folder3>
                <folder3-1/>
            </folder3>
        </root>

    Ant would create:

        folder1
            folder1-1
        folder2
        folder3
            folder3-1

    I know how to create a directory, but I'm not sure how to have Ant parse the XML.
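
    Ant has no built-in task for this, but its <script> task can walk the file with the JDK's DOM parser and create a directory per element. A sketch under that assumption (target and property names are mine, and it requires a Rhino-style JavaScript engine to be available to Ant):

        <project name="xml2dirs" default="make-dirs" basedir=".">
            <!-- Run as: ant -Dxml.file=structure.xml -->
            <target name="make-dirs">
                <script language="javascript"><![CDATA[
                    importClass(javax.xml.parsers.DocumentBuilderFactory);
                    importClass(java.io.File);

                    var doc = DocumentBuilderFactory.newInstance()
                                  .newDocumentBuilder()
                                  .parse(new File(project.getProperty("xml.file")));

                    // One directory per element; the root tag itself is skipped.
                    function walk(node, path) {
                        var kids = node.getChildNodes();
                        for (var i = 0; i < kids.getLength(); i++) {
                            var kid = kids.item(i);
                            if (kid.getNodeType() == 1) { // ELEMENT_NODE
                                var dir = new File(path, kid.getNodeName());
                                dir.mkdirs();
                                walk(kid, dir.getPath());
                            }
                        }
                    }
                    walk(doc.getDocumentElement(), project.getBaseDir().getPath());
                ]]></script>
            </target>
        </project>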

    Read the article

  • PHP - XML Feed get print values

    - by danit
    Here is my feed:

        <entry>
          <id>http://api.visitmix.com/OData.svc/Sessions(guid'816995df-b09a-447a-9391-019512f643a0')</id>
          <title type="text">Building Web Applications with Microsoft SQL Azure</title>
          <summary type="text">SQL Azure provides a highly available and scalable relational database engine in the cloud. In this demo-intensive and interactive session, learn how to quickly build web applications with SQL Azure Databases and familiar web technologies. We demonstrate how you can quickly provision, build and populate a new SQL Azure database directly from your web browser. Also, see firsthand several new enhancements we are adding to SQL Azure based on the feedback we&#x2019;ve received from the community since launching the service earlier this year.</summary>
          <published>2010-01-25T00:00:00-05:00</published>
          <updated>2010-03-05T01:07:05-05:00</updated>
          <author>
            <name />
          </author>
          <link rel="edit" title="Session" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')" />
          <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Speakers" type="application/atom+xml;type=feed" title="Speakers" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers">
            <m:inline>
              <feed>
                <title type="text">Speakers</title>
                <id>http://api.visitmix.com/OData.svc/Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers</id>
                <updated>2010-03-25T11:56:06Z</updated>
                <link rel="self" title="Speakers" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers" />
                <entry>
                  <id>http://api.visitmix.com/OData.svc/Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')</id>
                  <title type="text">David-Robinson</title>
                  <summary type="text"></summary>
                  <updated>2010-03-25T11:56:06Z</updated>
                  <author>
                    <name>David Robinson</name>
                  </author>
                  <link rel="edit-media" title="Speaker" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')/$value" />
                  <link rel="edit" title="Speaker" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')" />
                  <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Sessions" type="application/atom+xml;type=feed" title="Sessions" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')/Sessions" />
                  <category term="EventModel.Speaker" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
                  <content type="image/jpeg" src="http://live.visitmix.com/Content/images/speakers/lrg/default.jpg" />
                  <m:properties xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
                    <d:SpeakerID m:type="Edm.Guid">3395ee85-d994-423c-a726-76b60a896d2a</d:SpeakerID>
                    <d:SpeakerFirstName>David</d:SpeakerFirstName>
                    <d:SpeakerLastName>Robinson</d:SpeakerLastName>
                    <d:LargeImage m:null="true"></d:LargeImage>
                    <d:SmallImage m:null="true"></d:SmallImage>
                    <d:Twitter m:null="true"></d:Twitter>
                  </m:properties>
                </entry>
              </feed>
            </m:inline>
          </link>
          <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Tags" type="application/atom+xml;type=feed" title="Tags" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Tags" />
          <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Files" type="application/atom+xml;type=feed" title="Files" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Files" />
          <category term="EventModel.Session" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
          <content type="application/xml">
            <m:properties>
              <d:SessionID m:type="Edm.Guid">816995df-b09a-447a-9391-019512f643a0</d:SessionID>
              <d:Location>Breakers L</d:Location>
              <d:Type>Seminar</d:Type>
              <d:Code>SVC07</d:Code>
              <d:StartTime m:type="Edm.DateTime">2010-03-17T12:00:00</d:StartTime>
              <d:EndTime m:type="Edm.DateTime">2010-03-17T13:00:00</d:EndTime>
              <d:Slug>SVC07</d:Slug>
              <d:CreatedDate m:type="Edm.DateTime">2010-01-26T18:14:24.687</d:CreatedDate>
              <d:SourceID m:type="Edm.Guid">cddca9b7-6830-4d06-af93-5fd87afb67b0</d:SourceID>
            </m:properties>
          </content>
        </entry>

    I want to print the session title (Building Web Applications with Microsoft SQL Azure), the author (David Robinson), and the location (Breakers L), and display the speaker's image (http://live.visitmix.com/Content/images/speakers/lrg/default.jpg). I presume I can use file_get_contents and then parse with SimpleXML, but I don't know how to get at the deeper items I want, like the author and image. Any chance of a bit of coding genius here?
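
    SimpleXML can reach all four values; the only wrinkle is that the <d:...> properties live in an OData namespace, which needs registerXPathNamespace before the XPath calls. A sketch, assuming the <entry> fragment above is saved as entry.xml exactly as pasted; if the full feed declares the Atom default namespace (xmlns="http://www.w3.org/2005/Atom"), the unprefixed names below must be qualified the same way:

        <?php
        $xml = simplexml_load_file('entry.xml');
        $xml->registerXPathNamespace('d',
            'http://schemas.microsoft.com/ado/2007/08/dataservices');

        $title = (string) $xml->title;

        // The non-empty <name> belongs to the nested speaker <entry>.
        $names = $xml->xpath('//author/name[normalize-space()]');
        $author = (string) $names[0];

        $locs = $xml->xpath('//d:Location');
        $location = (string) $locs[0];

        $imgs = $xml->xpath('//content[@type="image/jpeg"]/@src');
        $image = (string) $imgs[0];

        echo "$title / $author / $location\n";
        echo '<img src="' . $image . '" alt="' . $author . '"/>';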

    Read the article

  • Issue parsing RSS xml

    - by cw
    Hello, I'm having an issue using LINQ to XML to parse the following XML. What I am doing is getting an element, checking if it's the one I want, then moving to the next. I am pretty sure the problem has to do with the xmlns, but I need this code to work with both this style and normal-style RSS feeds (no xmlns). Any ideas?

        <?xml version="1.0" encoding="UTF-8"?>
        <rdf:RDF xmlns:rdf="http://someurl"
                 xmlns:dc="http://purl.org/dc/elements/1.1/"
                 xmlns:sy="http://purl.org/rss/1.0/modules/syndication/">
          <channel rdf:about="http://someurl">

        XElement currentLocation = startElementParameter;
        foreach (string x in ("channel\\Title").Split('\\'))
        {
            if (condition1 == false)
            {
                continue;
            }
            else if (condition2 == false)
            {
                break;
            }
            else
            {
                // This is returning null.
                currentLocation = currentLocation.Element(x);
            }
        }

    Thanks!
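
    The likely cause: RSS 1.0 feeds like this one usually put <channel> in a default namespace (real feeds declare xmlns="http://purl.org/rss/1.0/", not shown in the trimmed sample), and Element("channel") only matches elements with no namespace. Resolving each name against the current element's default namespace handles both feed styles. A simplified sketch of the loop (note also that Element() is case-sensitive, so "Title" will never match a lowercase <title>):

        XElement current = startElementParameter;
        foreach (string name in "channel\\title".Split('\\'))
        {
            // Empty for plain RSS; the RSS 1.0 namespace for rdf-style feeds.
            XNamespace ns = current.GetDefaultNamespace();
            current = current.Element(ns + name);
            if (current == null)
            {
                break; // no such element at this level
            }
        }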

    Read the article

  • Reusing XSL template to be invoked with different relative XPaths

    - by meomaxy
    Here is my contrived example that illustrates what I am attempting to accomplish. I have an input XML file that I wish to flatten for further processing. Input file:

        <BICYCLES>
          <BICYCLE>
            <COLOR>BLUE</COLOR>
            <WHEELS>
              <WHEEL>
                <WHEEL_TYPE>FRONT</WHEEL_TYPE>
                <FLAT>NO</FLAT>
                <REFLECTORS>
                  <REFLECTOR>
                    <REFLECTOR_NUM>1</REFLECTOR_NUM>
                    <COLOR>RED</COLOR>
                    <SHAPE>SQUARE</SHAPE>
                  </REFLECTOR>
                  <REFLECTOR>
                    <REFLECTOR_NUM>2</REFLECTOR_NUM>
                    <COLOR>WHITE</COLOR>
                    <SHAPE>ROUND</SHAPE>
                  </REFLECTOR>
                </REFLECTORS>
              </WHEEL>
              <WHEEL>
                <WHEEL_TYPE>REAR</WHEEL_TYPE>
                <FLAT>NO</FLAT>
              </WHEEL>
            </WHEELS>
          </BICYCLE>
        </BICYCLES>

    The input is a list of <BICYCLE> nodes. Each <BICYCLE> has a <COLOR> and optionally has <WHEELS>. <WHEELS> is a list of <WHEEL> nodes, each of which has a few attributes, and optionally has <REFLECTORS>. <REFLECTORS> is a list of <REFLECTOR> nodes, each of which has a few attributes. The goal is to flatten this XML. This is the XSL I'm using:

        <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:fo="http://www.w3.org/1999/XSL/Format"
            xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:fn="http://www.w3.org/2005/xpath-functions">
          <xsl:output method="xml" encoding="UTF-8" indent="yes" omit-xml-declaration="yes" xml:space="preserve"/>
          <xsl:template match="/">
            <BICYCLES>
              <xsl:apply-templates/>
            </BICYCLES>
          </xsl:template>
          <xsl:template match="BICYCLE">
            <xsl:choose>
              <xsl:when test="WHEELS">
                <xsl:apply-templates select="WHEELS"/>
              </xsl:when>
              <xsl:otherwise>
                <BICYCLE>
                  <COLOR><xsl:value-of select="COLOR"/></COLOR>
                  <WHEEL_TYPE/>
                  <FLAT/>
                  <REFLECTOR_NUM/>
                  <COLOR/>
                  <SHAPE/>
                </BICYCLE>
              </xsl:otherwise>
            </xsl:choose>
          </xsl:template>
          <xsl:template match="WHEELS">
            <xsl:apply-templates select="WHEEL"/>
          </xsl:template>
          <xsl:template match="WHEEL">
            <xsl:choose>
              <xsl:when test="REFLECTORS">
                <xsl:apply-templates select="REFLECTORS"/>
              </xsl:when>
              <xsl:otherwise>
                <BICYCLE>
                  <COLOR><xsl:value-of select="../../COLOR"/></COLOR>
                  <WHEEL_TYPE><xsl:value-of select="WHEEL_TYPE"/></WHEEL_TYPE>
                  <FLAT><xsl:value-of select="FLAT"/></FLAT>
                  <REFLECTOR_NUM/>
                  <COLOR/>
                  <SHAPE/>
                </BICYCLE>
              </xsl:otherwise>
            </xsl:choose>
          </xsl:template>
          <xsl:template match="REFLECTORS">
            <xsl:apply-templates select="REFLECTOR"/>
          </xsl:template>
          <xsl:template match="REFLECTOR">
            <BICYCLE>
              <COLOR><xsl:value-of select="../../../../COLOR"/></COLOR>
              <WHEEL_TYPE><xsl:value-of select="../../WHEEL_TYPE"/></WHEEL_TYPE>
              <FLAT><xsl:value-of select="../../FLAT"/></FLAT>
              <REFLECTOR_NUM><xsl:value-of select="REFLECTOR_NUM"/></REFLECTOR_NUM>
              <COLOR><xsl:value-of select="COLOR"/></COLOR>
              <SHAPE><xsl:value-of select="SHAPE"/></SHAPE>
            </BICYCLE>
          </xsl:template>
        </xsl:stylesheet>

    The output is:

        <BICYCLES xmlns:fn="http://www.w3.org/2005/xpath-functions"
            xmlns:fo="http://www.w3.org/1999/XSL/Format"
            xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <BICYCLE>
            <COLOR>BLUE</COLOR>
            <WHEEL_TYPE>FRONT</WHEEL_TYPE>
            <FLAT>NO</FLAT>
            <REFLECTOR_NUM>1</REFLECTOR_NUM>
            <COLOR>RED</COLOR>
            <SHAPE>SQUARE</SHAPE>
          </BICYCLE>
          <BICYCLE>
            <COLOR>BLUE</COLOR>
            <WHEEL_TYPE>FRONT</WHEEL_TYPE>
            <FLAT>NO</FLAT>
            <REFLECTOR_NUM>2</REFLECTOR_NUM>
            <COLOR>WHITE</COLOR>
            <SHAPE>ROUND</SHAPE>
          </BICYCLE>
          <BICYCLE>
            <COLOR>BLUE</COLOR>
            <WHEEL_TYPE>REAR</WHEEL_TYPE>
            <FLAT>NO</FLAT>
            <REFLECTOR_NUM/>
            <COLOR/>
            <SHAPE/>
          </BICYCLE>
        </BICYCLES>

    What I don't like about this is that I'm outputting the color attribute in several forms:

        <COLOR><xsl:value-of select="../../../../COLOR"/></COLOR>
        <COLOR><xsl:value-of select="../../COLOR"/></COLOR>
        <COLOR><xsl:value-of select="COLOR"/></COLOR>
        <COLOR/>

    It seems like there ought to be a way to make a named template and invoke it from the various places where it is needed, passing some parameter that represents the path back to the <BICYCLE> node to which it refers. Is there a way to clean this up, say with a named template for bicycle fields, for wheel fields and for reflector fields? In the real-world example this is based on, there are many more attributes to a "bicycle" than just color, and I want to make this XSL easy to change to include or exclude fields without having to change the XSL in multiple places.
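
    One way to collapse the four variants, sketched below: the ancestor axis can locate the owning <BICYCLE> from any calling depth, so a single named template serves the BICYCLE, WHEEL, and REFLECTOR contexts without ../.. chains. The same pattern extends to the wheel fields via ancestor-or-self::WHEEL:

        <xsl:template name="bicycle-color">
          <!-- Finds the enclosing BICYCLE whether the context node is the
               BICYCLE itself, a WHEEL, or a REFLECTOR. -->
          <COLOR><xsl:value-of select="ancestor-or-self::BICYCLE/COLOR"/></COLOR>
        </xsl:template>

    Each place that currently emits a bicycle <COLOR> then becomes a one-line <xsl:call-template name="bicycle-color"/>, so adding or removing bicycle-level fields means editing a single template.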

    Read the article

  • Initialize preferences from XML in main Activity

    - by pixel
    My problem is that when I start the application and the user hasn't opened my PreferenceActivity yet, I don't get any of the default values defined in my preference.xml file when I retrieve the preferences. preference.xml file:

        <?xml version="1.0" encoding="utf-8"?>
        <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"
            android:key="applicationPreference"
            android:title="@string/config" >
            <ListPreference
                android:key="pref1"
                android:defaultValue="default"
                android:title="Title"
                android:summary="Summary"
                android:entries="@array/entry_names"
                android:entryValues="@array/entry_values"
                android:dialogTitle="@string/dialog_title" />
        </PreferenceScreen>

    Snippet from my main Activity (onCreate method):

        SharedPreferences appPreferences = PreferenceManager.getDefaultSharedPreferences(this);
        String pref1 = appPreferences.getString("pref1", null);

    As a result, I end up with a null value.
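
    Defaults declared in the XML are only written to SharedPreferences once the preference UI inflates them; until then getString() falls back to the second argument you pass. The documented fix is PreferenceManager.setDefaultValues(), called before the first read, e.g. at the top of onCreate (R.xml.preference assumes the file lives at res/xml/preference.xml):

        // Apply android:defaultValue entries once; the 'false' flag keeps
        // later calls from overwriting values the user has already changed.
        PreferenceManager.setDefaultValues(this, R.xml.preference, false);

        SharedPreferences appPreferences =
                PreferenceManager.getDefaultSharedPreferences(this);
        String pref1 = appPreferences.getString("pref1", null); // now "default"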

    Read the article

  • Python: Pretty printing a xml file directly from a tar.gz package

    - by EddyR
    This is the first Python script I've tried to create. I'm reading an XML file from a tar.gz package and then I want to pretty-print it. However, I can't seem to turn it from a file-like object into a string. I've tried a few different ways, including str(), tostring(), etc., but nothing works for me. For testing I just tried to print the string at "print myfile[0:200]", and it always produces "<tarfile.ExFileObject object at 0x10053df10>".

        import os
        import sys
        import tarfile
        from xml.dom.minidom import parseString

        tar = tarfile.open("data/ucd.all.flat.tar.gz", "r")
        getfile = tar.extractfile("ucd.all.flat.xml")
        myfile = str(getfile)
        print myfile[0:200]
        output = parseString(getfile).toprettyxml()
        print output
        tar.close()
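
    The object extractfile() returns is file-like, so the missing step is read(): it yields the document contents as a string, which parseString() accepts, whereas str(getfile) merely formats the object's repr. A minimal sketch:

        import tarfile
        from xml.dom.minidom import parseString

        tar = tarfile.open("data/ucd.all.flat.tar.gz", "r")
        getfile = tar.extractfile("ucd.all.flat.xml")
        myfile = getfile.read()      # the XML text itself, not the object repr
        print myfile[0:200]          # first 200 characters of the document
        print parseString(myfile).toprettyxml()
        tar.close()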

    Read the article
