Search Results

Search found 98885 results on 3956 pages for 'server xml'.


  • Command line or library "compare tables" utility for SQL server with comprehensive diff output to a

    - by MicMit
I can't find anything like that. Commercial or free tools (XSQL Lite is suitable for my case) show diffs in grids with the option to export to CSV; they also generate sync SQL scripts when run from the command line. What I need is output as a comprehensive report (XML, HTML) suitable for parsing, so that I can show a similar diff grid in my own application (updated: old/new values for each column; added: all values for the row; deleted: all values for the row; etc.).

    Read the article

  • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS

    - by pinaldave
    Data Quality Services is a very important feature of SQL Server. I have recently started to explore it and I am learning some good concepts. There are two very important blog posts which one should read before continuing with this one: Installing Data Quality Services (DQS) on SQL Server 2012, and Connecting Error to Data Quality Services (DQS) on SQL Server 2012. This article is an introduction to Data Quality Services for beginners; we will be using an Excel file as the data source. In the first article we learned to install DQS. In this article we will see how to build a Knowledge Base and use it to identify the quality of the data, as well as to help correct bad-quality data. Here are the two very important steps we will be learning in this tutorial: building a new Knowledge Base, and creating a new Data Quality Project.

    Let us start by building the Knowledge Base. Click on New Knowledge Base. In our project we will be using Excel as the knowledge source. The Excel file has two columns: one is Colors and the other is Shade. They are independent columns, not related to each other. The point I am trying to show is that Column A contains unique data while Column B contains duplicate records. Clicking on New Knowledge Base brings up a screen where you enter the name of the new knowledge base. Clicking NEXT brings up the next screen, which lets you select the Excel file and the source columns; I have selected both Colors and Shade as source columns. Creating a domain is very important: you can create separate domains, or a composite domain built from Colors and Shade. As this is the first example, I will create separate domains - for Colors I will create the domain Colors, and for Shade the domain Shade. Clicking NEXT brings you to the screen where you can do data discovery; clicking START starts the processing of the source data. The processed data shows various information about the source: in our case it shows that the Colors column has unique data, whereas Shade has non-unique data with only two unique rows. In the next screen you can add more rows, as well as see the frequency of the data as the unique values are listed. Clicking NEXT publishes the knowledge base that was just created.

    Now that the knowledge base is created, we will take some random data and attempt a DQS run over it. I am using another Excel sheet here for simplicity; in reality you can easily use a SQL Server table instead. Click on New Data Quality Project to start a DQS project. The next screen asks which knowledge base to use; we will be using the Colors knowledge base we just created. In the Colors knowledge base we had two domains - 1) Colors and 2) Shade - and we will be using both mappings here. The user can select one or multiple column mappings at this point. Now comes the most important phase of the whole project: click on Start and it will run the cleansing process and show various results. In our case there were two columns to be processed, and it completed the task with the necessary information. It showed that in the Colors column it did not correct any value by itself, but for the Shade column it made a suggestion.

    We can train DQS to correct values, but let us keep that subject for future blog posts. Now click Next and keep the domain Colors selected on the left side. It shows that there are two incorrect values which need to be corrected. This is the place where a value, once corrected, will be auto-corrected in the future. I manually corrected the values here and clicked the Approve radio buttons. As soon as I clicked the Approve buttons, the rows disappeared from this tab and moved to the Corrected tab. If I had clicked Reject, the rows would have moved to the Invalid tab instead. On that screen you can see how the two corrected rows are displayed, and you can click on the Corrected tab to see the previously validated 6 rows which passed the DQS process. Now let us click on the Shade domain on the left side of the screen. This domain shows very interesting details: the DQS system guessed the correct answer, Dark, with a confidence level of 77%. That is quite a high confidence level, and manual observation also confirms that Dark is the correct answer. I clicked on Approve and the row moved to the Corrected tab. On the next screen DQS shows a summary of all the activities. It also shows how the correction of the data quality was performed. The user can export the data to a SQL Server table, CSV file or Excel, and has the option to export either the data with all the associated cleansing info, or the data only. I will select Data only for demonstration purposes. Clicking Export will generate the file; opening it, the data looks complete and corrected. Well, we have successfully completed the DQS process. The process is indeed very easy. I suggest you try this out yourself and you will find it very easy to learn. In future posts we will go over advanced concepts. Are you using this feature on your production server? If yes, would you please leave a comment with your environment and business need? It will be interesting to see where it is implemented. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • Should custom data elements be stored as XML or database entries?

    - by meteorainer
    There are a ton of questions like this, but they are mostly very generalized, so I'd like to get some views on my specific usage.

    General: I'm building a new project on my own in Django. Its focus will be on small businesses. I'd like to make it somewhat customizable for my clients so they can add to their customer/invoice/employee/whatever items. My models would reflect boilerplate fields that all instances of ModelX might have, for example: first name, last name, email, address... Then my users would be able to add fields for whatever data they might like. I'm still in the design phase and am building this myself, so I've got some options.

    Working on... Right now the 'extra items' models have a FK to the generic model (Customer and CustomerDataPoints, for example). All values in the extra data points are stored as char and will be coerced/parsed into their actual format at view-building time. In this build the user could theoretically add whatever values they want, group them in sets, and generally access them at will from the views relevant to that model. Pros: low storage overhead, very extensible, searchable. Cons: more SQL joins.

    My other option is to use some type of markup, or key-value pairing, stored directly on the boilerplate models. This could essentially be any low-overhead method, whether XML or literal strings. The view and form generated from the stored data would take control of validation and reorganizing on updates, then dump the data back in as a char/blob/whatever. Something like: <datapoint type='char' value='something' required='true' /> <datapoint type='date' value='01/01/2001' required='false' /> ... Pros: no joins needed; updates for validation and views are decoupled from the data. Cons: much higher storage overhead; limited capacity to search on the extra content.

    So my question is: if you didn't live within the constraints imposed by your company, which method would you use, and why? What benefits or pitfalls do you see down the road for me as a small business trying to help other small businesses? Just to clarify, I am not asking about custom UI elements - those I can handle with forms and template snippets. I'm asking primarily about data storage and retrieval of non-standardized data relative to a boilerplate model.
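
    For illustration, a minimal Django sketch of the first approach described above (the FK-based 'extra data points' design); all model and field names here are assumptions, not the asker's actual code:

      # Sketch of the FK-based "extra data points" approach. Everything is
      # stored as text and coerced at view-building time, as described above.
      from django.db import models

      class Customer(models.Model):
          # Boilerplate fields every Customer has
          first_name = models.CharField(max_length=100)
          last_name = models.CharField(max_length=100)
          email = models.EmailField()

      class CustomerDataPoint(models.Model):
          # One row per custom field value
          TYPE_CHOICES = [('char', 'Text'), ('date', 'Date'), ('int', 'Integer')]
          customer = models.ForeignKey(Customer, on_delete=models.CASCADE,
                                       related_name='data_points')
          name = models.CharField(max_length=100)        # e.g. 'loyalty_tier'
          data_type = models.CharField(max_length=10, choices=TYPE_CHOICES)
          value = models.CharField(max_length=255)       # coerced/parsed in views
          required = models.BooleanField(default=False)

    Fetching a customer plus extras is then one extra join (customer.data_points.all()), which is the trade-off the pros/cons above describe.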

    Read the article

  • Would this be a good web application architecture?

    - by Gustav Bertram
    My problem: our MVC-based framework does not allow us to cache only part of our output. Ideally we want to cache static and semi-static bits and run the dynamic bits. In addition, we need to consider data caching that reacts to database changes.

    My idea: the concept I came up with was to represent a page as a tree of XML fragment objects. (I say XML, but I mean XHTML.) Some of the fragments are dynamic, and can pull their data directly from models or other sources, but most of the fragments are static scaffolding. If a subtree of fragments is completely static, then I imagine it could unfold into pure XML that would then be cached as the text representation of its parent element. This process would ideally continue until we are left with a root element that contains all of the static XML, plus a couple of dynamic XML fragments that are resolved and attached to the relevant nodes of the XML tree just before the page is displayed. In addition to separating content into dynamic and static fragments, some fragments could be dynamic and cached. A simple expiry time which propagates up through the XML fragment tree would indicate that a specific fragment should periodically be refreshed: a newspaper section or front page does not need to be updated each second; minutes or sometimes even longer is sufficient. Other fragments would be dynamic and uncached - typically too many articles are viewed for them all to be cached, or the cache would overflow - though some individual articles may be cached if they are extremely popular.

    Functional notes: the folding mechanism could be smart enough to judge when it would be more profitable to fold a dynamic cached fragment and propagate its expiry date to the parent fragment, or to keep it separate and simply attach it to the XML tree when resolving the page. If some dynamic cached fragments are associated with database objects through mechanisms like a globally unique content id, then changes to the database could trigger changes to the output cache. If fragments store the identifiers of their parent fragments, they could trigger a refolding process that would then include the updated data. A set of pure XML with an ordered array of fragment objects (each storing the identifying information of the node to which it should be attached) can be resolved in a fairly simple way by walking the XML tree and merging in the data from the fragments. Because it is not necessary to parse and construct the entire tree in memory before attaching nodes, processing should be fairly fast. The identifier of each fragment would be a combination of relevant identity data and the type of fragment object. Cached parent fragments would contain references to these identifiers, in order to then either pull them from the fragment cache or run their code. The controller's responsibility is reduced to making changes to the database and telling the root XML fragment object to render itself.

    My question has two parts: Is this a good design? Are there any obvious flaws I'm missing? Has somebody else thought of this before - references? Is there an existing alternative that I should consider, a cool templating engine maybe?
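
    A compact sketch of the folding idea in Python (chosen only because the question's framework isn't named); the class shape and the ttl convention are assumptions:

      # Sketch: a fragment tree where static subtrees fold into cached text
      # and the tightest expiry anywhere in a subtree propagates upward.
      import time

      class Fragment:
          def __init__(self, template, children=(), ttl=None):
              self.template = template      # e.g. "<div id='hd'>{children}</div>"
              self.children = list(children)
              self.ttl = ttl                # None = static, 0 = never cache, n = seconds
              self._cached = None
              self._expires = None          # None = folded text never goes stale

          def subtree_ttl(self):
              # "Expiry propagates up the tree": a subtree lives only as long
              # as its most short-lived descendant.
              ttls = [self.ttl] + [c.subtree_ttl() for c in self.children]
              finite = [t for t in ttls if t is not None]
              return min(finite) if finite else None

          def fold(self):
              now = time.time()
              if self._cached is not None and (self._expires is None or now < self._expires):
                  return self._cached
              text = self.template.format(
                  children="".join(c.fold() for c in self.children))
              ttl = self.subtree_ttl()
              if ttl != 0:                  # a ttl of 0 below means: always re-render here
                  self._cached = text
                  self._expires = None if ttl is None else now + ttl
              return text

    Fully static subtrees fold once and are served forever; a subtree containing a 60-second fragment re-folds every minute; uncached fragments force re-rendering at every ancestor while cached siblings still serve their own folded text.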

    Read the article

  • How to execute msdb.dbo.sp_start_job from a stored procedure in a user database in SQL Server 2005

    - by Ram
    Hi everyone, I am trying to execute msdb.dbo.sp_start_job from MyDB.dbo.MyStoredProc in order to run MyJob.

    1) I know that if I give the user the SQLAgentUser role, he will be able to run the jobs that he owns (BUT THIS IS WHAT I OBSERVED: THE USER WAS ALSO ABLE TO START/STOP/RESTART THE SQL AGENT, SO I DO NOT WANT TO GO THIS ROUTE). Let me know if I am wrong, but I do not understand why such an under-privileged user would be able to start/stop the agent.

    2) I know that if I grant the executing user execute permissions on msdb.dbo.sp_start_job and enable ownership chaining, or enable TRUSTWORTHY on the user database, it would work (BUT I DO NOT WANT TO ENABLE OWNERSHIP CHAINING NOR TRUSTWORTHY ON THE USER DATABASE).

    3) I think this can be done by code signing. In the user database: i) create a stored proc MyDB.dbo.MyStoredProc; ii) create a certificate job_exec; iii) sign MyDB.dbo.MyStoredProc with the certificate job_exec; iv) export the certificate. In msdb: i) import the certificate; ii) create a user derived from this certificate; iii) grant AUTHENTICATE to this derived user; iv) grant execute on msdb.dbo.sp_start_job to the derived user; v) grant execute on msdb.dbo.sp_start_job to the user executing MyDB.dbo.MyStoredProc. But I tried it and it did not work for me - I don't know which piece I am missing or doing wrong, so please provide me with a simple example (with scripts) for executing msdb.dbo.sp_start_job from a user stored proc MyDB.dbo.MyStoredProc using code signing. Many thanks in advance, Ram
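
    A sketch of the full signing sequence outlined in point 3, with assumed object names and a placeholder password; the piece most often missed is granting AUTHENTICATE in msdb to the certificate-derived user:

      -- In the user database: create the certificate, sign the proc, export the cert
      USE MyDB;
      CREATE CERTIFICATE job_exec
          ENCRYPTION BY PASSWORD = 'StrongP@ssw0rd'   -- placeholder, assumption
          WITH SUBJECT = 'Sign procs that start Agent jobs';
      ADD SIGNATURE TO dbo.MyStoredProc
          BY CERTIFICATE job_exec WITH PASSWORD = 'StrongP@ssw0rd';
      BACKUP CERTIFICATE job_exec TO FILE = 'C:\temp\job_exec.cer';  -- public key only

      -- In msdb: import the cert, derive a user from it, grant what it needs
      USE msdb;
      CREATE CERTIFICATE job_exec FROM FILE = 'C:\temp\job_exec.cer';
      CREATE USER job_exec_user FOR CERTIFICATE job_exec;
      GRANT AUTHENTICATE TO job_exec_user;
      GRANT EXECUTE ON dbo.sp_start_job TO job_exec_user;

    With the proc signed, any user who can EXECUTE MyDB.dbo.MyStoredProc gets the certificate user's msdb permissions only while inside the proc, with no TRUSTWORTHY or ownership chaining required.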

    Read the article

  • Having trouble connecting Magento to an external Windows database server using Windows Azure

    - by Kevin H
    "I tried to make this easy to read through" I am using Ubuntu 12.04 LTS for Magento and installed these commands onto the system: sudo apt-get install apache2 sudo apt-get install php5 libapache2-mod-php5 sudo apt-get install php5-mysql sudo apt-get install php5-curl php5-mcrypt php5-gd php5-common sudo apt-get install php5-gd I used Windows Server 2008 R2 August 2012 for Mysql Server For a reference, I used http://www.windowsazure.com/en-us/manage/windows/common-tasks/install-mysql/ When the server was setup, I added an empty disk to it Then, I added endpoints 3306 Next I accessed the server remotely After that, I formatted the empty disk and was inserted as F: Next I downloaded Mysql from http://*.mysql.com version Windows (x86, 64-bit), MSI Installer 5.5.28 In the installation process, I used these settings: Typical Setup - Clicked Next, install, next Chose Detailed Configuration - Clicked next Chose Dedicated MySQL Server Machine - Clicked Next Chose Transactional Database Only - Clicked Next Chose the "F:" Drive - Clicked Next Chose Online Transactional Processing (OLTP) - Clicked Next For Networking Options, I checkmarked 'Enable TCP/IP Networking" 'Add firewall exception for this port' 'Enable Strict Mode' - Clicked Next Chose Standard Character Set - Clicked Next For Windows Options, I checkedmarked 'Install as Window Service" 'Launch the MySQL Server automatically' 'Include Bin Directory in Windows PATH - Clicked Next For Security Options, I checkmarked 'Modify Security Settings' and set root password - Clicked Next Finally clicked Execute and Finish These are the Firewall Setting that I set I clicked inbound rules Properties Scope Allow IP Address and used the internal Address for Magento Server Clicked Apply and exited Next, I opened up MySQL 5.x Command Line Client Entered Root Password Then entered these commands mysql create database magento; mysql Create user magentouser identified by 'password'; mysql Grant select, insert, create, alter, update, delete, lock tables on magento.* to magentouser mysql exit Finally, I opened up the Magento Downloader Magento validation has approved all PHP version is right. Your version is 5.3.10-1ubuntu3.4. PHP Extension curl is loaded PHP Extension dom is loaded PHP Extension gd is loaded PHP Extension hash is loaded PHP Extension iconv is loaded PHP Extension mcrypt is loaded PHP Extension pcre is loaded PHP Extension pdo is loaded PHP Extension pdo_mysql is loaded PHP Extension simplexml is loaded These are all installed on Magento Server For the Database Connection, I used: The Database server only has MySQL 5.5 Server installed on it Host - Internal IP address User Name - The User I created when setting up database Password - The Password I created when setting up database For the password, I did some research and found out that Magento only accepts alphanumeric, so I went and set it up again and used only alphanumeric for the User password Now, I am still getting Accessed denied for database Connection. Also, I have tryed to setup mysql on independant Linux Server but kept getting errors. When, I found the solution. Wouldn't work, so I decided to try Windows. These is the questions, I have been asking and researching to debug this issue Is it because I am using Linux for magento and Windows for Database. 
I have had no luck in finding a reason why this wouldn't work There must be something, I am missing I also researched the difference between linux sql databases and windows sql databases but have not come to conclusion, if installing Mysql on windows would make a difference in syntax and coding. I have spent a lot of time looking into this and need some help with direction on how to complete my project. Any type of help would be appreciated.
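
    One thing worth ruling out, sketched under the assumption that host matching is the problem: MySQL treats 'user'@'host' pairs as distinct accounts, and the GRANT must name the same pair that the remote Magento box resolves to. Recreating the user and grant explicitly for any host ('%') makes that unambiguous:

      -- Sketch: explicit user@host account and matching grant
      CREATE USER 'magentouser'@'%' IDENTIFIED BY 'password1';
      GRANT SELECT, INSERT, CREATE, ALTER, UPDATE, DELETE, LOCK TABLES
          ON magento.* TO 'magentouser'@'%';
      FLUSH PRIVILEGES;

    It is also worth confirming the grant actually took effect with SHOW GRANTS FOR 'magentouser'@'%'; before testing from the Magento side again.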

    Read the article

  • How to configure/save layout of SQL Server's Log File Viewer?

    - by gernblandston
    When I'm viewing the job history of a particular SQL Agent Job, I typically want to see whether it succeeded, its duration and maybe the duration of the individual steps of the job. When I open the history in the Log File Viewer, I always need to scroll over and shrink the 'Message' column and drag the 'Duration' column over next to the 'Step Name' column. Is there a way to configure the layout of the Log File Viewer (e.g. reposition columns, resize columns) and save it for future sessions? Thanks!

    Read the article

  • secure data transport between web server and database server

    - by atypicalgeek
    I asked this question on Stack Overflow and it was suggested I try here, so here goes... I'm planning on provisioning a web server and a database server in a server farm environment. They will be on the same network but not in the same domain; both run Windows Server 2008, and the database server runs SQL Server 2008. My question: what is the best way to secure data in transit between the servers? I've looked into IPsec and SSL but I'm not sure how to go about implementing either.

    Read the article

  • How to stop access to a file while it is being uploaded by FTP on Windows Server 2008

    - by Mr. Flibble
    I'm using FTP 7.5 on Windows 2008 R2. When I upload a file, I'm able to move it before the upload has completed. Is there a way to stop this? I see an option under Advanced settings > Behaviour > File Handling > 'Allow reading files while uploading', but this doesn't seem to do the trick; I guess it's write access that I need to stop. IIS6 seemed to have this functionality by default.

    Read the article

  • legitimacy of the tasks in the task scheduler

    - by Eyad
    Is there a way to know the source and legitimacy of the tasks in the Task Scheduler in Windows Server 2008 and 2003? Can I check whether a task was added by Microsoft (e.g. from SCCM) or by a 3rd-party application? For each task in the Task Scheduler, I want to verify that the task has not been created by a third-party application: I only want to allow standard Microsoft tasks and disable all other non-standard tasks. I have created a PowerShell script that goes through all the XML files in the C:\Windows\System32\Tasks directory, and I was able to read all the XML task files successfully, but I am stuck on how to validate the tasks. Here is the script for your reference:

      Function TaskSniper() {
          # Getting all the files in the Tasks folder
          $files = Get-ChildItem "C:\Windows\System32\Tasks" -Recurse | Where-Object {!$_.PSIsContainer};
          [Xml] $StandardXmlFile = Get-Content "Edit Me";

          foreach ($file in $files) {
              # Constructing the file path
              $path = $file.DirectoryName + "\" + $file.Name
              # Reading the file as an XML doc
              [Xml] $xmlFile = Get-Content $path

              # DS SEE: http://social.technet.microsoft.com/Forums/en-US/w7itprogeneral/thread/caa8422f-6397-4510-ba6e-e28f2d2ee0d2/
              # (Get-AuthenticodeSignature C:\Windows\System32\appidpolicyconverter.exe).Status -eq "Valid"

              # Display something
              $xmlFile.Task.Settings.Hidden
          }
      }

    Thank you
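
    Building on the Get-AuthenticodeSignature hint already in the script's comment, one hedged approach is to check whether the binary each task launches is validly signed. This is only a sketch of that idea (it assumes the Task/Actions/Exec/Command XML layout and reuses $files from the loop above), not a complete legitimacy test - tasks can also be vetted by registration info such as <Author>:

      # Sketch: flag tasks whose executable is unsigned or invalidly signed
      foreach ($file in $files) {
          [Xml] $task = Get-Content $file.FullName
          $command = $task.Task.Actions.Exec.Command
          if ($command) {
              # Expand %windir%-style variables and strip quotes before checking
              $exe = [Environment]::ExpandEnvironmentVariables($command.Trim('"'))
              if (Test-Path $exe) {
                  $sig = Get-AuthenticodeSignature -FilePath $exe
                  if ($sig.Status -ne 'Valid') {
                      Write-Output "Unsigned or invalid: $($file.Name) -> $exe"
                  }
              }
          }
      }

    A signature check alone won't distinguish Microsoft from other valid signers; inspecting $sig.SignerCertificate.Subject narrows it further.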

    Read the article

  • SQL Server 2005: Top 1 * for a unique column?

    - by Echilon
    I have data in a table (below), and I need to select the most recent update from each user. The data here is sorted by date; what I want is the 'SomeData' value from the most recent row for each user. TOP 1 SomeData isn't going to work because it only returns one row overall, not one per user. Is this even possible using only SQL?

      Date       SomeData   User
      ...
      8/5/2010   2.2        UserC
      4/5/2010   1.1        UserA
      3/5/2010   9.4        UserB
      1/5/2010   3.7        UserA
      1/5/2010   6.1        UserB
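
    A sketch of the standard per-group top-1 pattern available in SQL Server 2005, using ROW_NUMBER(); the table name SomeTable is an assumption, since the question doesn't give one:

      -- Latest row per user: number each user's rows newest-first, keep row 1
      SELECT [Date], SomeData, [User]
      FROM (
          SELECT [Date], SomeData, [User],
                 ROW_NUMBER() OVER (PARTITION BY [User] ORDER BY [Date] DESC) AS rn
          FROM SomeTable
      ) ranked
      WHERE rn = 1;

    If two rows can share a user's latest date, add a tie-breaker column to the ORDER BY so the result is deterministic.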

    Read the article

  • VPN Connection Causes Internal LAN Connection Loss with Server

    - by sleepisfortheweak
    I've tried configuring basic PPTP VPN at my small business using a number of different tutorials. As far as I can tell, the actual VPN connection works fine, but upon connecting a client, the server 'disappears' from the internal LAN. The RRAS service must be stopped before the connection is restored.

    My setup: the network is simply a DSL gateway/router to the outside, functioning as NAT/firewall/DHCP. The server is a Win Server 2008 machine at fixed IP 192.168.1.200. The server has 1 NIC, so I used the 'custom' option when configuring RRAS. The RRAS settings should be default, except that I've disabled ports for connection types I'm not using and reduced PPTP ports to 10. I've also created an address pool and disabled DHCP packet forwarding. The server only functions as a file share and now a VPN server. Local LAN computers all have network shares mapped to the server, authenticated against local users/groups set up on the server.

    The problem: the moment a client connects through VPN, the server 'disappears' from the local network. All mapped drives disconnect and there is no response to a ping 192.168.1.200. Even if the client disconnects, the server does not re-appear at that address until the RRAS service is stopped.

    I've tried: using an address pool inside and outside the local subnet; using DHCP Relay; checking inbound/outbound filters (none enabled). The fact that nothing I've tried has had any effect, and that I can connect and successfully obtain an IP, tells me that it's something more fundamental I'm missing. My gut tells me it's something to do with the second IP address added by the VPN client somehow taking over the interface, or traffic from the local LAN accidentally getting routed to the VPN client instead of being handled at the server once RRAS has become 'active' when a client connects. Hopefully this may be obvious to someone with real IT experience. I've been doing this a while and have almost never been stumped. I'm starting to think it might actually be something tricky, since my setup is pretty basic yet refuses to work. I'll be happy to include more info if this doesn't ring any bells right away for anyone. Thanks

    Read the article

  • JavaScript and XML Dom - Nested Loop

    - by BSteck
    So I'm a beginner at XML DOM and JavaScript, but I've run into an issue. I'm using a script to display my XML data in a table on an existing site. The problem comes in nesting a loop in my JavaScript code. Here is my XML:

      <?xml version="1.0" encoding="utf-8"?>
      <book_list>
        <author>
          <first_name>Mary</first_name>
          <last_name>Abbott Hess</last_name>
          <books>
            <title>The Healthy Gourmet Cookbook</title>
          </books>
        </author>
        <author>
          <first_name>Beverly</first_name>
          <last_name>Bare Bueher</last_name>
          <books>
            <title>Cary Grant: A Bio-Bibliography</title>
            <title>Japanese Films</title>
          </books>
        </author>
        <author>
          <first_name>James P.</first_name>
          <last_name>Bateman</last_name>
          <books>
            <title>Illinois Land Use Law</title>
          </books>
        </author>
      </book_list>

    I then use this JavaScript code to read and display the data:

      <script type="text/javascript">
      if (window.XMLHttpRequest) {
        xhttp = new XMLHttpRequest();
      } else { // Internet Explorer 5/6
        xhttp = new ActiveXObject("Microsoft.XMLHTTP");
      }
      xhttp.open("GET", "books.xml", false);
      xhttp.send("");
      xmlDoc = xhttp.responseXML;

      document.write("<table>");
      var x = xmlDoc.getElementsByTagName("author");
      for (i = 0; i < x.length; i++) {
        document.write("<tr><td>");
        document.write(x[i].getElementsByTagName("first_name")[0].childNodes[0].nodeValue);
        document.write("&nbsp;");
        document.write(x[i].getElementsByTagName("last_name")[0].childNodes[0].nodeValue);
        document.write("</td><td>");
        document.write(x[i].getElementsByTagName("title")[0].childNodes[0].nodeValue);
        document.write("</td></tr>");
      }
      document.write("</table>");
      </script>

    The code works well except that it only returns the first title element of each author. I somewhat understand why it does that, but I don't know how to nest another loop so that the script displays all the titles for an author, not just the first. Whenever I try to nest a loop it breaks the entire script.
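
    A minimal sketch of the nested loop being asked for, reusing the same xmlDoc: calling getElementsByTagName on one author element scopes the search to that author's titles only.

      // Sketch: one table row per author, listing all of that author's titles.
      // Assumes xmlDoc has been loaded exactly as in the script above.
      var authors = xmlDoc.getElementsByTagName("author");
      document.write("<table>");
      for (var i = 0; i < authors.length; i++) {
          var first = authors[i].getElementsByTagName("first_name")[0].childNodes[0].nodeValue;
          var last  = authors[i].getElementsByTagName("last_name")[0].childNodes[0].nodeValue;
          document.write("<tr><td>" + first + "&nbsp;" + last + "</td><td>");
          // Inner loop: searching from authors[i] finds only this author's titles
          var titles = authors[i].getElementsByTagName("title");
          for (var j = 0; j < titles.length; j++) {
              if (j > 0) document.write("<br/>");
              document.write(titles[j].childNodes[0].nodeValue);
          }
          document.write("</td></tr>");
      }
      document.write("</table>");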

    Read the article

  • Error Loading Simple XML

    - by Rooneyl
    I have an XML document (generated from MS Word 2010) which I am trying to process using SimpleXML. A sample of the XML:

      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <w:document xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" mc:Ignorable="w14 wp14">
        <w:body>
          <w:p w:rsidR="005B1098" w:rsidRDefault="005B1098"/>
          <w:p w:rsidR="00F254A4" w:rsidRDefault="00F254A4"/>
          <w:p w:rsidR="00F254A4" w:rsidRPr="008475A1" w:rsidRDefault="00C15492" w:rsidP="008475A1">
            <w:pPr>
              <w:jc w:val="center"/>
              <w:rPr><w:b/><w:sz w:val="44"/><w:szCs w:val="44"/></w:rPr>
            </w:pPr>
            <w:r w:rsidRPr="008475A1">
              <w:rPr><w:b/><w:sz w:val="44"/><w:szCs w:val="44"/></w:rPr>
              <w:t>Test file</w:t>
            </w:r>
          </w:p>
          <w:p w:rsidR="00C15492" w:rsidRPr="008475A1" w:rsidRDefault="00C15492" w:rsidP="008475A1">
            <w:pPr>
              <w:jc w:val="center"/>
              <w:rPr><w:sz w:val="20"/><w:szCs w:val="20"/></w:rPr>
            </w:pPr>
            <w:r w:rsidRPr="008475A1">
              <w:rPr><w:sz w:val="20"/><w:szCs w:val="20"/></w:rPr>
              <w:t>another paragraph</w:t>
            </w:r>
          </w:p>
        </w:body>
      </w:document>

    I am trying to open it using:

      if (!$simple_xml = simplexml_load_file($content)) {
          trigger_error('Error reading XML file', E_USER_ERROR);
      } else {
          echo 'loaded';
      }

    And I get the error:

      Message: simplexml_load_file(): I/O warning : failed to load external entity " Test file another paragraph "

    Any ideas?
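
    One likely culprit, sketched below: simplexml_load_file() expects a file path, and the 'failed to load external entity' text (which is just the document's own character data) suggests $content already holds the XML string, so simplexml_load_string() is the call to make. WordprocessingML also puts everything in the w: namespace, which SimpleXML will not traverse without children():

      // Sketch: parse the string, not a "file", then walk the w: namespace
      $xml = simplexml_load_string($content);
      if ($xml === false) {
          trigger_error('Error reading XML', E_USER_ERROR);
      }
      $w = 'http://schemas.openxmlformats.org/wordprocessingml/2006/main';
      foreach ($xml->children($w)->body->children($w)->p as $p) {
          // each $p is one <w:p> paragraph; drill into ->r->t the same way
      }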

    Read the article

  • Parse Nested XML tags with the same name

    - by footose
    Let's take a simple XML document:

      <x>
        <e>
          <e>
            <e>Whatever 1</e>
          </e>
        </e>
        <e>
          <e>
            <e>Whatever 2</e>
          </e>
        </e>
        <e>
          <e>
            <e>Whatever 3</e>
          </e>
        </e>
      </x>

    Using the standard org.w3c.dom, I can get the nodes in x by doing:

      NodeList fullnodelist = doc.getElementsByTagName("x");

    But if I want to return the next set of "e", I try to use something like:

      Element element = (Element) fullnodelist.item(0);
      NodeList nodes = element.getElementsByTagName("e");

    expecting it to return 3 nodes (because there are 3 top-level sets of "e"), but instead it returns 9 - because it apparently gets all entries named "e". This would be fine in the above case, because I could probably iterate through and find what I'm looking for. The problem I'm having is that when the XML file looks like the following:

      <x>
        <e>
          <pattern>whatever</pattern>
          <blanks>
            <e>Something Else</e>
          </blanks>
        </e>
        <e>
          <pattern>whatever</pattern>
          <blanks>
            <e>Something Else</e>
          </blanks>
        </e>
      </x>

    requesting the "e" elements returns 4, instead of (what I expect) 2. Am I just not understanding how the DOM parsing works? Typically in the past I have used my own XML documents, so I would never name the items like this, but unfortunately this is not my XML file and I have no choice but to work with it. What I thought I would do is write a loop that "drills down" nodes so that I could group each node together:

      public static NodeList getNodeList(Element pelement, String find) {
          String[] nodesfind = Utilities.Split(find, "/");
          NodeList nodeList = null;
          for (int i = 0; i <= nodesfind.length - 1; i++) {
              nodeList = pelement.getElementsByTagName(nodesfind[i]);
              pelement = (Element) nodeList.item(i);
          }
          // value of the node we are looking for
          return nodeList;
      }

    so that if you passed "s/e" into the function, it would return the 2 nodes (or elements - maybe I'm using the terminology incorrectly?) that I'm looking for. Instead it returns all of the "e" nodes within that node. I'm using J2SE for this, so options are rather limited - I can't use any third-party XML parsers. Anyway, if anyone is still with me and has a suggestion, it would be appreciated.
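
    The root cause: getElementsByTagName() matches all descendants, not just immediate children, which is why 9 (and 4) come back. A sketch of a direct-children-only helper using only the standard org.w3c.dom API (so it fits the no-third-party constraint); the method name is illustrative:

      import org.w3c.dom.Element;
      import org.w3c.dom.Node;
      import org.w3c.dom.NodeList;
      import java.util.ArrayList;
      import java.util.List;

      // Returns only the immediate children of parent with the given tag name
      static List<Element> directChildren(Element parent, String name) {
          List<Element> result = new ArrayList<Element>();
          NodeList kids = parent.getChildNodes();
          for (int i = 0; i < kids.getLength(); i++) {
              Node n = kids.item(i);
              if (n.getNodeType() == Node.ELEMENT_NODE && n.getNodeName().equals(name)) {
                  result.add((Element) n);
              }
          }
          return result;
      }

    Calling directChildren(root, "e") on the root returns exactly the 3 (or 2) top-level <e> elements; calling it again on each result walks down one level at a time, which is the "drill down" behaviour the getNodeList attempt was after.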

    Read the article

  • Parsing xml file that comes in as one object per line

    - by Casey
    I haven't been here in so long, I forgot my prior account! Anyway, I am working on parsing an XML document that comes in ugly. It is for banking statements. Each line is a complete <statement>all tags</statement> document. What I need to do is read this file in and parse the XML, formatting it to be more human-readable at the same time. The original input looks like this:

      <statement><accountHeader><fiAddress></fiAddress><accountNumber></accountNumber><startDate>20140101</startDate><endDate>20140228</endDate><statementGroup>1</statementGroup><sortOption>0</sortOption><memberBranchCode>1</memberBranchCode><memberName></memberName><jointOwner1Name></jointOwner1Name><jointOwner2Name></jointOwner2Name></summary></statement>
      <statement><accountHeader><fiAddress></fiAddress><accountNumber></accountNumber><startDate>20140101</startDate><endDate>20140228</endDate><statementGroup>1</statementGroup><sortOption>0</sortOption><memberBranchCode>1</memberBranchCode><memberName></memberName><jointOwner1Name></jointOwner1Name><jointOwner2Name></jointOwner2Name></summary></statement>
      <statement><accountHeader><fiAddress></fiAddress><accountNumber></accountNumber><startDate>20140101</startDate><endDate>20140228</endDate><statementGroup>1</statementGroup><sortOption>0</sortOption><memberBranchCode>1</memberBranchCode><memberName></memberName><jointOwner1Name></jointOwner1Name><jointOwner2Name></jointOwner2Name></summary></statement>

    I need the final output to be as follows:

      <statement>
        <name></name>
        <address></address>
      </statement>

    This is fine and dandy. I am using the following, which is very slow - considering 5.1 million lines, a 254k data file, and about 60k statements, it takes around 8 minutes:

      foreach (String item in lines)
      {
          XElement xElement = XElement.Parse(item);
          sr.WriteLine(xElement.ToString().Trim());
      }

    Then, when the file is formatted, comes the part that sucks: I need to check every single tag in the transaction elements, and if a tag is missing that could be there, I have to fill it in. Our designer software will default in a value for any tag that could exist but is missing on the current object - it defaults in the value of a prior object that was not null. "I know, and they swear up and down it is not a bug... ok?" So that also takes about 5 to 10 minutes. I need to break all this down and find a faster method for working with the initial XML. This is a preprocess action and cannot take that long if not necessary; it just seems redundant. Is there a better way to parse the XML, or is this the best I can do? Right now I parse the XML, write to a temp file, then read that file back in to produce the output file while inserting the missing tags. Two IO runs for one process. Yuck.
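
    Since each line is a complete statement document, one streaming pass can parse, repair, and pretty-print without holding all 5.1 million lines in memory or writing a temp file. A sketch; EnsureTags is a hypothetical placeholder for the missing-tag fill-in logic:

      // Sketch: single pass, no intermediate temp file
      using System.IO;
      using System.Xml.Linq;

      static void PreprocessStatements(string inPath, string outPath)
      {
          using (var reader = new StreamReader(inPath))
          using (var writer = new StreamWriter(outPath))
          {
              string line;
              while ((line = reader.ReadLine()) != null)
              {
                  if (string.IsNullOrWhiteSpace(line)) continue;
                  XElement statement = XElement.Parse(line);
                  EnsureTags(statement);                   // hypothetical: add missing tags here
                  writer.WriteLine(statement.ToString());  // ToString() emits indented XML
              }
          }
      }

    Fixing the missing tags on the in-memory XElement before it is ever written collapses the two IO runs into one, which is usually where most of the 8 minutes goes.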

    Read the article

  • simple XML question for perl - how to retrieve specific elements

    - by Jeff
    I'm trying to figure out how to loop through XML, but I've read a lot and I'm still getting stuck. Here's the info: I'm using the Wordnik API to retrieve XML with XML::Simple:

      $content = get($url);
      $r = $xml->XMLin("$content");

    The actual XML looks like this:

      <definitions>
        <definition sequence="0" id="0">
          <text>
            To withdraw one's support or help from, especially in spite of duty, allegiance, or responsibility; desert: abandon a friend in trouble.
          </text>
          <headword>abandon</headword>
          <partOfSpeech>verb-transitive</partOfSpeech>
        </definition>
        <definition sequence="1" id="0">
          <text>
            To give up by leaving or ceasing to operate or inhabit, especially as a result of danger or other impending threat: abandoned the ship.
          </text>
          <headword>abandon</headword>
          <partOfSpeech>verb-transitive</partOfSpeech>
        </definition>
        <definition sequence="2" id="0">
          <text>
            To surrender one's claim to, right to, or interest in; give up entirely. See Synonyms at relinquish.
          </text>
          <headword>abandon</headword>
          <partOfSpeech>verb-transitive</partOfSpeech>
        </definition>
        <definition sequence="3" id="0">
        ...

    What I want is simply the FIRST definition's part of speech. I'm using this code, but it's getting the LAST definition's POS:

      if ($r->{definition}->{0}->{partOfSpeech}) {
          $pos = $r->{definition}->{0}->{partOfSpeech};
      } else {
          $pos = $r->{definition}->{partOfSpeech};
      }

    I am pretty embarrassed by this since I know there's an obviously better way to do it. I would love to get something this simple working so I could loop through the elements more generally, but it just isn't working for me (no idea what to reference). I've tried many variations of the following - this is just my last attempt:

      while (my ($k, $v) = each %{$r->{definitions}->{definition}[0]->{sequence}->{partOfSpeech}}) {
          $v =~ s/'/'"'"'/g;
          $v = "'$v'";
          print "export $k=$v\n";
      }

    Lastly, when I do print Dumper($r) it gives me this:

      $VAR1 = {
        'definition' => {
          '0' => {
            'partOfSpeech' => 'noun',
            'sequence' => '6',
            'text' => 'A complete surrender of inhibitions.',
            'headword' => 'abandon'
          }
        }
      };

    (And that 'noun' you see is the last (6th) definition's partOfSpeech element.)
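
    The Dumper output shows what went wrong: XML::Simple's default KeyAttr folded the definitions into a hash keyed by the id attribute, and since every id is 0, each definition overwrote the previous one, leaving only the last (sequence 6). A sketch of the usual fix with explicit options:

      use XML::Simple;
      use LWP::Simple qw(get);

      my $content = get($url);
      # KeyAttr => [] stops XML::Simple from keying hashes on id/name attributes
      # (which is what made every definition clobber the last one);
      # ForceArray keeps <definition> a list even when there is only one.
      my $r = XMLin($content, KeyAttr => [], ForceArray => ['definition']);

      my $first = $r->{definition}[0];        # first definition, in document order
      my $pos   = $first->{partOfSpeech};     # its part of speech
      print "$pos\n";

    With those options, looping generally is just foreach my $def (@{ $r->{definition} }) { ... }.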

    Read the article

  • reorder XML elements or set an explicit template with XSLT

    - by Sash
    I tried the solution in my previous question (flattening XML to load via SSIS package), however it isn't working. I now know what I need to do, but I need some guidance on how to do it. Say I have the following XML structure:

      <person id="1">
        <name>John</name>
        <surname>Smith</surname>
        <age>25</age>
        <comment>
          <comment_id>1</comment_id>
          <comment_text>Hello</comment_text>
        </comment>
        <comment>
          <comment_id>2</comment_id>
          <comment_text>Hello again!</comment_text>
        </comment>
        <somethingelse>
          <id>1</id>
        </somethingelse>
        <comment>
          <comment_id>3</comment_id>
          <comment_text>Third Item</comment_text>
        </comment>
      </person>
      <person id="2">
        <name>John</name>
        <surname>Smith</surname>
        <age>25</age>
        <somethingelse>
          <id>1</id>
        </somethingelse>
      </person>
      ...

    If I load this into an SSIS package as an XML source, what I want is a table created per element type, giving a structured output:

      person table (name, surname, age)
      somethingelse table (id)
      comment table (comment_id, comment_text)

    What I end up getting instead is:

      person table (person_Id <-- internal SSIS id)
      name table
      surname table
      age table
      person_name table
      person_surname table
      person_comment_comment_id table
      etc...

    What I found is that if the elements and all inner elements are not consistent in format and order, I get the above anomaly, which makes things rather complex, especially when dealing with 80-100+ columns. Unfortunately I have no way of modifying the system (Lotus Notes) that produces these reports, so I was wondering whether I might be able to apply an explicit XSLT template that will realign each person's sub-elements (and the sub-collection elements such as comments)? Unless there is a quicker way to realign all inner elements. It seems the SSIS XML source requires a very consistent XML file, in the sense that if the name element is in position 1, then all subsequent name elements within a person parent have to be in position 1. SSIS copes with certain elements being missing from one parent to another, but if their ordering is not right - (A, B, C)(A, B, C)(A, C, B) - it kicks up a massive fuss! All help is appreciated! Thank you in advance.
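
    A sketch of an identity-transform stylesheet that copies everything intact but emits each person's children in one fixed order; the element names are taken from the sample above, so the select list would need extending to cover all 80-100+ columns:

      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:output method="xml" indent="yes"/>

        <!-- Identity template: copy everything by default -->
        <xsl:template match="@*|node()">
          <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
        </xsl:template>

        <!-- For <person>, emit children in an explicit, fixed order -->
        <xsl:template match="person">
          <xsl:copy>
            <xsl:apply-templates select="@*"/>
            <xsl:apply-templates select="name"/>
            <xsl:apply-templates select="surname"/>
            <xsl:apply-templates select="age"/>
            <xsl:apply-templates select="somethingelse"/>
            <xsl:apply-templates select="comment"/>
          </xsl:copy>
        </xsl:template>
      </xsl:stylesheet>

    SSIS can run this before the XML source via an XML Task configured for XSLT, so the package sees every person with identically ordered children.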

    Read the article

  • XML Outputting - PHP vs JS vs Anything Else?

    - by itsphil
    Hi everyone, I am working on developing a travel website which uses XML APIs to get its data. However, I am relatively new to XML and outputting it. I have been experimenting with using PHP to output a test XML file, but so far I have only managed to output a few records. As the question states, I need to know which technology will be best for this project. Some points to take into consideration: the website is going to be a large, heavy-traffic site (expedia/lastminute size); my skillset is PHP (intermediate/high skilled) and JavaScript (intermediate/high skilled). Below is an example of the XML that the API outputs:

      <?xml version="1.0"?>
      <response method="###" success="Y">
        <errors>
        </errors>
        <request>
          <auth password="test" username="test" />
          <method action="###" sitename="###" />
        </request>
        <results>
          <line id="6" logourl="###" name="Line 1" smalllogourl="###">
            <ships>
              <ship id="16" name="Ship 1" />
              <ship id="453" name="Ship 2" />
              <ship id="468" name="Ship 3" />
              <ship id="356" name="Ship 4" />
            </ships>
          </line>
          <line id="63" logourl="###" name="Line 2" smalllogourl="###">
            <ships>
              <ship id="492" name="Ship 1" />
              <ship id="454" name="Ship 2" />
              <ship id="455" name="Ship 3" />
              <ship id="421" name="Ship 4" />
              <ship id="401" name="Ship 5" />
              <ship id="404" name="Ship 6" />
              <ship id="405" name="Ship 7" />
              <ship id="406" name="Ship 8" />
              <ship id="407" name="Ship 9" />
              <ship id="408" name="Ship 10" />
            </ships>
          </line>
          <line id="41" logourl="###">
            <ships>
              <ship id="229" name="Ship 1" />
              <ship id="230" name="Ship 2" />
              <ship id="231" name="Ship 3" />
              <ship id="445" name="Ship 4" />
              <ship id="570" name="Ship 5" />
              <ship id="571" name="Ship 6" />
            </ships>
          </line>
        </results>
      </response>

    If possible, when suggesting which technology is best for this project, could you provide some getting-started guides or any other information? It would be very much appreciated. Thank you for taking the time to read this.
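
    As a reference point, either skillset can consume this feed; a short sketch of reading the structure above with PHP's SimpleXML, assuming the raw response body is in $xmlString:

      // Sketch: walk lines and their ships; attributes come back via [] access
      $response = simplexml_load_string($xmlString);
      foreach ($response->results->line as $line) {
          echo $line['name'], "\n";                     // may be empty (see id 41)
          foreach ($line->ships->ship as $ship) {
              echo "  ", $ship['id'], ": ", $ship['name'], "\n";
          }
      }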

    Read the article

  • parsing xml with php, children

    - by moustafa
    Hello. I successfully created my parser, and everything is working great except one thing, since my XML is formatted a little differently: I am totally lost on how to assign variables to the children of <photos>. The XML portion:

      <item>
        <url />
        <name />
        <photos>
          <photo>1020944_0.jpg</photo>
          <photo>1020944_1.jpg</photo>
          <photo>1020944_2.jpg</photo>
        </photos>
        <user_id />
      </item>

    The PHP code:

      <?
      global $insideitem, $tag, $name, $photos, $user_id;
      global $count, $db;

      $db = mysql_connect("localhost", "user", "pass");
      mysql_select_db("db_name", $db);
      $result = mysql_query("SELECT user_id FROM table", $db);
      while ($myrow = mysql_fetch_array($result)) {
          $uid = $myrow['user_id'];
          $UN_ID[$uid] = $uid;
      }
      $count = 1;
      $count2 = 1;

      // ************* START ELEMENT FUNCTION *********************
      function startElement($parser, $element_name, $attrs) {
          global $insideitem, $tag;
          if ($insideitem) {
              $tag = $element_name;
          } elseif ($element_name == "ITEM") {
              $insideitem = true;
          }
      }

      function endElement($parser, $element_name) {
          global $insideitem, $tag, $name, $photos, $user_id;
          global $count, $count2, $db, $UN_ID;
          if ($element_name == "ITEM") {
              if (!isset($UN_ID[$user_id])) {
                  $name    = addslashes($name);
                  $photo1  = addslashes($photos);
                  $photo2  = addslashes($photos);
                  $photo3  = addslashes($photos);
                  $photo4  = addslashes($photos);
                  $user_id = addslashes($user_id);
                  $sql = "INSERT INTO table (name, photo1, photo2, photo3, photo4, user_id)
                          VALUES ('$name', '$photo1', '$photo2', '$photo3', '$photo4', '$user_id')";
                  $resultupdate = mysql_query($sql);
              }
              $name = '';
              $photos = '';
              $user_id = '';
              $insideitem = false;
          }
      }

      function characterData($parser, $data) {
          global $insideitem, $tag, $name, $photos, $user_id;
          if ($insideitem) {
              switch ($tag) {
                  case "NAME":    $name    .= $data; break;
                  case "PHOTOS":  $photos  .= $data; break;
                  case "USER_ID": $user_id .= $data; break;
              }
          }
      }

      $xml_parser = xml_parser_create();
      xml_set_element_handler($xml_parser, "startElement", "endElement");
      xml_set_character_data_handler($xml_parser, "characterData");
      $fp = fopen("../myfile.xml", "r") or die("Error reading RSS data.");
      // Parse each 4KB chunk with the XML parser created above
      while ($data = fread($fp, 4096))
          xml_parse($xml_parser, $data, feof($fp))
              // Handle errors in parsing
              or die(sprintf("XML error: %s at line %d",
                  xml_error_string(xml_get_error_code($xml_parser)),
                  xml_get_current_line_number($xml_parser)));
      fclose($fp);

      // *********************** FREE MEMORY **********************
      xml_parser_free($xml_parser);
      ?>

    The number of <photo> tags can range between 1 and 4. I have tried searching everywhere for info on how to do this and tried everything, but I just can't get it. After several days of this giving me headaches, I really hope someone can enlighten me.
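
    A sketch of one way to capture each <photo> separately: bump a counter when a PHOTO element opens and accumulate into an array, so photos 1-4 land in distinct slots ($photo_idx and $photo_arr are new, illustrative names):

      // Sketch - modified handlers; everything else stays as above
      function startElement($parser, $element_name, $attrs) {
          global $insideitem, $tag, $photo_idx;
          if ($insideitem) {
              $tag = $element_name;
              if ($element_name == "PHOTO") $photo_idx++;   // advance to the next slot
          } elseif ($element_name == "ITEM") {
              $insideitem = true;
              $photo_idx = 0;
          }
      }

      function characterData($parser, $data) {
          global $insideitem, $tag, $name, $photo_arr, $user_id, $photo_idx;
          if ($insideitem) {
              switch ($tag) {
                  case "NAME":    $name .= $data; break;
                  case "PHOTO":   $photo_arr[$photo_idx] .= $data; break;  // per-photo
                  case "USER_ID": $user_id .= $data; break;
              }
          }
      }

    In endElement("ITEM"), $photo_arr[1] through $photo_arr[4] then map straight onto the photo1-photo4 columns (missing slots stay unset, so test with isset() and trim() the values before inserting).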

    Read the article

  • Issue Creating SQL Login for AppPoolIdentity on Windows Server 2008

    - by Ben Griswold
    IIS7 introduced the option to run your application pool as AppPoolIdentity, and with the release of IIS 7.5, AppPoolIdentity was promoted to the default option. You see this change if you're running Windows 7 or Windows Server 2008 R2. On my Windows 7 machine, I'm able to define my application pool identity and then create an associated database login via the SQL Server Management Studio interface - no problem. However, I ran into some trouble recently when installing my web application onto a Windows Server 2008 R2 64-bit machine. Strangely, the same approach failed: SSMS couldn't find the AppPoolIdentity user. Instead of using the tools, I created and executed the login via script and it worked fine. Here's the script, based on the DefaultAppPool identity, in case the same happens to you:

      CREATE LOGIN [IIS APPPOOL\DefaultAppPool] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
      USE [Chinook]
      CREATE USER [IIS APPPOOL\DefaultAppPool] FOR LOGIN [IIS APPPOOL\DefaultAppPool]

    Read the article

  • SQL Server 2005 - Syncing development/production databases

    - by hamlin11
    I've got a rather large SQL Server 2005 database that is under constant development. Every so often, I either get a new developer or need to deploy wide-scale schema changes to the production server. My main concern is deploying schema + data updates to developer machines from the "master" development copy. Is there some built-in functionality or tools for publishing schema + data in such a fashion? I'd like it to take as little time as possible. Can it be done from within SSMS? Thanks in advance for your time

    Read the article
