Search Results

Search found 91557 results on 3663 pages for 'server 2008'.


  • Sql Server Compact Edition 3.5: Does the data persist in the database file?

    - by jerbersoft
    Hi guys, I have a question about a SQL Server Compact Edition database used by a desktop application. I am completely new to SQL Server Compact Edition. The question is: does data inserted into the database persist even if the application is shut down or restarted? I ask because I can't seem to find my data when using SQL Server Management Studio to manage the database. Am I missing something? EDIT: Is SQL Server Compact Edition intended for caching local data only? Can't we use it the way we normally use SQL Server Express, for example managing data with SQL Server Management Studio?

    Read the article

  • SQL SERVER – Update Statistics are Sampled By Default

    - by pinaldave
    After reading my earlier post SQL SERVER – Create Primary Key with Specific Name when Creating Table on Statistics, I received another question from a blog reader. The question is as follows: Question: Are the statistics sampled by default? Answer: Yes. The sampling rate can be specified by the user and it can be anywhere from a very low value to 100%. Let us do a small experiment to verify this: we will create a very large table, update its statistics with the default options, and check whether the statistics are sampled or not. USE [AdventureWorks] GO -- Create Table CREATE TABLE [dbo].[StatsTest]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL, [LastName] [varchar](100) NULL, [City] [varchar](100) NULL, CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC) ) ON [PRIMARY] GO -- Insert 1 Million Rows INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City) SELECT TOP 1000000 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Update the statistics UPDATE STATISTICS [dbo].[StatsTest] GO -- Shows the statistics DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest) GO -- Clean up DROP TABLE [dbo].[StatsTest] GO Now let us observe the result of DBCC SHOW_STATISTICS. The result shows that the statistics are indeed sampled for a large dataset. The percentage of sampling is based on data distribution as well as the kind of data in the table. Before dropping the table, let us first check its size. The size of the table is 35 MB. Now, let us run the above code with a smaller number of rows. USE [AdventureWorks] GO -- Create Table CREATE TABLE [dbo].[StatsTest]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL, [LastName] [varchar](100) NULL, [City] [varchar](100) NULL, CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC) ) ON [PRIMARY] GO -- Insert 1 Hundred Thousand Rows INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City) SELECT TOP 100000 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Update the statistics UPDATE STATISTICS [dbo].[StatsTest] GO -- Shows the statistics DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest) GO -- Clean up DROP TABLE [dbo].[StatsTest] GO You can see that Rows Sampled is the same as the total number of rows in the table. In this case, the sample rate is 100%. Before dropping the table, let us also check the size of the table. The size of the table is less than 4 MB. Let us compare the two result sets for reference. Test 1: Total Rows: 1000000, Rows Sampled: 255420, Size of the Table: 35.516 MB Test 2: Total Rows: 100000, Rows Sampled: 100000, Size of the Table: 3.555 MB The reason sampling occurs in Test 1 is that the data space is larger than 8 MB and therefore uses more than 1024 data pages. If the data space is smaller than 8 MB and uses fewer than 1024 data pages, then sampling does not happen. Sampling helps reduce excessive data scans; however, it can sometimes reduce the accuracy of the statistics as well. Please note that this is just a sample test and there is no way it can be claimed as a benchmark test. The results can differ on different machines. There is a lot of other information that could be included on this subject; I will write a detailed post covering it soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics
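    The post notes that the sampling rate can be specified by the user. As a hedged illustration (an editorial addition, not part of the original post), UPDATE STATISTICS accepts explicit scan options for the StatsTest table used above; treat it as a sketch rather than part of the experiment:

        -- Sketch only: controlling the sampling rate explicitly
        UPDATE STATISTICS [dbo].[StatsTest] WITH FULLSCAN;          -- scan every row (100% sample)
        UPDATE STATISTICS [dbo].[StatsTest] WITH SAMPLE 50 PERCENT; -- sample roughly half the rows
        UPDATE STATISTICS [dbo].[StatsTest] WITH RESAMPLE;          -- reuse the most recent sampling rate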

    Read the article

  • SQL SERVER – What is a Technology Evangelist?

    - by pinaldave
    When you hear that someone is an “evangelist” the first thing that might pop into your mind is the Christian church. In fact, the term did come from Christianity, and basically means someone who spreads the news about their faith. In the technology world, the same definition is true. Technology evangelists are individuals who, professionally or in their spare time, spread the news about the latest new products. Sounds like a salesperson, right? No, they are absolutely different. Salespeople also keep up to date with a large number of people, and like to convince others to buy their product – and some will go to any lengths to sell! An evangelist, on the other hand, is brutally honest about the product, even if sometimes it means not making a sale. An evangelist is out there to tell the TRUTH. A salesperson needs to make sales. An Evangelist offers a Solution independent of the Technology used – a Salesperson offers a Particular Technology. With this definition in mind, you can probably think of a few technology evangelists you already know. Maybe it’s a relative or a neighbor, someone who loves keeping up with the latest trends and is always willing to tell you about them if you ask even the simplest question. And, in fact, they probably are evangelists and don’t even know it. For a long time, the work of technology evangelism was in the hands of the community and community technology leaders. Luckily, various organizations have now understood the importance of the community and of helping the community reach its goals. This has led them to create the role of “Technology Evangelist”. Let me talk about one of the most famous Evangelists of SQL Server technology. A Technology Evangelist belongs only to the technology, above any country, race, location or anything else. They are dedicated to the technology. Vinod Kumar is such a man, who has given a lot to the community. For years he was a Technology Evangelist for Microsoft, and maintained a blog that was dedicated to spreading his enthusiasm for his favorite products. He is one of the most respected Evangelists in the field, and has done a lot of work to define the job for other professionals. Vinod’s career has since progressed to the Microsoft Technology Center (read his post), but he continues to be a strong presence in the evangelism community. I have a lot of respect for Vinod. He has done a lot for the community and for technology evangelism. Everybody dreams of serving the community the way he does, and he is a great role model for evangelists everywhere. On his blog, Vinod created one of the best descriptions of a Technology Evangelist. It defined the position and also made the distinction between evangelist and salesperson extremely clear. I will include the highlights of that list here, because no one can say it better than Vinod: Bundle of energy – Passion is their middle name Wonderful Story tellers Empathy, Trust, Loyalty, Openness, Accessibility and Warmth Technology Enthusiast – Doers Love people, people and more people – Community oriented Unique Style and Leadership qualities !!! Self-Confident, Self-Motivated but a student (To read the full list, see: Evangelism Beyond Borders with Evangelists) His blog is a must-read for anyone interested in technology evangelism as a career or simply a hobby. His advice about how to gain an audience and become a trusted advisor is the best in the business. I think there is an evangelist in everyone. I, too, consider myself a technology evangelist. Regular readers of this blog will recognize that I am dedicated to bringing information to the masses, and that I pride myself on being brutally honest and giving every product fair consideration. I think there is no better way to put it than this: “Once an Evangelist – Always an Evangelist!” Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, MVP, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Evangelist

    Read the article

  • SQL SERVER – Error: Fix – Msg 208 – Invalid object name ‘dbo.backupset’ – Invalid object name ‘dbo.backupfile’

    - by pinaldave
    Just a day ago I got a very interesting email. Here is the email (modified a bit to make it relevant to this blog post). “Pinal, We are facing a very strange issue. One of our queries related to backup files and backup sets has suddenly stopped working in SSMS. It works fine in the application and in the stored procedure, but when we run it in SSMS it gives the following error. Msg 208, Level 16, State 1, Line 1 Invalid object name ‘dbo.backupfile’. Here are the queries we are trying to execute. SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id FROM dbo.backupset; SELECT logical_name, backup_size, file_type FROM dbo.backupfile; These queries give us details related to the backup set and backup files from when the backup was taken.” When I receive this kind of email, I usually have no direct answer. The claim that it works in the stored procedure and in the application but not in SSMS gives me no real data. I first asked him to check two things: Is he connected to the correct server? His answer was yes. Does he have enough permissions? His answer was that he was logged in as an admin. This meant there was something more to it, and I asked him to send me a screenshot of his SSMS. He promptly sent it, and as soon as I received the screenshot I knew what was going on. Before I say anything, take a look at the screenshot yourself and see if you can figure out why his queries are not working in SSMS. Just to make your life a bit easier, I have already given a hint in the image. The answer is very simple: the database context is the master database. To execute the above two queries, the database context has to be msdb. The tables backupset and backupfile belong to the msdb database only. Here are two workarounds for the above problem: 1) Change the context to msdb. When the above two queries are run as follows, they will not error out and will give the desired result. USE msdb GO SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id FROM dbo.backupset; SELECT logical_name, backup_size, file_type FROM dbo.backupfile; 2) Prefix the query with msdb. In cases where the above script is used in a stored procedure or as part of a bigger query, it is not possible to change the context of the whole query to a specific database. Use the three-part naming convention and prefix the tables with msdb. SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id FROM msdb.dbo.backupset; SELECT logical_name, backup_size, file_type FROM msdb.dbo.backupfile; A very simple solution, but one that sometimes keeps people wondering for an answer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I’ll-share-it-in-a-blog-post blog posts I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID: During normal execution this will get altered when the package is executed from the Execute Package Task however we want to make sure that at deployment-time they all have a default value of –1. Of course, they tend to get changed during development so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option but given that we have over 40 packages and we might want to carry out this reset fairly frequently I needed a more automated method so I turned to Visual Studio’s Find/Replace… feature Of course, we don’t know what value will be in that parameter so I can’t simply search for a particular value; hence I opted to use a regular expression to identify the value to be change. In the rest of this blog post I’ll explain how to do that. For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>), if you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio: <?xml version="1.0"?> <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">   <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="Some other description"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"       DTS:ObjectName="SomeOtherObjectName">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>     </DTS:PackageParameter>   </DTS:PackageParameters> </DTS:Executable> We are trying to identify the value of the parameter whose name is ETLIfcHist_ID – notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document: {\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>} I have highlighted the name of the parameter that we’re looking for. 
I have also highlighted two portions identified by pairs of curly braces “{…}”; these are important because they pick out the two portions either side of the value I want to replace, in other words the portions highlighted here: <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter> Those sections in the curly braces are termed tag expressions and can be identified in the replace expression using a backslash and a number identifying which tag expression you’re referring to according to its ordinal position. Hence, our replace expression is simply: \1-1\2 We’re saying the portion of our file identified by the regular expression should be replaced by the first curly brace section, then the literal –1, then the second curly brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio’s Find and Replace dialog. If you set it to look in “All Open Documents” then you can open up the code-behind of all your packages and change all of them at once. The Find and Replace dialog will look like this: That’s it! I realise that not everyone will be looking to change the value of a parameter but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y’know, on the web I have no doubt that someone is going to find a fault with my find regex expression and if that person is you….that’s OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute) so your find regular expression may have to change accordingly. When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage @Jamiet

    Read the article

  • SQL SERVER – Shard No More – An Innovative Look at Distributed Peer-to-peer SQL Database

    - by pinaldave
    There is no doubt that SQL databases play an important role in modern applications. In an ideal world, a single database can handle hundreds of incoming connections from multiple clients and scale to accommodate the related transactions. However the world is not ideal and databases are often a cause of major headaches when applications need to scale to accommodate more connections, transactions, or both. In order to overcome scaling issues, application developers often resort to administrative acrobatics, also known as database sharding. Sharding helps to improve application performance and throughput by splitting the database into two or more shards. Unfortunately, this practice also requires application developers to code transactional consistency into their applications. Getting transactional consistency across multiple SQL database shards can prove to be very difficult. Sharding requires developers to think about things like rollbacks, constraints, and referential integrity across tables within their applications when these types of concerns are best handled by the database. It also makes other common operations such as joins, searches, and memory management very difficult. In short, the very solution implemented to overcome throughput issues becomes a bottleneck in and of itself. What if database sharding was no longer required to scale your application? Let me explain. For the past several months I have been following and writing about NuoDB, a hot new SQL database technology out of Cambridge, MA. NuoDB is officially out of beta and they have recently released their first release candidate so I decided to dig into the database in a little more detail. Their architecture is very interesting and exciting because it completely eliminates the need to shard a database to achieve higher throughput. Each NuoDB database consists of at least three or more processes that enable a single database to run across multiple hosts. These processes include a Broker, a Transaction Engine and a Storage Manager.  Brokers are responsible for connecting client applications to Transaction Engines and maintain a global view of the network to keep track of the multiple Transaction Engines available at any time. Transaction Engines are in-memory processes that client applications connect to for processing SQL transactions. Storage Managers are responsible for persisting data to disk and serving up records to the Transaction Managers if they don’t exist in memory. The secret to NuoDB’s approach to solving the sharding problem is that it is a truly distributed, peer-to-peer, SQL database. Each of its processes can be deployed across multiple hosts. When client applications need to connect to a Transaction Engine, the Broker will automatically route the request to the most available process. Since multiple Transaction Engines and Storage Managers running across multiple host machines represent a single logical database, you never have to resort to sharding to get the throughput your application requires. NuoDB is a new pioneer in the SQL database world. They are making database scalability simple by eliminating the need for acrobatics such as sharding, and they are also making general administration of the database simpler as well.  Their distributed database appears to you as a user like a single SQL Server database.  With their RC1 release they have also provided a web based administrative console that they call NuoConsole. 
This tool makes it extremely easy to deploy and manage NuoDB processes across one or multiple hosts with the click of a mouse button. See for yourself by downloading NuoDB here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB

    Read the article

  • SQL SERVER – Puzzle #1 – Querying Pattern Ranges and Wild Cards

    - by Pinal Dave
    Note: Read at the end of the blog post how you can get five Joes 2 Pros Book #1 and a surprise gift. I have been blogging for almost 7 years and every other day I receive questions about Querying Pattern Ranges. The most common way to solve the problem is to use Wild Cards. However, not everyone knows how to use wild card properly. SQL Queries 2012 Joes 2 Pros Volume 1 – The SQL Queries 2012 Hands-On Tutorial for Beginners Book On Amazon | Book On Flipkart Learn SQL Server get all the five parts combo kit Kit on Amazon | Kit on Flipkart Many people know wildcards are great for finding patterns in character data. There are also some special sequences with wildcards that can give you even more power. This series from SQL Queries 2012 Joes 2 Pros® Volume 1 will show you some of these cool tricks. All supporting files are available with a free download from the www.Joes2Pros.com web site. This example is from the SQL 2012 series Volume 1 in the file SQLQueries2012Vol1Chapter2.2Setup.sql. If you need help setting up then look in the “Free Videos” section on Joes2Pros under “Getting Started” called “How to install your labs” Querying Pattern Ranges The % wildcard character represents any number of characters of any length. Let’s find all first names that end in the letter ‘A’. By using the percentage ‘%’ sign with the letter ‘A’, we achieve this goal using the code sample below: SELECT * FROM Employee WHERE FirstName LIKE '%A' To find all FirstName values beginning with the letters ‘A’ or ‘B’ we can use two predicates in our WHERE clause, by separating them with the OR statement. Finding names beginning with an ‘A’ or ‘B’ is easy and this works fine until we want a larger range of letters as in the example below for ‘A’ thru ‘K’: SELECT * FROM Employee WHERE FirstName LIKE 'A%' OR FirstName LIKE 'B%' OR FirstName LIKE 'C%' OR FirstName LIKE 'D%' OR FirstName LIKE 'E%' OR FirstName LIKE 'F%' OR FirstName LIKE 'G%' OR FirstName LIKE 'H%' OR FirstName LIKE 'I%' OR FirstName LIKE 'J%' OR FirstName LIKE 'K%' The previous query does find FirstName values beginning with the letters ‘A’ thru ‘K’. However, when a query requires a large range of letters, the LIKE operator has an even better option. Since the first letter of the FirstName field can be ‘A’, ‘B’, ‘C’, ‘D’, ‘E’, ‘F’, ‘G’, ‘H’, ‘I’, ‘J’ or ‘K’, simply list all these choices inside a set of square brackets followed by the ‘%’ wildcard, as in the example below: SELECT * FROM Employee WHERE FirstName LIKE '[ABCDEFGHIJK]%' A more elegant example of this technique recognizes that all these letters are in a continuous range, so we really only need to list the first and last letter of the range inside the square brackets, followed by the ‘%’ wildcard allowing for any number of characters after the first letter in the range. Note: A predicate that uses a range will not work with the ‘=’ operator (equals sign). It will neither raise an error, nor produce a result set. --Bad query (will not error or return any records) SELECT * FROM Employee WHERE FirstName = '[A-K]%' Question: You want to find all first names that start with the letters A-M in your Customer table and end with the letter Z. Which SQL code would you use? a. SELECT * FROM Customer WHERE FirstName LIKE 'm%z' b. SELECT * FROM Customer WHERE FirstName LIKE 'a-m%z' c. SELECT * FROM Customer WHERE FirstName LIKE 'a-m%z' d. SELECT * FROM Customer WHERE FirstName LIKE '[a-m]%z' e. SELECT * FROM Customer WHERE FirstName LIKE '[a-m]z%' f. 
SELECT * FROM Customer WHERE FirstName LIKE '[a-m]%z' g. SELECT * FROM Customer WHERE FirstName LIKE '[a-m]z%' Contest: Leave a valid answer before June 18, 2013 in the comment section. 5 winners will be selected from all the valid answers and will receive Joes 2 Pros Book #1. One lucky person will get a surprise gift from Joes 2 Pros. The contest is open in all the countries where Amazon ships the book (USA, UK, Canada, India and many others). Special Note: Read all the options before you provide a valid answer, as there is a small trick hidden in the answers. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
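    Editorial note: the excerpt above describes the “more elegant” range form of the bracket wildcard but never actually shows it. Assuming the same Employee table, a minimal sketch of that form (my addition, not from the original post) would be:

        -- Sketch only: list just the first and last letter of the range inside the brackets
        SELECT * FROM Employee WHERE FirstName LIKE '[A-K]%'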

    Read the article

  • pptpd not working externally on Ubuntu Server 11.10

    - by Brendan
    I am trying to set up a pptpd vpn on our newly installed Ubuntu 11.10 64 bit server, but am not having success having a client connect via an iPhone to the VPN. Note that no clients have been able to connect to this VPN from outside of the network. The system is up to date with patches. Here is the output of /var/log/syslog. Please note that 222.153.x.y is my remote IP address. Mar 30 22:07:47 server pptpd[9546]: CTRL: Client 222.153.x.y control connection started Mar 30 22:07:47 server pptpd[9546]: CTRL: Starting call (launching pppd, opening GRE) Mar 30 22:07:47 server pppd[9555]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded. Mar 30 22:07:47 server pppd[9555]: pppd 2.4.5 started by root, uid 0 Mar 30 22:07:47 server pppd[9555]: Using interface ppp0 Mar 30 22:07:47 server pppd[9555]: Connect: ppp0 <--> /dev/pts/3 Mar 30 22:07:47 server pptpd[9546]: GRE: Bad checksum from pppd. Mar 30 22:08:17 server pppd[9555]: LCP: timeout sending Config-Requests Mar 30 22:08:17 server pppd[9555]: Connection terminated. Mar 30 22:08:17 server pppd[9555]: Modem hangup Mar 30 22:08:17 server pppd[9555]: Exit. Mar 30 22:08:17 server pptpd[9546]: GRE: read(fd=6,buffer=6075a0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs Mar 30 22:08:17 server pptpd[9546]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7) Mar 30 22:08:17 server pptpd[9546]: CTRL: Reaping child PPP[9555] Mar 30 22:08:17 server pptpd[9546]: CTRL: Client 222.153.x.y control connection finished As you can see, the problem seems to be the connection timing out after 30 seconds ("Mar 30 22:08:17 server pppd[9555]: LCP: timeout sending Config-Requests". Over Wifi however (inside the local network) there are no issues: Mar 30 22:12:33 unreal-server pptpd[12406]: CTRL: Client 192.168.0.100 control connection started Mar 30 22:12:33 unreal-server pptpd[12406]: CTRL: Starting call (launching pppd, opening GRE) Mar 30 22:12:33 unreal-server pppd[12407]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded. Mar 30 22:12:33 unreal-server pppd[12407]: pppd 2.4.5 started by root, uid 0 Mar 30 22:12:33 unreal-server pppd[12407]: Using interface ppp0 Mar 30 22:12:33 unreal-server pppd[12407]: Connect: ppp0 <--> /dev/pts/3 Mar 30 22:12:33 unreal-server pptpd[12406]: GRE: Bad checksum from pppd. Mar 30 22:12:36 unreal-server pppd[12407]: peer from calling number 192.168.0.100 authorized Mar 30 22:12:36 unreal-server pppd[12407]: MPPE 128-bit stateless compression enabled Mar 30 22:12:36 unreal-server pppd[12407]: Cannot determine ethernet address for proxy ARP Mar 30 22:12:36 unreal-server pppd[12407]: local IP address 192.168.0.10 Mar 30 22:12:36 unreal-server pppd[12407]: remote IP address 192.168.1.1 I have set up an iptables config for the server; to check this isn't the problem I allowed all traffic temporarily, but this does NOT change the symptoms in the first example. Here is the output from /etc/iptables.rules.save *filter :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] COMMIT Even with these rules applied, the output from /var/log/syslog is LINE FOR LINE what I saw in the the first block of code. Please note that before running this Ubuntu server; an old SME Server box was running in place of it, that had a pptpd server on it just like we are using, and we experienced no issues.

    Read the article

  • Unable to install SQL 2008 on Windows 7

    - by Axel
    SQL 2008 install hangs on Windows 7 The story: Trying to install SQL2008 on Windows 7 hangs on SqlEngineDBStartconfigAction_install_configrc_Cpu32. What I Tried: Uninstall hangs on validation Manual uninstall using msiinv.exe and msiexec /x works Added SQL service accounts to local admins no help Turn of UAC no help Last lines in setup log: 2010-04-01 16:18:05 SQLEngine: : Checking Engine checkpoint 'GetSqlServerProcessHandle' 2010-04-01 16:18:05 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' to be created 2010-04-01 16:18:07 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' or sql process handle to be signaled 2010-04-01 16:18:07 SQLEngine: : Checking Engine checkpoint 'WaitSqlServerStartEvents' 2010-04-01 16:18:53 Slp: Sco: Attempting to initialize script 2010-04-01 16:18:53 Slp: Sco: Attempting to initialize default connection string 2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NotSpecified 2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NamedPipes 2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connection String: Data Source=\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup 2010-04-01 16:18:53 SQLEngine: : Checking Engine checkpoint 'ServiceConfigConnect' 2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connecting to SQL.... 2010-04-01 16:18:53 Slp: Sco: Attempting to connect script 2010-04-01 16:18:53 Slp: Connection string: Data Source=\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup And now comes the fun part: When I open conf mgr I can see the service running, I enabled named pipes and TCP/IP, restarted the service I'm able to connect to the server using an OLE DB connection but not with the Native Client. And what I find suspicious is the following error in my app log: .NET Runtime Optimization Service (clr_optimization_v2.0.50727_32) - Failed to compile: C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\Tools\VDT\DataProjects.dll . Error code = 0x8007000b In MS connect this is reported as a bug but MS is unable to reproduce the problem altough when you search the fora I'm not the only one with this problem. So any help is appreciated.

    Read the article

  • Using parameters in reports for Visual Studio 2008

    - by Jim Thomas
    This is my first attempt to create a Visual Studio 2008 report using parameters. I have created the dataset and the report. If I run it with a hard-coded filter on a column the report runs fine. When I change the filter to '?' I keep getting this error: No overload for method 'Fill' takes '1' argument. Obviously I am missing some way to connect the parameter on the dataset to a report parameter. I have defined a report parameter using the Report/Report Parameter screen. But how does that report parameter get tied to the dataset table parameter? Is there a special naming convention for the parameter? I have Googled this a half dozen times and read the msdn documentation but the examples all seem to use a different approach (like creating a SQL query rather than a table-based dataset) or entering the parameter name as "=Parameters!name.value" but I can't figure out where to do that. One msdn example suggested I needed to create some C# code using a SetParameters() method to make the connection. Is that how it is done? If anyone can recommend a good walk-through I'd appreciate it. Edit: After more reading it appears I don't need report parameters at all. I am simply trying to add a parameter to the database query. So I would create a text box on the form, get the user's input, then apply that parameter programmatically to the fill() argument list. The report parameter, on the other hand, is an ad-hoc value generally entered by a user that you want to appear on the report. But there is no relationship between report parameters and query/dataset parameters. Is that correct?

    Read the article

  • Visual Studio 2008 Web Project error: Unable to start program http://localhost:port

    - by JookyDFW
    I am re-hashing this question because I have looked at over 50 threads in different forums and have not been able to get a resolution to my problem. Here are the specs: Windows XP SP3, Visual Studio 2008 SP1, .NET 3.5, ASP.NET MVC 2 project, IE 7 (was IE 8). Up until a few days ago I was not having any issues. It is now happening on any solution that I try to debug. I start a debug session (F5), the solution rebuilds, a VS development web server starts and then I get this error: Unable to start program http://localhost:2012/. If I open a web browser and enter the URL the application loads up. I had upgraded to IE 8 a few weeks ago and read there may be some issues so it has been uninstalled and I am currently on IE 7. Also, while IE 8 was installed I had switched my default browser to Firefox, but my current default browser is now IE 7. I have reviewed the threads on this site and others and have not been able to fix the issue. Any help would be appreciated.

    Read the article

  • C# - NetUseAdd from NetApi32.dll on Windows Server 2008 and IIS7

    - by Jack Ryan
    I am attempting to use NetUseAdd to add a share that is needed by an application. My code looks like this: [DllImport("NetApi32.dll", SetLastError = true, CharSet = CharSet.Unicode)] internal static extern uint NetUseAdd( string UncServerName, uint Level, IntPtr Buf, out uint ParmError); ... USE_INFO_2 info = new USE_INFO_2(); info.ui2_local = null; info.ui2_asg_type = 0xFFFFFFFF; info.ui2_remote = remoteUNC; info.ui2_username = username; info.ui2_password = Marshal.StringToHGlobalAuto(password); info.ui2_domainname = domainName; IntPtr buf = Marshal.AllocHGlobal(Marshal.SizeOf(info)); try { Marshal.StructureToPtr(info, buf, true); uint paramErrorIndex; uint returnCode = NetUseAdd(null, 2, buf, out paramErrorIndex); if (returnCode != 0) { throw new Win32Exception((int)returnCode); } } finally { Marshal.FreeHGlobal(buf); } This works fine on our Server 2003 boxes. But in attempting to move over to Server 2008 and IIS7 this doesn't work anymore. Through liberal logging I have found that it hangs on the line Marshal.StructureToPtr(info, buf, true); I have absolutely no idea why this is – can anyone shed any light on it or tell me where I might look for more information?

    Read the article

  • Exporting from SSRS 2008 ReportViewer to Excel Causes Duplicate Columns

    - by Daniel Coffman
    I have a report that groups months by quarters, so each quarter has three months and the display of the months under the quarter is toggled by the quarter header. It looks just fine in the ReportViewer, but when exporting to Excel the first month in the quarter with data is duplicated and appended to the end of the quarter group. Here is what it looks like in the ReportViewer (with Quarters 2 and 4 expanded, note May and June do not have any data and show blank columns by design): http://i.imgur.com/MykZE.png This is how it looks when exported to Excel: http://i.imgur.com/zfLuk.png The collapsed Quarter should only show the LAST month in the quarter. You can see that in the Excel export July is inserted in Q1 even though it should be hidden entirely since that quarter is collapsed, December is appended to Q2, January is inserted into Q3, and April is duplicated and appended to Q4. Exporting to any format OTHER than Excel works correctly and does not insert these columns. A similar bug for rows was filed and marked as "by design": http://connect.microsoft.com/SQLServer/feedback/details/508823/reporting-services-2008-group-by-export-to-excel-duplicate-rows-csv-ok-pdf-ok How do I stop the export to Excel feature from inserting these duplicate columns?

    Read the article

  • Hosting Mercurial on IIS7

    - by Lasse V. Karlsen
    Note, this might perhaps be best suited on serverfault.com, but since it is about hosting a programmer source code repository, I am not entirely sure. I'm posting here first, trusting that it'll be migrated if necessary. I'm attempting to host clones of my Mercurial repositories on my own server (I have the main repo somewhere else), and I'm attempting to set up Mercurial under IIS. I followed the guide here, but I get an error message. Solved: See bottom of this question for details. The error message is: mercurial.error.RepoError: repository /path/to/repo/or/config not found Here's what I did. I installed Mercurial 1.5.2 I created c:\inetpub\hg I downloaded the hg source as per the instructions of the webpage, and copied the hgweb.cgi file into c:\inetpub\hg (note, the webpage says hgwebdir.cgi, but this particular file does not exist, hgweb.cgi does, however, can this be the source of the problem?) I added a hgweb.config, with the following contents: [paths] repo1 = C:/hg/** [web] style = monoblue I created c:\hg, created a sub-directory test, and created a repository inside it I installed python 2.6.5, latest 2.6 version from the website (the webpage mentions I need to install the correct version or I'll get a specific error message, since I don't get an error message that looks remotely like the one mentioned, I assume that 2.6.5 is not the problem) I added a new virtual host hg.vkarlsen.no, pointing it to c:\inetpub\hg For this host, I added a script mapping under the Handler Mappings section, mapping *.cgi to c:\python26\python.exe -u %s %s as per the instructions on the website. I then tested it by navigating to http://hg.vkarlsen.no/hgweb.cgi, but I get an error message. To make it easier to test, I dropped to a command prompt, navigated to c:\inetpub\hg, and executed the following command (error message is part of the text below): C:\inetpub\hg>c:\python26\python.exe -u hgweb.cgi Traceback (most recent call last): File "hgweb.cgi", line 16, in <module> application = hgweb(config) File "mercurial\hgweb\__init__.pyc", line 12, in hgweb File "mercurial\hgweb\hgweb_mod.pyc", line 30, in __init__ File "mercurial\hg.pyc", line 82, in repository File "mercurial\localrepo.pyc", line 2221, in instance File "mercurial\localrepo.pyc", line 62, in __init__ mercurial.error.RepoError: repository /path/to/repo/or/config not found Does anyone know what I need to look at in order to fix this? Edit: Ok, I think I managed to get one step closer to the solution, but I'm still stumped. I realized the .cgi file is a python script file, and not something compile, so I opened it for editing, and these lines was sitting in it: # Path to repo or hgweb config to serve (see 'hg help hgweb') config = "/path/to/repo/or/config" So this was my source for the specific error message. If I change the line to this: config = "c:\\hg\\test" Then I can navigate the empty repository through the Mercurial web interface. 
However, I want to host multiple repositories, and seeing as the line says that I can also link to a hgweb config file, I tried this: config = "c:\\inetpub\\hg\\hgweb.config" But then I get the following error message: mercurial.error.Abort: c:\inetpub\hg\hgweb.config: not a Mercurial bundle file Exception ImportError: 'No module named shutil' in <bound method bundlerepository.__del__ of <mercurial.bundlerepo.bundlerepository object at 0x0260A110>> ignored Nothing I've tried for the config variable seems to work: config = "hgweb.config" config = "c:\\hg\\hgweb.config" various other variations I don't remember. So, still stumped, pointers anyone? Solved: I ended up having to edit the hgweb.cgi file: from: from mercurial.hgweb import hgweb, wsgicgi application = hgweb(config) to: from mercurial.hgweb import hgweb, hgwebdir, wsgicgi application = hgwebdir(config) Note the added hgwebdir parts there. Here's my hgweb.config file, located in the same directory as hgweb.cgi file: [collections] C:/hg/ = C:/hg/ [web] style = gitweb This now serves my repositories successfully. Hopefully this question will give others some information if they're stumped as I was.

    Read the article

  • Deploying an HttpHandler web service

    - by baron
    I am trying to build a webservice that manipulates http requests POST and GET. Here is a sample: public class CodebookHttpHandler: IHttpHandler { public void ProcessRequest(HttpContext context) { if (context.Request.HttpMethod == "POST") { //DoHttpPostLogic(); } else if (context.Request.HttpMethod == "GET") { //DoHttpGetLogic(); } } ... public void DoHttpPostLogic() { ... } public void DoHttpGetLogic() { ... } I need to deploy this but I am struggling how to start. Most online references show making a website, but really, all I want to do is respond when an HttpPost is sent. I don't know what to put in the website, just want that code to run. Edit: I am following this site as its exactly what I'm trying to do. I have the website set up, I have the code for the handler in a .cs file, i have edited the web.config to add the handler for the file extension I need. Now I am at the step 3 where you tell IIS about this extension and map it to ASP.NET. Also I am using IIS 7 so interface is slightly different than the screenshots. This is the problem I have: 1) Go to website 2) Go to handler mappings 3) Go Add Script Map 4) request path - put the extension I want to handle 5) Executable- it seems i am told to set aspnet_isapi.dll here. Maybe this is incorrect? 6) Give name 7) Hit OK button: Add Script Map Do you want to allow this ISAPI extension? Click "Yes" to add the extension with an "Allowed" entry to the ISAPI and CGI Restrictions list or to update an existing extension entry to "Allowed" in the ISAPI and CGI Restrictions list. Yes No Cancel 8) Hit Yes Add Script Map The specified module required by this handler is not in the modules list. If you are adding a script map handler mapping, the IsapiModule or the CgiModule must be in the modules list. OK edit 2: Have just figured out that that managed handler had something to do with handlers witten in managed code, script map was to help configuring an executable and module mapping to work with http Modules. So I should be using option 1 - Add Managed Handler. See: http://yfrog.com/11managedhandlerp I know what my request path is for the file extension... and I know name (can call it whatever I like), so it must be the Type field I am struggling with. In the applications folder (in IIS) so far I just have the MyHandler.cs and web.config (Of course also a file with the extension I am trying to create the handler for!) edit3: progress So now I have the code and the web.config set up I test to see If I can browse to the filename.CustomExtension file: HTTP Error 404.3 - Not Found The page you are requesting cannot be served because of the extension configuration. If the page is a script, add a handler. If the file should be downloaded, add a MIME map. So in IIS7 I go to Handler Mappings and add it in. See this MSDN example, it is exactly what I am trying to follow The class looks like this: using System.Web; namespace HandlerAttempt2 { public class MyHandler : IHttpHandler { public MyHandler() { //TODO: Add constructor logic here } public void ProcessRequest(HttpContext context) { var objResponse = context.Response; objResponse.Write("<html><body><h1>It just worked"); objResponse.Write("</body></html>"); } public bool IsReusable { get { return true; } } } } I add the Handler in as follows: Request path: *.whatever Type: MyHandler (class name - this appears correct as per example!) 
Name: whatever Try to browse to the custom file again (this is in app pool as Integrated): HTTP Error 500.21 - Internal Server Error Handler "whatever" has a bad module "ManagedPipelineHandler" in its module list Try to browse to the custom file again (this is in app pool as CLASSIC): HTTP Error 404.17 - Not Found The requested content appears to be script and will not be served by the static file handler.

    Read the article

  • TSQL Shred XML - Working with namespaces (newbie @ shredding XML)

    - by drachenstern
    Here's a link to my previous question on this same block of code with a working shred example Ok, I'm a C# ASP.NET dev following orders: The orders are to take a given dataset, shred the XML and return columns. I've argued that it's easier to do the shredding on the ASP.NET side where we already have access to things like deserializers, etc, and the entire complex of known types, but no, the boss says "shred it on the server, return a dataset, bind the dataset to the columns of the gridview" so for now, I'm doing what I was told. This is all to head off the folks who will come along and say "bad requirements". Task at hand: Current code that doesn't work: And if we modify the previous post to include namespaces on the XML elements, we lose the functionality that the previous post has... DECLARE @table1 AS TABLE ( ProductID VARCHAR(10) , Name VARCHAR(20) , Color VARCHAR(20) , UserEntered VARCHAR(20) , XmlField XML ) INSERT INTO @table1 SELECT '12345','ball','red','john','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>10</price></size><size xmlns="http://example.com/ns" name="large"><price>20</price></size></sizes>' INSERT INTO @table1 SELECT '12346','ball','blue','adam','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>12</price></size><size xmlns="http://example.com/ns" name="large"><price>25</price></size></sizes>' INSERT INTO @table1 SELECT '12347','ring','red','john','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>5</price></size><size xmlns="http://example.com/ns" name="large"><price>8</price></size></sizes>' INSERT INTO @table1 SELECT '12348','ring','blue','adam','<sizes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><size xmlns="http://example.com/ns" name="medium"><price>8</price></size><size xmlns="http://example.com/ns" name="large"><price>10</price></size></sizes>' INSERT INTO @table1 SELECT '23456','auto','black','ann','<auto xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><type xmlns="http://example.com/ns">car</type><wheels xmlns="http://example.com/ns">4</wheels><doors xmlns="http://example.com/ns">4</doors><cylinders xmlns="http://example.com/ns">3</cylinders></auto>' INSERT INTO @table1 SELECT '23457','auto','black','ann','<auto xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><type xmlns="http://example.com/ns">truck</type><wheels xmlns="http://example.com/ns">4</wheels><doors xmlns="http://example.com/ns">2</doors><cylinders xmlns="http://example.com/ns">8</cylinders></auto><auto xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><type xmlns="http://example.com/ns">car</type><wheels xmlns="http://example.com/ns">4</wheels><doors xmlns="http://example.com/ns">4</doors><cylinders xmlns="http://example.com/ns">6</cylinders></auto>' DECLARE @x XML -- I think I'm supposed to use WITH XMLNAMESPACES(...) 
here but I don't know how SELECT @x = ( SELECT ProductID , Name , Color , UserEntered , XmlField.query(' for $vehicle in //auto return <auto type = "{$vehicle/type}" wheels = "{$vehicle/wheels}" doors = "{$vehicle/doors}" cylinders = "{$vehicle/cylinders}" />') FROM @table1 table1 WHERE Name = 'auto' FOR XML AUTO ) SELECT @x SELECT ProductID = T.Item.value('../@ProductID', 'varchar(10)') , Name = T.Item.value('../@Name', 'varchar(20)') , Color = T.Item.value('../@Color', 'varchar(20)') , UserEntered = T.Item.value('../@UserEntered', 'varchar(20)') , VType = T.Item.value('@type' , 'varchar(10)') , Wheels = T.Item.value('@wheels', 'varchar(2)') , Doors = T.Item.value('@doors', 'varchar(2)') , Cylinders = T.Item.value('@cylinders', 'varchar(2)') FROM @x.nodes('//table1/auto') AS T(Item) If my previous post shows there's a much better way to do this, then I really need to revise this question as well, but on the off chance this coding-style is good, I can probably go ahead with this as-is... Any takers?
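    For reference, here is a hedged sketch (an editorial addition, not the asker's code) of the WITH XMLNAMESPACES syntax mentioned in the query comment above. It binds a prefix to the http://example.com/ns namespace so the namespaced child elements can be addressed; it is meant only to illustrate the clause against @table1, not to replace the full query:

        -- Sketch only: the preceding statement must end with a semicolon,
        -- and the clause must directly precede the statement that uses the prefix
        ;WITH XMLNAMESPACES ('http://example.com/ns' AS ns)
        SELECT VType     = A.Item.value('(ns:type)[1]',      'varchar(10)')
             , Wheels    = A.Item.value('(ns:wheels)[1]',    'varchar(2)')
             , Doors     = A.Item.value('(ns:doors)[1]',     'varchar(2)')
             , Cylinders = A.Item.value('(ns:cylinders)[1]', 'varchar(2)')
        FROM @table1 t
        CROSS APPLY t.XmlField.nodes('/auto') AS A(Item)
        WHERE t.Name = 'auto'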

    Read the article

  • Visual Studio 2008 project organization for executable and assembly

    - by user304582
    Hi - I am having a problem setting up the following in Visual Studio 2008: a parent project which includes the entrypoint Main() method class and which declares an interface, and a child project which has classes that implement the interface declared in the parent project. I have specified that Parent's Output type is a Console application, and Child's Output type is a Class library. In Child I have added a reference to the Parent as a project, and specified that Child depends on Parent and that the build order should be Parent, then Child. The build succeeds, and as far as I can tell, the right things show up in the Child/bin/debug directory: Parent.exe and Child.dll. However, if I run Parent.exe, then at the point when it should load a class from the Child.dll, it fails with the error message: exception executing operation System.TypeLoadException: Could not load type 'Child.some.class' from assembly 'Parent, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. I guess I'm confused as to how to get the Parent and Child projects to play together. I plan on having more child projects that use the same framework that is set up in the Parent, and so I do not want to move the entrypoint class down into the Child project. If I try to specify that the Child project is also a Console application, then the build process fails because there is no Main() entrypoint class in the child (even though the Parent project is included as a reference). Any help would be welcome! Thanks, Martin

    Read the article

  • Missing UAC shield overlay on desktop shortcut icon when created by msi created from VS 2008

    - by Alain Hogue
    I created a setup program to deploy my VB.NET program using Visual Studio 2008. Inside this setup program I created a shortcut to the "primary output" to be installed on the user's desktop. Now, everything is working correctly. The program is installed under "C:\Program Files" and the shortcut is created on the desktop. Also, when I use this shortcut I am prompted by UAC to authorize running this program as administrator. So far, so good... But! My desktop icon does not have the UAC shield overlay even though the program is compiled with the manifest stating that it must run as administrator. Also, if I manually create a new shortcut on the desktop to the same executable after the installation, this new shortcut WILL have the shield overlay! I have tried to reboot and delete the iconCache.db file but it did not work. So my question is: How can I have my desktop shortcut appear WITH the UAC shield overlay when it is installed initially? Thanks!

    Read the article

  • Installing mongrel service on Windows 2008

    - by akirekadu
    We use InstallAnywhere to install our product. One of the components that it needs to install is mongrel. IA invokes the following command line during installation: mongrel_rails service::install -N service-1 -D "Service 1" -c "C:\app_dir\\rails\rails_apps\service-1" -p 19000 -e production Apparently, under the hood "sc create..." is used. The installation works great on Windows 2003. On Windows 2008, though, this operation requires elevated privileges. When I log in as the local administrator (i.e. the 'local-machine\administrator' user), the installation works just fine. However, when I log in as a domain user that is part of the local administrators group, the service fails to install with the error "access is denied". How can I make it possible to install the product without having to log in as the local administrator? Thanks! A couple of notes I would like to add. One solution I tried is to execute the installer as administrator. The service does get installed. However, it creates another problem: an embedded 3rd party product and its files get installed with admin-only rights. So we do need to run the installer as the logged-in user.

    Read the article

  • TSQL Shred XML - Is this right or is there a better way (newbie @ shredding XML)

    - by drachenstern
    Ok, I'm a C# ASP.NET dev following orders: The orders are to take a given dataset, shred the XML and return columns. I've argued that it's easier to do the shredding on the ASP.NET side where we already have access to things like deserializers, etc, and the entire complex of known types, but no, the boss says "shred it on the server, return a dataset, bind the dataset to the columns of the gridview" so for now, I'm doing what I was told. This is all to head off the folks who will come along and say "bad requirements". Task at hand: Here's my code that works and does what I want it to: DECLARE @table1 AS TABLE ( ProductID VARCHAR(10) , Name VARCHAR(20) , Color VARCHAR(20) , UserEntered VARCHAR(20) , XmlField XML ) INSERT INTO @table1 SELECT '12345','ball','red','john','<sizes><size name="medium"><price>10</price></size><size name="large"><price>20</price></size></sizes>' INSERT INTO @table1 SELECT '12346','ball','blue','adam','<sizes><size name="medium"><price>12</price></size><size name="large"><price>25</price></size></sizes>' INSERT INTO @table1 SELECT '12347','ring','red','john','<sizes><size name="medium"><price>5</price></size><size name="large"><price>8</price></size></sizes>' INSERT INTO @table1 SELECT '12348','ring','blue','adam','<sizes><size name="medium"><price>8</price></size><size name="large"><price>10</price></size></sizes>' INSERT INTO @table1 SELECT '23456','auto','black','ann','<auto><type>car</type><wheels>4</wheels><doors>4</doors><cylinders>3</cylinders></auto>' INSERT INTO @table1 SELECT '23457','auto','black','ann','<auto><type>truck</type><wheels>4</wheels><doors>2</doors><cylinders>8</cylinders></auto><auto><type>car</type><wheels>4</wheels><doors>4</doors><cylinders>6</cylinders></auto>' DECLARE @x XML SELECT @x = ( SELECT ProductID , Name , Color , UserEntered , XmlField.query(' for $vehicle in //auto return <auto type = "{$vehicle/type}" wheels = "{$vehicle/wheels}" doors = "{$vehicle/doors}" cylinders = "{$vehicle/cylinders}" />') FROM @table1 table1 WHERE Name = 'auto' FOR XML AUTO ) SELECT @x SELECT ProductID = T.Item.value('../@ProductID', 'varchar(10)') , Name = T.Item.value('../@Name', 'varchar(20)') , Color = T.Item.value('../@Color', 'varchar(20)') , UserEntered = T.Item.value('../@UserEntered', 'varchar(20)') , VType = T.Item.value('@type' , 'varchar(10)') , Wheels = T.Item.value('@wheels', 'varchar(2)') , Doors = T.Item.value('@doors', 'varchar(2)') , Cylinders = T.Item.value('@cylinders', 'varchar(2)') FROM @x.nodes('//table1/auto') AS T(Item) SELECT @x = ( SELECT ProductID , Name , Color , UserEntered , XmlField.query(' for $object in //sizes/size return <size name = "{$object/@name}" price = "{$object/price}" />') FROM @table1 table1 WHERE Name IN ('ring', 'ball') FOR XML AUTO ) SELECT @x SELECT ProductID = T.Item.value('../@ProductID', 'varchar(10)') , Name = T.Item.value('../@Name', 'varchar(20)') , Color = T.Item.value('../@Color', 'varchar(20)') , UserEntered = T.Item.value('../@UserEntered', 'varchar(20)') , SubName = T.Item.value('@name' , 'varchar(10)') , Price = T.Item.value('@price', 'varchar(2)') FROM @x.nodes('//table1/size') AS T(Item) So for now, I'm trying to figure out if there's a better way to write the code than what I'm doing now... (I have a part 2 I'm about to go key in)
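    As a point of comparison (an editor's sketch, not something from the original post), the same vehicle shredding can also be done directly against the XML column with CROSS APPLY ... nodes(), skipping the intermediate FOR XML AUTO document held in @x; whether that counts as “better” for the stated requirements is left open:

        -- Sketch only: shred the XML column in place, no intermediate @x document
        SELECT ProductID
             , Name
             , Color
             , UserEntered
             , VType     = A.Item.value('(type)[1]',      'varchar(10)')
             , Wheels    = A.Item.value('(wheels)[1]',    'varchar(2)')
             , Doors     = A.Item.value('(doors)[1]',     'varchar(2)')
             , Cylinders = A.Item.value('(cylinders)[1]', 'varchar(2)')
        FROM @table1 t
        CROSS APPLY t.XmlField.nodes('/auto') AS A(Item)
        WHERE t.Name = 'auto'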

    Read the article

  • Capturing window image in Windows Server 2008

    - by Sergey Osypchuk
    I am capturing the output of a Windows program using the following function:

    public static Bitmap Get(IntPtr hWnd, int X1, int Y1, int width, int height)
    {
        WINDOWINFO winInfo = new WINDOWINFO();
        bool ret = GetWindowInfo(hWnd, ref winInfo);
        if (!ret)
        {
            return null;
        }

        int curheight = height;
        if (curheight <= 0 || curheight > winInfo.rcWindow.Height)
            curheight = winInfo.rcWindow.Height;

        int curwidth = width;
        if (curwidth <= 0 || curwidth > winInfo.rcWindow.Width)
            curwidth = winInfo.rcWindow.Width;

        if (curheight == 0 || curwidth == 0)
            return null;

        Graphics frmGraphics = Graphics.FromHwnd(hWnd);
        IntPtr hDC = GetWindowDC(hWnd); // gets the entire window
        //IntPtr hDC = frmGraphics.GetHdc(); -- gets the client area, no menu bars, etc.

        System.Drawing.Bitmap tmpBitmap = new System.Drawing.Bitmap(curwidth, curheight, frmGraphics);
        Graphics bmGraphics = Graphics.FromImage(tmpBitmap);
        IntPtr bmHdc = bmGraphics.GetHdc();
        BitBlt(bmHdc, 0, 0, curwidth, curheight, hDC, X1, Y1, TernaryRasterOperations.SRCCOPY);
        bmGraphics.ReleaseHdc(bmHdc);
        ReleaseDC(hWnd, hDC);
        return tmpBitmap;
    }

    In the development environment everything works fine, but on Windows Server 2008 I have the following issues:

    1) When another window is in front of mine, it gets captured as well.
    2) When no user is connected over RDC, the image is black.

    On the other hand, I am able to render web page images using IE. How can I change the behaviour of the Windows rendering process to get proper results?
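    Not from the original post, but one commonly suggested alternative for the first symptom is PrintWindow, which asks the target window to paint itself into a device context instead of copying pixels from the screen, so it is far less sensitive to other windows overlapping the target. Whether it also fixes the black image in a disconnected RDP session depends on the session state, so treat this sketch as an assumption to verify:

    using System;
    using System.Drawing;
    using System.Runtime.InteropServices;

    static class WindowCapture
    {
        [DllImport("user32.dll")]
        static extern bool PrintWindow(IntPtr hWnd, IntPtr hdcBlt, uint nFlags);

        [DllImport("user32.dll")]
        static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);

        [StructLayout(LayoutKind.Sequential)]
        struct RECT { public int Left, Top, Right, Bottom; }

        // Asks the target window to render itself into a bitmap of its own size.
        public static Bitmap Capture(IntPtr hWnd)
        {
            RECT r;
            if (!GetWindowRect(hWnd, out r)) return null;

            int width = r.Right - r.Left;
            int height = r.Bottom - r.Top;
            if (width <= 0 || height <= 0) return null;

            Bitmap bmp = new Bitmap(width, height);
            using (Graphics g = Graphics.FromImage(bmp))
            {
                IntPtr hdc = g.GetHdc();
                try
                {
                    PrintWindow(hWnd, hdc, 0); // 0 = whole window including frame
                }
                finally
                {
                    g.ReleaseHdc(hdc);
                }
            }
            return bmp;
        }
    }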

    Read the article

  • (500) Internal Server Error with C# and Web Dev 2008 Express

    - by user32848
    The code below is generic, found in a variety of places, including a book I have. I used it as the base for a working program in VS 2005. Now I've resurrected it with my current Visual Web Developer 2008 Express Edition, and I seem to have problems connecting to the default development server (I don't have IIS on my XP machine). The error is: (500) Internal Server Error. Is this saying what I thought it did (above), or something else, and how do I solve this problem?

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.Configuration;
    using System.IO;
    using System.Net;
    using System.Linq;
    using System.Web;
    using System.Web.Security;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using System.Web.UI.WebControls.WebParts;
    using System.Web.UI.HtmlControls;
    using System.Text.RegularExpressions;

    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string strResult = "";
            string url = "http://weather.unisys.com";
            WebResponse objResponse;
            WebRequest objRequest;

            try
            {
                objRequest = System.Net.HttpWebRequest.Create(url);
            }
            catch
            {
                objRequest = System.Net.HttpWebRequest.Create("http://" + url);
            }

            objResponse = objRequest.GetResponse();
            using (StreamReader sr = new StreamReader(objResponse.GetResponseStream()))
            {
                strResult = sr.ReadToEnd();
                sr.Close();
            }
        }
    }
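    Not in the original question, but a quick way to narrow this down is to catch the WebException and read the status and body that came back with the failed request: that usually shows whether the 500 was produced by the remote site or by the local development web server. A minimal sketch, written as a console program so it can be run outside the page; the interpretation of the output is an assumption to confirm against the actual error page:

    using System;
    using System.IO;
    using System.Net;

    class ServerErrorProbe
    {
        static void Main()
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://weather.unisys.com");
            try
            {
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("OK: {0}", response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                HttpWebResponse error = ex.Response as HttpWebResponse;
                if (error == null)
                {
                    // DNS, proxy or firewall problems land here rather than as an HTTP status.
                    Console.WriteLine("No HTTP response: {0}", ex.Status);
                    return;
                }

                Console.WriteLine("HTTP {0} {1}", (int)error.StatusCode, error.StatusDescription);
                using (StreamReader reader = new StreamReader(error.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd()); // the error page usually names the real culprit
                }
            }
        }
    }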

    Read the article

  • Using an ActiveX control without a form/dialog/window in C++, VS 2008

    - by younevertell
    An ActiveX control generated by Visual Basic 6 (very old stuff) has a couple of methods and one event. Now I have to use the control in C++ with Visual Studio 2008, but I don't want to generate a form/dialog/window as the container. I added an MFC wrapper class from the ActiveX control, so the wrapper class now exposes all the methods defined in the control. I definitely need to add an event handler to take care of the event fired by the ActiveX control. With a form/dialog/window, this could be done with the BEGIN_EVENTSINK_MAP / ON_EVENT macros described below. My questions are: 1. How do I add the event handler without a form/dialog/window? Is this impossible? 2. Could I manually add a constructor to the ActiveX control wrapper class, so I could use the new operator to get an instance on the heap? Thanks.

    Add the Event Sinks Map near the start of the definition (.cpp) file. The BEGIN_EVENTSINK_MAP macro takes the container's class name, then the base class name as parameters. The container's class name is used again in the AFX_EVENTSINK_MAP macro and the ON_EVENT macro.

    Read the article

  • IP address shows as a hyphen for failed remote desktop connections in Event Log

    - by PsychoDad
    I am trying to figure out why failed remote desktop connections (from the Windows Remote Desktop client) show the client IP address as a hyphen. Here is the event log entry I get when I type the wrong password for an account (the server is completely external to my home computer):

    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" />
        <EventID>4625</EventID>
        <Version>0</Version>
        <Level>0</Level>
        <Task>12544</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8010000000000000</Keywords>
        <TimeCreated SystemTime="2012-03-25T19:22:14.694177500Z" />
        <EventRecordID>1658501</EventRecordID>
        <Correlation />
        <Execution ProcessID="544" ThreadID="12880" />
        <Channel>Security</Channel>
        <Computer>[Deleted for security purposes]</Computer>
        <Security />
      </System>
      <EventData>
        <Data Name="SubjectUserSid">S-1-0-0</Data>
        <Data Name="SubjectUserName">-</Data>
        <Data Name="SubjectDomainName">-</Data>
        <Data Name="SubjectLogonId">0x0</Data>
        <Data Name="TargetUserSid">S-1-0-0</Data>
        <Data Name="TargetUserName">[Deleted for security purposes]</Data>
        <Data Name="TargetDomainName">[Deleted for security purposes]</Data>
        <Data Name="Status">0xc000006d</Data>
        <Data Name="FailureReason">%%2313</Data>
        <Data Name="SubStatus">0xc000006a</Data>
        <Data Name="LogonType">3</Data>
        <Data Name="LogonProcessName">NtLmSsp </Data>
        <Data Name="AuthenticationPackageName">NTLM</Data>
        <Data Name="WorkstationName">MyComputer</Data>
        <Data Name="TransmittedServices">-</Data>
        <Data Name="LmPackageName">-</Data>
        <Data Name="KeyLength">0</Data>
        <Data Name="ProcessId">0x0</Data>
        <Data Name="ProcessName">-</Data>
        <Data Name="IpAddress">-</Data>
        <Data Name="IpPort">-</Data>
      </EventData>
    </Event>

    I am trying to stop Terminal Services attacks, and I have found nothing online after several hours of searching. Any insight is appreciated.
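    The following is not from the question and does not explain why the field is blank; it is only an illustrative sketch of how 4625 entries like the one above can be enumerated and their TargetUserName/WorkstationName/IpAddress data items read programmatically, in case correlating them with other logs helps. Reading the Security log requires administrative rights, and the field names are taken from the event XML above.

    using System;
    using System.Diagnostics.Eventing.Reader;
    using System.Xml.Linq;

    class FailedLogonDump
    {
        static void Main()
        {
            // Newest failed-logon (4625) events first.
            EventLogQuery query = new EventLogQuery("Security", PathType.LogName,
                                                    "*[System[(EventID=4625)]]");
            query.ReverseDirection = true;

            XNamespace ns = "http://schemas.microsoft.com/win/2004/08/events/event";

            using (EventLogReader reader = new EventLogReader(query))
            {
                for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
                {
                    XElement xml = XElement.Parse(record.ToXml());
                    string user = "", workstation = "", ip = "";

                    foreach (XElement data in xml.Descendants(ns + "Data"))
                    {
                        string name = (string)data.Attribute("Name");
                        if (name == "TargetUserName") user = data.Value;
                        if (name == "WorkstationName") workstation = data.Value;
                        if (name == "IpAddress") ip = data.Value;
                    }

                    Console.WriteLine("{0}  user={1}  workstation={2}  ip={3}",
                                      record.TimeCreated, user, workstation, ip);
                    record.Dispose();
                }
            }
        }
    }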

    Read the article

  • Quality of TFS 2008 merged code

    - by paologios
    Does the quality of code merged by TFS 2008 depend on the programming language used? I know merging in Java/Subversion, and merging a branch back to its trunk usually does not create many conflicts. In my company we now use VB.NET. When I merge two files, TFS does not always get the code blocks right, e.g. it does not match the right If...Then / End If lines. To give an example: File 2 is created as a branch of File 1. Both files were changed later; now I'm merging those files and am receiving conflicts. The End If lines marked (1) are detected as corresponding, meaning the added event handler Button1_Click gets deleted. Now I wonder whether this behavior depends on the language (C# vs. VB.NET), or whether other source control solutions are just better than TFS? (And I really liked TFS up to now :) )

    File 1:

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
        If Not Page.IsPostBack Then
            Label1.Text = "Hello"
            Label2.Text = "World"
        End If
    End Sub

    Protected Sub Button2_Click(ByVal sender, ByVal e As System.EventArgs) Handles Button2.Click
        ' ....
        If Page.IsValid Then
            Label3.Text = "Hello Button 2"
        End If
        ' ....
    End Sub

    File 2 (created as a branch of File 1):

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
        If Not Page.IsPostBack Then
            fillTableFromDatabase()
        End If   ' (1)
    End Sub

    Protected Sub Button1_Click(ByVal sender, ByVal e As System.EventArgs) Handles Button1.Click
        ' do something here
    End Sub

    Protected Sub Button2_Click(ByVal sender, ByVal e As System.EventArgs) Handles Button2.Click
        ' ....
        If Page.IsValid Then
        End If   ' (1)
        ' ....
    End Sub

    Read the article
