Search Results

Search found 130394 results on 5216 pages for 'run a sql script file in c'.


  • Recursively select only 2 levels of records using common SQL

    - by Tony
    I have an Employee table. It's a self-referencing table, with managerId referring to the primary key empID. I want to find two levels of records for a given empId. For example, given empId=5: if empId=5 has child records, display them, as well as the children of those child records. You may also provide a recursive suggestion without a level limitation. The database is SQL Server 2005.
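    A recursive common table expression is the usual tool for this on SQL Server 2005. The sketch below is minimal and illustrative; the table and column names (Employee, empID, managerId) come from the question, while the two-level cut-off via a Level counter is an assumption about the intended output:

        -- minimal sketch: direct reports of @empId plus their reports (2 levels)
        DECLARE @empId INT
        SET @empId = 5

        ;WITH Subordinates (empID, managerId, Level) AS
        (
            SELECT e.empID, e.managerId, 1
            FROM Employee e
            WHERE e.managerId = @empId
            UNION ALL
            SELECT e.empID, e.managerId, s.Level + 1
            FROM Employee e
            INNER JOIN Subordinates s ON e.managerId = s.empID
            WHERE s.Level < 2            -- drop this predicate for unlimited depth
        )
        SELECT * FROM Subordinates;

    Removing the WHERE s.Level < 2 predicate (and, if the hierarchy is deep, adding OPTION (MAXRECURSION 0)) gives the unlimited-depth variant.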

    Read the article

  • Parameterizing a SQL IN clause?

    - by Jeff Atwood
    How do I parameterize a query containing an IN clause with a variable number of arguments, like this one?

        select * from Tags
        where Name in ('ruby','rails','scruffy','rubyonrails')
        order by Count desc

    In this query, the number of arguments could be anywhere from 1 to 5. I would prefer not to use a dedicated stored procedure for this (or XML), but if there is some fancy SQL Server 2008 specific way of doing it elegantly, I am open to that.
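    Since SQL Server 2008 is in scope, table-valued parameters are the feature built for exactly this. A minimal sketch, where the type and procedure names are made up for illustration:

        -- one-time setup: a table type to carry the list of values
        CREATE TYPE dbo.TagNameList AS TABLE (Name nvarchar(150) NOT NULL);
        GO
        -- the parameterized query joins against the TVP instead of using IN
        CREATE PROCEDURE dbo.GetTagsByName
            @Names dbo.TagNameList READONLY
        AS
            SELECT t.*
            FROM Tags t
            INNER JOIN @Names n ON n.Name = t.Name
            ORDER BY t.[Count] DESC;
        GO

    From ADO.NET the parameter is passed as SqlDbType.Structured with a DataTable holding one row per value.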

    Read the article

  • Conversion failed when converting the varchar value to data type int

    - by desi
    I have a varchar(1000) value, declared below, that contains all numbers, and I want to execute the following script. I need this to work, please:

        Declare @PostalCode varchar(1000) = 0
        set @PostalCode = '7005036,7004168,7002314,7001188,6998955'

        Select hl.*
        From CountryLocation cl
        INNER JOIN refPostalCodes pc ON pc.PostalCode = hl.PostalCode
        where pc.Postalcode in (@PostalCode)
          and pc.notDeleted = 1
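    The IN (@PostalCode) predicate compares each integer PostalCode against the entire comma-separated string as a single value, which is what triggers the conversion error. One workaround, sketched here under the assumption that the list stays reasonably short (and that the hl alias in the question is meant to be the CountryLocation alias cl), is to do the membership test with string matching instead of IN; a split function or table-valued parameter are cleaner alternatives for long lists:

        Declare @PostalCode varchar(1000)
        set @PostalCode = '7005036,7004168,7002314,7001188,6998955'

        Select cl.*
        From CountryLocation cl
        INNER JOIN refPostalCodes pc ON pc.PostalCode = cl.PostalCode
        where charindex(',' + cast(pc.Postalcode as varchar(20)) + ',',
                        ',' + @PostalCode + ',') > 0     -- value appears somewhere in the list
          and pc.notDeleted = 1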

    Read the article

  • rewards the products qualify for

    - by Rod
    Products purchased:

        bana
        bana
        bana
        stra
        kiwi

    Reward requirements table (related to a rewards table), as (reward id, product) pairs:

        1,bana
        1,bana
        1,bana
        2,stra
        2,bana
        3,stra
        4,cart
        5,bana
        5,bana
        5,oliv

    Can you help me with SQL to get the rewards that the purchased products qualify for? In the case above the reward ids would be 1, 2 and 3. If there is a better design that would make the solution easier, I welcome that as well. I'm using product names for the sake of easier explaining, I hope. (I'll replace them with product ids later.)
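    This is a relational-division problem where duplicates matter (reward 1 needs three 'bana' rows). A minimal sketch, assuming illustrative tables Purchases(Product) and RewardRequirements(RewardId, Product): compare per-product counts on both sides and keep the rewards whose every requirement is covered.

        SELECT rr.RewardId
        FROM (SELECT RewardId, Product, COUNT(*) AS NeededQty
              FROM RewardRequirements
              GROUP BY RewardId, Product) rr
        LEFT JOIN (SELECT Product, COUNT(*) AS PurchasedQty
                   FROM Purchases
                   GROUP BY Product) p
               ON p.Product = rr.Product
        GROUP BY rr.RewardId
        -- qualify only if every required product is purchased in sufficient quantity
        HAVING MIN(CASE WHEN p.PurchasedQty >= rr.NeededQty THEN 1 ELSE 0 END) = 1;

    With the sample data above this returns reward ids 1, 2 and 3.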

    Read the article

  • SQL Server 2008 permission and encryption

    - by paranjai
    I have encrypted columns in some of the tables in SQL Server 2008. As the db owner I have access to encrypt and decrypt the data using the symmetric key and certificate. But some other users currently have only datareader and datawriter rights, and when they execute any stored procedure that uses the key and certificate they get an error along the lines of "User does not have the right on the certificate to execute". What rights / exact permissions should I grant them to solve this problem?
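    A hedged sketch of the grants that are usually missing in this situation (the key, certificate and user names below are placeholders): the caller needs to see the symmetric key's definition and to use the certificate that protects it. An alternative that avoids granting key permissions to end users is signing the stored procedure with the certificate (ADD SIGNATURE).

        GRANT VIEW DEFINITION ON SYMMETRIC KEY::MyColumnKey TO SomeUser;
        GRANT CONTROL ON CERTIFICATE::MyCertificate TO SomeUser;  -- lets OPEN SYMMETRIC KEY decrypt with the cert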

    Read the article

  • Exception handling in SQL?

    - by Vineet
    Is there any way to handle exceptions in SQL (Oracle 9i)? I am trying to divide the values of a column that contains both numbers and literals, so I need to pick out only the numbers: if a value is divisible it is a number, but if it contains literals it cannot be divided and an error is raised. How can I handle those errors? Please suggest.
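    In Oracle, exception handling lives in PL/SQL rather than in a plain SQL statement. A minimal sketch, with an illustrative table t(val VARCHAR2(100)), that simply skips the values which cannot be converted to numbers:

        DECLARE
          n NUMBER;
        BEGIN
          FOR r IN (SELECT val FROM t) LOOP
            BEGIN
              n := TO_NUMBER(r.val);                 -- fails for non-numeric literals
              DBMS_OUTPUT.PUT_LINE(r.val || ' / 2 = ' || n / 2);
            EXCEPTION
              WHEN VALUE_ERROR OR INVALID_NUMBER THEN
                NULL;                                -- not a number: ignore and continue
            END;
          END LOOP;
        END;
        /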

    Read the article

  • SQL query to select records from three tables separately

    - by Azhar
    I need a SQL query which selects the records from three tables, and there is no relation between these tables. Actually I want to make it a VIEW. Suppose there are three tables: Table1, Table2, Table3. I want to show the records of Table1 first, with some filter criteria, then the records from Table2, and last those from Table3, so that when we execute the view it shows the records like a single table. There can be any number of rows, but the records must be in this sequence.
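    A minimal sketch of one way to do this with UNION ALL; the column list and filter are placeholders, and the SourceOrder column is an assumption because a view by itself cannot guarantee row order - the ORDER BY has to go on the query that reads the view:

        CREATE VIEW dbo.CombinedRecords
        AS
        SELECT 1 AS SourceOrder, Col1, Col2 FROM Table1 WHERE Col1 = 'some filter'
        UNION ALL
        SELECT 2 AS SourceOrder, Col1, Col2 FROM Table2
        UNION ALL
        SELECT 3 AS SourceOrder, Col1, Col2 FROM Table3;
        GO

        SELECT Col1, Col2
        FROM dbo.CombinedRecords
        ORDER BY SourceOrder;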

    Read the article

  • SQL Server replication question.

    - by schar
    I inherited a SQL Server where we put data in a table (call it TableA) in a database (DB-A). I can see that TableA in another database on the same server (DB-B) gets the same data right away. Any idea how this is implemented? I have been trying to see it in a trace, but so far no luck. At this stage I am not sure if it's replication; that is just a guess.
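    A few hedged checks that narrow down how the copy happens (object names as in the question; sys.tables/sys.objects assume SQL Server 2005 or later). A trigger on the source table, a published/replicated table, or DB-B's "table" actually being a view or synonym are the most common explanations for an immediate copy:

        -- is there a trigger on the source table copying rows to DB-B?
        EXEC [DB-A].dbo.sp_helptrigger 'dbo.TableA';

        -- is the source table marked for replication?
        SELECT name, is_replicated
        FROM [DB-A].sys.tables
        WHERE name = 'TableA';

        -- is DB-B's "TableA" really a table, or a view/synonym over DB-A?
        SELECT name, type_desc
        FROM [DB-B].sys.objects
        WHERE name = 'TableA';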

    Read the article

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I’m actually very excited about it. The term is Data Scientist, and since it’s new, it’s fairly undefined. I’ll explain what I think it means, and why I’m excited about it.

    In general, I’ve found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. The term Data Scientist (as far as I can make out this early in its use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly.

    I’ve divided the knowledge base for someone that would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas.

    Persistence

    The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it’s just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all?

    This is an important decision - it’s a foundation choice that deals not only with a lot of expense of purchasing systems or even using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also the skillsets and other resources needed to care and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to.

    Processing

    Once the decision is made to store the data, the next set of decisions are based around how to process the data. An RDBMS scales well to a certain level, and provides a high degree of ACID compliance as well as offering a well-known set-based language to work with this data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage. In fact, in many cases - most of the ones I’m dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one.

    Presentation

    This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it’s the data behind the graphics that is the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you’re on the right track - that are only solvable when you understand why the statistical formula works the way it does.

    And there are lots of ways of presenting data. Sometimes all you need is a “yes” or “no” answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist.

    Why I’m excited

    I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I’ve been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn’t germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data.

    So watch this space. No, I’m not leaving Azure or distributed computing or Microsoft. In fact, I think I’m perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop to allow me to explore. And I’m happy to share what I learn along the way.

    Read the article

  • xml file save/read error (making a highscore system for XNA game)

    - by Eddy
    I get an error after I write the player name to the file for the second or third time:

        An unhandled exception of type 'System.InvalidOperationException' occurred in System.Xml.dll
        Additional information: There is an error in XML document (18, 17).

    It stops in the highscores load method, on data = (HighScoreData)serializer.Deserialize(stream);. The problem is that somehow it adds an additional ">" at the end of my .dat file. Could anyone tell me how to fix this?

    The file before the save looks like:

        <?xml version="1.0"?>
        <HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <PlayerName>
            <string>neil</string>
            <string>shawn</string>
            <string>mark</string>
            <string>cindy</string>
            <string>sam</string>
          </PlayerName>
          <Score>
            <int>200</int>
            <int>180</int>
            <int>150</int>
            <int>100</int>
            <int>50</int>
          </Score>
          <Count>5</Count>
        </HighScoreData>

    The file after the save looks like:

        <?xml version="1.0"?>
        <HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <PlayerName>
            <string>Nick</string>
            <string>Nick</string>
            <string>neil</string>
            <string>shawn</string>
            <string>mark</string>
          </PlayerName>
          <Score>
            <int>210</int>
            <int>210</int>
            <int>200</int>
            <int>180</int>
            <int>150</int>
          </Score>
          <Count>5</Count>
        </HighScoreData>>

    The part of my code that does all of the save/load to XML is below.

    Declarations:

        [Serializable]
        public struct HighScoreData
        {
            public string[] PlayerName;
            public int[] Score;
            public int Count;

            public HighScoreData(int count)
            {
                PlayerName = new string[count];
                Score = new int[count];
                Count = count;
            }
        }

        IAsyncResult result = null;
        bool inputName;
        HighScoreData data;
        int Score = 0;
        public string NAME;
        public string HighScoresFilename = "highscores.dat";

    Game1 constructor:

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
            Width = graphics.PreferredBackBufferWidth = 960;
            Height = graphics.PreferredBackBufferHeight = 640;
            GamerServicesComponent GSC = new GamerServicesComponent(this);
            Components.Add(GSC);
        }

    Initialize method (end of it):

        protected override void Initialize()
        {
            //other game code
            base.Initialize();
            string fullpath = Path.Combine(HighScoresFilename);
            if (!File.Exists(fullpath))
            {
                //If the file doesn't exist, make a fake one...
                // Create the data to save
                data = new HighScoreData(5);
                data.PlayerName[0] = "neil";
                data.Score[0] = 200;
                data.PlayerName[1] = "shawn";
                data.Score[1] = 180;
                data.PlayerName[2] = "mark";
                data.Score[2] = 150;
                data.PlayerName[3] = "cindy";
                data.Score[3] = 100;
                data.PlayerName[4] = "sam";
                data.Score[4] = 50;
                SaveHighScores(data, HighScoresFilename);
            }
        }

    All methods for loading, saving and output:

        public static void SaveHighScores(HighScoreData data, string filename)
        {
            // Get the path of the save game
            string fullpath = Path.Combine("highscores.dat");

            // Open the file, creating it if necessary
            FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate);
            try
            {
                // Convert the object to XML data and put it in the stream
                XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                serializer.Serialize(stream, data);
            }
            finally
            {
                // Close the file
                stream.Close();
            }
        }

        /* Load highscores */
        public static HighScoreData LoadHighScores(string filename)
        {
            HighScoreData data;

            // Get the path of the save game
            string fullpath = Path.Combine("highscores.dat");

            // Open the file
            FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate, FileAccess.Read);
            try
            {
                // Read the data from the file
                XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                data = (HighScoreData)serializer.Deserialize(stream); // this is the line
                                                                      // where the program gives an error
            }
            finally
            {
                // Close the file
                stream.Close();
            }
            return (data);
        }

        /* Save player highscore when game ends */
        private void SaveHighScore()
        {
            // Create the data to save
            HighScoreData data = LoadHighScores(HighScoresFilename);
            int scoreIndex = -1;
            for (int i = 0; i < data.Count; i++)
            {
                if (Score > data.Score[i])
                {
                    scoreIndex = i;
                    break;
                }
            }
            if (scoreIndex > -1)
            {
                //New high score found ... do swaps
                for (int i = data.Count - 1; i > scoreIndex; i--)
                {
                    data.PlayerName[i] = data.PlayerName[i - 1];
                    data.Score[i] = data.Score[i - 1];
                }
                data.PlayerName[scoreIndex] = NAME; //Retrieve User Name Here
                data.Score[scoreIndex] = Score;     // Retrieve score here
                SaveHighScores(data, HighScoresFilename);
            }
        }

        /* Iterate through data if highscore is called and make the string to be saved */
        public string makeHighScoreString()
        {
            // Create the data to save
            HighScoreData data2 = LoadHighScores(HighScoresFilename);

            // Create scoreBoardString
            string scoreBoardString = "Highscores:\n\n";
            for (int i = 0; i < 5; i++)
            {
                scoreBoardString = scoreBoardString + data2.PlayerName[i] + "-" + data2.Score[i] + "\n";
            }
            return scoreBoardString;
        }

    When I get this working I will start this code when I call game over (for now I start it when I press some buttons, so I can test it faster):

        public void InputYourName()
        {
            if (result == null && !Guide.IsVisible)
            {
                string title = "Name";
                string description = "Write your name in order to save your Score";
                string defaultText = "Nick";
                PlayerIndex playerIndex = new PlayerIndex();
                result = Guide.BeginShowKeyboardInput(playerIndex, title, description, defaultText, null, null);
                // NAME = result.ToString();
            }
            if (result != null && result.IsCompleted)
            {
                NAME = Guide.EndShowKeyboardInput(result);
                result = null;
                inputName = false;
                SaveHighScore();
            }
        }

    This is where I call the output to the screen (I'll call this in the highscores menu section when I am done with debugging):

            spriteBatch.DrawString(Font1, "" + makeHighScoreString(), new Vector2(500, 200), Color.White);
        }

    Read the article

  • Cisco Configuration backup with Windows Script.

    - by Jeff
    We have a client with a lot of Cisco devices and we would like to automate the backups of these devices through telnet. We have both 2003 and 2008 servers and would ideally use TFTP to back them up. I wrote this:

        Set WshShell = WScript.CreateObject("WScript.Shell")
        Dim fso
        Set fso = CreateObject("Scripting.FileSystemObject")
        Dim ciscoList
        ciscoList = "D:\Scripts\SwitchList.txt"
        Set theSwitchList = fso.OpenTextFile(ciscoList, 1)

        Do While theSwitchList.AtEndOfStream <> True
            cisco = theSwitchList.ReadLine
            Run "cmd.exe"
            SendKeys "telnet "
            SendKeys cisco
            SendKeys "{ENTER}"
            SendKeys "USERNAME"
            SendKeys "{ENTER}"
            SendKeys "PASSWORD"
            SendKeys "{ENTER}"
            SendKeys "en"
            SendKeys "{ENTER}"
            SendKeys "PASSWORD"
            SendKeys "{ENTER}"
            SendKeys "copy startup-config tftp{ENTER}"
            SendKeys "(TFTP IP){ENTER}"
            SendKeys "FileName.txt{ENTER}"
            SendKeys "exit{ENTER}" 'close telnet session
            SendKeys "{ENTER}"     'get command prompt back
            SendKeys "{ENTER}"
            SendKeys "exit{ENTER}" 'close cmd.exe
            On Error Resume Next
            WScript.Sleep 3000
        Loop

        Sub SendKeys(s)
            WshShell.SendKeys s
            WScript.Sleep 300
        End Sub

        Sub Run(command)
            WshShell.Run command
            WScript.Sleep 100
            WshShell.AppActivate command
            WScript.Sleep 300
        End Sub

    But the problem with this is that the SendKeys calls are sent to the console session; I'm trying to find a solution that would not require a user to be logged in. Does anyone have any ideas? I have some knowledge of VBS and PowerShell and a pretty good grasp of batching.

    Read the article

  • Generating the query plan takes 5 minutes, the query itself runs in milliseconds. What's up?

    - by TheImirOfGroofunkistan
    I have a fairly complex (or ugly depending on how you look at it) stored procedure running on SQL Server 2008. It bases a lot of the logic on a view that has a pk table and a fk table. The fk table is left joined to the pk table slightly more than 30 times (the fk table has a poor design - it uses name value pairs that I need to flatten out. Unfortunately, it's 3rd party and I cannot change it). Anyway, it had been running fine for weeks until I periodically noticed a run that would take 3-5 minutes. It turns out that this is the time it takes to generate the query plan. Once the query plan exists and is cached, the stored procedure itself runs very efficiently. Things run smoothly until there is a reason to regenerate and cache the query plan again. Has anyone seen this? Why does it take so long to generate the plan? Are there ways to make it come up with a plan faster?
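    A couple of hedged T-SQL ideas for confirming and then containing the compile cost (the procedure name below is a placeholder): SET STATISTICS TIME separates "parse and compile time" from execution time, and for a 30-join statement a join-order hint such as OPTION (FORCE ORDER) can shrink the optimizer's search space dramatically, at the cost of trusting the written join order.

        SET STATISTICS TIME ON;
        EXEC dbo.MyComplexProc;        -- illustrative name; output shows compile vs. run time
        SET STATISTICS TIME OFF;

        -- inside the stored procedure, on the statement that references the wide view:
        -- SELECT ...
        -- FROM pk_table p
        --     LEFT JOIN fk_table f1 ON ...
        --     LEFT JOIN fk_table f2 ON ...
        -- OPTION (FORCE ORDER);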

    Read the article

  • slow SQL command

    - by Retrocoder
    I need to take some data from one table (and expand some XML on the way) and put it in another table. As the source table can have thousands of records, which caused a timeout, I decided to do it in batches of 100 records. The code is run on a schedule, so doing it in batches works OK for the customer. If I have, say, 200 records in the source database the sproc runs very fast, but if there are thousands it takes several minutes. I'm guessing that the "TOP 100" only takes the top 100 after it has gone through all the records. I need to change the whole code and sproc at some point as it doesn't scale, but for now is there a quick fix to make this run quicker?

        INSERT INTO [deviceManager].[TransactionLogStores]
        SELECT TOP 100
            [EventId],
            [message].value('(/interface/mac)[1]', 'nvarchar(100)') AS mac,
            [message].value('(/interface/device)[1]', 'nvarchar(100)') AS device_type,
            [message].value('(/interface/id)[1]', 'nvarchar(100)') AS device_id,
            [message].value('substring(string((/interface/id)[1]), 1, 6)', 'nvarchar(100)') AS store_id,
            [message].value('(/interface/terminal/unit)[1]', 'nvarchar(100)') AS unit,
            [message].value('(/interface/terminal/trans/event)[1]', 'nvarchar(100)') AS event_id,
            [message].value('(/interface/terminal/trans/data)[1]', 'nvarchar(100)') AS event_data,
            [message].value('substring(string((/interface/terminal/trans/data)[1]), 9, 11)', 'nvarchar(100)') AS badge,
            [message].value('(/interface/terminal/trans/time)[1]', 'nvarchar(100)') AS terminal_time,
            MessageRecievedAt_UTC AS db_time
        FROM [deviceManager].[TransactionLog]
        WHERE EventId > @EventId
        --WHERE MessageRecievedAt_UTC > @StartTime AND MessageRecievedAt_UTC < @EndTime
        ORDER BY terminal_time DESC
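    One low-risk tweak worth trying, sketched below with the question's own names: because terminal_time is itself shredded out of the XML, the ORDER BY forces the .value() calls to run against every qualifying row before TOP 100 is applied. Picking the batch by a plain column first (a cheap, index-friendly step) and shredding the XML only for those 100 rows avoids that, under the assumption that EventId roughly tracks arrival order so the newest EventIds correspond to the newest terminal times. The INSERT INTO wrapper stays the same; only the SELECT shape changes.

        SELECT
            t.[EventId],
            t.[message].value('(/interface/mac)[1]', 'nvarchar(100)') AS mac,
            -- ...the remaining .value() columns exactly as in the original query...
            t.MessageRecievedAt_UTC AS db_time
        FROM
        (
            SELECT TOP 100 *
            FROM [deviceManager].[TransactionLog]
            WHERE EventId > @EventId
            ORDER BY EventId            -- cheap ordering on a plain column, ideally indexed
        ) AS t;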

    Read the article

  • HTML5 Drag n Drop File Upload

    - by Paris
    I'm running a website where I'd like to upload files with drag 'n' drop, using HTML5's File API and FileReader API. I have successfully managed to create a new FileReader, but I don't know how to upload the file. My code (JavaScript) is the following:

        holder = document.getElementById('uploader');

        holder.ondragover = function () {
            $("#uploader").addClass('dragover');
            return false;
        };

        holder.ondragend = function () {
            $("#uploader").removeClass('dragover');
            return false;
        };

        holder.ondrop = function (e) {
            $("#uploader").removeClass('dragover');
            e.preventDefault();
            var file = e.dataTransfer.files[0],
                reader = new FileReader();
            reader.onload = function (event) {
                //I should upload the file now...
            };
            reader.readAsDataURL(file);
            return false;
        };

    I also have a form (id: upload-form) and an input file field (id: upload-input). Do you have any ideas? P.S. I use jQuery, that's why there is $("#uploader") and the others.

    Read the article

  • What file format can I use to output a formatted text file straight from a program without having the markup be too complicated?

    - by Matt
    Premise: I am parsing a file that is quite nearly XML, but not quite. From this file I would like to extract data and output it in a file that a user could open up in some program and read. To make the data readable, I would almost certainly need to format the text. In case it matters, I will probably be using Java to write the program.

    Problem: I cannot find a file format that supports formatting without having terribly complex rules and encoding problems.

    Attempts: I looked into a basic .txt extension first, but it does not offer enough formatting. I then tried the .rtf extension, but the rules for outputting text seem terribly complicated. It was then suggested that I use XML, but I do not understand how such a file would be viewed. This is probably the best solution, but I don't understand much about it. Perhaps somebody could shed some light here.

    In other words: could somebody suggest an easy-to-use file format and/or shed some light on how to use XML for text formatting and viewing?

    Read the article

  • How to convert DATETIME to FILETIME value in T-SQL?

    - by Alek Davis
    I need to convert a SQL Server DATETIME value to FILETIME in a T-SQL SELECT statement (on SQL Server 2000). Is there a built-in function to do this? If not, can someone help me figure out how to implement this conversion routine as a UDF (or just plain Transact-SQL)? Here is what I know: FILETIME is a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC) (per MSDN: FILETIME Structure). The SQL Server time era starts on 1900-01-01 00:00:00 (per SELECT CAST(0 as DATETIME)). I found several examples showing how to convert FILETIME values to T-SQL DATETIME (I'm not 100% sure they are accurate, though), but could not find anything about the reverse conversion. Even the general idea (or algorithm) would help.
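    A minimal sketch of the arithmetic, under the assumption that the DATETIME already represents UTC: there are 109,207 days between 1601-01-01 and 1900-01-01, and one day is 864,000,000,000 hundred-nanosecond intervals, so the conversion is just days plus milliseconds scaled into 100-ns units (DATETIME itself is only accurate to about 3 ms).

        DECLARE @dt DATETIME
        DECLARE @days INT
        DECLARE @ms BIGINT
        DECLARE @filetime BIGINT

        SET @dt = GETUTCDATE()                          -- FILETIME is defined in UTC
        SET @days = DATEDIFF(day, '19000101', @dt)      -- whole days since the SQL Server epoch
        SET @ms = DATEDIFF(millisecond, DATEADD(day, @days, '19000101'), @dt)

        SET @filetime = (CAST(@days AS BIGINT) + 109207) * CAST(864000000000 AS BIGINT)
                      + @ms * 10000                     -- 1 ms = 10,000 hundred-nanosecond ticks

        SELECT @filetime AS [filetime]

    The same expressions can be wrapped in a scalar UDF taking the DATETIME as a parameter; note that on SQL Server 2000 nondeterministic functions such as GETUTCDATE() cannot be called from inside a UDF, so the value has to be passed in.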

    Read the article

  • Send bulk email from SQL Server 2008

    - by dbnewbie
    Hi, I am relatively new to databases, so please forgive me if my query sounds trivial. I need to send bulk email to all the contacts listed in a particular table. The body of the email is stored in a .doc file which begins "Dear Mr. _". SQL Server should read this file, insert the last name of the contact in the blank space and send it to the corresponding email address for that last name. Any thoughts on how to proceed with this? Any insights, suggestions or tips will be greatly appreciated. Thanks!
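    A hedged sketch of the sending side using Database Mail, which ships with SQL Server 2008. It assumes a configured mail profile, an illustrative Contacts(LastName, EmailAddress) table, and that the letter is available as plain text (T-SQL cannot parse a binary .doc directly, so the template would need to be saved as text or stored in a table first). All names are placeholders:

        DECLARE @template NVARCHAR(MAX), @body NVARCHAR(MAX),
                @email NVARCHAR(256), @last NVARCHAR(100)
        SET @template = N'Dear Mr. {LastName}, ...'        -- illustrative template text

        DECLARE contact_cursor CURSOR FOR
            SELECT LastName, EmailAddress FROM dbo.Contacts
        OPEN contact_cursor
        FETCH NEXT FROM contact_cursor INTO @last, @email
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @body = REPLACE(@template, N'{LastName}', @last)
            EXEC msdb.dbo.sp_send_dbmail
                @profile_name = N'MailProfile',            -- assumed Database Mail profile
                @recipients   = @email,
                @subject      = N'Greetings',
                @body         = @body
            FETCH NEXT FROM contact_cursor INTO @last, @email
        END
        CLOSE contact_cursor
        DEALLOCATE contact_cursor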

    Read the article

  • Files and filegroups in SQL Server 2005

    - by Dhivagar
    Can we move a file from the default filegroup to another filegroup? Sample code is given below:

        create database EMPLOYEE
        ON PRIMARY
        ( NAME = 'PRIMARY_01',
          FILENAME = 'C:\METADATA\PRIM01.MDF',
          SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB),
        ( NAME = 'SECONDARY_02',
          FILENAME = 'C:\METADATA\SEC02.NDF' ),
        FILEGROUP EMPLOYEE_dETAILS
        ( NAME = 'EMPDETILS_01',
          FILENAME = 'C:\METADATA\EMPDET01.NDF',
          SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB),
        ( NAME = 'EMPDETILS_02',
          FILENAME = 'C:\METADATA\EMPDET02.NDF',
          SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB)
        LOG ON
        ( NAME = 'TRANSACLOG',
          FILENAME = 'c:\METADATA\TRAS01.LDF',
          SIZE = 5 MB, MAXSIZE = 50 MB, FILEGROWTH = 2 MB )

    Now I want to move the file FILENAME = 'C:\METADATA\SEC02.NDF' from the default PRIMARY filegroup to the filegroup EMPLOYEE_dETAILS. Can anyone assist?
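    As far as I know, SQL Server does not allow an existing file to be reassigned to a different filegroup (ALTER DATABASE ... MODIFY FILE cannot change the filegroup). A hedged sketch of the usual workaround: add a new file to the target filegroup, empty the old file, then drop it. Since data lives per filegroup per object, any tables or indexes that should end up on EMPLOYEE_dETAILS have to be rebuilt on that filegroup as a separate step. The new file name and path below are illustrative:

        -- add a replacement file in the target filegroup
        ALTER DATABASE EMPLOYEE
        ADD FILE ( NAME = 'EMPDETILS_03', FILENAME = 'C:\METADATA\EMPDET03.NDF' )
        TO FILEGROUP EMPLOYEE_dETAILS;

        -- push SEC02's data onto the remaining files of its own (PRIMARY) filegroup
        DBCC SHRINKFILE ('SECONDARY_02', EMPTYFILE);

        -- the now-empty file can then be removed
        ALTER DATABASE EMPLOYEE REMOVE FILE SECONDARY_02;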

    Read the article

  • How do I use BCP or SQL Server Management Studio to get BLOB data out of SQL Server?

    - by Eric
    I'm sorry if this question has been asked already, but I couldn't find it anywhere. I have a table that stores files as BLOBs. The column that holds the file is an image datatype. I would like to be able to extract the binary data out of the column and turn it into an actual file, ideally with BCP or Management Studio if possible. I have tried BCP, but for some reason when I try to pull out an Office document, Word thinks it's corrupt. Here's what I've tried so far (obviously the values have been changed to protect the innocent :)

        bcp "select document_binary_data from database where id = 12345" queryout "c:\filename.doc" -n -S server -U username -P password

    This isn't working, though. Any thoughts?

    Read the article

  • SQL Server Compact Edition 3.5: Does the data persist in the database file?

    - by jerbersoft
    Hi guys, I have a SQL Server Compact Edition database for a desktop application, and I am completely new to SQL Server Compact Edition. The question is: does the data inserted into the database persist even if the application is shut down or restarted? I ask because I can't seem to find my data when using SQL Server Management Studio to manage the database. Am I missing something? EDIT: Is SQL Server Compact Edition used for caching local data only? Can't we use it like we normally use SQL Server Express, for example managing data through SQL Server Management Studio?

    Read the article

  • SQL Server - bulk insert error

    - by user554134
    I am using bulk insert and getting the error below. Note: the data in the load file is not beyond the configured column length.

    Running command:

        bulk insert load_data from 'C:\temp\dataload\load_file.txt'
        with (firstrow = 1, fieldterminator = '0x09', rowterminator = '\n',
              MAXERRORS = 0, ERRORFILE = 'C:\temp\dataload\load_file')

    Contents of the load file (tab-delimited):

        user_name    file_path    asset_owner    city       import_date
        admin        C:\          admin          toronto    04/12/2012

    Error:

        Msg 4863, Level 16, State 1, Line 1
        Bulk load data conversion error (truncation) for row 1, column 6 (validated).
        Msg 7399, Level 16, State 1, Line 1
        The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
        Msg 7330, Level 16, State 2, Line 1
        Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".

    Read the article
