Search Results

Search found 5861 results on 235 pages for 'ssis reporting pack'.


  • SSIS - The expression cannot be parsed because it contains invalid elements at the location specified

    - by simonsabin
    If you get the following error when trying to write an expression, there is an easy solution: Attempt to parse the expression "@[User::FilePath] + "\" + @[User::FileName] + ".raw"" failed. The token "." at line number "1", character number "<some position>" was not recognized. The expression cannot be parsed because it contains invalid elements at the location specified. The SSIS expression language is a C-based language, and \ is a token, which means you have to escape it with another one, i.e. "\" becomes "\\". Unlike C#, you can't prefix the string with a @; you have to use the escaping route. In summary, whenever you want to use \ you need to write two: \\
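    Applying that rule to the expression from the error message above gives the working version (simply the original expression with each backslash doubled):

        @[User::FilePath] + "\\" + @[User::FileName] + ".raw"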

    Read the article

  • How hard will it be for a programmer to learn MS SSRS and SSIS [closed]

    - by user75380
    I have a programming background in PHP/Python/Java for 5 years and I know MySQL and PostgreSQL. Currently in our company the MS SQL Business Intelligence person is leaving his job in 4 months. I am thinking of trying to go for his place, or at least try, as I want to move into the Business Intelligence field with SSRS and SSIS. I just want to know whether it is possible for me to get my head around those things in 4 months, because I have no idea how they work and how hard it will be for me to pick them up. Can I do that? I just want to know from experienced people whether I can move towards that field, and at least how I should start. In my area there is a shortage of people, so once I know the stuff I can get into junior jobs easily, but I want to hear it from experienced people.

    Read the article

  • SSIS is Case-Sensitive

    - by andyleonard
    Introduction: SSIS is case-sensitive even if the database is case-insensitive. Imagine... you work in an ETL shop where someone who believes in natural keys won the Battle of the Joins. Imagine one of your natural keys is a string. (I know it's a stretch... play along!). Let's build some tables to sketch it out. If you do not have a TestDB database, why not? Build one! You'll use it often.

        Use TestDB
        go
        Create Table SSIS1
        ( StrID char(5)
        , Name varchar(15)
        , Value int
        )
        Insert Into SSIS1...(read more)
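    To see the post's point in miniature, compare how the database and SSIS treat two keys that differ only in case. This sketch is illustrative and not part of Andy's script; it assumes a case-insensitive database collation:

        Use TestDB
        go
        Create Table SSIS2 (StrID char(5), Value int)
        Insert Into SSIS2 Values ('ABCDE', 1)
        Insert Into SSIS2 Values ('abcde', 2)
        -- A case-insensitive collation treats the two keys as equal,
        -- so this returns BOTH rows:
        Select * From SSIS2 Where StrID = 'abcde'
        -- An SSIS Lookup cached from this table compares strings
        -- case-sensitively, so 'ABCDE' and 'abcde' match different rows
        -- (and a key arriving as 'Abcde' would match neither).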

    Read the article

  • SQL Server 2008 - Script Data as Insert Statements from SSIS Package

    - by Brandon King
    SQL Server 2008 provides the ability to script data as Insert statements using the Generate Scripts option in Management Studio. Is it possible to access the same functionality from within an SSIS package? Here's what I'm trying to accomplish... I have a scheduled job that nightly scripts out all the schema and data for a SQL Server 2008 database and then uses the script to create a "mirror copy" SQLCE 3.5 database. I've been using Narayana Vyas Kondreddi's sp_generate_inserts stored procedure to accomplish this, but it has problems with some datatypes and greater-than-4K columns (holdovers from SQL Server 2000 days). The Script Data function looks like it could solve my problems, if only I could automate it. Any suggestions?
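    The Generate Scripts wizard sits on top of SMO, so one possible route (a sketch of mine, not from the question or its answers) is to call SMO's Scripter from an SSIS Script Task with data scripting enabled. The server, database and table names below are placeholders, and the SMO assemblies must be referenced in the task:

        // Hypothetical Script Task snippet: emit INSERT statements for one table via SMO.
        using Microsoft.SqlServer.Management.Common;
        using Microsoft.SqlServer.Management.Smo;

        ServerConnection conn = new ServerConnection("localhost");   // placeholder server
        Server server = new Server(conn);
        Database db = server.Databases["MyDb"];                      // placeholder database
        Table table = db.Tables["MyTable"];                          // placeholder table

        Scripter scripter = new Scripter(server);
        scripter.Options.ScriptData = true;     // include INSERT statements
        scripter.Options.ScriptSchema = false;  // data only; set true to script schema too

        // EnumScript (rather than Script) is used when scripting data.
        foreach (string statement in scripter.EnumScript(new SqlSmoObject[] { table }))
        {
            // Append each statement to the output script for the SQLCE build.
        }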

    Read the article

  • SSIS - XML Source Script

    - by simonsabin
    The XML Source in SSIS is great if you have a 1 to 1 mapping between entity and table. You can do more complex mapping but it becomes very messy and won't perform. What other options do you have?

    The challenge with XML processing is to not need a huge amount of memory. I remember using the early versions of BizTalk, which loaded the whole document into memory to map from one document type to another. This was fine for small documents but was an absolute killer for large documents. You therefore need a streaming approach. For flexibility, however, you want to be able to generate your rows easily, and if you've ever used the XmlReader you will know it's ugly code to write. That brings me on to LINQ. There is an implementation of LINQ over XML which is really nice: you can write nice LINQ queries instead of the XmlReader stuff. The downside is that by default LINQ to XML requires a whole XML document to work with - no streaming. Your code would look like this: we create an XDocument and then enumerate over a set of anonymous types we generate from our LINQ statement.

        XDocument x = XDocument.Load("C:\\TEMP\\CustomerOrders-Attribute.xml");

        foreach (var xdata in (from customer in x.Elements("OrderInterface").Elements("Customer")
                               from order in customer.Elements("Orders").Elements("Order")
                               select new { Account = customer.Attribute("AccountNumber").Value
                                          , OrderDate = order.Attribute("OrderDate").Value }
                              ))
        {
            Output0Buffer.AddRow();
            Output0Buffer.AccountNumber = xdata.Account;
            Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
        }

    As I said, the downside to this is that you are loading the whole document into memory. I did some googling and came across some helpful videos from a nice UK DPE, Mike Taulty: http://www.microsoft.com/uk/msdn/screencasts/screencast/289/LINQ-to-XML-Streaming-In-Large-Documents.aspx. These show how you can combine LINQ and the XmlReader to get a semi-streaming approach. I took what he did and implemented it in SSIS. What I found odd was that when I ran it I got different numbers between the streamed and non-streamed versions. I found the cause was a little bug in Mike's code that causes the pointer in the XmlReader to progress past the start of the element and thus skip elements. The streaming version looks like this:

        foreach (var xdata in (from customer in StreamReader("C:\\TEMP\\CustomerOrders-Attribute.xml", "Customer")
                               from order in customer.Elements("Orders").Elements("Order")
                               select new { Account = customer.Attribute("AccountNumber").Value
                                          , OrderDate = order.Attribute("OrderDate").Value }
                              ))
        {
            Output0Buffer.AddRow();
            Output0Buffer.AccountNumber = xdata.Account;
            Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
        }

    These look very similar, and they are; the key element is the method we are calling, StreamReader. This method is what gives us streaming: it returns an enumerable list of elements, and because of the way that LINQ works this results in the data being streamed in.

        static IEnumerable<XElement> StreamReader(String filename, string elementName)
        {
            using (XmlReader xr = XmlReader.Create(filename))
            {
                xr.MoveToContent();
                while (xr.Read()) // reads the first element
                {
                    while (xr.NodeType == XmlNodeType.Element && xr.Name == elementName)
                    {
                        XElement node = (XElement)XElement.ReadFrom(xr);
                        yield return node;
                    }
                }
                xr.Close();
            }
        }

    This code is specifically designed to return a list of the elements with a specific name. The first Read reads the root element, and then the inner while loop checks to see if the current element is the type we want. If not, we do the xr.Read() again until we find the element type we want. We then use the neat function XElement.ReadFrom to read an element and all its sub-elements into an XElement. This is what is returned and can be consumed by the LINQ statement. Essentially, once one element has been read we need to check whether we are still on the same element type and name (the inner loop). This was Mike's mistake: if we called .Read again we would advance the XmlReader beyond the start of the element and so the ReadFrom method wouldn't work. So with the code above you can use whatever LINQ statement you like to flatten your XML into the rowsets you want. You could even have multiple outputs and generate your own surrogate keys.

    Read the article

  • SSIS - Range lookups

    - by Repieter
    When developing an ETL solution in SSIS we sometimes need to do range lookups. Several solutions for this can be found on the internet, but now we have built another solution which I would like to share, since it's pretty easy to implement and the performance is fast. You can download the sample package to see how it works. Make sure you have the AdventureWorks2008R2 and AdventureWorksDW2008R2 databases installed. (Apologies for the layout of this blog, I don't do this too often :))

    To give a little bit more information about the example, this is basically what it does: we load a fact table and do an SCD type 2 lookup operation on the Product dimension. This is done with a script component. First we query the data warehouse to create the lookup dataset. The query that is used for that is:

        SELECT
            [ProductKey]
            ,[ProductAlternateKey]
            ,[StartDate]
            ,ISNULL([EndDate], '9999-01-01') AS EndDate
        FROM [DimProduct]

    The output of this query is stored in a DataTable:

        string lookupQuery = @"
            SELECT
                [ProductKey]
                ,[ProductAlternateKey]
                ,[StartDate]
                ,ISNULL([EndDate], '9999-01-01') AS EndDate
            FROM [DimProduct]";

        OleDbCommand oleDbCommand = new OleDbCommand(lookupQuery, _oleDbConnection);
        OleDbDataAdapter adapter = new OleDbDataAdapter(oleDbCommand);

        _dataTable = new DataTable();
        adapter.Fill(_dataTable);

    Now that the dimension data is stored in the DataTable, we use the following method to do the actual lookup:

        public int RangeLookup(string businessKey, DateTime lookupDate)
        {
            // set default return value (Unknown)
            int result = -1;

            DataRow[] filteredRows;
            filteredRows = _dataTable.Select(string.Format("ProductAlternateKey = '{0}'", businessKey));

            for (int i = 0; i < filteredRows.Length; i++)
            {
                // check if the lookup date falls between the start date and end date of any of the records
                if (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])
                {
                    result = (filteredRows[i][0] == null) ? -1 : (int)filteredRows[i][0];
                    break;
                }
            }

            filteredRows = null;

            return result;
        }

    This method is executed for every row that passes through the script component. This is implemented in the ProcessInputRow method:

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Perform the lookup operation on the current row and put the value in the surrogate key attribute
            Row.ProductKey = RangeLookup(Row.ProductNumber, Row.OrderDate);
        }

    Now what actually happens?!

    1. Every record passes the business key and the order date to the RangeLookup method.
    2. The DataTable is then filtered on the business key of the current record. The output is stored in a DataRow[] object.
    3. We loop over the DataRow[] object to see where the order date meets the expression: (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])
    4. When the expression returns true (so where the date is between the StartDate and the EndDate), the surrogate key of the dimension record is returned.

    We have done some testing with this solution and it works great for us. Hope others can use this example to do their range lookups.
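    For comparison, the set-based T-SQL equivalent of what the script component computes would be a join on the same range predicate. This is just a sketch to show the logic; the fact table name and its columns are placeholders, and the sample package itself uses the script component above:

        SELECT f.ProductNumber,
               f.OrderDate,
               ISNULL(d.ProductKey, -1) AS ProductKey  -- -1 = Unknown, as in RangeLookup
        FROM dbo.FactSales AS f                        -- placeholder fact table
        LEFT JOIN dbo.DimProduct AS d
            ON  d.ProductAlternateKey = f.ProductNumber
            AND f.OrderDate >= d.StartDate
            AND f.OrderDate <  ISNULL(d.EndDate, '9999-01-01');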

    Read the article

  • Why I don't use SSIS checkpoint files

    - by jamiet
    In a recent discussion in regard to general ETL best practices, the subject of checkpoint files as a means for package restartability came up and I stated that I was dead against using them. For anyone that may care, here is why:

    - Configuring them is distinctly unintuitive (that's a matter of opinion, but if you follow the link I'll wager that you will agree)
    - they don't make any allowance for loop iterations
    - they cannot store variables of type Object
    - they are limited in ability. There are many scenarios where you may want to execute certain containers regardless of whether the package is started from a checkpoint file, but the current usage model does not allow for this.
    - they are ignored by eventhandlers, which wouldn't be so bad if there were a way to toggle this behaviour in certain scenarios
    - they don't work properly

    I'll expand on the last bullet point. I have encountered situations where the behaviour for tasks executing concurrently is unpredictable. That is, sometimes the completion of a task that executes concurrently with a failed/failing task will make it into the checkpoint file and sometimes it won't. This is near-impossible to reproduce but it does happen, as my good friend John Welch will hopefully concur (if he is reading). Is anyone out there making successful use of checkpoint files within SSIS? I would be interested in knowing about that if so. @Jamiet

    Read the article

  • Execute an SSIS package in Sync or Async mode from SQL Server 2012

    - by Davide Mauri
    Today I had to schedule a package stored in the shiny new SSIS Catalog that can be enabled with SQL Server 2012 (http://msdn.microsoft.com/en-us/library/hh479588(v=SQL.110).aspx). Once your packages are stored here, they will be executed using the new stored procedures created for this purpose. The script that gets executed if you run your packages right from Management Studio, or through a SQL Server Agent job, will be similar to the following:

        Declare @execution_id bigint
        EXEC [SSISDB].[catalog].[create_execution]
            @package_name = 'my_package.dtsx',
            @execution_id = @execution_id OUTPUT,
            @folder_name = N'BI',
            @project_name = N'DWH',
            @use32bitruntime = False,
            @reference_id = Null
        Select @execution_id

        DECLARE @var0 smallint = 1
        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
            @object_type = 50, @parameter_name = N'LOGGING_LEVEL', @parameter_value = @var0

        DECLARE @var1 bit = 0
        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
            @object_type = 50, @parameter_name = N'DUMP_ON_ERROR', @parameter_value = @var1

        EXEC [SSISDB].[catalog].[start_execution] @execution_id
        GO

    The problem here is that the procedure will simply start the execution of the package and return as soon as the package has been started… thus giving you the opportunity to execute packages asynchronously from your T-SQL code. This is just *great*, but what happens if I want to execute a package and WAIT for it to finish (thus having a synchronous execution of it)? You have to be sure that you add the "SYNCHRONIZED" parameter to the package execution, before the start_execution procedure:

        exec [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
            @object_type = 50, @parameter_name = N'SYNCHRONIZED', @parameter_value = 1

    And that's it.

    PS: From RC0, the SYNCHRONIZED parameter is automatically added each time you schedule a package execution through the SQL Server Agent. If you're using an external scheduler, just keep this post in mind.
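    If the caller also needs to act on the outcome of the synchronous run, one option (my addition, not from the post) is to read the execution's status from the catalog after start_execution returns; a minimal sketch against the SSISDB catalog.executions view, where status 7 means "succeeded":

        DECLARE @status int
        SELECT @status = [status]
        FROM [SSISDB].[catalog].[executions]
        WHERE execution_id = @execution_id

        IF @status <> 7
            RAISERROR('Package execution did not succeed (status %d).', 16, 1, @status)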

    Read the article

  • Details on Oracle's Primavera P6 Reporting Database R2

    - by mark.kromer
    Below is a graphic screenshot of our detailed announcement for the new Oracle data warehouse product for Primavera P6 called P6 Reporting Database R2. This DW product includes the ETL, data warehouse star schemas and ODS that you'll need to build an enterprise reporting solution for your projects & portfolios. This product is included on a restricted license basis with the new Primavera P6 Analytics R1 product from Oracle because those Analytics are built in OBIEE based on this data warehouse product.

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said "generally" and not "always"; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I'll demonstrate such a scenario, one that is pretty common when building data warehouses.

    Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore you might also have datetime fields that indicate the effective time period of each member record. Here is such a table that contains data for four dimension members {Terry, Max, Henry, Horace}: notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate], but for the purposes of clarity I have included it here.)

    Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate], and we need to use those values to look up [Customer].[CustomerId]. The logic will be: Lookup a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate]. The conventional approach to this would be to use a full cached lookup, but that isn't an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup, which enables us to change the SQL statement to use inequality operators.

    Let's take a look at the dataflow: notice these are all synchronous components. This approach works just fine; however, it does have the limitation that it has to issue a SQL statement against your lookup set for every row, thus we can expect the execution time of our dataflow to increase linearly in line with the number of rows in our dataflow; that's not good.

    OK, that's the obvious method. Let's now look at a different way of achieving this using an asynchronous Merge Join transform coupled with a Conditional Split. I've shown it post-execution so that I can include the row counts, which help to illustrate what is going on here: notice that there are more rows output from our Merge Join component than on the input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join).

    I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8 minutes long. View on Vimeo. For those that can't be bothered watching the video I'll tell you the results here: the dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary. The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors.

    It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached lookup performs a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow. I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip Comments are welcome as always. @Jamiet
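    For reference, the parameterised statement behind the non-cached lookup follows directly from the logic stated above (a sketch using the post's table and column names; the lookup's parameter markers would be mapped to the incoming [CustomerName] and [EffectiveDate] columns):

        SELECT [CustomerId]
        FROM   [Customer]
        WHERE  [CustomerName] = ?        -- incoming CustomerName
          AND  [SCDStartDate] <= ?       -- incoming EffectiveDate
          AND  ? <= [SCDEndDate]         -- incoming EffectiveDate again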

    Read the article

  • Best ASP.NET reporting engine with custom reports creation ability

    - by Alex Konduforov
    We need to choose a reporting engine for our ASP.NET application. The main functional requirement is the ability for end users (not programmers, just normal users) to create custom reports. We will be using SQL Server as the database, so I am aware of some options: SQL Server Reporting Services, Crystal Reports, Active Reports, even WindwardReports. But frankly speaking I've never used any of those except Reporting Services, and it's quite difficult to choose which one best suits the customers' need to create custom reports. Is it possible to get some pros and cons for these options, or at least your advice on what would be better to use in this case? Thanks a lot.

    Read the article

  • SSMS Tools Pack now supports Denali CTP1

    - by AaronBertrand
    Earlier today, Mladen Prajdic (blog | twitter) released an updated version of his SSMS Tools Pack (v.1.9.4), a free add-in for Management Studio that provides a ton of helpful functionality that isn't available with the native tools. I'm really glad this happened, because I've installed Denali on all of my VMs and have been using it for most of my work, and I've been missing some of the little things the tool adds. In addition to adding Denali support, Mladen also fixed a handful of minor bugs...(read more)

    Read the article

  • AIA Foundation Pack 11gR1 available!

    - by Hans Viehmann
    Following the announcement of AIA Foundation Pack 11gR1 at "Collaborate 2010" in Las Vegas (see the press release here), the software is now also available for download from edelivery.oracle.com. In addition to a summary of the new functionality, the press release contains a number of links to current information - so it is worth taking a look ...

    Read the article

  • SQL Server 2012 Service Pack 1 CTP4 is available

    - by AaronBertrand
    This morning the SQL Server team announced the release of Service Pack 1 CTP4 for SQL Server 2012. Back in July I talked about CTP3 and how the release contained BI features only; no fixes. The newer CTP does have fixes and other engine enhancements as well; there is even proper documentation in Books Online about the enhancements. The download page also lists them: http://www.microsoft.com/en-us/download/details.aspx?id=34700 The build # is 11.0.2845....(read more)

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I'll-share-it-in-a-blog-post blog posts. I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID. During normal execution this will get altered when the package is executed from the Execute Package Task; however, we want to make sure that at deployment time they all have a default value of -1. Of course, they tend to get changed during development, so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option, but given that we have over 40 packages and we might want to carry out this reset fairly frequently, I needed a more automated method, so I turned to Visual Studio's Find/Replace… feature. Of course, we don't know what value will be in that parameter, so I can't simply search for a particular value; hence I opted to use a regular expression to identify the value to be changed. In the rest of this blog post I'll explain how to do that.

    For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>); if you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio:

        <?xml version="1.0"?>
        <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">
          <DTS:PackageParameters>
            <DTS:PackageParameter
              DTS:CreationName=""
              DTS:DataType="3"
              DTS:Description="InterfaceHistory_ID: used for Lineage"
              DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
              DTS:ObjectName="ETLIfcHist_ID">
              <DTS:Property
                DTS:DataType="3"
                DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
            </DTS:PackageParameter>
            <DTS:PackageParameter
              DTS:CreationName=""
              DTS:DataType="3"
              DTS:Description="Some other description"
              DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"
              DTS:ObjectName="SomeOtherObjectName">
              <DTS:Property
                DTS:DataType="3"
                DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>
            </DTS:PackageParameter>
          </DTS:PackageParameters>
        </DTS:Executable>

    We are trying to identify the value of the parameter whose name is ETLIfcHist_ID – notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document (note the DTS:ObjectName="ETLIfcHist_ID" portion, which is the name of the parameter we're looking for):

        {\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>}

    Notice the two portions wrapped in pairs of curly braces "{…}"; these are important because they pick out the two portions either side of the value I want to replace, in other words the portions shown here:

        <DTS:PackageParameter
          DTS:CreationName=""
          DTS:DataType="3"
          DTS:Description="InterfaceHistory_ID: used for Lineage"
          DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
          DTS:ObjectName="ETLIfcHist_ID">
          <DTS:Property
            DTS:DataType="3"
            DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
        </DTS:PackageParameter>

    Those sections in the curly braces are termed tag expressions and can be referred to in the replace expression using a backslash and a number identifying the tag expression by its ordinal position. Hence, our replace expression is simply:

        \1-1\2

    We're saying the portion of our file identified by the find expression should be replaced by the first curly-brace section, then the literal -1, then the second curly-brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio's Find and Replace dialog. If you set it to look in "All Open Documents" then you can open up the code-behind of all your packages and change all of them at once.

    That's it! I realise that not everyone will be looking to change the value of a parameter, but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y'know, on the web, I have no doubt that someone is going to find a fault with my find regex expression, and if that person is you… that's OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute), so your find regular expression may have to change accordingly. When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage. @Jamiet

    Read the article

  • DELETE and EDIT are not working in my Python program

    - by user2968025
    This is a simple Python program that ADDs, DELETEs, EDITs and VIEWs student records. The problem is, DELETE and EDIT are not working. I don't know why, but when I tried removing one '?' in the DELETE function, I had the error that says there are only 8 columns and it needs 10. But originally, there are only 9 columns. I don't know where it got the other one to make it 10. Please help.. :(

        import sys
        import sqlite3
        import tkinter
        import tkinter as tk
        from tkinter import *
        from tkinter.ttk import *

        def newRecord():
            studentnum=""
            name=""
            age=""
            birthday=""
            address=""
            email=""
            course=""
            year=""
            section=""
            con=sqlite3.connect("Students.db")
            cur=con.cursor()
            cur.execute("CREATE TABLE IF NOT EXISTS student(studentnum TEXT, name TEXT, age TEXT, birthday TEXT, address TEXT, email TEXT, course TEXT, year TEXT, section TEXT)")

            def save():
                studentnum=en1.get()
                name=en2.get()
                age=en3.get()
                birthday=en4.get()
                address=en5.get()
                email=en6.get()
                course=en7.get()
                year=en8.get()
                section=en9.get()
                student=(studentnum,name,age,birthday,address,email,course,year,section)
                cur.execute("INSERT INTO student(studentnum,name,age,birthday,address,email,course,year,section) VALUES(?,?,?,?,?,?,?,?,?)",student)
                con.commit()

            win=tkinter.Tk();win.title("Students")
            lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Add Record")
            lbl.pack()
            lbl1=tkinter.Label(win,width=30,text="Student Number : ")
            lbl1.pack()
            en1=tkinter.Entry(win,width=30)
            en1.pack()
            lbl2=tkinter.Label(win,width=30,text="Name : ")
            lbl2.pack()
            en2=tkinter.Entry(win,width=30)
            en2.pack()
            lbl3=tkinter.Label(win,width=30,text="Age : ")
            lbl3.pack()
            en3=tkinter.Entry(win,width=30)
            en3.pack()
            lbl4=tkinter.Label(win,width=30,text="Birthday : ")
            lbl4.pack()
            en4=tkinter.Entry(win,width=30)
            en4.pack()
            lbl5=tkinter.Label(win,width=30,text="Address : ")
            lbl5.pack()
            en5=tkinter.Entry(win,width=30)
            en5.pack()
            lbl6=tkinter.Label(win,width=30,text="Email : ")
            lbl6.pack()
            en6=tkinter.Entry(win,width=30)
            en6.pack()
            lbl7=tkinter.Label(win,width=30,text="Course : ")
            lbl7.pack()
            en7=tkinter.Entry(win,width=30)
            en7.pack()
            lbl8=tkinter.Label(win,width=30,text="Year : ")
            lbl8.pack()
            en8=tkinter.Entry(win,width=30)
            en8.pack()
            lbl9=tkinter.Label(win,width=30,text="Section : ")
            lbl9.pack()
            en9=tkinter.Entry(win,width=30)
            en9.pack()
            btn1=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Save Student",command=save)
            btn1.pack()

        def editRecord():
            studentnum1=""

            def edit():
                studentnum1=en10.get()
                studentnum=""
                name=""
                age=""
                birthday=""
                address=""
                email=""
                course=""
                year=""
                section=""
                con=sqlite3.connect("Students.db")
                cur=con.cursor()
                row=cur.fetchone()
                cur.execute("DELETE FROM student WHERE name = '%s'" % studentnum1)
                con.commit()

                def save():
                    studentnum=en1.get()
                    name=en2.get()
                    age=en3.get()
                    birthday=en4.get()
                    address=en5.get()
                    email=en6.get()
                    course=en7.get()
                    year=en8.get()
                    section=en8.get()
                    student=(studentnum,name,age,email,birthday,address,email,course,year,section)
                    cur.execute("INSERT INTO student(studentnum,name,age,email,birthday,address,email,course,year,section) VALUES(?,?,?,?,?,?,?,?,?)",student)
                    con.commit()

                win=tkinter.Tk();win.title("Students")
                lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Edit Reocrd :"+'\t'+studentnum1)
                lbl.pack()
                lbl1=tkinter.Label(win,width=30,text="Student Number : ")
                lbl1.pack()
                en1=tkinter.Entry(win,width=30)
                en1.pack()
                lbl2=tkinter.Label(win,width=30,text="Name : ")
                lbl2.pack()
                en2=tkinter.Entry(win,width=30)
                en2.pack()
                lbl3=tkinter.Label(win,width=30,text="Age : ")
                lbl3.pack()
                en3=tkinter.Entry(win,width=30)
                en3.pack()
                lbl4=tkinter.Label(win,width=30,text="Birthday : ")
                lbl4.pack()
                en4=tkinter.Entry(win,width=30)
                en4.pack()
                lbl5=tkinter.Label(win,width=30,text="Address : ")
                lbl5.pack()
                en5=tkinter.Entry(win,width=30)
                en5.pack()
                lbl6=tkinter.Label(win,width=30,text="Email : ")
                lbl6.pack()
                en6=tkinter.Entry(win,width=30)
                en6.pack()
                lbl7=tkinter.Label(win,width=30,text="Course : ")
                lbl7.pack()
                en7=tkinter.Entry(win,width=30)
                en7.pack()
                lbl8=tkinter.Label(win,width=30,text="Year : ")
                lbl8.pack()
                en8=tkinter.Entry(win,width=30)
                en8.pack()
                lbl9=tkinter.Label(win,width=30,text="Section : ")
                lbl9.pack()
                en9=tkinter.Entry(win,width=30)
                en9.pack()
                btn1=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Save Record",command=save)
                btn1.pack()

            win=tkinter.Tk();win.title("Edit Student")
            lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Edit Record")
            lbl.pack()
            lbl10=tkinter.Label(win,width=30,text="Student Number : ")
            lbl10.pack()
            en10=tkinter.Entry(win)
            en10.pack()
            btn2=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Edit",command=edit)
            btn2.pack()

        def deleteRecord():
            studentnum1=""
            win=tkinter.Tk();win.title("Delete Student Record")
            lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Delete Record")
            lbl.pack()
            lbl10=tkinter.Label(win,text="Student Number")
            lbl10.pack()
            en10=tkinter.Entry(win)
            en10.pack()

            def delete():
                studentnum1=en10.get()
                con=sqlite3.connect("Students.db")
                cur=con.cursor()
                row=cur.fetchone()
                cur.execute("DELETE FROM student WHERE name = '%s';" % studentnum1)
                con.commit()
                win=tkinter.Tk();win.title("Record Deleted")
                lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Record Deleted :")
                lbl.pack()
                lbl=tkinter.Label(win,width=30,text=studentnum1)
                lbl.pack()
                btn=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Ok",command=win.destroy)
                btn.pack()

            btn2=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Delete",command=delete)
            btn2.pack()

        def viewRecord():
            con=sqlite3.connect("Students.db")
            cur=con.cursor()
            win=tkinter.Tk();win.title("View Student Record");
            row=cur.fetchall()
            lbl1=tkinter.Label(win,background="#000",foreground="#ddd",width=300,text="\n\tStudent Number"+"\t\tName"+"\t\tAge"+"\t\tBirthday"+"\t\tAddress"+"\t\tEmail"+"\t\tCourse"+"\t\tYear"+"\t\nSection")
            lbl1.pack()
            for row in cur.execute("SELECT * FROM student"):
                lbl2=tkinter.Label(win,width=300,text= row[0] + '\t\t' + row[1] + '\t' + row[2] + '\t\t' + row[3] + '\t\t' + row[4] + '\t\t' + row[5] + '\t\t' + row[6] + '\t\t' + row[7] + '\t\t' + row[8] + '\n')
                lbl2.pack()
            con.close()
            but1=tkinter.Button(win,background="#000",foreground="#fff", width=150,text="Close",command=win.destroy)
            but1.pack()

        root=tkinter.Tk();root.title("Student Records")
        menubar=tkinter.Menu(root)
        manage=tkinter.Menu(menubar,tearoff=0)
        manage.add_command(label='New Record',command=newRecord)
        manage.add_command(label='Edit Record',command=editRecord)
        manage.add_command(label='Delete Record',command=deleteRecord)
        menubar.add_cascade(label='Manage',menu=manage)
        view=tkinter.Menu(menubar,tearoff=0)
        view.add_command(label='View Record',command=viewRecord)
        menubar.add_cascade(label='View',menu=view)
        root.config(menu=menubar)
        lbl=tkinter.Label(root,background="#000",foreground="#ddd",font=("Verdana",15),width=30,text="Student Records")
        lbl.pack()
        lbl1=tkinter.Label(root,text="\nSubmitted by :")
        lbl1.pack()
        lbl2=tkinter.Label(root,text="Chavez, Vissia Nicole P")
        lbl2.pack()
        lbl3=tkinter.Label(root,text="BSIT 4-4")
        lbl3.pack()

    Read the article

  • Inability to detect the Output from inside an SSIS script component

    - by Danaja
    In the script of the script component, the Output buffer is not being detected as an existing component. I am trying to use the following piece of code: Output0Buffer.AddRow(); within the public override void Input0_ProcessInputRow(Input0Buffer Row) method. I know it should be available within this method because at the moment I am copying and using a component from a previous project that has this code, and it works. But when I create a new component and put the same code in, it doesn't. Can anyone explain why this is happening?

    Read the article

  • T-SQL Tuesday #005: Reporting

    - by Adam Machanic
    This month's T-SQL Tuesday is hosted by Aaron Nelson of SQLVariations. Aaron has picked a really fantastic topic: Reporting. Reporting is a lot more than just SSRS. Whether or not you realize it, you deal with all sorts of reports every day. Server up-time reports. Application activity reports. And even DMVs, which as Aaron points out are simply reports about what's going on inside of SQL Server. This month's topic can be twisted any number of ways, so have fun and be creative! I'm really looking...(read more)

    Read the article

  • Exploding maps in Reporting Services 2008 R2

    - by Rob Farley
    Kaboom! Well, that was the imagery that secretly appeared in my mind when I saw "USA By State Exploded" in the list of installed maps in Report Builder 3.0 – part of the spatial offering of SQL Server Reporting Services 2008 R2. Alas, it just means that the borders are bigger. Clicking on it showed me. Unfortunately, I'm not interested in maps of the US. None of my clients are there (at least, not yet – feel free to get in touch if you want to change this 'feature' of my company). So instead, I've recently been getting hold of some data for Australian areas. I've just bought some PostCode shapes for South Australia, and will use this in demos for conferences and for showing clients how this kind of report can really impact their reporting. One of the companies I was talking to about getting shape files sent me a sample. So I chose the "ESRI shapefile" option you see above, and browsed to my file. It appeared in the window like this: Australians will immediately recognise this as the area around Wollongong, just south of Sydney. Well, apart from me. I didn't. I had to put a Bing Maps layer behind it to work that out, but that's not for this post. The thing that I discovered was that if I selected the Exploded USA option (but without clicking Next), and then chose my shape file, then my area around Wollongong would be exploded too! Huh! I think this is actually a bug, but a potentially useful one! Some further investigation (involving creating two identical reports, one with this exploded view, one without) showed that the Exploded View is done by reducing the ScaleFactor property of the PolygonLayer in the map control. The Exploded version has it below 1. If you set it to above one, your shapes overlap. I discovered this by accident… I guess I hadn't looked through all the PolygonLayer options to work out what they all do. And because this post is about Reporting, it can qualify for this month's T-SQL Tuesday, hosted by Aaron Nelson (@sqlvariant).

    Read the article

  • SSIS and StreamInsight Working Together.

    I have been thinking a lot recently about what it would be like to have StreamInsight and SSIS working together. Well, the CAT team have produced a paper on some of our options here. Here are some of my thoughts.

    There is of course a slight mismatch in their types of usage. StreamInsight is an event stream processing engine capable of operating on new data in the sub-second timeframe. The engine allows you to do real-time analytics and take decisions on events that have potentially only just happened. SSIS, on the other hand, is a batch processing engine. In general I do not like having to invoke the same package more than once every 90 seconds or so, as it can start to get expensive. Usually when doing batch processing we have an hour or longer of grace before we have to move data from A –> B.

    StreamInsight operates on streams of data. Before anyone mentions it: yes, I know StreamInsight is equally adept at using the IEnumerable interface, but I would argue live streaming and real-time analytics is a primary goal of the product. SSIS does not have an "Always On" button.

    I do not particularly like the idea of embedding StreamInsight inside SSIS using a transform. It means StreamInsight becomes a batch processing engine, because it can only operate when the SSIS package is running, and SSIS is in charge of when that happens. If I am to have StreamInsight within SSIS then I prefer to have StreamInsight on the adapters. This way you can force the adapters to stay open and introduce events into your pipeline. SSIS has a much richer set of transforms out of the box than StreamInsight. Although "Always On" was not a design goal of SSIS, I have used it like this and it works just fine.

    SSIS being called from within StreamInsight - now that excites me (see below). For a while now I have been thinking what it would be like to decouple the Data Flow task from the SSIS package and expose it as something with which you can interact. Anything could instantiate this version of a DFT, as it would expose one or more input interfaces and one or more output interfaces. I can imagine that this would be a big hit when moving to "The Cloud" as well; I could see the Data Flow task maybe being hosted in Azure AppFabric or some such layer. StreamInsight would be able to take advantage of this too.

    I am interested to see where this goes and will be pressing for more meat around the subject when I visit Redmond soon.

    Read the article

  • In SSIS Convert European Currency Format to United States Currency Format

    - by Rob
    I have an interesting problem. I have an SSIS package that processes account data. We are now processing files from Europe. These files are in a CSV format using text qualifiers. An example of the problem: in the United States the currency format is 123456.99 (we purposely leave the thousands separator out). The files sent from Europe are coming in with two formats: one is 123456,99 and the other is 123.456,00. SSIS is attempting to parse the text file and place the value into a NUMERIC(20,2) field. This causes a parsing error in SSIS, even with the text qualifiers. If I change the field to CURRENCY it raises a conversion error. I would like SSIS to deal with this directly, without requiring the data to be in the United States format. Has anyone had this problem? Any help will be greatly appreciated. Rob
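    One common approach (not from the question itself) is to parse the text in a Script Component using a European culture before it reaches the numeric column; a minimal sketch, assuming the incoming text column is exposed as Row.AmountText and the output column as Row.Amount (both names hypothetical):

        // Hypothetical Script Component helper: convert "123.456,99" or "123456,99"
        // to a decimal using German number formatting rules.
        using System.Globalization;

        decimal ParseEuropeanAmount(string text)
        {
            // de-DE uses ',' as the decimal separator and '.' for thousands,
            // which covers both incoming formats.
            return decimal.Parse(text, NumberStyles.Number, CultureInfo.GetCultureInfo("de-DE"));
        }

        // In Input0_ProcessInputRow:
        // Row.Amount = ParseEuropeanAmount(Row.AmountText);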

    Read the article

  • MS SQL dts to ssis migration error

    - by Manjot
    Hi, I have migrated some DTS packages to SSIS 2005 using the Migration Wizard. When I tried to run one, it failed, saying I needed a higher version of SSIS, even though the destination SSIS server is at the 9.0.4211 level. I then dug into the package using Business Intelligence Development Studio and saw that one of the package subtasks is a "Transform Data Task" (the DTS version), and the package fails to run that. The storage location for this DTS task is set to "Embedded in Task"; I didn't touch it. Why didn't the wizard convert this task to an SSIS data flow task? Any help please? Thanks in advance

    Read the article

  • How *not* to handle a compensation step on failure in an SSIS package

    - by James Luetkehoelter
    Just stumbled across this where I'm working. Someone created a global error handler for a package that included this SQL step: DELETE FROM Table WHERE DateDiff(MI, ExportedDate, GetDate()) < 5 So if the package runs for longer than 5 minutes and fails, nothing gets cleaned up. Please people, don't do this...(read more)
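    A more robust compensation step (my illustration, not from the post) keys the cleanup to the run itself rather than to a time window, for example by stamping rows with an execution identifier and deleting on that. Table and column names here are placeholders:

        -- Stamp every exported row with the run's identifier when loading:
        -- INSERT INTO [Table] (..., LoadID) VALUES (..., @LoadID)

        -- The compensation step then removes exactly that run's rows,
        -- no matter how long the package ran before failing:
        DELETE FROM [Table] WHERE LoadID = @LoadID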

    Read the article

  • SQL Server Training in the UK–SSIS, MDX, Admin, MDS, Internals

    - by simonsabin
    If you are looking for SQL Server training then there is no better place to start than a new company, Technitrain. It's been set up by a fellow MVP and SQLBits organiser, Chris Webb. Why this company rather than any others? Training based on real-world experience, by the best in the business. The key to Technitrain's model is not to cram the shelves high with courses and get some average-Joe trainers to deliver them. Technitrain bring in world-renowned experts in their fields to deliver courses written...(read more)

    Read the article
