Search Results

Search found 89638 results on 3586 pages for 'file table'.

Page 132/3586 | < Previous Page | 128 129 130 131 132 133 134 135 136 137 138 139  | Next Page >

  • Many-to-many relationship on same table with association object

    - by Nicholas Knight
    Related (for the no-association-object use case): http://stackoverflow.com/questions/1889251/sqlalchemy-many-to-many-relationship-on-a-single-table Building a many-to-many relationship is easy. Building a many-to-many relationship on the same table is almost as easy, as documented in the above question. Building a many-to-many relationship with an association object is also easy. What I can't seem to find is the right way to combine association objects and many-to-many relationships with the left and right sides being the same table. So, starting from the simple, naïve, and clearly wrong version that I've spent forever trying to massage into the right version: t_groups = Table('groups', metadata, Column('id', Integer, primary_key=True), ) t_group_groups = Table('group_groups', metadata, Column('parent_group_id', Integer, ForeignKey('groups.id'), primary_key=True, nullable=False), Column('child_group_id', Integer, ForeignKey('groups.id'), primary_key=True, nullable=False), Column('expires', DateTime), ) mapper(Group_To_Group, t_group_groups, properties={ 'parent_group':relationship(Group), 'child_group':relationship(Group), }) What's the right way to map this relationship?

    Read the article

  • Mapping enum to a table with hibernate annotation

    - by Thierry-Dimitri Roy
    I have a table DEAL and a table DEAL_TYPE. I would like to map this code: public class Deal { DealType type; } public enum DealType { BASE("Base"), EXTRA("Extra"); } The problem is that the data already exists in the database, and I'm having a hard time mapping the classes to it. The database looks something like this: TABLE DEAL { Long id; Long typeId; } TABLE DEAL_TYPE { Long id; String text; } I know I could use a simple @OneToMany relationship from deal to deal type, but I would prefer to use an enum. Is this possible? I almost got it working by using an EnumType.ORDINAL mapping, but unfortunately the IDs in my deal type table are not sequential and do not start at 1. Any suggestions?

    Read the article

  • c# Create a unique name with a GUID

    - by Dave Rook
    I am creating a backup solution; I doubt there is anything new in what I'm trying to achieve. Before copying the file I want to take a backup of the destination file in case anything becomes corrupt, which means renaming it. I have to be careful when renaming in case the file already exists, and adding a 01 to the end is not safe. My question, since I haven't found the answer elsewhere, is whether adding a GUID to the file name would work. If my file was called file01.txt, I would rename it to file01.txtGUID (where GUID is the generated GUID), perform my backup of that file (at this point having 2 backups) and then, after ensuring the file has copied (by comparing its length to the source), delete the file with the GUID in the name. I know a GUID is not 100% guaranteed to be unique, but would this suffice?

    Read the article

  • Problem with parsing XML into table variable

    - by Stanley Ross
    I'm using the following code to read a SQL XML Variable into a table variable. I am getting the following error. " Incorrect syntax near '.'. " Can't quite Figure it out DECLARE @LOBS Table ( LineGUID varchar(40) ) DECLARE @lg xml SET @lg = '<?xml version="1.0" encoding="utf-16" standalone="yes"?> <Table> <LOB> <LineGuid>d6e3adad-8c53-4768-91a3-745c0dae0e08</LineGuid> </LOB> <LOB> <LineGuid>4406db8f-0d19-47da-953b-afc1db38b124</LineGuid> </LOB> </Table>' INSERT INTO @LOBS(LineGUID) SELECT ParamValues.ID.value('.','VARCHAR(40)') FROM @lg.nodes('/Table/LOB/LineGuid') AS ParamValues(ID)
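
    One cause of that exact error worth ruling out is the database compatibility level: the xml data type methods (.nodes(), .value()) are only accepted when the database runs at compatibility level 90 (SQL Server 2005) or higher. Below is only a sketch of the same shred, shredding at the LOB level instead of the leaf; the XML declaration is dropped because a utf-16 declaration inside a varchar literal can itself trip a parsing error.

      DECLARE @LOBS TABLE (LineGUID VARCHAR(40));

      DECLARE @lg XML;
      SET @lg = '<Table>
                   <LOB><LineGuid>d6e3adad-8c53-4768-91a3-745c0dae0e08</LineGuid></LOB>
                   <LOB><LineGuid>4406db8f-0d19-47da-953b-afc1db38b124</LineGuid></LOB>
                 </Table>';

      -- Shred one row per <LOB> and pull the LineGuid text out of each node.
      INSERT INTO @LOBS (LineGUID)
      SELECT ParamValues.ID.value('(LineGuid)[1]', 'VARCHAR(40)')
      FROM   @lg.nodes('/Table/LOB') AS ParamValues(ID);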

    Read the article

  • How Do I Update a Table From Another Table Only If the Result Count is 1?

    - by Russ Bradberry
    I have 2 tables in a one-to-many relationship. I want to run an update script that will update the table with the FK of the related table only if there is one result (because if there are multiple, we need to decide which one to use in another method). Here is what I have so far: UPDATE import_hourly_event_reports i SET i.banner_id = b.banner_id FROM banner b JOIN plan p ON b.plan_id = p.id WHERE b.campain_id = i.campaign_id AND b.size_id = i.size_id AND p.site_id = i.site_id HAVING COUNT(b.banner_id) = 1 As you can see, the HAVING clause doesn't quite work as I'd expect. I only want to update the row in the import table with the id of the banner from the banner table if the count is equal to 1.
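
    One way to express "only when the count is 1" is to move the aggregation into a derived table and join the update against it. The sketch below assumes PostgreSQL-style UPDATE ... FROM and keeps the column names from the question (including the campain_id spelling); since the HAVING keeps only groups with a single banner, MIN(banner_id) is that banner.

      UPDATE import_hourly_event_reports i
      SET    banner_id = s.banner_id
      FROM  (SELECT b.campain_id, b.size_id, p.site_id,
                    MIN(b.banner_id) AS banner_id      -- the only banner, since COUNT = 1
             FROM   banner b
             JOIN   plan   p ON b.plan_id = p.id
             GROUP  BY b.campain_id, b.size_id, p.site_id
             HAVING COUNT(b.banner_id) = 1) s
      WHERE  i.campaign_id = s.campain_id
        AND  i.size_id     = s.size_id
        AND  i.site_id     = s.site_id;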

    Read the article

  • creating a table based on fields from three different tables

    - by ozlem
    Hi, I am using MS-Access 2003. I have three tables containing values of w,Q and L. In the Q table I have three fields: country_name, ISIC_code, and Q value. In the L table, there are three fields, country_name, ISIC_code, and L value. And in the w table there are two fields; country_name and w value. Country_names, and ISIC_code might be different for each table. What I want to do is create a new table with the values b(j)=w(i)L(ij)/Q(ij), where i is the country and j is the ISIC_code. So first I will check if the country name and ISIC_code are the same in L and Q tables. If they are equal I will calculate L/Q and then multiply this with w value of the same country if it exists. I appreciate any help.
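
    Since all three checks are equality joins, one option in Access is a single make-table query (SELECT ... INTO). This is only a sketch: the table and field names (tblW, tblQ, tblL, tblB, WValue, QValue, LValue) are placeholders, Access wants the parentheses around the first join, and rows with QValue = 0 or with no matching w row are silently dropped by the inner joins.

      SELECT q.country_name,
             q.ISIC_code,
             w.WValue * l.LValue / q.QValue AS BValue
      INTO   tblB
      FROM   (tblQ AS q
              INNER JOIN tblL AS l
              ON (q.country_name = l.country_name)
             AND (q.ISIC_code = l.ISIC_code))
      INNER JOIN tblW AS w
              ON q.country_name = w.country_name;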

    Read the article

  • get n records at a time from a temporary table

    - by Claudiu
    I have a temporary table with about 1 million entries. The temporary table stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries such that I get the first 1000 rows, then the next 1000, etc.? They are not inherently ordered, but the temporary table just has one column with an ID, so I can order it if necessary. I was thinking of creating an extra column with the temporary table to number all the rows, something like: CREATE TEMP TABLE tmptmp AS SELECT ##autonumber somehow##, id FROM .... --complicated query then I can do: SELECT * FROM tmptmp WHERE autonumber>=0 AND autonumber < 1000 etc... how would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
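
    If the server is PostgreSQL 8.4 or later, a window function can supply the numbering directly when the temp table is built, after which each batch is a simple range query (on older versions, ORDER BY id with LIMIT/OFFSET per batch is the usual fallback). A sketch, with the inner query standing in for the complicated one:

      CREATE TEMP TABLE tmptmp AS
      SELECT row_number() OVER (ORDER BY id) - 1 AS autonumber,
             id
      FROM   (SELECT id FROM some_source) AS sub;   -- placeholder for the real query

      -- Batches of 1000:
      SELECT id FROM tmptmp WHERE autonumber >= 0    AND autonumber < 1000;
      SELECT id FROM tmptmp WHERE autonumber >= 1000 AND autonumber < 2000;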

    Read the article

  • multi-row update table with "different" data

    - by kralco626
    I think the best way to explain this is to tell you what I have. I have two tables, A and B; both have columns Field1 and Field2, but Field2 is not populated in table B. I want to populate Field2 of table B with Field2 of table A where Field1 of table A matches Field1 of table B. Something like: update tableB set Field2 = tableA.Field2 where tableA.Field1 = tableB.Field1. The reason this may seem so odd and obscure is that I'm trying to do an initial data load from an old database to a new one. Please let me know if you need clarification.
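
    For reference, here is roughly what that looks like in two common forms; which one applies depends on the database, and both assume Field1 is unique in tableA (otherwise the subquery form errors and the join form picks an arbitrary match).

      -- SQL Server / Sybase style join update:
      UPDATE b
      SET    b.Field2 = a.Field2
      FROM   tableB AS b
      JOIN   tableA AS a ON a.Field1 = b.Field1;

      -- Portable correlated-subquery form:
      UPDATE tableB
      SET    Field2 = (SELECT a.Field2 FROM tableA a WHERE a.Field1 = tableB.Field1)
      WHERE  EXISTS  (SELECT 1 FROM tableA a WHERE a.Field1 = tableB.Field1);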

    Read the article

  • create temporary table from cursor

    - by Claudiu
    Is there any way, in PostgreSQL accessed from Python using SQLObject, to create a temporary table from the results of a cursor? Previously, I had a query, and I created the temporary table directly from the query. I then had many other queries interacting w/ that temporary table. Now I have much more data, so I want to only process 1000 rows at a time or so. However, I can't do CREATE TEMP TABLE ... AS ... from a cursor, not as far as I can see. Is the only thing to do something like: rows = cur.fetchmany(1000); cur2 = conn.cursor() cur2.execute("""CREATE TEMP TABLE foobar (id INTEGER)""") for row in rows: cur2.execute("""INSERT INTO foobar (%d)""" % row) or is there a better way? This seems awfully inefficient.
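
    If the goal is simply to cut the per-row round trips, two SQL-side options help (names like foobar and some_source are placeholders): a multi-row VALUES insert, supported in PostgreSQL 8.2 and later, or building the temp table straight from a query so the rows never pass through the client at all.

      CREATE TEMP TABLE foobar (id INTEGER);

      -- One statement for many rows fetched from the cursor:
      INSERT INTO foobar (id) VALUES (1), (2), (3), (4);

      -- Or populate a temp table server-side in one go:
      CREATE TEMP TABLE foobar2 AS
      SELECT id FROM some_source LIMIT 1000;   -- placeholder query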

    Read the article

  • Regex to delete HTML within <table> tags

    - by johnv
    I have an HTML document in .txt format containing multiple tables and other text, and I am trying to delete any HTML (anything within "<...>") if it's inside a table (between <table> and </table>). For example: =================== other text <other HTML> <table> <b><u><i>bold underlined italic text</b></u></i> </table> other text <other HTML> ============== The final output would be as follows. Note that only the HTML within <table> and </table> is removed. ============== other text <other HTML> <table> bold underlined italic text </table> other text <other HTML> ============= Any help is greatly appreciated!

    Read the article

  • Querying Same Lookup Table With Multiple Columns

    - by dmaruca
    I'm a bit confused on this. I have a data table structured like this: Table: Data DataID Val 1 Value 1 2 Value 2 3 Value 3 4 Value 4 Then I have another table structured like this: Table: Table1 Col1 Col2 1 2 3 4 4 3 2 1 Both columns from Table1 point to the data in the data table. How can I get this data to show in a query? For example, a query to return this: Query: Query1 Column1 Column2 Value 1 Value 2 Value 3 Value 4 Value 4 Value 3 Value 2 Value 1 I'm familiar enough with SQL to do a join with one column, but lost beyond that. Any help is appreciated. Sample sql or a link to something to read. Thanks! PS: This is in sqlite
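
    One way that works in SQLite is to join the lookup table once per column, under two different aliases; the sketch below uses the names from the question:

      SELECT d1.Val AS Column1,
             d2.Val AS Column2
      FROM   Table1 AS t
      JOIN   Data   AS d1 ON d1.DataID = t.Col1
      JOIN   Data   AS d2 ON d2.DataID = t.Col2;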

    Read the article

  • How to use constants in SQL CREATE TABLE?

    - by kchiu
    Hi, I have 3 SQL tables, defined as follows: CREATE TABLE organs( abbreviation VARCHAR(16), -- ... other stuff ); CREATE TABLE blocks( abbreviation VARCHAR(16), -- ... other stuff ); CREATE TABLE slides( title VARCHAR(16), -- ... other stuff ); The 3 fields above all use VARCHAR(16) because they're related and have the same length restriction. Is there a (preferably portable) way to put '16' into a constant / variable and reference that instead in CREATE TABLE? eg. something like this would be nice: CREATE TABLE slides( title VARCHAR(MAX_TITLE_LENGTH), -- ... other stuff ); I'm using PostgreSQL 8.4. thanks a lot, and Happy New Year! cheers.
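
    Plain SQL has no preprocessor-style constants, but PostgreSQL (including 8.4) supports CREATE DOMAIN, which at least centralizes the length in one named definition. A sketch, with short_label as a made-up domain name:

      CREATE DOMAIN short_label AS VARCHAR(16);

      CREATE TABLE organs (
          abbreviation short_label
          -- ... other stuff
      );

      CREATE TABLE slides (
          title short_label
          -- ... other stuff
      );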

    Read the article

  • How can I decrease the time spent opening a MyISAM table in a "UNION ALL" on the same table?

    - by parm.95
    I have a MyISAM table with 2.5M rows, and I use a UNION ALL to get my results, as follows: (SELECT t.id FROM t WHERE type=1 LIMIT 10) UNION ALL (SELECT t.id FROM t WHERE type=2 LIMIT 10) ... UNION ALL (SELECT t.id FROM t WHERE type=25 LIMIT 10) The time to open the table t is about 6ms. With a single query: SELECT t.id FROM t WHERE type=1 LIMIT 10 the time is about 1ms. What I don't understand is why MySQL spends more time on the same table in a UNION ALL; it should recognize that it is the same table and only open it for the first branch of the union. Can anybody help me decrease the time spent opening the table in a "UNION ALL"? Thanks in advance!
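
    Each branch of a UNION ALL is planned and executed as its own SELECT, so the per-branch setup cost repeats 25 times rather than once. Whether the open cost itself can be avoided depends on the server, but one thing that often shrinks the per-branch work on MyISAM is a composite index covering both the filter and the selected column, so each branch is answered from the index alone. A sketch, assuming no such index exists yet:

      ALTER TABLE t ADD INDEX idx_type_id (type, id);

      (SELECT id FROM t WHERE type = 1 LIMIT 10)
      UNION ALL
      (SELECT id FROM t WHERE type = 2 LIMIT 10);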

    Read the article

  • DataSet.GetChanges - Save the updated record in a different table than the source one

    - by John Dev
    I'm doing operations on a DataSet containing data from a SQL table named Test_1, and then getting the updated records using the DataSet.GetChanges(DataRowState.Modified) function. Then I try to save the DataSet containing the updated records to a different table than the source one (the table is named Test and has the same structure as Test_1) using the following statement: sqlDataAdapter.Update(changesDataSet,"Test"); I'm getting the following error: Update unable to find TableMapping['Test'] or DataTable 'Test' I'm new to ADO.NET and don't even know if it's possible. Any advice is welcome. Just to provide a bit of context: ETL jobs import data into temp tables with the same structure as the original but with a _jobid suffix, and right after that a rule engine does validation before updating the original table.

    Read the article

  • Hide or display a button according to row count of a table

    - by kawtousse
    Hi, I have an HTML table and a send button. Initially the send button must have this style: style.display="none". But if the table has at least one row, the button should be displayed (block/inline). I still have no idea how to relate the table and the button. I tried to use JavaScript, but I would need a function for this and I haven't found one that applies to tables. Doing it with CSS also seems hard, since I cannot find a relation between the table and the button.

    Read the article

  • How to display a border-bottom only if table cells are not empty (CSS)

    - by Polarpro
    Hey there, I've got a FileMaker calculation that generates an HTML page with several tables. If the calculation results in values for certain fields, the result would be <table> <tr><td>Example value 1</td></tr> <tr><td>Example value 2</td></tr> ... </table> If the calculation finds no values to be displayed, the result would simply be <table> </table> In the first case, I want the table to display a border at the bottom (or any other horizontal line); in the second case, I don't want to display a bottom border. I cannot find a way to get this done using CSS... Thanks in advance :-)

    Read the article

  • Show the specific field on mysql table based on active date

    - by mrjimoy_05
    Suppose that I have 3 tables:

    A) Table UsrHeader
       UsrID | UsrName
       1     | Abc
       2     | Bcd

    B) Table UsrDetail
       UsrID | UsrLoc | Date
       1     | LocA   | 10 Aug 2012
       1     | LocB   | 15 Aug 2012
       2     | LocA   | 10 Aug 2012

    C) Table Trx
       TrxID | TrxDate     | UsrID
       1     | 10 Aug 2012 | 1
       2     | 16 Aug 2012 | 1
       3     | 11 Aug 2012 | 2

    What I want to do is show a result like:

       TrxID | TrxDate     | UsrID | UsrLoc
       1     | 10 Aug 2012 | 1     | LocA
       2     | 16 Aug 2012 | 1     | LocB
       3     | 11 Aug 2012 | 2     | LocA

    Notice that there is one user but different locations: based on the UsrDetail table, the user has moved to another location on a given date, so each transaction should show the location the user was at on that date. I have tried this code, but with no luck: SELECT trx.TrxID, trx.TrxDate, trx.UsrID, User.UsrName, User.UsrLoc FROM trx INNER JOIN ( SELECT UsrHeader.UsrID, UsrHeader.UsrName, UserDetail.UsrLoc FROM UsrHeader INNER JOIN ( SELECT UsrDetail.UsrID, UsrDetail.UsrLoc, UsrDetail.Date FROM UsrDetail ) AS UserDetail ON UserDetail.UsrID = UsrHeader.UsrID ) AS User ON User.UsrID = trx.UsrID AND trx.TrxDate >= User.Date How to do that? Thanks..
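
    One way to express "the location in force on the transaction date" is to join UsrDetail on the latest Date that is not after TrxDate. This is only a sketch using the names from the question (UsrHeader only needs to be joined in if UsrName is wanted as well), and it assumes Date and TrxDate are real DATE columns rather than the display strings shown above:

      SELECT trx.TrxID,
             trx.TrxDate,
             trx.UsrID,
             d.UsrLoc
      FROM   Trx AS trx
      INNER JOIN UsrDetail AS d
              ON d.UsrID = trx.UsrID
             AND d.`Date` = (SELECT MAX(d2.`Date`)
                             FROM   UsrDetail AS d2
                             WHERE  d2.UsrID  = trx.UsrID
                               AND  d2.`Date` <= trx.TrxDate);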

    Read the article

  • Convert Excel File 'xls' to CSV, CAUTION: Bumps Ahead

    - by faizanahmad
    The task was to provide users with an interface where they can upload the 'csv' files, these files were to be processed and loaded to Database by a Console application. The code in Console application could not handle the 'xls' files so we thought, OK, lets convert 'xls' to 'csv' in the code, Seemed like fun. The idea was to convert it right after uploading within 'csv' file. As Microsoft does not recommend using the  Excel objects in ASP.NET, we decided to use the Jet engine to open xls. (Ace driver is used for xlsx) The code was pretty straight, can be found on following links: http://www.c-sharpcorner.com/uploadfile/yuanwang200409/102242008174401pm/1.aspx http://www.devasp.net/net/articles/display/141.html FIRST BUMP 'OleDbException (0x80004005): Unspecified error' ( Impersonation ): The ablove code ran fine in my test web site and test console application, but it gave an 'OleDbException (0x80004005): Unspecified error' in main web site, turns out imperonation was set to True and as soon as I changed it to False, it did work. on My XP box, web site was running under user                   'ASPNET'  with imperosnation set to FALSE                   'IUSR_*' i.e IIS guest user with impersonation set to TRUE The weired part was that both users had same rights on the folders I was saving files to and on Excel app in DCOM Config.  We decided to give it a try on Windows Server 2003 with web site set to windows authentication ( impersonation = true ) and yes it did work. SECOND BUMP 'External table not in correct format': I got this error with some files and it appeared that the file from client has some metadata issues  ( when I opened the file in Excel and try to save it ,excel  would give me this error saying File can not be saved in current format ) and the error was caused by that. Some people were able to reslove the error by using "Extended Properties=HTML Import;" in connection string. But it did not work for me. We decided to detour from here and use Excel object :( as we had no control on client setting the meta deta of Excel files. Before third bump there were a ouple of small thingies like 'Retrieving the COM class factory for component with CLSID {00024500-0000-0000-C000-000000000046} failed due to the following error: 80070005' Fix can be found at http://blog.crowe.co.nz/archive/2006/03/02/589.aspx THIRD BUMP ( Could not get rid of the EXCEL process  ):  I has all the code in place to 'Quiet' the excel, but, it just did not work. work around was done to Kill the process as we knew no other application on server was using EXCEL.  The normal steps to quite the excel application worked just fine in console application though.   FOURTH BUMP: Code worked with one file 1 on my machine and with the other file 2 code will break. and the same code will work perfectly fine with file 2 on some other machine . We moved it to QA  ( Windows Server 2003 )and worked with every file just perfect. But , then there was another problem: one user can upload it and second cant, permissions on folder and DCOM Conifg checked. Another Detour: Uplooad the xls as it is and convert in Console application.   Lesson Learnt:  If its 'xlsx' use 'ACE Driver' or read xml within excel as recommneded by MS. 
If xls and you know its always going to be properly formatted  'jet Engine'  Code: Imports Microsoft.Office.Interop Private Function ConvertFile(ByVal SourceFolder As String, ByVal FileName As String, ByVal FileExtension As String)As Boolean     Dim appExcel As New Excel.Application     Dim workBooks As Excel.Workbooks = appExcel.Workbooks     Dim objWorkbook As Excel.Workbook      Try                   objWorkbook = workBooks.Open(CompleteFilePath )                            objWorkbook.SaveAs(Filename:=CObj(SourceFolder & FileName & ".csv"), FileFormat:=Excel.XlFileFormat.xlCSV)       Catch ex As Exception         GenerateAlert(ex.Message().Replace("'", "") & " Error Converting File to CSV.")         LogError(ex )         Return False      Finally                      If Not(objWorkbook is Nothing) then               objWorkbook.Close(SaveChanges:=CObj(False))           End If           ReleaseObj(objWorkbook)                                      ReleaseObj(workBooks)           appExcel.Quit()           ReleaseObj(appExcel)                                 Dim proc As System.Diagnostics.Process           For Each proc In System.Diagnostics.Process.GetProcessesByName("EXCEL")               proc.Kill()           Next         DeleteSourceFile(SourceFolder & FileName & FileExtension)     End Try  Return True  End Function   Private Sub ReleaseObj(ByVal o As Object)     Try      System.Runtime.InteropServices.Marshal.ReleaseComObject(o)   Catch ex As Exception           LogError(ex )   Finally      o = Nothing    End Try End Sub     Protected Sub DeleteSourceFile(Byval CompleteFilePath As string)         Try             Dim MyFile As FileInfo = New FileInfo(CompleteFilePath)             If  MyFile.Exists Then                 File.Delete(CompleteFilePath)             Else              Throw New FileNotFoundException()             End If         Catch ex As Exception             GenerateAlert( " Source File could not be deleted.")              LogError(ex)         End Try     End Sub  The code to kill the process ( Avoid it if you can ): Dim proc As System.Diagnostics.Process For Each proc In System.Diagnostics.Process.GetProcessesByName("EXCEL")     proc.Kill() Next

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox.  It allows you to read 2005 and 2008 profiler traces stored as .trc files and read them into the Data Flow.  From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection this just uses the trace file. Example Usages Cache warming for SQL Server Analysis Services Reading the flight recorder Find out the longest running queries on a server Analyze statements for CPU, memory by user or some other criteria you choose Properties The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio. Property Type Description AccessMode Enumeration This property determines how the Filename property is interpreted. The values available are: DirectInput Variable Filename String This property holds the path for trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property. Trace Column Definition Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately API methods that expose the schema of a trace file have known issues and are unreliable, put simply the data often differs from what was specified. To overcome these limitations the component uses  some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string. TraceDataColumnsSQL.xml  - SQL Server Database Engine Trace Columns TraceDataColumnsAS.xml    - SQL Server Analysis Services Trace Columns The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml" If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below. Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. 
You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and the way you execute your package is setup to use them, please see the help topic 64-bit Considerations for Integration Services for more details. Downloads Trace Sources for SQL Server 2005 -- Trace Sources for SQL Server 2008 Version History SQL Server 2008 Version 2.0.0.382 - SQL Sever 2008 public release. (9 Apr 2009) SQL Server 2005 Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008) -- Screenshots

    Read the article

  • java.util.zip.ZipException: Error opening file When Deploying an Application to Weblogic Server

    - by lmestre
    The latest weeks we had a hard time trying to solve a deployment issue.* WebLogic Server 10.3.6* Target: WLS Cluster<21-10-2013 05:29:40 PM CLST> <Error> <Console> <BEA-240003> <Console encountered the following error weblogic.management.DeploymentException:        at weblogic.servlet.internal.WarDeploymentFactory.findOrCreateComponentMBeans(WarDeploymentFactory.java:69)        at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)        at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)        at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)        at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)        at weblogic.management.deploy.internal.MBeanConverter.createApplicationForAppDeployment(MBeanConverter.java:67)        at weblogic.management.deploy.internal.MBeanConverter.setupNew81MBean(MBeanConverter.java:315)        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.compatibilityProcessor(ActivateOperation.java:81)        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.setupPrepare(AbstractOperation.java:295)        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:97)        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)        at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747)        at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216)        at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250)        at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159)        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:171)        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:13)        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:46)        at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:545)        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)        at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)Caused by: java.util.zip.ZipException: Error opening file - C:\Oracle\Middleware\user_projects\domains\MyDomain\servers\MyServer\stage\myapp\myapp.war Message - error in opening zip file        at weblogic.servlet.utils.WarUtils.existsInWar(WarUtils.java:87)        at weblogic.servlet.utils.WarUtils.isWebServices(WarUtils.java:76)        at weblogic.servlet.internal.WarDeploymentFactory.findOrCreateComponentMBeans(WarDeploymentFactory.java:61) So the first idea you have with that error is that the war file is corrupted or has incorrect privileges.        We tried:1. Unzipping the  war file, the file was perfect.2. Checking the size, same size as in other environments.3. Checking the ownership of the file, same as in other environments.4. 
    Checking the permissions of the file, same as other applications. Then we accepted that the file was fine, so we tried enabling some deployment debugs, but no clues. We also tried:
    1. Deleting all contents of the <MyDomain/servers/<MyServer>/tmp> and <MyDomain/servers/<MyServer>/cache> folders; the issue persisted.
    2. Renaming the application; the deployment was then successful.
    3. Targeting the Admin Server; deployment was also working there.
    4. Using 'Copy this application onto every target for me'; that didn't help either.
    Finally, my friend 'Test Case' solved the issue again. I saw this name in the config.xml: <jdbc-system-resource> <name>myapp</name> <target></target> <descriptor-file-name>jdbc/myapp-jdbc.xml</descriptor-file-name> </jdbc-system-resource> So, it turned out that the customer had created a DataSource with the same name as the application, 'myapp' in the above example. By deleting the DataSource and creating another identical DataSource with a different name, the issue was solved. At this point, do you know why 'java.util.zip.ZipException: Error opening file' was occurring? Because all names in WebLogic Server need to be unique. References: http://docs.oracle.com/cd/E23943_01/web.1111/e13709/setup.htm, "Assigning Names to WebLogic Server Resources": "Make sure that each configurable resource in your WebLogic Server environment has a unique name. Each domain, server, machine, cluster, JDBC data source, virtual host, or other resource must have a unique name." Enjoy!

    Read the article

  • Why is the table not printing in the XSL-FO here?

    - by atrueguy
    This is my xml file <?xml version="1.0" encoding="ISO-8859-1"?> <?xml-stylesheet type="text/xsl" href="currency.xsl"?> <currencylist> <title>Currencies By Country</title> <countries> <country>Australia</country> <currency>Australian Dollar</currency> </countries> <countries> <country>Austria</country> <currency>Schilling</currency> </countries> <countries> <country>Belgium</country> <currency>Belgium Franc</currency> </countries> <countries> <country>Canada</country> <currency>Canadian Dollar</currency> </countries> <countries> <country>England</country> <currency>Pound</currency> </countries> <countries> <country>Fiji</country> <currency>Fijian Dollar</currency> </countries> <countries> <country>France</country> <currency>Franc</currency> </countries> <countries> <country>Germany</country> <currency>DMark</currency> </countries> <countries> <country>Hong Kong</country> <currency>Hong Kong Dollar</currency> </countries> <countries> <country>Italy</country> <currency>Lira</currency> </countries> <countries> <country>Japan</country> <currency>Yen</currency> </countries> <countries> <country>Netherlands</country> <currency>Guilder</currency> </countries> <countries> <country>Switzerland</country> <currency>SFranc</currency> </countries> <countries> <country>USA</country> <currency>Dollar</currency> </countries> </currencylist> This is my xsl-fo file: <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format"> <fo:layout-master-set> <fo:simple-page-master master-name="Letter" page-height="11in" page-width="8.5in"> <fo:region-body region-name="only_region" margin="0.7in" margin-top="1.2in" margin-left="1.1in"/> <fo:region-before region-name="xsl-region-before" extent="1.5in" /> <fo:region-after region-name="xsl-region-after" extent="1.5in" /> <fo:region-start region-name="xsl-region-after" extent="1.5in" /> </fo:simple-page-master> </fo:layout-master-set> <fo:page-sequence master-reference="Letter"> <fo:flow flow-name="only_region"> <fo:block text-align="left"><xsl:call-template name="show_title"/></fo:block> <fo:table-and-caption> <fo:table> <fo:table-column column-width="25mm"/> <fo:table-column column-width="25mm"/> <fo:table-column column-width="25mm"/> <fo:table-header> <fo:table-row> <fo:table-cell> <fo:block font-weight="bold">SI No</fo:block> </fo:table-cell> <fo:table-cell> <fo:block font-weight="bold">Country</fo:block> </fo:table-cell> <fo:table-cell> <fo:block font-weight="bold">Currency</fo:block> </fo:table-cell> </fo:table-row> </fo:table-header> <fo:table-body> <xsl:for-each select="currencylist/countries"> <fo:table-row> <fo:table-cell> <fo:block> <xsl:value-of select="position()"/> </fo:block> </fo:table-cell> <fo:table-cell> <fo:block> <xsl:value-of select="country"/> </fo:block> </fo:table-cell> <fo:table-cell> <fo:block> <xsl:value-of select="currency"/> </fo:block> </fo:table-cell> </fo:table-row> </xsl:for-each> </fo:table-body> </fo:table> </fo:table-and-caption> </fo:flow> </fo:page-sequence> </fo:root> </xsl:template> <xsl:template name="show_title" match="currencylist"> <xsl:value-of select="currencylist/title"/> </xsl:template> </xsl:stylesheet> Table structure is not printing, but the values are printing, please help guys.

    Read the article

  • File sharing for small, distributed, non-technical, non-profit organization?

    - by mnmldave
    Problem: I've started volunteering for a small non-profit with fewer than five non-technical Windows users who need to share 20-30GB of files (Office documents, images, PDFs, etc.) amongst themselves online. Background: The users are accustomed to a Windows network share on a machine that backed up their data locally. An on-site "disaster" has forced them to work from their homes for awhile and to re-evaluate their file sharing needs (office was located in an old building with obvious electrical issues, etc.). Access to time from volunteers with IT experience seems to be difficult. Demonstrably minimizing energy consumption is a nice-to-have. I'm currently considering Jungle Disk (a Desktop account shared amongst the handful of employees since their TOS and my inquiries to their helpdesk seem to indicate this is permissible). It appears easy-to-use, inexpensive, secure, has backup functionality, and can scale to accomodate more data when needed. I've not used it myself though (have only used Dropbox for personal use) and systems isn't my area of expertise, so am worried I might be jumping on a bandwagon. That said, any suggestions, thoughts or similar experiences would be really appreciated.

    Read the article

  • How do I change file protections running XP on a disk from Windows Server?

    - by cdkMoose
    I had a Windows Server 2003 machine running at home, along with my desktop which I use for development. Server went belly up, but since my desktop is reasonably powerful, I figured I would move the disk from the file server (it was OK) into my XP machine to keep all of the files. Disk comes up fine and shows all of the files. I have been getting access denied errors when trying to work with some of the files. When I display attributes in Explorer, none of them are marked Read-Only. When I view properties on the directories, the Read-Only checkbox is not checked, but has a green background(which I thought meant mixed usage for files in the directory). When I click on the checkbox to clear it and click Apply, the disk does some work and all looks well. However, I continue to get the Access Denied errors, the files still don't show any Read-Only attribute and the directory properties shows the green background again on the Read-Only checkbox. I did check the box which says to apply the change to the folder and all files /subfilders under it. I am assuming that the issue relates to userids/permissions carried over from the Server install. So, why does it let me think I can change the attribute when I can't and how can I correct this problem so that the disk correctly recognizes the ids from XP?

    Read the article

  • Database Table Prefixes

    - by DoctorMick
    We're having a few discussions at work around the naming of our database tables. We're working on a large application with approx 100 database tables (OK, so it isn't that large), most of which can be categorized into different functional areas, and we're trying to work out the best way of naming/organizing these within an Oracle database. The three current options are: 1) create the different functional areas in separate schemas; 2) create everything in the same schema but prefix the tables with the functional area; 3) create everything in the same schema with no prefixes. We have various pros and cons around each one, but I'd be interested to hear everyone's opinions on what the best solution is.

    Read the article

< Previous Page | 128 129 130 131 132 133 134 135 136 137 138 139  | Next Page >