Search Results

Search found 38336 results on 1534 pages for 'sql wait types'.


  • Recovery of Pinnacle Studio Project Files

    - by seanieb
    My external hard drive had some sort of issue a few months ago, but I was able to recover my files using a data recovery software program. However, my Pinnacle Studio files are not being recovered as they were before: they come back as directories/folders containing subdirectories and files. I have tried several different recovery programs and they all recover the projects as directories. Each project directory contains one file called README.TXT: * WARNING This directory contains the descriptive data of the project, split into various subdirectories and files for better access. DO NOT EDIT, ADD, CHANGE OR MODIFY ANY OF IT'S CONTENTS! This gives me hope that I could somehow just turn the directory back into a .stu Pinnacle Studio project file. How would I go about doing this? Or is there another way to solve this problem?

    Read the article

  • mime-attachment incompatibility on forwarded messages from android to an iphone

    - by Peter Carrero
    A couple of guys in the office got the Evo Android phone. I got an iPhone. When I receive a forwarded email from one of those phones, the forwarded piece comes through as a mime-attachment that the iPhone cannot open. I also tried this with a Motorola Droid and got the same result. I am not sure whether the issue is the Android sending the message in the wrong format or the iPhone not being able to understand a MIME type that it should... Has anyone experienced this and, most importantly, found a fix/workaround? Thanks.

    Read the article

  • Pictures clicked on Start Menu's Recent Items list opens in non default program

    - by jay
    I am using Picasa Photo Viewer and have associated all JPG and PNG files to open with it. However when I open an image from the Recent Items list in the Windows 7 Start Menu it opens with Windows Photo Viewer. The context menu for such items reveals no actions that would make it go to Windows Photo Viewer, and the default (one in bold) opens with the Picasa viewer as you'd expect. It's just that the left click behaves differently for some reason. Any ideas on how to fix this?

    Read the article

  • Is this a File Header / Magic Number?

    - by Hammer Bro.
    I've got 120,000 files (way more, actually; this is just an arbitrary subset) of an unknown type. Linux file does not identify them (not that they're necessarily Linux files), nor do any other methods I've tried. There are only two hints about them that I currently have. One is that I suspect some compression is employed -- I have metadata that claims the file sizes are always some amount larger than what I observe. The other is that in 100,000 of these files, the first 16 bytes are always: ff ee ee dd 00 00 00 00 01 00 00 00 00 00 00 00 That really looks like a file header/magic number to me, but I just can't place it. Does anyone know what kind of files this would indicate? Alternatively, can anyone convince me that these suspiciously common bytes certainly do not indicate a specific file type? UPDATE I don't know the exact reverse-engineering details, but most of the files in our case are zips after the first 29(? or so) bytes are ignored. So in practice the problem is solved (we know how to process the files) but in theory the question is still unanswered -- I don't know which application routinely prepends about 29 bytes to its zips. [I'm not sure if I should leave the question open or not at this point.]

    Read the article

  • Interop.Outlook.UserProperties.Add causing problem during connection time

    - by aanataliya
    Hi All, I have created a plug-in for outlook. Plug-in has only below code. private void OnNewOutlookInspector(Outlook.Inspector OutlookInsptr) { Outlook.MailItem MlItem = (Outlook.MailItem)OutlookInsptr.CurrentItem; //if I remove below line. Everything is working fine. MlItem.UserProperties.Add("INSPINIT", Outlook.OlUserPropertyType.olText , true , true ).Value = "1"; } public void OnConnection(object application, Extensibility.ext_ConnectMode connectMode, object addInInst, ref System.Array custom) { applicationObject = application; addInInstance = addInInst; MessageBox.Show("in connection new 2"); OutlkApp = (Outlook.Application)application; OutlkInsptrs = OutlkApp.Inspectors; OutlkInsptrs.NewInspector += new Outlook.InspectorsEvents_NewInspectorEventHandler(OnNewOutlookInspector); } Problem I am facing is, When I send HTML mail while plug-in is enabled, receiving end it is being received as a plain text. Below is the mail content along with the header and body at recieving end. x-sender: [email protected] x-receiver: [email protected] Received: from blr-s-07.pointcrossblr.com ([192.168.1.107]) by blr-ws-134.pointcrossblr.com with Microsoft SMTPSVC(6.0.2600.5949); Wed, 22 Dec 2010 17:11:02 +0530 Received: from blrws134 ([192.168.1.175]) by blr-s-07.pointcrossblr.com with Microsoft SMTPSVC(6.0.3790.4675); Wed, 22 Dec 2010 17:11:02 +0530 From: "Ashif Nataliya" <[email protected]> To: <[email protected]> Cc: <[email protected]> Subject: RTF FRM blr to pc.com cc blr-ws-134 Date: Wed, 22 Dec 2010 17:11:02 +0530 Message-ID: <[email protected]> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_NextPart_000_00F7_01CBA1FB.36115580" X-Mailer: Microsoft Outlook 14.0 Content-Language: en-us X-MS-TNEF-Correlator: 00000000DCB2344DE8F50F4FBC91085BB5C06D55A4172000 thread-index: AcuhzRuTOBkvHPUnS1aLi9+cHNAWhA== Return-Path: [email protected] X-OriginalArrivalTime: 22 Dec 2010 11:41:02.0822 (UTC) FILETIME=[1C788860:01CBA1CD] This is a multipart message in MIME format. ------=_NextPart_000_00F7_01CBA1FB.36115580 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit HTML Test Test Mail ------=_NextPart_000_00F7_01CBA1FB.36115580 Content-Type: application/ms-tnef; name="winmail.dat" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="winmail.dat" // and some other code..... Any help is appreciated. Thanks.

    Read the article

  • Adding file type to ack permanently

    - by Martin Tóth
    I've recently learned how to let ack support more filetypes (adding the following to .ackrc): --type-add latte=.latte Unfortunately, that produces an info line on every ack search I use, even ones with 0 results. $ ack --latte dump ack: --type-add: Type "latte" does not exist, creating with ".latte" ... Is there a way to make this a more permanent addition? (i.e. get rid of this info line) This looks to me like it's adding this new type on every ack call. Is it a problem with my installation of ack? I'm on Mac OS X 10.5.8 with ack 1.92 (Running under Perl 5.10.1)

    Read the article

  • Utility for extracting MIME attachments

    - by tripleee
    I am looking for a command-line tool for Unix (ideally, available in a Debian / Ubuntu package) for extracting all MIME parts from a multipart email message (or the body from a singlepart with an interesting content-type, for that matter). I have been using the mimeexplode tool which ships with the Perl MIME::Tools package, but it's not really production quality (the script is included as an example only, and has issues with what it regards as "evil" character sets) and I could certainly roll my own script based on that, but if this particular wheel has already been innovated, perhaps I shouldn't.

    Read the article

  • How to change the default PDF viewer for all users from the command line

    - by dodecaplex
    I'm using Debian squeeze with the Gnome desktop for all my users. I have a group of machines to set up so that all users use xpdf as the default PDF viewer (rather than evince). I want this setup to be done from the command line (even better, via puppet). I know about the xdg-mime command, but the man page says that its default subcommand should not be used as root. I could manually tweak the /etc/gnome/defaults.list files, but I'm looking for a single command I could run to apply the setting without any editor interaction. Any ideas?

    Read the article

  • storing map template in database

    - by Timigen
    I am working on an application that displays choropleth maps. These maps are of all different types, some display state by county, country by state/province, or world by country. How should I handle storing the map information in the database? My Thoughts: I won't need to do queries to find POI inside a region, so I don't think there is a need to use spatial datatypes. I am considering storing a map as a geoJSON object (I am using JS mapping library that accepts geoJSON). The only issue is what if I want a map of the US northeast. Then I would have geoJSON for the US and a separate one for the US northeast, which would be redundant. Would it make sense to have a shape database where I had each state then when I needed a map of the US I could query for each state, and when I needed a map of the US Northeast I could again query for what I need? Note: I am not concerned with storing the data for each region, just the region itself. I will query for the data on the fly for the specific region.
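    A minimal sketch of the shape-table idea described above, in generic SQL; every name here (region_shape, region_group, geojson) is invented for illustration, and the geometry for each state is stored exactly once:

        -- One row per region, at whatever level it was digitised (state, county, country).
        CREATE TABLE region_shape (
            region_id    INT PRIMARY KEY,
            region_name  VARCHAR(100),   -- e.g. 'Vermont'
            region_level VARCHAR(20),    -- 'state', 'county', 'country'
            geojson      TEXT            -- the geoJSON geometry, stored once
        );

        -- Named groupings of regions, so 'US' and 'US Northeast' share the same shapes.
        CREATE TABLE region_group (
            group_name VARCHAR(50),
            region_id  INT NOT NULL REFERENCES region_shape (region_id),
            PRIMARY KEY (group_name, region_id)
        );

        -- Assemble the map of the US Northeast without duplicating any geoJSON.
        SELECT s.region_name, s.geojson
        FROM   region_shape s
        JOIN   region_group g ON g.region_id = s.region_id
        WHERE  g.group_name = 'US Northeast';

    The data for the choropleth itself would still be fetched on the fly and joined to these rows by region_id, as the question already intends.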

    Read the article

  • Benchmarking ORM associations

    - by barerd
    I am trying to benchmark two cases of self-referential many-to-many associations as described in the datamapper associations documentation. Both cases consist of an Item class, which may require many other items. In both cases, I required the ruby benchmark library and the source file, created two items and benchmarked the require/unrequire methods as below: Benchmark.bmbm do |x| x.report("require:") { item_1.require_item item_2, 10 } x.report("unrequire:") { item_1.unrequire_item item_2 } end To be clear, both methods are datamapper add/modify calls like: componentMaps.create :component_id => item.id, :quantity => quantity componentMaps.all(:component_id => item.id).destroy! and links_to_components.create :component_id => item.id, :quantity => quantity links_to_components.all(:component_id => item.id).destroy! The results are variable and in the range of 0.018001 to 0.022001 for the require method in both cases, and 0.006 to 0.01 for the unrequire method in both cases. This made me suspicious about the correctness of my test method. Edit: I went ahead and compared a "get by primary key" case to a "find first matching record" case with: (1..10000).each do |i| Item.create :name => "item_#{i}" end Benchmark.bmbm do |x| x.report("Get") { item = Item.get 9712 } x.report("First") { item = Item.first :name => "item_9712" } end where the results were very different (roughly 0 sec compared to 0.0312), as expected. This suggests that the benchmarking works. I wonder whether I benchmarked the two types of associations correctly, and whether a difference between 0.018 and 0.022 sec is significant.

    Read the article

  • Software solution from the 2000's, should I attempt to patch or remake the whole thing?

    - by ShadowScripter
    I was sent out to discuss a system that a certain company is currently using and what should be done with it. The company manufactures various carton displays. The system was developed to keep track of clients, orders and prices. A lot has happened since it was created, and the system is now, as the manager described it, "locked up" and "problematic", which I translate as "not dynamic" and "unstable". Some info about the system: it was developed around the year 2000; it is a fairly small system (2-5 users, 6 forms, ~8 tables with average quantities of data); it was built on early Visual Basic, with forms created using drag-and-drop design; the interface is basically just a window with a menu and some forms; it uses an MSSQL database (SQL 2005 server) to store data and an ODBC driver to query it (the data was migrated from Excel before this system, and before Excel it was handled, calculated and written by hand on paper); users work in a Microsoft XP environment (and up). Their main problem is that they can no longer correctly adjust and calculate prices, add new carton types, etc., because they can't (or rather, they don't know how to) touch the data on the server. I suggested 3 possible solutions: attempt to patch the current system; create a fresh new interface (preferably in a similar environment, VB.net or VB based); or bring it back to an Excel solution, considering it is such a small system. There might be more options, but these are the ones I could think of. My questions are: What should I recommend and why? What are (or could be) the pros and cons of these alternatives? Are there other (possibly better) alternatives?

    Read the article

  • using xml type attribute for derived complex types

    - by David Michel
    Hi All, I'm trying to get derived complex types from a base type in an xsd schema. it works well when I do this (inspired by this): xml file: <person xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Employee"> <name>John</name> <height>59</height> <jobDescription>manager</jobDescription> </person> xsd file: <xs:element name="person" type="Person"/> <xs:complexType name="Person" abstract="true"> <xs:sequence> <xs:element name= "name" type="xs:string"/> <xs:element name= "height" type="xs:double" /> </xs:sequence> </xs:complexType> <xs:complexType name="Employee"> <xs:complexContent> <xs:extension base="Person"> <xs:sequence> <xs:element name="jobDescription" type="xs:string" /> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> However, if I want to have the person element inside, for example, a sequence of another complex type, it doesn't work anymore: xml: <staffRecord> <company>mycompany</company> <dpt>sales</dpt> <person xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Employee"> <name>John</name> <height>59</height> <jobDescription>manager</jobDescription> </person> </staffRecord> xsd file: <xs:element name="staffRecord"> <xs:complexType> <xs:sequence> <xs:element name="company" type="xs:string"/> <xs:element name="dpt" type="xs:string"/> <xs:element name="person" type="Person"/> <xs:complexType name="Person" abstract="true"> <xs:sequence> <xs:element name= "name" type="xs:string"/> <xs:element name= "height" type="xs:double" /> </xs:sequence> </xs:complexType> <xs:complexType name="Employee"> <xs:complexContent> <xs:extension base="Person"> <xs:sequence> <xs:element name="jobDescription" type="xs:string" /> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> </xs:sequence> </xs:complexType> </xs:element> When validating the xml with that schema with xmllint (under linux), I get this error message then: config.xsd:12: element complexType: Schemas parser error : Element '{http://www.w3.org/2001/XMLSchema}sequence': The content is not valid. Expected is (annotation?, (element | group | choice | sequence | any)*). WXS schema config.xsd failed to compile Any idea what is wrong ? David

    Read the article

  • How to load a binary file(.bin) of size 6 MB in a varbinary(MAX) column of SQL Server 2005 database

    - by Feroz Khan
    How to load a binary file (.bin) of size 6 MB into a varbinary(MAX) column of a SQL Server 2005 database using ADO in a VC++ application? This is the code I am using to load the file, which I previously used to load a .bmp file: BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData) { //Open File CFile fileImage; CFileStatus fileStatus; fileImage.Open(strFilePath,CFile::modeRead); fileImage.GetStatus(fileStatus); //Allocating memory for data ULONG nBytes = (ULONG)fileStatus.m_size; HGLOBAL hGlobal = GlobalAlloc(GPTR,nBytes); LPVOID lpData = GlobalLock(hGlobal); //Reading the file data into memory fileImage.Read(lpData,nBytes); HRESULT hr; _variant_t varChunk; long lngOffset = 0; UCHAR chData; SAFEARRAY FAR *psa = NULL; SAFEARRAYBOUND rgsabound[1]; try { //Create a safe array to store the BYTES rgsabound[0].lLbound = 0; rgsabound[0].cElements = nBytes; psa = SafeArrayCreate(VT_UI1,1,rgsabound); while(lngOffset<(long)nBytes) { chData = ((UCHAR*)lpData)[lngOffset]; hr = SafeArrayPutElement(psa,&lngOffset,&chData); if(hr != S_OK) { return false; } lngOffset++; } lngOffset = 0; //Assign the safe array to a variant varChunk.vt = VT_ARRAY|VT_UI1; varChunk.parray = psa; hr = pFileData->AppendChunk(varChunk); if(hr != S_OK) { return false; } } catch(_com_error &e) { //get info from _com_error _bstr_t bstrSource(e.Source()); _bstr_t bstrDescription(e.Description()); _bstr_t bstrErrorMessage(e.ErrorMessage()); _bstr_t bstrErrorCode(e.Error()); TRACE("Exception thrown for classes generated by #import"); TRACE("\tCode= %08lx\n",(LPCSTR)bstrErrorCode); TRACE("\tCode Meaning = %s\n",(LPCSTR)bstrErrorMessage); TRACE("\tSource = %s\n",(LPCSTR)bstrSource); TRACE("\tDescription = %s\n",(LPCSTR)bstrDescription); } catch(...) { TRACE("Unhandled Exception"); } //Free Memory GlobalUnlock(lpData); return true; } But when I read the same file back using the GetChunk function it gives me all 0s, although the size of the file I get is the same as the one uploaded. Your help will be highly appreciated. Thanks in advance.

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table then insert/update to MDC table.' (BulkUpsert) I've got another process which then reads this data, computes other data, and then saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there are a multitude of users running a C# Gui polling the MDC table and reading updates from it. Now, during the day when the data is changing rapidly, things run pretty smoothly, but then, after market hours, we've recently started seeing an increasing number of Deadlock exceptions coming out of the database, nowadays we see 10-20 a day. The imporant thing to note here is that these happen when the values are NOT changing. Here's all the relevant info: Table Def: CREATE TABLE [dbo].[MarketDataCurrent]( [MDID] [int] NOT NULL, [LastUpdate] [datetime] NOT NULL, [Value] [float] NOT NULL, [Source] [varchar](20) NULL, CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED ( [MDID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] - stackoverflow wont let me post images until my reputation goes up to 10, so i'll add them as soon as you bump me up, hopefully as a result of this question. ![alt text][1] [1]: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg I've got a Sql Profiler Trace Running, catching the deadlocks, and here's what all the graphs look like. stackoverflow wont let me post images until my reputation goes up to 10, so i'll add them as soon as you bump me up, hopefully as a result of this question. ![alt text][2] [2]: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg Process 258 is called the following 'BulkUpsert' stored proc, repeatedly, while 73 is calling the next one: ALTER proc [dbo].[MarketDataCurrent_BulkUpload] @updateTime datetime, @source varchar(10) as begin transaction update c with (rowlock) set LastUpdate = getdate(), Value = t.Value, Source = @source from MarketDataCurrent c INNER JOIN #MDTUP t ON c.MDID = t.mdid where c.lastUpdate < @updateTime and c.mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%') and c.value <> t.value insert into MarketDataCurrent with (rowlock) select MDID, getdate(), Value, @source from #MDTUP where mdid not in (select mdid from MarketDataCurrent with (nolock)) and mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%') commit And the other one: ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload] AS begin transaction -- Update existing mdid UPDATE c WITH (ROWLOCK) SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source FROM MarketDataCurrent c INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid; -- Insert new MDID INSERT INTO MarketDataCurrent with (ROWLOCK) SELECT * FROM #TEMPTABLE2 WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK)) -- Clean up the temp table DELETE #TEMPTABLE2 commit To clarify, those Temp Tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a Unique Constraint instead but that increased the number of deadlocks 10-fold. 
I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
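    Not a diagnosis of the trace above, but the probe-then-insert shape used by both procedures (insert rows whose mdid is not already in MarketDataCurrent, checked with a shared or NOLOCK read) is the classic recipe for key-range deadlocks when two writers race. One commonly suggested variant is to make the existence check take an update/key-range lock so the two bulk upserts serialise instead of deadlocking. This is only a sketch against the second procedure, assuming #TEMPTABLE2 carries MDID, LastUpdate, Value and Source:

        BEGIN TRAN;

        -- Update existing rows; UPDLOCK/HOLDLOCK keeps the touched keys locked until commit.
        UPDATE c WITH (UPDLOCK, HOLDLOCK)
        SET    LastUpdate = t.LastUpdate,
               Value      = t.Value,
               Source     = t.Source
        FROM   MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.MDID;

        -- Insert the rest; the NOT EXISTS probe holds a key-range lock rather than the
        -- shared/NOLOCK read used in the original, so a racing writer waits instead of deadlocking.
        INSERT INTO MarketDataCurrent (MDID, LastUpdate, Value, Source)
        SELECT t.MDID, t.LastUpdate, t.Value, t.Source
        FROM   #TEMPTABLE2 t
        WHERE  NOT EXISTS (SELECT 1
                           FROM MarketDataCurrent c WITH (UPDLOCK, HOLDLOCK)
                           WHERE c.MDID = t.MDID);

        COMMIT;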

    Read the article

  • Load a 6 MB binary file in a SQL Server 2005 VARBINARY(MAX) column using ADO/VC++?

    - by Feroz Khan
    How to load a binary file(.bin) of size 6 MB in a varbinary(MAX) column of SQL Server 2005 database using ADO in a VC++ application. This is the code I am using to load the file which I used to load a .bmp file: BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData) { //Open File CFile fileImage; CFileStatus fileStatus; fileImage.Open(strFilePath,CFile::modeRead); fileImage.GetStatus(fileStatus); //Alocating memory for data ULONG nBytes = (ULONG)fileStatus.m_size; HGLOBAL hGlobal = GlobalAlloc(GPTR,nBytes); LPVOID lpData = GlobalLock(hGlobal); //Putting data into file fileImage.Read(lpData,nBytes); HRESULT hr; _variant_t varChunk; long lngOffset = 0; UCHAR chData; SAFEARRAY FAR *psa = NULL; SAFEARRAYBOUND rgsabound[1]; try { //Create a safe array to store the BYTES rgsabound[0].lLbound = 0; rgsabound[0].cElements = nBytes; psa = SafeArrayCreate(VT_UI1,1,rgsabound); while(lngOffset<(long)nBytes) { chData = ((UCHAR*)lpData)[lngOffset]; hr = SafeArrayPutElement(psa,&lngOffset,&chData); if(hr != S_OK) { return false; } lngOffset++; } lngOffset = 0; //Assign the safe array to a varient varChunk.vt = VT_ARRAY|VT_UI1; varChunk.parray = psa; hr = pFileData->AppendChunk(varChunk); if(hr != S_OK) { return false; } } catch(_com_error &e) { //get info from _com_error _bstr_t bstrSource(e.Source()); _bstr_t bstrDescription(e.Description()); _bstr_t bstrErrorMessage(e.ErrorMessage()); _bstr_t bstrErrorCode(e.Error()); TRACE("Exception thrown for classes generated by #import"); TRACE("\tCode= %08lx\n",(LPCSTR)bstrErrorCode); TRACE("\tCode Meaning = %s\n",(LPCSTR)bstrErrorMessage); TRACE("\tSource = %s\n",(LPCSTR)bstrSource); TRACE("\tDescription = %s\n",(LPCSTR)bstrDescription); } catch(...) { TRACE("***Unhandle Exception***"); } //Free Memory GlobalUnlock(lpData); return true; } But when I read the same file using Getchunk function it gives me all 0s but the size of the file I get is same as the one uploaded. Your help will be highly appreciated.
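    As an aside, if the .bin file is visible from the machine running SQL Server, the whole load can also be done in T-SQL (SQL Server 2005 and later) instead of streaming a SAFEARRAY through ADO; the table name and file path below are placeholders, not taken from the question:

        -- Assumes a destination table such as:
        --   CREATE TABLE dbo.EcgFiles (Id INT IDENTITY PRIMARY KEY, Data VARBINARY(MAX));
        INSERT INTO dbo.EcgFiles (Data)
        SELECT BulkColumn
        FROM   OPENROWSET(BULK 'C:\data\recording.bin', SINGLE_BLOB) AS f;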

    Read the article

  • Should I create a unique clustered index, or non-unique clustered index on this SQL 2005 table?

    - by Bremer
    I have a table storing millions of rows. It looks something like this: Table_Docs ID, Bigint (Identity col) OutputFileID, int Sequence, int …(many other fields) We find ourselves in a situation where the developer who designed it made the OutputFileID the clustered index. It is not unique. There can be thousands of records with this ID. It has no benefit to any processes using this table, so we plan to remove it. The question, is what to change it to… I have two candidates, the ID identity column is a natural choice. However, we have a process which does a lot of update commands on this table, and it uses the Sequence to do so. The Sequence is non-unique. Most records only contain one, but about 20% can have two or more records with the same Sequence. The INSERT app is a VB6 piece of crud throwing thousands insert commands at the table. The Inserted values are never in any particular order. So the Sequence of one insert may be 12345, and the next could be 12245. I know that this could cause SQL to move a lot of data to keep the clustered index in order. However, the Sequence of the inserts are generally close to being in order. All inserts would take place at the end of the clustered table. Eg: I have 5 million records with Sequence spanning 1 to 5 million. The INSERT app will be inserting sequence’s at the end of that range at any given time. Reordering of the data should be minimal (tens of thousands of records at most). Now, the UPDATE app is our .NET star. It does all UPDATES on the Sequence column. “Update Table_Docs Set Feild1=This, Field2=That…WHERE Sequence =12345” – hundreds of thousands of these a day. The UPDATES are completely and totally, random, touching all points of the table. All other processes are simply doing SELECT’s on this (Web pages). Regular indexes cover those. So my question is, what’s better….a unique clustered index on the ID column, benefiting the INSERT app, or a non-unique clustered index on the Sequence, benefiting the UPDATE app?
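    One arrangement that is often suggested for this mix of workloads is to cluster on the ever-increasing identity column, so the bulk INSERTs always append at the end of the clustered index, and give the UPDATE path its own narrow nonclustered index on Sequence so those statements seek rather than scan. Only a sketch, reusing the column names from the question and assuming the existing clustered index on OutputFileID has already been dropped:

        -- Cluster on the identity column; new rows land at the end of the clustered index.
        ALTER TABLE Table_Docs
            ADD CONSTRAINT PK_Table_Docs PRIMARY KEY CLUSTERED (ID);

        -- Non-unique nonclustered index for the UPDATE ... WHERE Sequence = @s workload.
        CREATE NONCLUSTERED INDEX IX_Table_Docs_Sequence
            ON Table_Docs (Sequence);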

    Read the article

  • A few problems with Delphi involving Mail Merge, SQL + Databases.

    - by Daniel
    My first problem is with mail merge. I have created a a Data File and a table, yet I am not able to fill my table with information from my Data File. The << just seems to be inserted after wherever the cursor is on the page, which is not where the table is. All that is entered into the actual table is a '59'. Therefore I think I either need to to change the code or be able to move the cursor. Here is the code I am currently using: wrdDoc.Tables.Add(wrdSelection.Range, ADOTable1.FieldCount, 3); wrdDoc.Tables.Item(1).Columns.Item(1).SetWidth(51,wdAdjustNone); wrdDoc.Tables.Item(1).Columns.Item(2).SetWidth(20,wdAdjustNone); wrdDoc.Tables.Item(1).Columns.Item(3).SetWidth(100,wdAdjustNone); // Set the shading on the first row to light gray wrdDoc.Tables.Item(1).Rows.Item(1).Cells .Shading.BackgroundPatternColorIndex := wdGray25; // BOLD the first row wrdDoc.Tables.Item(1).Rows.Item(1).Range.Bold := True; // Center the text in Cell (1,1) wrdDoc.Tables.Item(1).Cell(1,1).Range.Paragraphs.Alignment := wdAlignParagraphCenter; // Fill each row of the table with data wrdDoc.Tables.Item(1).Cell(1, 1).Range.InsertAfter('Time'); wrdDoc.Tables.Item(1).Cell(1, 2).Range.InsertAfter(''); wrdDoc.Tables.Item(1).Cell(1, 3).Range.InsertAfter('Teacher'); For Count := 1 to (ADOTable1.FieldCount - 1) do begin wrdDoc.Tables.Item(1).Cell((Count + 1), 1).Range.InsertAfter(wrdSelection.Range,'Time' + IntToStr(Count)); wrdDoc.Tables.Item(1).Cell((Count + 1), 2).Range.InsertAfter(wrdSelection.Range,'THonorific' + IntToStr(Count)); wrdDoc.Tables.Item(1).Cell((Count + 1), 3).Range.InsertAfter(wrdSelection.Range,'TSurname' + IntToStr(Count)); end; My second problem is that I do not know what the correct SQL syntax is for editing the name of a column in the database (I am using Delphi 7 and Microsoft Jet Engine if that makes a difference). The third problem is that when I add a new column to my database manually (which I need to do) I get a 'violation' error in one of my units when I activate an ADOTable. This only happens on one unit and it happens when I add a column with any name anywhere in the table. I know that is vague but I can't seem to narrow down the problem any further than that. If you could help with me with any of those it would be great. Thanks.
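    On the second problem: Jet SQL (the engine behind an .mdb accessed from Delphi 7 via ADO) has no single statement for renaming a column, so the usual workaround is add-copy-drop, run as three separate statements. All names and the column type below are placeholders:

        ALTER TABLE MyTable ADD COLUMN NewName TEXT(50);

        UPDATE MyTable SET NewName = OldName;

        ALTER TABLE MyTable DROP COLUMN OldName;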

    Read the article

  • What is the best approach in SQL to store multi-level descriptions?

    - by gime
    I need a new perspective on how to design a reliable and efficient SQL database to store multi-level arrays of data. This problem applies to many situations but I came up with this example: There are hundreds of products. Each product has an undefined number of parts. Each part is built from several elements. All products are described in the same way. All parts would require the same fields to describe them (let's say: price, weight, part name), all elements of all parts also have uniform design (for example: element code, manufacturer). Plain and simple. One element may be related to only part, and each part is related to one product only. I came up with idea of three tables: Products: -------------------------------------------- prod_id prod_name prod_price prod_desc 1 hoover 120 unused next Parts: ---------------------------------------------------- part_id part_name part_price part_weight prod_id 3 engine 10 20 1 and finally Elements: --------------------------------------- el_id el_code el_manufacturer part_id 1 BFG12 GE 3 Now, select a desired product, select all from PARTS where prod_id is the same, and then select all from ELEMENTS where part_id matches - after multiple queries you've got all data. I'm just not sure if this is the right approach. I've got also another idea, without ELEMENTS table. That would decrease queries but I'm a bit afraid it might be lame and bad practice. Instead of ELEMENTS table there are two more fields in the PARTS table, so it looks like this: part_id, part_name, part_price, part_weight, prod_id, part_el_code, part_el_manufacturer they would be text type, and for each part, information about elements would be stored as strings, this way: part_el_code | code_of_element1; code_of_element2; code_of_element3 part_el_manufacturer | manuf_of_element1; manuf_of_element2; manuf_of_element3 Then all we need is to explode() data from those fields, and we get arrays, easy to display. Of course this is not perfect and has some limitations, but is this idea ok? Or should I just go with the first idea? Or maybe there is a better approach to this problem? It's really hard to describe it in few words, and that means it's hard to search for answer. Also, understanding the principles of designing databases is not that easy as it seems.
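    The three-table design described above is the conventional normalized shape, and foreign keys make the one-product-per-part and one-part-per-element rules explicit; a join also collapses the "multiple queries" concern into a single statement. A generic-SQL sketch, with column names taken from the question and types/constraint details invented:

        CREATE TABLE products (
            prod_id    INT PRIMARY KEY,
            prod_name  VARCHAR(100),
            prod_price DECIMAL(10,2),
            prod_desc  TEXT
        );

        CREATE TABLE parts (
            part_id     INT PRIMARY KEY,
            part_name   VARCHAR(100),
            part_price  DECIMAL(10,2),
            part_weight DECIMAL(10,2),
            prod_id     INT NOT NULL REFERENCES products (prod_id)  -- each part belongs to one product
        );

        CREATE TABLE elements (
            el_id           INT PRIMARY KEY,
            el_code         VARCHAR(50),
            el_manufacturer VARCHAR(100),
            part_id         INT NOT NULL REFERENCES parts (part_id)  -- each element belongs to one part
        );

        -- One query instead of several round trips: everything for a given product.
        SELECT p.prod_name, pa.part_name, e.el_code, e.el_manufacturer
        FROM   products p
        JOIN   parts pa    ON pa.prod_id = p.prod_id
        LEFT JOIN elements e ON e.part_id = pa.part_id
        WHERE  p.prod_id = 1;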

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read 2005 and 2008 profiler traces stored as .trc files and read them into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; this just uses the trace file.

    Example Usages
    - Cache warming for SQL Server Analysis Services
    - Reading the flight recorder
    - Finding the longest running queries on a server
    - Analyzing statements for CPU, memory by user or some other criteria you choose

    Properties
    The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported, for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio.
    - AccessMode (Enumeration): determines how the Filename property is interpreted. The values available are DirectInput and Variable.
    - Filename (String): holds the path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

    Trace Column Definition
    Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.
    - TraceDataColumnsSQL.xml - SQL Server Database Engine Trace Columns
    - TraceDataColumnsAS.xml - SQL Server Analysis Services Trace Columns
    The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" and "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below.
    Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    Installation
    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

    Downloads
    - Trace Sources for SQL Server 2005
    - Trace Sources for SQL Server 2008

    Version History
    - SQL Server 2008: Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
    - SQL Server 2005: Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)

    Screenshots

    Read the article

  • SQL analytical mash-ups deliver real-time WOW! for big data

    - by KLaker
    One of the overlooked capabilities of SQL as an analysis engine, because we all just take it for granted, is that you can mix and match analytical features to create some amazing mash-ups. As we move into the exciting world of big data these mash-ups can really deliver those "wow, I never knew that" moments. While Java is an incredibly flexible and powerful framework for managing big data, there are some significant challenges in using Java and MapReduce to drive your analysis to create these "wow" discoveries. One of these "wow" moments was demonstrated at this year's OpenWorld during Andy Mendelsohn's general keynote session. Here is the scenario - we are looking for fraudulent activities in our big data stream and in this case we are identifying potentially fraudulent activities by looking for specific patterns. We use geospatial tagging of each transaction so we can create a real-time fraud-map for our business users. Where we start to move towards a "wow" moment is to extend this basic use of spatial and pattern matching, as shown in the above dashboard screen, to incorporate spatial analytics within the SQL pattern matching clause. This will allow us to compute the distance between transactions. Apologies for the quality of this screenshot….hopefully below you see where we have extended our SQL pattern matching clause to use the location of each transaction and to calculate the distance between each transaction: This allows us to compare the time of the last transaction with the time of the current transaction and see if the distance between the two points is possible given the time frame. Obviously if I buy something in Florida from my favourite bike store (may be a new carbon saddle for my Trek) and then 5 minutes later the system sees my credit card details being used in Arizona, there is a high probability that this transaction in Arizona is actually fraudulent (I am fast on my Trek but not that fast!) and we can flag this up in real-time on our dashboard: In this post I have used the term "real-time" a couple of times and this is an important point and one of the key reasons why SQL really is the only language to use if you want to analyse big data. One of the most important questions that comes up in every big data project is: how do we do analysis? Many enlightened customers are now realising that using Java-MapReduce to deliver analysis does not result in "wow" moments. These "wow" moments only come with SQL because it offers a much richer environment, it is simpler to use and it is faster - which makes it possible to deliver real-time "Wow!". Below is a slide from Andy's session showing the results of a comparison of Java-MapReduce vs. SQL pattern matching to deliver our "wow" moment during our live demo. You can watch our analytical mash-up "Wow" demo that compares the power of 12c SQL pattern matching + spatial analytics vs. Java-MapReduce here: You can get more information about SQL Pattern Matching on our SQL Analytics home page on OTN, see here http://www.oracle.com/technetwork/database/bi-datawarehousing/sql-analytics-index-1984365.html. You can get more information about our spatial analytics here: http://www.oracle.com/technetwork/database-options/spatialandgraph/overview/index.html If you would like to watch the full Database 12c OOW presentation see here: http://medianetwork.oracle.com/video/player/2686974264001
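    The post does not reproduce the full clause from the demo, so purely as an illustration of the shape being described, here is a cut-down Oracle 12c sketch over a hypothetical transactions table (card_id, tx_time as a DATE, tx_location as an SDO_GEOMETRY), flagging a pair of consecutive card transactions whose spatial distance is implausible for the elapsed time; the 100 km/h threshold is invented:

        SELECT card_id, first_time, second_time
        FROM   transactions
        MATCH_RECOGNIZE (
            PARTITION BY card_id
            ORDER BY tx_time
            MEASURES a.tx_time AS first_time,
                     b.tx_time AS second_time
            ONE ROW PER MATCH
            PATTERN (a b)
            DEFINE
                -- b matches when the distance from the previous transaction could not
                -- have been covered at 100 km/h in the elapsed time.
                b AS SDO_GEOM.SDO_DISTANCE(PREV(b.tx_location), b.tx_location,
                                           0.005, 'unit=KM')
                     > 100 * (b.tx_time - PREV(b.tx_time)) * 24
        );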

    Read the article

  • Is there any good hosting for asp.net and MySQL

    - by HAJJAJ
    HI every one ,I have account with one of the hosting company, and i did my project in asp.net and I used MySQL for the database. the hosting company is not giving me the full privileges to create new user or to create new stored procedure!!! this is what they said for me: Due to the shared nature of our environment we had to make some modifications to your procedure (namely the definer). We also had to review your procedure to determine if it would be compatible with our environment. While your procedures will work (via phpMyAdmin or some other interface), it is unlikely they will be accessible via the Connector/.NET (ADO.NET) that your application is likely using. This is due to a security restriction with how that connector works in shared environments. http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-stored.html "Note When you call a stored procedure, the command object makes an additional SELECT call to determine the parameters of the stored procedure. You must ensure that the user calling the procedure has the SELECT privilege on the mysql.proc table to enable them to verify the parameters. Failure to do this will result in an error when calling the procedure." Unfortunately, giving read privileges on the mysql.proc table will give you access to the data of our other customers and that is not an acceptable risk. If your application can only work using stored procedures, then MSSQL will probably be the better option for your site. I apologize for the inconvenience and the wait to have this ticket completed. So is there any good hosting that any body already used it to publish his asp.net and mysql project ??? this is one of my stored procedure and i think it's sample and it will not harm any other uses!!: -- -------------------------------------------------------------------------------- -- Routine DDL -- Note: comments before and after the routine body will not be stored by the server -- -------------------------------------------------------------------------------- DELIMITER $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `SpcategoriesRead`( IN PaRactioncode VARCHAR(5), IN PaRCatID BIGINT, IN PaRSearchText TEXT ) BEGIN -- CREATING TEMPORARY TABLE TO SAVE DATA FROM THE ACTIONCODE SELECTS -- DROP TEMPORARY TABLE IF EXISTS TEMP; CREATE temporary table tmp ( CatID BIGINT primary key not null, CatTitle TEXT, CatDescription TEXT, CatTitleAr TEXT, CatDescriptionAr TEXT, PictureID BIGINT, Published BOOLEAN, DisplayOrder BIGINT, CreatedOn DATE ); IF PaRactioncode = 1 THEN -- Retrive all DATA from the database -- INSERT INTO tmp SELECT CatID,CatTitle,CatDescription,CatTitleAr,CatDescriptionAr,PictureID,Published,DisplayOrder,CreatedOn FROM tbcategories; ELSEIF PaRactioncode = 2 THEN -- Retrive all from the database By ID -- INSERT INTO tmp SELECT CatID,CatTitle,CatDescription,CatTitleAr,CatDescriptionAr,PictureID,Published,DisplayOrder,CreatedOn FROM tbcategories WHERE CatID=PaRCatID; ELSEIF PaRactioncode = 3 THEN -- NOSET YET -- INSERT INTO tmp SELECT CatID,CatTitle,CatDescription,CatTitleAr,CatDescriptionAr,PictureID,Published,DisplayOrder,CreatedOn FROM tbcategories WHERE Published=1 ORDER BY DisplayOrder; END IF; IF PaRSearchText IS NOT NULL THEN set PaRSearchText=concat('%', PaRSearchText ,'%'); SELECT CatID,CatTitle,CatDescription,CatTitleAr,CatDescriptionAr,PictureID,Published,DisplayOrder,CreatedOn FROM tmp WHERE Concat(CatTitle, CatDescription, CatTitleAr, CatDescriptionAr) LIKE PaRSearchText; ELSE SELECT 
CatID,CatTitle,CatDescription,CatTitleAr,CatDescriptionAr,PictureID,Published,DisplayOrder,CreatedOn FROM tmp; END IF; DROP TEMPORARY TABLE IF EXISTS tmp; END
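    For reference, the statement below shows the privilege the quoted Connector/.NET note refers to (which the host is declining to grant), followed by the workaround often suggested on shared hosts: send the CALL as ordinary command text (CommandType.Text on the .NET side) so no parameter discovery against mysql.proc is needed. The user name is a placeholder:

        -- What CommandType.StoredProcedure needs and the host will not grant:
        GRANT SELECT ON mysql.proc TO 'myappuser'@'%';

        -- Shared-hosting workaround: invoke the routine as plain SQL text instead.
        CALL SpcategoriesRead(1, NULL, NULL);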

    Read the article
