Search Results

Search found 53054 results on 2123 pages for 'sql sample database'.


  • Any way to program a system which automatically restores home / sql database

    - by Mirage
    I have made two shell scripts. Script 1 does all home directory backups, named username_home_date.tar.gz. Script 2 does SQL backups of all sites every 3 hours, named username_database_date.sql.gz. Currently, if I want to restore a site, I have to copy the tar file to /home/username, untar it there with all the permissions intact, and then manually import the database. Is there any way (for instance a program, system or script) to just select which backup I want to restore and have it done automatically? Maybe something like a cPanel addon.

    Read the article

  • mysql single database relocation

    - by asdmin
    I would like to know if it's possible to operate different databases at different filesystem locations. Background: we are a hosting service which hosts MySQL, web and SMTP for its customers, but all our services (SQL, SMTP, HTTP) are located in different places. We are going to assign a single logical volume to each customer, which will accommodate the customer's mail, web pages and (hopefully) SQL database. Web pages and mail are already covered, but I am not able to find a configuration setting that would let me specify the location of a database (the directory where MySQL stores it). Let me highlight: the target here is to relocate different databases to different locations in the filesystem, not to move them all from a single place to another (single) place. Also, please do not bother answering with symbolic or hard links. ;) Thanks
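
    For what it's worth, one possible direction (a sketch, not an answer from the original thread): with innodb_file_per_table enabled, MySQL 5.6 and later can place an individual InnoDB table's tablespace outside the data directory via the DATA DIRECTORY clause, which gets close to per-customer placement. The paths and names below are illustrative assumptions:

        -- assumes innodb_file_per_table=ON and MySQL 5.6+; older versions
        -- honour DATA DIRECTORY only for MyISAM tables
        CREATE DATABASE customer1_db;

        CREATE TABLE customer1_db.orders (
            id         INT PRIMARY KEY,
            created_at DATETIME
        ) ENGINE = InnoDB
          DATA DIRECTORY = '/home/customer1/mysql';  -- the .ibd file lands here

    Note this works per table rather than per database, so it would have to be applied to every table a customer owns.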

    Read the article

  • how to change database timezone on vps

    - by michael
    I am running my domain on a VPS and I have Virtualmin and Webmin access. In my PHP files I need to record the current time, using MySQL's NOW(), when a row is inserted. I changed the timezone in the PHP configuration in Webmin, but the database function NOW() is still using the default timezone. How can I change the database timezone? PS: I ran a MySQL command in Webmin to change the timezone, but it gave me this error: Failed to execute SQL : SQL SET time_zone = 'America/New_York'; failed : Unknown or incorrect time zone: 'America/New_York'
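
    The "Unknown or incorrect time zone" error usually means the time zone tables in the mysql system schema have never been populated. A sketch of the usual fix (paths assume a typical Linux VPS): load the tables once with the bundled utility, mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root mysql, after which named zones resolve:

        -- once the mysql.time_zone% tables are loaded, named zones work
        SET GLOBAL time_zone = 'America/New_York';   -- new connections
        SET SESSION time_zone = 'America/New_York';  -- the current connection
        SELECT NOW();                                -- reports Eastern time

    SET GLOBAL requires the SUPER privilege; issuing SET SESSION time_zone from PHP right after connecting avoids that and keeps other sites on the VPS unaffected.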

    Read the article

  • December release of Microsoft All-In-One Code Framework is available now.

    - by Jialiang
    The code samples in the Microsoft All-In-One Code Framework were updated on 2010-12-13. Download address: http://1code.codeplex.com/releases/view/57459#DownloadId=185534 Updated code sample index categorized by technologies: http://1code.codeplex.com/wikipage?title=All-In-One%20Code%20Framework%20Sample%20Catalog (it also allows you to download individual code samples instead of the entire All-In-One Code Framework sample package.) If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on YouTube http://www.youtube.com/watch?v=cO5Li3APU58, read the introduction on our homepage http://1code.codeplex.com/, or see this Port25 article http://port25.technet.com/archive/2010/01/18/the-all-in-one-code-framework.aspx.

    New ASP.NET Code Samples

    VBASPNETAJAXWebChat and CSASPNETAJAXWebChat
    Most of you have some experience chatting with friends on the web, so you may want to know how to make a web chat application. It seems quite complicated, but ASP.NET gives you the power to build a chat room easily, and in this code sample we construct our own web chat room with the amazing AJAX feature. The principle is relatively simple: a basic chat application needs four base controls: one List control to show the chat room members, one List control to show the message list, one TextBox control to input messages, and one Button to send them. The user types a message in the textbox and presses the Send button, which sends the message to the server; the message list then updates every 2 seconds with the newest messages in the chat room. Note that it is hard to make an AJAX web chat application behave like a Windows Forms application, because we cannot keep the connection open after a web request ends, so many events that communicate between the client side and the server side cannot be realized. The common workaround is to issue a web request every few seconds to check whether the server side has been updated. Another technique, called COMET, also makes this possible, but it is different from AJAX and is not covered in detail here; for more about COMET, see the References.

    CSASPNETCurrentOnlineUserList and VBASPNETCurrentOnlineUserList
    This sample demonstrates a system that displays a list of current online users' information. The Membership.GetNumberOfUsersOnline method can return the number of online users, and there is a convenient way to check whether a user is online via the Membership.GetUser(string userName).IsOnline property; however, many ASP.NET projects do not use membership, so this sample shows how to display the list without a membership provider. It is not difficult to check whether a user is online by using the session. Many projects tend to use the "Session_End" event to mark a user as "offline"; however, this may not be a good idea, because it cannot detect the user's status accurately. In addition, the "Session_End" event is only available in the "InProc" session mode: if you are storing session state in the State Server or SQL Server, "Session_End" will never fire. To handle this issue, we need to save the users' online status to a global DataTable or a database. In the sample application, we define a global DataTable to store the information of online users, use XmlHttpRequest in the pages to update and check each user's last active time at intervals, and retrieve how many users are still online. The sample project automatically deletes offline users' information from the global DataTable by checking users' last active time. A step-by-step guide: 1. Login page: let the user sign in and add the current user's information to the global DataTable, initializing the table that stores the current online users. 2. Current online user list page: use XmlHttpRequest to update and check the user's last active time at intervals and retrieve how many users are still online. 3. If the user closes the page without clicking the sign-out link button, the sample project automatically marks the user as offline and deletes the user's information from the global DataTable by checking the last active time.

    CSASPNETIPtoLocation
    This sample demonstrates how to find the geographical location of an IP address. As we know, it is not hard to get a visitor's IP address via the Request.ServerVariables property, but it is really difficult to know where the visitor comes from. To achieve this, the sample uses a free third-party web service from http://freegeoip.appspot.com/, which returns information about the IP address we send to the server in XML, JSON or CSV format. It makes everything easier.

    CSASPNETBackgroundWorker
    Sometimes we perform an operation that needs a long time to complete. It blocks the response, and the page stays blank until the operation finishes. In this case we want the operation to run in the background while the page displays its progress, so the user knows the operation is running and how far along it is.

    CSASPNETInheritingFromTreeNode
    In the Windows Forms TreeView, each tree node has a property called "Tag" that can be used to store a custom object. Many customers want the same Tag feature in the ASP.NET TreeView. This project creates a custom TreeView control named "CustomTreeView" to achieve this goal.

    CSASPNETRemoteUploadAndDownload and VBASPNETRemoteUploadAndDownload
    This code sample was created in response to a request submitted through our new code sample request service for customers. The samples demonstrate uploading files to, and downloading files from, a remote HTTP or FTP server. In .NET Framework 2.0 and higher there are lightweight class libraries that support HTTP and FTP transmission; by using these classes, we can meet this requirement.

    CSASPNETImageEditUpload and VBASPNETImageEditUpload
    This demo shows how to insert, edit and update a common image of type "jpg", "png", "gif" or "bmp". We mainly use two different SqlDataSources over the same database, bound to a GridView and a FormView, to establish the "cascading" effect, and we apply our own ImageHandler to encode and decode images of different types, using the context to output the image stream. We explicitly assign the binary streams of the images in the "FormView_ItemInserting" and "FormView_ItemUpdating" events, to keep what we see on the .aspx page synchronized with what is really stored in the database.

    WebBrowser Control, Network and other Windows General New Code Samples

    CSWebBrowserSuppressError and VBWebBrowserSuppressError
    The sample demonstrates how to make the WebBrowser control suppress errors, such as script errors, navigation errors and so on.

    CSWebBrowserWithProxy and VBWebBrowserWithProxy
    The sample demonstrates how to make the WebBrowser control use a proxy server.

    CSWebDownloadProgress and VBWebDownloadProgress
    The sample demonstrates how to show progress during a download. It also supplies features to start, pause, resume and cancel the download.

    CppSetDesktopWallpaper, CSSetDesktopWallpaper and VBSetDesktopWallpaper
    This code sample application lets you select an image, view a preview (resized smaller to fit if necessary), select a display style among Tile, Center, Stretch, Fit (Windows 7 and later) and Fill (Windows 7 and later), and set the image as the Desktop wallpaper.

    CSWindowsServiceRecoveryProperty and VBWindowsServiceRecoveryProperty
    The CSWindowsServiceRecoveryProperty example demonstrates how to use ChangeServiceConfig2 to configure the service "Recovery" properties in C#. The example covers all the options you can see on the service "Recovery" tab, including setting the "Enable actions for stops with errors" option on Windows Vista and later operating systems. It also shows how to grant the shutdown privilege to the process, so that we can configure a special option on the "Recovery" tab, "Restart Computer Options...".

    New Office Development Code Samples

    CSOneNoteRibbonAddIn and VBOneNoteRibbonAddIn
    The code sample demonstrates a OneNote 2010 COM add-in that implements IDTExtensibility2. The add-in also supports customizing the Ribbon by implementing the IRibbonExtensibility interface. It is a skeleton OneNote add-in that developers can extend to implement more functions. The code sample was requested by a customer through our code sample request service; we expect it will help developers in the community.

    New Windows Shell Code Samples

    CppShellExtPreviewHandler, CSShellExtPreviewHandler and VBShellExtPreviewHandler
    In the past two months we released code samples for Windows context menu handlers, infotip handlers and thumbnail handlers. This is the fourth part of the shell extension series: preview handlers. The code samples demonstrate the C++, C# and VB.NET implementations of a preview handler for a new file type registered with the .recipe extension. Preview handlers are called when an item is selected, to show a lightweight, rich, read-only preview of the file's contents in the view's reading pane without launching the file's associated application. Windows Vista and later operating systems support preview handlers. To be a valid preview handler, several interfaces must be implemented, including IPreviewHandler (shobjidl.h); IInitializeWithFile, IInitializeWithStream, or IInitializeWithItem (propsys.h); IObjectWithSite (ocidl.h); and IOleWindow (oleidl.h). There are also optional interfaces, such as IPreviewHandlerVisuals (shobjidl.h), that a preview handler can implement to provide extended support. The Windows API Code Pack for Microsoft .NET Framework makes implementing these interfaces very easy in .NET. The example preview handler provides previews for .recipe files. The .recipe file type is simply an XML file registered under a unique file name extension; it includes the title of the recipe, its author, difficulty, preparation time, cook time, nutrition information, comments, an embedded preview image, and so on. The preview handler extracts the title, comments and the embedded image, and displays them in a preview window.

    In response to many customers' requests, we added setup projects to every shell extension sample in this release. Those setup projects allow you to deploy the shell extensions to your end users' machines.

    Download address: http://1code.codeplex.com/releases/view/57459#DownloadId=185534 Updated code sample index categorized by technologies: http://1code.codeplex.com/wikipage?title=All-In-One%20Code%20Framework%20Sample%20Catalog (it also allows you to download individual code samples instead of the entire All-In-One Code Framework sample package.) If you have any feedback for us, please email: [email protected]. We look forward to your comments.

    Read the article

  • Oracle PL/SQL question for my homework in Oracle 11g class [migrated]

    - by Bjolds
    I am new to Oracle 11g programming and I have run into a tough situation with PL/SQL functions and automation. I am unsure how to create the functions to automate a college registration system. Here is what I want to do: I want to automate the registration system so that it automatically registers students, and then I want a procedure to automate the grading system. I have included the code I have written, which makes most of this assignment work, but I am unsure how to incorporate automated PL/SQL functions for the registration system and the grading system, so I would greatly appreciate any help or ideas. set Linesize 250 set pagesize 150 drop table student; drop table faculty; drop table Course; drop table Section; drop table location; DROP TABLE courseInstructor; DROP TABLE Registration; DROP TABLE grade; create table student( studentid number(10), Lastname varchar2(20), Firstname Varchar2(20), MI Char(1), address Varchar2(20), city Varchar2(20), state Char(2), zip Varchar2(10), HomePhone Varchar2(10), Workphone Varchar2(10), DOB Date, Pin VARCHAR2(10), Status Char(1)); ALTER TABLE Student Add Constraint Student_StudentID_pk Primary Key (studentID); Insert into student values (1,'xxxxxxxx','xxxxxxxxxx','x','xxxxxxxxxxxxxxx','Columbus','oh','44159','xxx-xxx-xxxx','xxx-xxx-xxxx','06-Mar-1957','1211','c'); create table faculty( FacultyID Number(10), FirstName Varchar2(20), Lastname Varchar2(20), MI Char(1), workphone Varchar2(10), CellPhone Varchar2(10), Rank Varchar2(20), Experience Varchar2(10), Status Char(1)); ALTER TABLE Faculty ADD Constraint Faculty_facultyId_PK PRIMARY KEY (FacultyID); insert into faculty values (1,'xxx','xxxxxxxxxxxx','x','xxx-xxx-xxxx','xxx-xxx-xxxx','professor','20','f'); create table Course( CourseId number(10), CourseNumber Varchar2(20), CourseName Varchar(20), Description Varchar(20), CreditHours Number(4), Status Char(1)); ALTER TABLE Course ADD Constraint Course_CourseID_pk PRIMARY KEY(CourseID); insert into course values (1,'cit 100','computer concepts','introduction to PCs','3.0','o'); insert into course values (2,'cit 101','Database Program','Database Programming','4.0','o'); insert into course values (3,'Math 101','Algebra I','Algebra I Concepts','5.0','o'); insert into course values (4,'cit 102a','Pc applications','Aplications 1','3.0','o'); insert into course values (5,'cit 102b','pc applications','applications 2','3.0','o'); insert into course values (6,'cit 102c','pc applications','applications 3','3.0','o'); insert into course values (7,'cit 103','computer concepts','introduction systems','3.0','c'); insert into course values (8,'cit 110','Unified language','UML design','3.0','o'); insert into course values (9,'cit 165','cobol','cobol programming','3.0','o'); insert into course values (10,'cit 167','C++ Programming 1','c++ programming','4.0','o'); insert into course values (11,'cit 231','Expert Excel','spreadsheet apps','3.0','o'); insert into course values (12,'cit 233','expert Access','database devel.','3.0','o'); insert into course values (13,'cit 169','Java Programming I','Java Programming I','3.0','o'); insert into course values (14,'cit 263','Visual Basic','Visual Basic Prog','3.0','o'); insert into course values (15,'cit 275','system analysis 2','System Analysis 2','3.0','o'); create table Section( SectionID Number(10), CourseId Number(10), SectionNumber VarChar2(10), Days Varchar2(10), StartTime Date, EndTime Date, LocationID Number(10), SeatAvailable Number(3), Status Char(1)); ALTER TABLE Section
ADD Constraint Section_SectionID_PK PRIMARY KEY(SectionID); insert into section values (1,1,'18977','r','21-Sep-2011','10-Dec-2011','1','89','o'); create table Location( LocationId Number(10), Building Varchar2(20), Room Varchar2(5), Capacity Number(5), Satus Char(1)); ALTER TABLE Location ADD Constraint Location_LocationID_pk PRIMARY KEY (LocationID); insert into Location values (1,'Clevleand Hall','cl209','35','o'); insert into Location values (2,'Toledo Circle','tc211','45','o'); insert into Location values (3,'Akron Square','as154','65','o'); insert into Location values (4,'Cincy Hall','ch100','45','o'); insert into Location values (5,'Springfield Dome','SD','35','o'); insert into Location values (6,'Dayton Dorm','dd225','25','o'); insert into Location values (7,'Columbus Hall','CB354','15','o'); insert into Location values (8,'Cleveland Hall','cl204','85','o'); insert into Location values (9,'Toledo Circle','tc103','75','o'); insert into Location values (10,'Akron Square','as201','46','o'); insert into Location values (11,'Cincy Hall','ch301','73','o'); insert into Location values (12,'Dayton Dorm','dd245','57','o'); insert into Location values (13,'Springfield Dome','SD','65','o'); insert into Location values (14,'Cleveland Hall','cl241','10','o'); insert into Location values (15,'Toledo Circle','tc211','27','o'); insert into Location values (16,'Akron Square','as311','28','o'); insert into Location values (17,'Cincy Hall','ch415','73','o'); insert into Location values (18,'Toledo Circle','tc111','67','o'); insert into Location values (19,'Springfield Dome','SD','69','o'); insert into Location values (20,'Dayton Dorm','dd211','45','o'); Alter Table Student Add Constraint student_Zip_CK Check(Rtrim (Zip,'1234567890-') is null); Alter Table Student ADD Constraint Student_Status_CK Check(Status In('c','t')); Alter Table Student ADD Constraint Student_MI_CK2 Check(RTRIM(MI,'abcdefghijklmnopqrstuvwxyz')is Null); Alter Table Student Modify pin not Null; Alter table Faculty Add Constraint Faculty_Status_CK Check(Status In('f','a','i')); Alter table Faculty ADD Constraint Faculty_Rank_CK Check(Rank In ('professor','doctor','instructor','assistant','tenure')); Alter table Faculty ADD Constraint Faculty_MI_CK2 Check(RTRIM(MI,'abcdefghijklmnopqrstuvwxyz')is Null); Update Section Set Starttime = To_date('09-21-2011 6:00 PM', 'mm-dd-yyyy hh:mi pm'); Update Section Set Endtime = To_date('12-10-2011 9:50 PM', 'mm-dd-yyyy hh:mi pm'); alter table Section Add Constraint StartTime_Status_CK Check (starttime < Endtime); Alter Table Section Add Constraint Section_StartTime_ck check (StartTime < EndTime); Alter Table Section ADD Constraint Section_CourseId_FK FOREIGN KEY (CourseID) References Course(CourseId); Alter Table Section ADD Constraint Section_LocationID_FK FOREIGN KEY (LocationID) References Location (LocationId); Alter Table Section ADD Constraint Section_Days_CK Check(RTRIM(Days,'mtwrfsu')IS Null); update section set seatavailable = '99'; Alter Table Section ADD Constraint Section_SeatsAvailable_CK Check (SeatAvailable < 100); Alter Table Course Add Constraint Course_CreditHours_ck check(CreditHours < = 6.0); update location set capacity = '99'; Alter Table Location Add Constraint Location_Capacity_CK Check(Capacity < 100); Create Table Registration ( StudentID Number(10), SectionID Number(10), Constraint Registration_pk Primary key (studentId, Sectionid)); Insert into registration values (1, 2); Insert into Registration values (2, 3); Insert into registration values (3, 4); Insert into 
registration values (4, 5); Insert into registration values (5, 6); Insert into registration values (6, 7); Insert into registration values (7, 8); Insert into registration values (8, 9); insert into registration values (9, 10); insert into registration values (10, 11); insert into registration values (9, 12); insert into registration values (8, 13); insert into registration values (7, 14); insert into registration values (6, 15); insert into registration values (5, 17); insert into registration values (4, 18); insert into registration values (3, 19); insert into registration values (2, 20); insert into registration values (1, 21); insert into registration values (2, 22); insert into registration values (3, 23); insert into registration values (4, 24); insert into registration values (5, 25); Insert into registration values (6, 24); insert into registration values (7, 23); insert into registration values (8, 22); insert into registration values (9, 21); insert into registration values (10, 20); insert into registration values (9, 19); insert into registration values (8, 17); Create Table courseInstructor( FacultyID Number(10), SectionID Number(10), Constraint CourseInstructor_pk Primary key (FacultyId, SectionID)); insert into courseInstructor values (1, 1); insert into courseInstructor values (2, 2); insert into courseInstructor values (3, 3); insert into courseInstructor values (4, 4); insert into courseInstructor values (5, 5); insert into courseInstructor values (5, 6); insert into courseInstructor values (4, 7); insert into courseInstructor values (3, 8); insert into courseInstructor values (2, 9); insert into courseInstructor values (1, 10); insert into courseInstructor values (5, 11); insert into courseInstructor values (4, 12); insert into courseInstructor values (3, 13); insert into courseInstructor values (2, 14); insert into courseInstructor values (1, 15); Create table grade( StudentID Number(10), SectionID Number(10), Grade Varchar2(1), Constraint grade_pk Primary key (StudentID, SectionID)); CREATE OR REPLACE TRIGGER TR_CreateGrade AFTER INSERT ON Registration FOR EACH ROW BEGIN INSERT INTO grade (SectionID,StudentID,Grade) VALUES(:New.SectionID,:New.StudentID,NULL); END TR_createGrade; / CREATE OR REPLACE FORCE VIEW V_reg_student_course AS SELECT Registration.StudentID, student.LastName, student.FirstName, course.CourseName, Registration.SectionID, course.CreditHours, section.Days, TO_CHAR(StartTime, 'MM/DD/YYYY') AS StartDate, TO_CHAR(StartTime, 'HH:MI PM') AS StartTime, TO_CHAR(EndTime, 'MM/DD/YYYY') AS EndDate, TO_CHAR(EndTime, 'HH:MI PM') AS EndTime, location.Building, location.Room FROM registration, student, section, course, location WHERE registration.StudentID = student.StudentID AND registration.SectionID = section.SectionID AND section.LocationID = location.LocationID AND section.CourseID = course.CourseID; CREATE OR REPLACE FORCE VIEW V_teacher_to_course AS SELECT courseInstructor.FacultyID, faculty.FirstName, faculty.LastName, courseInstructor.SectionID, section.Days, TO_CHAR(StartTime, 'MM/DD/YYYY') AS StartDate, TO_CHAR(StartTime, 'HH:MI PM') AS StartTime, TO_CHAR(EndTime, 'MM/DD/YYYY') AS EndDate, TO_CHAR(EndTime, 'HH:MI PM') AS EndTime, location.Building, location.Room FROM courseInstructor, faculty, section, course, location WHERE courseInstructor.FacultyID = faculty.FacultyID AND courseInstructor.SectionID = section.SectionID AND section.LocationID = location.LocationID AND section.CourseID = course.CourseID; SELECT * FROM V_reg_student_course; SELECT * 
FROM V_teacher_to_course;
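
    For reference, a minimal sketch of one way to automate the registration step against the schema above (the procedure name, seat check and error handling are illustrative additions, not part of the assignment):

        CREATE OR REPLACE PROCEDURE PRC_Register_Student (
            p_studentid IN NUMBER,
            p_sectionid IN NUMBER) IS
          v_seats Section.SeatAvailable%TYPE;
        BEGIN
          -- lock the section row so two registrations cannot race
          SELECT SeatAvailable INTO v_seats
          FROM Section
          WHERE SectionID = p_sectionid
          FOR UPDATE;

          IF v_seats <= 0 THEN
            RAISE_APPLICATION_ERROR(-20001, 'Section is full');
          END IF;

          -- the TR_CreateGrade trigger defined above adds the matching grade row
          INSERT INTO Registration (StudentID, SectionID)
          VALUES (p_studentid, p_sectionid);

          UPDATE Section
          SET SeatAvailable = SeatAvailable - 1
          WHERE SectionID = p_sectionid;
        END PRC_Register_Student;
        /

    A grading procedure could follow the same shape, updating the grade table for a given (StudentID, SectionID) pair.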

    Read the article

  • Grouping a comma separated value on common data [closed]

    - by Ankit
    I have a table with col1 (an int id), col2 (a varchar holding comma-separated values) and a third column for assigning a group number to each row. The table looks like:

        col1   col2    group
        1      2,3,4
        2      5,6
        3      1,2,5
        4      7,8
        5      11,3
        6      22,8

    This is only a sample of the real data. Now I have to assign a group number to each row in such a way that the output looks like:

        col1   col2    group
        1      2,3,4   1
        2      5,6     1
        3      1,2,5   1
        4      7,8     2
        5      11,3    1
        6      22,8    2

    The logic for assigning the group number is that rows whose comma-separated strings in col2 share a value must get the same group number: everywhere '2' appears in col2, those rows get the same group. The complication is that this is transitive: 2,3,4 sit together, so any of those three values found anywhere in col2 pulls that row into the same group. The major part is that 2,3,4 and 1,2,5 both contain 2, so all of 1,2,3,4,5 must be assigned the same group number. I tried a stored procedure with MATCH ... AGAINST on col2 but am not getting the desired result. Most importantly, I can't use normalization, because I can't afford to build a new table from my original table, which has millions of records; normalization is not helpful in my context anyway. This question is also on Stack Overflow with a bounty. Achieved so far: I have set the group column to auto-increment and then wrote this procedure:

        BEGIN
          declare col1_new int;
          declare col2_new varchar(100);
          declare group_new int;
          declare done tinyint default 0;
          declare cur1 cursor for select col1, col2, `group` from company;
          DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
          open cur1;
          REPEAT
            fetch cur1 into col1_new, col2_new, group_new;
            update company set `group` = group_new
            where match(col2) against(concat("'", col2_new, "'"));
          UNTIL done END REPEAT;
          close cur1;
          select * from company;
        END

    This procedure runs without syntax errors, but I am not getting exactly the desired result.
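
    For reference, one way to attack this in pure MySQL (a sketch under stated assumptions: the group column is named grp to dodge the reserved word, the inline numbers table covers the longest list so extend the UNIONs for real data, and values contain no spaces) is iterative relaxation: seed each row with its own id, then repeatedly pull every row down to the smallest group of any row sharing a value, until a pass changes nothing:

        -- seed: every row starts in its own group
        UPDATE company SET grp = col1;

        -- one relaxation pass; rerun until ROW_COUNT() returns 0
        UPDATE company a
        JOIN (
            SELECT a2.col1, MIN(b.grp) AS min_grp
            FROM company a2
            JOIN (SELECT 1 AS n UNION ALL SELECT 2 UNION ALL SELECT 3) nums
              ON nums.n <= 1 + LENGTH(a2.col2) - LENGTH(REPLACE(a2.col2, ',', ''))
            JOIN company b
              ON FIND_IN_SET(
                   SUBSTRING_INDEX(SUBSTRING_INDEX(a2.col2, ',', nums.n), ',', -1),
                   b.col2) > 0
            GROUP BY a2.col1
        ) m ON m.col1 = a.col1
        SET a.grp = m.min_grp
        WHERE a.grp > m.min_grp;

    Each pass merges neighbouring groups, so the number of passes is bounded by the longest chain; wrap the second statement in a stored-procedure LOOP that exits when ROW_COUNT() returns 0, then renumber the surviving grp values 1, 2, 3... if you need consecutive ids.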

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table, then insert/update to MDC table' stored proc (BulkUpsert). I've got another process which then reads this data, computes other data, and saves the results back into the same table using a similar BulkUpsert stored proc. Thirdly, there are a multitude of users running a C# GUI polling the MDC table and reading updates from it. Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info:

    Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            ( [MDID] ASC ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
              IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
            ON [PRIMARY]
        ) ON [PRIMARY]

    Stack Overflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question. ![alt text][1] [1]: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like. ![alt text][2] [2]: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg

    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction
        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null
                               and PriceSource like 'LiveFeed.%')
          and c.value <> t.value
        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null
                             and PriceSource like 'LiveFeed.%')
        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction
        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;
        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))
        -- Clean up the temp table
        DELETE #TEMPTABLE2
        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold.
I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
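
    A pattern often suggested for this failure mode (a sketch, not an answer from the thread): the insert phase reads MarketDataCurrent with NOLOCK to find missing MDIDs while the concurrent process holds update locks, which lets the two upserts interleave and then collide on the PK. Taking key-range locks up front with UPDLOCK + HOLDLOCK serializes the existence check:

        -- hedged sketch: serialize the check-then-insert so two concurrent
        -- upserts cannot both decide the same MDID is "new"
        BEGIN TRANSACTION;

        UPDATE c
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c WITH (UPDLOCK, HOLDLOCK)
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.MDID;

        INSERT INTO MarketDataCurrent (MDID, LastUpdate, Value, Source)
        SELECT t.MDID, t.LastUpdate, t.Value, t.Source
        FROM #TEMPTABLE2 t
        WHERE NOT EXISTS (SELECT 1
                          FROM MarketDataCurrent c WITH (UPDLOCK, HOLDLOCK)
                          WHERE c.MDID = t.MDID);

        COMMIT;

    The HOLDLOCK range locks cost some concurrency, so this trades a little throughput for correctness; the alternative is retry logic around the deadlock exception.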

    Read the article

  • Load a 6 MB binary file in a SQL Server 2005 VARBINARY(MAX) column using ADO/VC++?

    - by Feroz Khan
    How do I load a binary file (.bin) of size 6 MB into a VARBINARY(MAX) column of a SQL Server 2005 database using ADO in a VC++ application? This is the code I am using to load the file, which I previously used to load a .bmp file: BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData) { //Open the file CFile fileImage; CFileStatus fileStatus; fileImage.Open(strFilePath,CFile::modeRead); fileImage.GetStatus(fileStatus); //Allocate memory for the data ULONG nBytes = (ULONG)fileStatus.m_size; HGLOBAL hGlobal = GlobalAlloc(GPTR,nBytes); LPVOID lpData = GlobalLock(hGlobal); //Read the file into the buffer fileImage.Read(lpData,nBytes); HRESULT hr; _variant_t varChunk; long lngOffset = 0; UCHAR chData; SAFEARRAY FAR *psa = NULL; SAFEARRAYBOUND rgsabound[1]; try { //Create a safe array to store the bytes rgsabound[0].lLbound = 0; rgsabound[0].cElements = nBytes; psa = SafeArrayCreate(VT_UI1,1,rgsabound); while(lngOffset<(long)nBytes) { chData = ((UCHAR*)lpData)[lngOffset]; hr = SafeArrayPutElement(psa,&lngOffset,&chData); if(hr != S_OK) { return false; } lngOffset++; } lngOffset = 0; //Assign the safe array to a variant varChunk.vt = VT_ARRAY|VT_UI1; varChunk.parray = psa; hr = pFileData->AppendChunk(varChunk); if(hr != S_OK) { return false; } } catch(_com_error &e) { //Get info from _com_error _bstr_t bstrSource(e.Source()); _bstr_t bstrDescription(e.Description()); _bstr_t bstrErrorMessage(e.ErrorMessage()); _bstr_t bstrErrorCode(e.Error()); TRACE("Exception thrown for classes generated by #import"); TRACE("\tCode= %08lx\n",(LPCSTR)bstrErrorCode); TRACE("\tCode Meaning = %s\n",(LPCSTR)bstrErrorMessage); TRACE("\tSource = %s\n",(LPCSTR)bstrSource); TRACE("\tDescription = %s\n",(LPCSTR)bstrDescription); } catch(...) { TRACE("***Unhandled Exception***"); } //Free memory GlobalUnlock(lpData); return true; } But when I read the same file back using the GetChunk function, it gives me all 0s, although the size it reports is the same as the file I uploaded. Your help will be highly appreciated.
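
    A first diagnostic worth running (a sketch; the table and column names here are illustrative, not from the question): check on the server side whether zeros were actually stored or whether the read path is at fault, since DATALENGTH and SUBSTRING work directly on varbinary(max):

        -- if first_16_bytes is all zeros, the write path stored zeros;
        -- if it shows real data, the GetChunk read path is the problem
        SELECT DATALENGTH(FileData) AS stored_bytes,
               SUBSTRING(FileData, 1, 16) AS first_16_bytes
        FROM dbo.ECGFiles
        WHERE ID = 42;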

    Read the article

  • Should I create a unique clustered index, or non-unique clustered index on this SQL 2005 table?

    - by Bremer
    I have a table storing millions of rows. It looks something like this:

        Table_Docs
        ID, bigint (identity column)
        OutputFileID, int
        Sequence, int
        ... (many other fields)

    We find ourselves in a situation where the developer who designed it made OutputFileID the clustered index. It is not unique: there can be thousands of records with the same OutputFileID, and it has no benefit to any process using this table, so we plan to remove it. The question is what to change it to. I have two candidates. The ID identity column is a natural choice. However, we have a process which does a lot of UPDATE commands on this table, and it uses the Sequence to do so. The Sequence is non-unique; most records only have one, but about 20% can have two or more records with the same Sequence. The INSERT app is a VB6 piece of crud throwing thousands of insert commands at the table. The inserted values are never in any particular order, so the Sequence of one insert may be 12345 and the next could be 12245. I know that this could cause SQL to move a lot of data to keep the clustered index in order. However, the Sequences of the inserts are generally close to being in order, and all inserts would take place at the end of the clustered table. E.g., I have 5 million records with Sequence spanning 1 to 5 million, and the INSERT app will be inserting Sequences at the end of that range at any given time, so reordering of the data should be minimal (tens of thousands of records at most). Now, the UPDATE app is our .NET star. It does all UPDATEs on the Sequence column ("UPDATE Table_Docs SET Field1=This, Field2=That... WHERE Sequence=12345"), hundreds of thousands of these a day, and the UPDATEs are completely and totally random, touching all points of the table. All other processes are simply doing SELECTs on this table (web pages); regular indexes cover those. So my question is: what's better, a unique clustered index on the ID column, benefiting the INSERT app, or a non-unique clustered index on Sequence, benefiting the UPDATE app?
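
    For reference, the DDL for the first candidate looks like this (a sketch; the existing index name is an illustrative assumption, and Sequence gets its own nonclustered index so the UPDATE app still seeks rather than scans):

        -- drop the non-unique clustered index, recluster on the identity
        DROP INDEX IX_Table_Docs_OutputFileID ON dbo.Table_Docs;

        CREATE UNIQUE CLUSTERED INDEX IX_Table_Docs_ID
            ON dbo.Table_Docs (ID);

        -- the UPDATE app filters on Sequence, so cover that access path
        CREATE NONCLUSTERED INDEX IX_Table_Docs_Sequence
            ON dbo.Table_Docs (Sequence);

    Rebuilding the clustered index on a multi-million-row table is an offline operation unless you are on Enterprise Edition with ONLINE = ON, so it needs a maintenance window either way.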

    Read the article

  • What is the best approach in SQL to store multi-level descriptions?

    - by gime
    I need a new perspective on how to design a reliable and efficient SQL database to store multi-level arrays of data. This problem applies to many situations, but I came up with this example: There are hundreds of products. Each product has an undefined number of parts. Each part is built from several elements. All products are described in the same way. All parts would require the same fields to describe them (let's say: price, weight, part name), and all elements of all parts also have a uniform design (for example: element code, manufacturer). Plain and simple. One element may be related to only one part, and each part is related to one product only. I came up with the idea of three tables:

        Products
        --------------------------------------------
        prod_id   prod_name   prod_price   prod_desc
        1         hoover      120          unused next

        Parts
        ----------------------------------------------------
        part_id   part_name   part_price   part_weight   prod_id
        3         engine      10           20            1

        Elements
        ---------------------------------------
        el_id   el_code   el_manufacturer   part_id
        1       BFG12     GE                3

    Now, select the desired product, select all from Parts where prod_id matches, and then select all from Elements where part_id matches; after multiple queries you've got all the data. I'm just not sure if this is the right approach. I've also got another idea, without the Elements table. It would cut down on queries, but I'm a bit afraid it might be lame and bad practice. Instead of the Elements table there are two more fields in the Parts table, so it looks like this: part_id, part_name, part_price, part_weight, prod_id, part_el_code, part_el_manufacturer. They would be of text type, and for each part, information about its elements would be stored as strings, this way:

        part_el_code         | code_of_element1; code_of_element2; code_of_element3
        part_el_manufacturer | manuf_of_element1; manuf_of_element2; manuf_of_element3

    Then all we need is to explode() the data from those fields and we get arrays, easy to display. Of course this is not perfect and has some limitations, but is this idea OK? Or should I just go with the first idea? Or maybe there is a better approach to this problem? It's really hard to describe in a few words, which makes it hard to search for an answer. Also, understanding the principles of database design is not as easy as it seems.
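
    For reference, a sketch of the three-table design with the relationships made explicit (column names follow the question; the types and constraint names are illustrative), plus a single JOIN that fetches a whole product tree in one round trip instead of one query per level:

        CREATE TABLE products (
          prod_id    INT PRIMARY KEY,
          prod_name  VARCHAR(50),
          prod_price DECIMAL(10,2),
          prod_desc  VARCHAR(255)
        );

        CREATE TABLE parts (
          part_id     INT PRIMARY KEY,
          part_name   VARCHAR(50),
          part_price  DECIMAL(10,2),
          part_weight DECIMAL(10,2),
          prod_id     INT NOT NULL,
          CONSTRAINT fk_parts_product
            FOREIGN KEY (prod_id) REFERENCES products (prod_id)
        );

        CREATE TABLE elements (
          el_id           INT PRIMARY KEY,
          el_code         VARCHAR(20),
          el_manufacturer VARCHAR(50),
          part_id         INT NOT NULL,
          CONSTRAINT fk_elements_part
            FOREIGN KEY (part_id) REFERENCES parts (part_id)
        );

        -- the whole tree for one product in a single query
        SELECT p.prod_name, pt.part_name, e.el_code, e.el_manufacturer
        FROM products p
        JOIN parts pt        ON pt.prod_id = p.prod_id
        LEFT JOIN elements e ON e.part_id  = pt.part_id
        WHERE p.prod_id = 1;

    The second idea (elements packed into delimited strings) breaks down as soon as you need to search, update or constrain a single element, which is why the normalized version is usually worth the extra JOIN.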

    Read the article

  • Write, Read and Update Oracle CLOBs with PL/SQL

    - by robertphyatt
    Fun with CLOBS! If you are using Oracle, if you have to deal with text that is over 4000 bytes, you will probably find yourself dealing with CLOBs, which can go up to 4GB. They are pretty tricky, and it took me a long time to figure out these lessons learned. I hope they will help some down-trodden developer out there somehow. Here is my original code, which worked great on my Oracle Express Edition: (for all examples, the first one writes a new CLOB, the next one Updates an existing CLOB and the final one reads a CLOB back) CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (        p_document      IN VARCHAR2,        p_id            OUT NUMBER) IS      lob_loc CLOB; BEGIN    INSERT INTO TBL_CLOBHOLDERDDOC (CLOBHOLDERDDOC)        VALUES (empty_CLOB())        RETURNING CLOBHOLDERDDOC, CLOBHOLDERDDOCID INTO lob_loc, p_id;    DBMS_LOB.WRITE(lob_loc, LENGTH(UTL_RAW.CAST_TO_RAW(p_document)), 1, UTL_RAW.CAST_TO_RAW(p_document)); END; / CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (        p_document      IN VARCHAR2,        p_id            IN NUMBER) IS      lob_loc CLOB; BEGIN        SELECT CLOBHOLDERDDOC INTO lob_loc FROM TBL_CLOBHOLDERDDOC        WHERE CLOBHOLDERDDOCID = p_id FOR UPDATE;    DBMS_LOB.WRITE(lob_loc, LENGTH(UTL_RAW.CAST_TO_RAW(p_document)), 1, UTL_RAW.CAST_TO_RAW(p_document)); END; / CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (    p_id IN NUMBER,    p_clob OUT VARCHAR2) IS    lob_loc  CLOB; BEGIN    SELECT CLOBHOLDERDDOC INTO lob_loc    FROM   TBL_CLOBHOLDERDDOC    WHERE  CLOBHOLDERDDOCID = p_id;    p_clob := UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(lob_loc, DBMS_LOB.GETLENGTH(lob_loc), 1)); END; / As you can see, I had originally been casting everything back and forth between RAW formats using the UTL_RAW.CAST_TO_VARCHAR2() and UTL_RAW.CAST_TO_RAW() functions all over the place, but it had the nasty side effect of working great on my Oracle express edition on my developer box, but having all the CLOBs above a certain size display garbage when read back on the Oracle test database server . So...I kept working at it and came up with the following, which ALSO worked on my Oracle Express Edition on my developer box:   CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (     p_document      IN VARCHAR2,     p_id        OUT NUMBER) IS       lob_loc CLOB; BEGIN     INSERT INTO TBL_CLOBHOLDERDOC (CLOBHOLDERDOC)         VALUES (empty_CLOB())         RETURNING CLOBHOLDERDOC, CLOBHOLDERDOCID INTO lob_loc, p_id;     DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);   END; / CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (     p_document      IN VARCHAR2,     p_id        IN NUMBER) IS       lob_loc CLOB; BEGIN     SELECT CLOBHOLDERDOC INTO lob_loc FROM TBL_CLOBHOLDERDOC     WHERE CLOBHOLDERDOCID = p_id FOR UPDATE;     DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document); END; / CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (     p_id IN NUMBER,     p_clob OUT VARCHAR2) IS     lob_loc  CLOB; BEGIN     SELECT CLOBHOLDERDOC INTO lob_loc     FROM   TBL_CLOBHOLDERDOC     WHERE  CLOBHOLDERDOCID = p_id;     p_clob := DBMS_LOB.SUBSTR(lob_loc, DBMS_LOB.GETLENGTH(lob_loc), 1); END; / Unfortunately, by changing my code to what you see above, even though it kept working on my Oracle express edition, everything over a certain size just started truncating after about 7950 characters on the test server! 
Here is what I came up with in the end, which is actually the simplest solution and this time worked on both my express edition and on the database server (note that only the read function was changed to fix the truncation issue, and that I had Oracle worry about converting the CLOB into a VARCHAR2 internally): CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (        p_document      IN VARCHAR2,        p_id            OUT NUMBER) IS      lob_loc CLOB; BEGIN    INSERT INTO TBL_CLOBHOLDERDDOC (CLOBHOLDERDDOC)        VALUES (empty_CLOB())        RETURNING CLOBHOLDERDDOC, CLOBHOLDERDDOCID INTO lob_loc, p_id;    DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document); END; / CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (        p_document      IN VARCHAR2,        p_id            IN NUMBER) IS      lob_loc CLOB; BEGIN        SELECT CLOBHOLDERDDOC INTO lob_loc FROM TBL_CLOBHOLDERDDOC        WHERE CLOBHOLDERDDOCID = p_id FOR UPDATE;    DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document); END; / CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (    p_id IN NUMBER,    p_clob OUT VARCHAR2) IS BEGIN    SELECT CLOBHOLDERDDOC INTO p_clob    FROM   TBL_CLOBHOLDERDDOC    WHERE  CLOBHOLDERDDOCID = p_id; END; /   I hope that is useful to someone!
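
    As a quick way to exercise the final version, here is an anonymous-block smoke test (my own addition, not from the post) that writes a string past the 4000-byte threshold and reads it back:

        -- requires SET SERVEROUTPUT ON in SQL*Plus / SQL Developer
        DECLARE
          v_id   NUMBER;
          v_text VARCHAR2(32767);
        BEGIN
          PRC_WR_CLOB(RPAD('x', 8000, 'x'), v_id);
          PRC_RD_CLOB(v_id, v_text);
          DBMS_OUTPUT.PUT_LINE('round-tripped length: ' || LENGTH(v_text));
        END;
        /

    If the printed length comes back as 8000, the truncation problem described above is gone.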

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read SQL Server 2005 and 2008 Profiler traces stored as .trc files into the Data Flow, from where you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; the component just uses the trace file.

    Example usages:
    - Cache warming for SQL Server Analysis Services
    - Reading the flight recorder
    - Finding the longest-running queries on a server
    - Analyzing statements for CPU or memory, by user or some other criteria you choose

    Properties
    The Trace File Source adapter has two properties, which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported, for both the Database Engine (SQL Server) and Analysis Services. The properties are managed through the Editor form or can be set directly from the Properties Grid in Visual Studio.

        Property    Type         Description
        AccessMode  Enumeration  Determines how the Filename property is interpreted.
                                 The available values are DirectInput and Variable.
        Filename    String       The path of the trace file to load (*.trc). The value
                                 is either a full path, or the name of a variable which
                                 contains the full path to the trace file, depending on
                                 the AccessMode property.

    Trace Column Definition
    Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file, this may explain them and allow you to fix them. The component is built upon the trace management API provided by Microsoft. Unfortunately, the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what is specified. To overcome these limitations the component uses some simple XML files, which enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.

    TraceDataColumnsSQL.xml - SQL Server Database Engine Trace Columns
    TraceDataColumnsAS.xml - SQL Server Analysis Services Trace Columns

    The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" and "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst the most common issues have already been fixed through these files, we have implemented specific exception traps that direct you to the files, so you can fix any further issues caused by usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below:

        Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    Installation
    The component is provided as an MSI file, which you can download and run to install it. This simply places the files on disk in the correct locations and installs the assemblies in the Global Assembly Cache, as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about which components are installed, and restart any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, you will have to add the transformation to the Visual Studio toolbox manually: right-click the toolbox and select Choose Items..., select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process is described in detail in the related FAQ entry, How do I install a task or transform component?

    We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations. Please note that the Microsoft trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available and that your package is set up to execute with them; see the help topic 64-bit Considerations for Integration Services for more details.

    Downloads: Trace Sources for SQL Server 2005 | Trace Sources for SQL Server 2008

    Version History
    SQL Server 2008: Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
    SQL Server 2005: Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers

    We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers. Churn, sentiment analysis, identifying customer segments: all things that can be anticipated and hence preconceived and implemented inside an application. Read on for more information!

    TM Forum Management World, Nice, France - 18 May 2010

    News Facts
    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced its data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent.

    Product Details
    Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using:
    - More than 1,300 industry-specific measurements and key performance indicators (KPIs), such as network reliability statistics, provisioning metrics and customer churn propensity.
    - Embedded OLAP cubes for extremely fast dimensional analysis of business information.
    - Embedded data mining models for sophisticated trending and predictive analysis.
    - Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements.
    With Oracle Communications Data Model, CSPs can jump-start the implementation of a communications data warehouse in line with communications-industry standards, including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more.

    Supporting Quotes
    "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation.
    "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models.
They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC. "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore, cost-effective and flexible solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum. Supporting Resources Oracle Communications Oracle Communications Data Model Data Sheet Oracle Communications Data Model Podcast Oracle Data Warehousing Oracle Communications on YouTube Oracle Communications on Delicious Oracle Communications on Facebook Oracle Communications on Twitter Oracle Communications on LinkedIn Oracle Database on Twitter The Data Warehouse Insider Blog

    Read the article

  • How do I install the DB2 ODBC driver?

    - by Justin
    I have been trying, with no success, to install an IBM DB2 ODBC driver so that my PHP server can connect to a database. I've tried installing db2_connect and got all sorts of problems; I tried installing iSeries Access for Linux, and the RPM did not install right, nor did using alien produce any useful results. I've also tried DB2 Runtime v8.1, with no success. If I attempt to run the RPM it claims I need dependencies that I can't find in apt-get. Yum is also not very helpful, as it appears I don't have any repositories installed or lists... Running the plain RPM gives me this result in the terminal:

        # rpm -ivh iSeriesAccess-7.1.0-1.0.x86_64.rpm
        rpm: RPM should not be used directly install RPM packages, use Alien instead!
        rpm: However assuming you know what you are doing...
        error: Failed dependencies:
        /bin/ln is needed by iSeriesAccess-7.1.0-1.0.x86_64
        /sbin/ldconfig is needed by iSeriesAccess-7.1.0-1.0.x86_64
        /bin/rm is needed by iSeriesAccess-7.1.0-1.0.x86_64
        /bin/sh is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libc.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libc.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libc.so.6(GLIBC_2.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libdl.so.2()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libdl.so.2(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libgcc_s.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libm.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libm.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libodbcinst.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libodbc.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libpthread.so.0()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libpthread.so.0(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libpthread.so.0(GLIBC_2.3.2)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        librt.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        librt.so.1(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libstdc++.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libstdc++.so.6(CXXABI_1.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libstdc++.so.6(GLIBCXX_3.4)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64

    Using alien and running dpkg gives me this headache:

        $ alien iSeriesAccess-7.1.0-1.0.x86_64.rpm --scripts
        # dpkg -i iseriesaccess_7.1.0-2_amd64.deb
        (Reading database ... 127664 files and directories currently installed.)
        Preparing to replace iseriesaccess 7.1.0-2 (using iseriesaccess_7.1.0-2_amd64.deb) ...
        Unpacking replacement iseriesaccess ...
        post uninstall processing for iSeriesAccess 1.0...upgrade
        /var/lib/dpkg/info/iseriesaccess.postrm: line 8: [: upgrade: integer expression expected
        Setting up iseriesaccess (7.1.0-2) ...
        post install processing for iSeriesAccess 1.0...configure
        iSeries Access ODBC Driver has been deleted (if it existed at all) because its usage count became zero
        odbcinst: Driver installed. Usage count increased to 1. Target directory is /etc
        odbcinst: Driver installed. Usage count increased to 3. Target directory is /etc
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    So it seems the files installed right; my ODBC driver shows up, but db2cli.ini is nowhere to be found. So, several questions: Is there a better alternative for connecting PHP to DB2, say an Ubuntu package I can just install? Can someone direct me to the steps that make my Ubuntu server work with the RPM so I can build my DB2 instance? Also remember, I'm connecting to an iSeries remotely. I'm not using the DB2 Express-C thing, even though I did try it to get the DB2 PHP functions to work. And I don't have Zend, but I think I have every other package from the Ubuntu repositories. Help, thank you!

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read SQL Server 2005 and 2008 profiler traces stored as .trc files and bring them into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; the component works from the trace file alone.

    Example usages:
    - Cache warming for SQL Server Analysis Services
    - Reading the flight recorder
    - Finding the longest-running queries on a server
    - Analyzing statements for CPU or memory by user, or any other criteria you choose

    Properties
    The Trace File Source adapter has two properties, which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported, for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form, or can be set directly from the Properties Grid in Visual Studio.
    - AccessMode (Enumeration) - Determines how the Filename property is interpreted. The available values are DirectInput and Variable.
    - Filename (String) - The path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

    Trace Column Definition
    Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file this may explain them and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.
    - TraceDataColumnsSQL.xml - SQL Server Database Engine trace columns
    - TraceDataColumnsAS.xml - SQL Server Analysis Services trace columns
    The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g.
    "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml"
    "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml"
    If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files, so you can fix any further issues caused by usage or data scenarios we have not tested. An example error that you can fix through these files is shown below.

        Buffer exception writing value to column 'Column Name'. The string value is 999 characters
        in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML
        files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    Installation
    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache, as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations.
    Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

    Downloads
    - Trace Sources for SQL Server 2005
    - Trace Sources for SQL Server 2008

    Version History
    SQL Server 2008
    - Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
    SQL Server 2005
    - Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)
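    For comparison only - this is not part of the component - the Database Engine itself can read a .trc file with the built-in sys.fn_trace_gettable function. A minimal T-SQL sketch (the file path is hypothetical):

        -- Read a profiler trace file server-side; DEFAULT follows any rollover files.
        SELECT TextData, Duration, CPU, Reads, StartTime
        FROM   sys.fn_trace_gettable(N'C:\Traces\MyServerTrace.trc', DEFAULT)
        ORDER  BY Duration DESC;  -- e.g. longest-running statements first

    The Trace File Source achieves the same read inside the pipeline, without needing a server connection at all.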

    Read the article

  • Subsonic - How to use SQL Schema / Owner name as part of the namespace?

    - by CResults
    Hi there, I've just started using SubSonic 2.2 and so far I'm very impressed - I think it'll save me some serious coding time. Before I dive into using it full time, though, there is something bugging me that I'd like to sort out.
    In my current database (a SQL 2008 db) I have split the tables, views, stored procedures etc. up into separate chunks by schema/owner name, so all the customer tables are in the customer schema, products in the product schema, and so on. So to select from the customer's address table I'd do a select * from customer.address.
    Unfortunately, SubSonic ignores the schema/owner name and just gives me the base table name. This is fine as I've no duplicates between schemas (e.g. Customer.Address and Supplier.Address don't both exist), but I just feel the code could be clearer if I could split by schema. Ideally I'd like to be able to alter the namespace by schema/owner - I think this would have the least impact on SubSonic yet make the resulting code easier to read.
    Problem is, I've crawled all over the SubSonic source and don't have a clue how to do this (it doesn't help that I code in VB not C# - yes I know, blame the ZX Spectrum!!). If anyone has tackled this before or has an idea on how to solve it, I'd be really grateful. Thanks in advance. Ed

    Read the article

  • SQL analytical mash-ups deliver real-time WOW! for big data

    - by KLaker
    One of the overlooked capabilities of SQL as an analysis engine - because we all just take it for granted - is that you can mix and match analytical features to create some amazing mash-ups. As we move into the exciting world of big data, these mash-ups can really deliver those "wow, I never knew that" moments. While Java is an incredibly flexible and powerful framework for managing big data, there are some significant challenges in using Java and MapReduce to drive your analysis and create these "wow" discoveries. One of these "wow" moments was demonstrated at this year's OpenWorld during Andy Mendelsohn's general keynote session.
    Here is the scenario: we are looking for fraudulent activities in our big data stream, and in this case we identify potentially fraudulent activities by looking for specific patterns. We use geospatial tagging of each transaction so we can create a real-time fraud map for our business users. Where we start to move towards a "wow" moment is when we extend this basic use of spatial and pattern matching, as shown in the above dashboard screen, to incorporate spatial analytics within the SQL pattern matching clause. This allows us to compute the distance between transactions. Apologies for the quality of this screenshot; hopefully below you can see where we have extended our SQL pattern matching clause to use the location of each transaction and to calculate the distance between each transaction.
    This allows us to compare the time of the last transaction with the time of the current transaction and see if the distance between the two points is plausible given the time frame. Obviously if I buy something in Florida from my favourite bike store (maybe a new carbon saddle for my Trek) and then 5 minutes later the system sees my credit card details being used in Arizona, there is a high probability that the transaction in Arizona is fraudulent (I am fast on my Trek, but not that fast!) and we can flag it up in real time on our dashboard.
    In this post I have used the term "real-time" a couple of times, and this is an important point and one of the key reasons why SQL really is the only language to use if you want to analyse big data. One of the most important questions that comes up in every big data project is: how do we do analysis? Many enlightened customers are now realising that using Java-MapReduce to deliver analysis does not result in "wow" moments. These "wow" moments only come with SQL, because it offers a much richer environment, it is simpler to use and it is faster - which makes it possible to deliver real-time "Wow!". Below is a slide from Andy's session showing the results of a comparison of Java-MapReduce vs. SQL pattern matching to deliver our "wow" moment during our live demo.
    You can watch our analytical mash-up "Wow" demo that compares the power of 12c SQL pattern matching + spatial analytics vs. Java-MapReduce here:
    You can get more information about SQL Pattern Matching on our SQL Analytics home page on OTN, see here http://www.oracle.com/technetwork/database/bi-datawarehousing/sql-analytics-index-1984365.html.
    You can get more information about our spatial analytics here: http://www.oracle.com/technetwork/database-options/spatialandgraph/overview/index.html
    If you would like to watch the full Database 12c OOW presentation see here: http://medianetwork.oracle.com/video/player/2686974264001
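    To make the idea concrete, here is a minimal sketch of what such a mash-up can look like in Oracle 12c - all names (transactions, card_id, txn_time, location) and the thresholds are assumptions for illustration, not the actual demo code:

        -- Flag pairs of transactions on the same card that are implausibly far
        -- apart for the time elapsed between them (txn_time assumed TIMESTAMP,
        -- location assumed SDO_GEOMETRY).
        SELECT *
        FROM   transactions
        MATCH_RECOGNIZE (
          PARTITION BY card_id
          ORDER BY txn_time
          MEASURES a.txn_time AS first_txn,
                   b.txn_time AS suspect_txn,
                   SDO_GEOM.SDO_DISTANCE(a.location, b.location, 0.005, 'unit=MILE') AS miles_apart
          ONE ROW PER MATCH
          PATTERN (a b)
          DEFINE b AS b.txn_time - a.txn_time < INTERVAL '5' MINUTE
                  AND SDO_GEOM.SDO_DISTANCE(a.location, b.location, 0.005, 'unit=MILE') > 100
        );

    The pattern matching clause finds consecutive transactions per card, while the spatial function inside DEFINE supplies the distance test - exactly the kind of mash-up described above.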

    Read the article

  • Accessing Unicode Telugu text from MS-Access Database in Java

    - by Ravi Chandra
    I have an MS-Access database (an English-Telugu dictionary database) which contains a table storing English words and their Telugu meanings. I am writing a dictionary program in Java which queries the database for a keyword entered by the user and displays the Telugu meaning. My program works fine up to the point where I get the data from the database, but when I display it in a component like a JTextArea or JEditorPane, the Telugu text is rendered as '????'. Why is that happening? I have seen the solution for "1467412/reading-unicode-data-from-an-access-database-using-jdbc", which provides a workaround for Hebrew, but it is not working for Telugu; i.e. I included setCharset("UTF8") before querying the database, but I still get all '?'s. As soon as I get the data from the ResultSet I check the individual characters, and every Telugu character comes back encoded as 63 (the code for '?'). That is my observation, and I guess this must be some kind of encoding problem. I would be very glad if somebody could provide a solution. Thanks in advance.

    Read the article

  • 10 Steps to access Oracle stored procedures from Crystal Reports

    Requirements to access Oracle stored procedures from CR
    The following requirements must be met in order for CR to access an Oracle stored procedure:
    1. You must create a package that defines the REF CURSOR. This REF CURSOR must be strongly bound to a static pre-defined structure (see Strongly Bound REF CURSORs vs Weakly Bound REF CURSORs). This package must be created separately, and before the creation of the stored procedure.
    NOTE: Crystal Reports 9 native connections will support Oracle stored procedures created within packages, as well as Oracle stored procedures referencing weakly bound REF CURSORs. Crystal Reports 8.5 native connections will support Oracle stored procedures referencing weakly bound REF CURSORs.
    2. The procedure must have a parameter that is a REF CURSOR type. CR uses this parameter to access and define the result set that the stored procedure returns.
    3. The REF CURSOR parameter must be defined as IN OUT (read/write mode). After the procedure has opened and assigned a query to the REF CURSOR, CR will perform a FETCH call for every row of the query's result. This is why the parameter must be defined as IN OUT.
    4. Parameters can only be input (IN) parameters. CR is not designed to work with OUT parameters.
    5. The REF CURSOR variable must be opened and assigned its query within the procedure.
    6. The stored procedure can only return one record set. The structure of this record set must not change based on parameters.
    7. The stored procedure cannot call another stored procedure.
    8. If using an ODBC driver, it must be the CR Oracle ODBC driver (installed by CR). Other Oracle ODBC drivers (installed by Microsoft or Oracle) may not function correctly.
    9. If you are using the CR ODBC driver, you must ensure that in the ODBC Driver Configuration setup, under the Advanced tab, the option 'Procedure Return Results' is checked ON.
    10. If you are using the native Oracle driver and using hard-coded date selection within the procedure, the date selection must use either a string representation in the format 'YYYY-MM-DD' (i.e. WHERE DATEFIELD = '1999-01-01') or the TO_DATE function with the same format specified (i.e. WHERE DATEFIELD = TO_DATE('1999-01-01','YYYY-MM-DD')). For more information, refer to kbase article C2008023.
    11. Most importantly, this stored procedure must execute successfully in Oracle's SQL*Plus utility.
    If all of these conditions are met, you must next ensure you are using the appropriate database driver. Please refer to the sections in this white paper for a list of acceptable database drivers. A skeleton package and procedure are sketched below.
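    As an illustration of points 1-5, a minimal skeleton might look like the following - all object names (report_pkg, get_employees, employees) are hypothetical, a sketch of the shape CR expects rather than a drop-in solution:

        -- Package spec: defines the strongly typed REF CURSOR (requirement 1).
        CREATE OR REPLACE PACKAGE report_pkg AS
          TYPE emp_cur IS REF CURSOR RETURN employees%ROWTYPE;  -- strongly bound
        END report_pkg;
        /

        -- Procedure: an IN parameter plus the IN OUT REF CURSOR (requirements 2-5),
        -- opened and assigned inside the procedure body.
        CREATE OR REPLACE PROCEDURE get_employees (
          p_dept_id IN     NUMBER,
          p_results IN OUT report_pkg.emp_cur
        ) AS
        BEGIN
          OPEN p_results FOR
            SELECT * FROM employees WHERE department_id = p_dept_id;
        END get_employees;
        /

    Verifying that the procedure runs cleanly in SQL*Plus (requirement 11) before pointing CR at it will save a lot of head-scratching.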

    Read the article

  • T-SQL - Left Outer Joins - Filters in the WHERE clause versus the ON clause.

    - by Greg Potter
    I am trying to compare two tables to find rows in each table that are not in the other. Table 1 has a groupby column to create two sets of data within table 1:

        groupby     number
        ----------- -----------
        1           1
        1           2
        2           1
        2           2
        2           4

    Table 2 has only one column:

        number
        -----------
        1
        3
        4

    So Table 1 has the values 1, 2, 4 in group 2, and Table 2 has the values 1, 3, 4. I expect the following results when joining for group 2:

    `Table 1 LEFT OUTER Join Table 2`

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        2           2           NULL

    `Table 2 LEFT OUTER Join Table 1`

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        NULL        NULL        3

    The only way I can get this to work is if I put a WHERE clause for the first join:

        PRINT 'Table 1 LEFT OUTER Join Table 2, with WHERE clause'
        select table1.groupby as [T1_Groupby],
               table1.number  as [T1_Number],
               table2.number  as [T2_Number]
        from table1
        LEFT OUTER join table2
        --******************************
        on table1.number = table2.number
        --******************************
        WHERE table1.groupby = 2
          AND table2.number IS NULL

    and a filter in the ON clause for the second:

        PRINT 'Table 2 LEFT OUTER Join Table 1, with ON clause'
        select table1.groupby as [T1_Groupby],
               table1.number  as [T1_Number],
               table2.number  as [T2_Number]
        from table2
        LEFT OUTER join table1
        --******************************
        on table2.number = table1.number
        AND table1.groupby = 2
        --******************************
        WHERE table1.number IS NULL

    Can anyone come up with a way of not using the filter in the ON clause, but in the WHERE clause? The context of this is that I have a staging area in a database and I want to identify new records and records that have been deleted. The groupby field is the equivalent of a batch id for an extract, and I am comparing the latest extract in a temp table to a batch from yesterday stored in a partitioned table, which also holds all the previously extracted batches.

    Code to create tables 1 and 2:

        create table table1 (number int, groupby int)
        create table table2 (number int)
        insert into table1 (number, groupby) values (1, 1)
        insert into table1 (number, groupby) values (2, 1)
        insert into table1 (number, groupby) values (1, 2)
        insert into table2 (number) values (1)
        insert into table1 (number, groupby) values (2, 2)
        insert into table2 (number) values (3)
        insert into table1 (number, groupby) values (4, 2)
        insert into table2 (number) values (4)
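    One common workaround (a hedged sketch, not necessarily the only answer) is to push the group filter into a derived table, so the outer join itself stays purely on the join key and the remaining filter lives in the WHERE clause:

        -- Rows in table2 that are missing from group 2 of table1 (expected result: 3).
        SELECT t1.groupby AS [T1_Groupby],
               t1.number  AS [T1_Number],
               t2.number  AS [T2_Number]
        FROM table2 t2
        LEFT OUTER JOIN
             (SELECT groupby, number
              FROM table1
              WHERE groupby = 2) t1   -- batch filter applied before the join
          ON t2.number = t1.number
        WHERE t1.number IS NULL

    The derived table plays the role of "yesterday's batch", so the outer join only ever sees the rows that should take part in the comparison.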

    Read the article

  • Checksum Transformation

    The Checksum Transformation computes a hash value, the checksum, across one or more columns, returning the result in the Checksum output column. The transformation provides functionality similar to the T-SQL CHECKSUM function, but is encapsulated within SQL Server Integration Services, for use within the pipeline without code or a SQL Server connection. As featured in The Microsoft Data Warehouse Toolkit by Joy Mundy and Warren Thornthwaite from the Kimball Group - have a look at the book samples, especially the sample package for custom SCD handling.
    All input columns are passed through the transformation unaltered; those selected are used to generate the checksum, which is passed out through a single output column, Checksum. This does not restrict the number of columns available downstream from the transformation, as columns always flow through a transformation. The Checksum output column is in addition to all existing columns within the pipeline buffer.
    The Checksum Transformation uses an algorithm based on the .NET Framework GetHashCode method; it is not consistent with the T-SQL CHECKSUM() or BINARY_CHECKSUM() functions. The transformation does not support the following Integration Services data types: DT_NTEXT, DT_IMAGE and DT_BYTES.

    ChecksumAlgorithm Property
    The ChecksumAlgorithm property is defined with an enumeration. It was first added in v1.3.0, when the FrameworkChecksum was added. All previous algorithms are still supported for backward compatibility as ChecksumAlgorithm.Original (0).
    - Original - The original checksum function, with known issues around column separators and null columns. This was deprecated in the first SQL Server 2005 RTM release.
    - FrameworkChecksum - The hash function is based on the .NET Framework GetHash method for object types. This is based on the .NET Object.GetHashCode() method, which unfortunately differs between x86 and x64 systems. For that reason we now default to the CRC32 option.
    - CRC32 - Using a standard 32-bit cyclic redundancy check (CRC), this provides a more open implementation.

    The component is provided as an MSI file; however, to complete the installation you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? - just select Checksum from the SSIS Data Flow Items list in the Choose Toolbox Items window.

    Downloads
    The Checksum Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    - Checksum Transformation for SQL Server 2005
    - Checksum Transformation for SQL Server 2008
    - Checksum Transformation for SQL Server 2012

    Version History
    SQL Server 2012
    - Version 3.0.0.27 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2010)
    SQL Server 2008
    - Version 2.0.0.27 - Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". Fix for upgrade mappings between 2005 and 2008. (19 Oct 2010)
    - Version 2.0.0.24 - SQL Server 2008 release. Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (29 Oct 2008)
    - Version 2.0.0.6 - SQL Server 2008 pre-release. This version was released by mistake as part of the site migration, and had known issues. (20 Oct 2008)
    SQL Server 2005
    - Version 1.5.0.43 - Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". (19 Oct 2010)
    - Version 1.5.0.16 - Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (20 Oct 2008)
    - Version 1.4.0.0 - Installer refresh only. (22 Dec 2007)
    - Version 1.4.0.0 - Refresh for minor UI enhancements. (5 Mar 2006)
    - Version 1.3.0.0 - SQL Server 2005 RTM. The checksum algorithm has changed to improve cardinality when calculating multiple column checksums. The original algorithm is still available for backward compatibility. Fixed custom UI bug with Output column name not persisting. (10 Nov 2005)
    - Version 1.2.0.1 - SQL Server 2005 IDW 15 June CTP. A user interface is provided, as well as the ability to change the checksum output column name. (29 Aug 2005)
    - Version 1.0.0 - Public Release (Beta). (30 Oct 2004)
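    Since the component's hash is deliberately independent of the engine's built-in functions, here is a rough T-SQL sketch of the server-side analogues, for comparison only - the table and column names are hypothetical, and the values will not match the component's output:

        -- Hypothetical names; results will NOT match the SSIS component's CRC32
        -- or FrameworkChecksum values.
        SELECT CustomerKey,
               CHECKSUM(FirstName, LastName, Email)        AS RowChecksum,       -- collation-aware
               BINARY_CHECKSUM(FirstName, LastName, Email) AS RowBinaryChecksum  -- byte-level, case-sensitive
        FROM   dbo.DimCustomer;

    A typical use in either form is change detection: compare the stored checksum against the incoming one, and only update rows where they differ.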

    Read the article

  • How to combine a Distance and Keyword SQL query?

    - by Jason
    Hi Folks, I have two tables in my database called "points" and "category". A user will input info into both a location input and a keyword input text box. Then I want to find points in my table where the keyword matches either the "title" field in the points table or the "category", but which are within a certain distance from the user's location. I want to order the results by distance. Here are the two queries, which both work independently:

        $mysql = "SELECT *,
                  ( 3959 * acos( cos( radians('$search_lat') ) * cos( radians( lat ) )
                  * cos( radians( longi ) - radians('$search_lng') )
                  + sin( radians('$search_lat') ) * sin( radians( lat ) ) ) ) AS distance
                  FROM points
                  HAVING distance < '$radius'";

        $mysql2 = "SELECT * FROM `points`
                   LEFT JOIN category USING ( category_id )
                   WHERE (point_title LIKE '%$esc_catsearch%'
                          OR category.title LIKE '%$esc_catsearch%')";

    Here is what I tried:

        $sql_search = sprintf("SELECT *,point_id FROM points
                       WHERE point_title LIKE '%%%s%%'
                       UNION
                       SELECT *, ( 3959 * acos( cos( radians('%s') ) * cos( radians( lat ) )
                       * cos( radians( longi ) - radians('%s') )
                       + sin( radians('%s') ) * sin( radians( lat ) ) ) ) AS distance
                       FROM points
                       HAVING distance < '%s'
                       ORDER BY distance
                       LIMIT %d , %d",
                       $esc_catsearch,
                       mysql_real_escape_string($search_lat),
                       mysql_real_escape_string($search_lng),
                       mysql_real_escape_string($search_lat),
                       mysql_real_escape_string($radius),
                       $offset, $rowsPerPage);

    But it tells me there is no known column "distance". If I remove the ORDER BY phrase then it works, but I'm still not sure it is giving me the results I want. I also tried the query the other way around, with the distance search first, but that seems to ignore my keyword. Any thoughts would be much appreciated!
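    For what it's worth, the two working queries can usually be merged into a single statement rather than UNIONed - a hedged sketch along these lines, reusing the same interpolated variables as above (MySQL allows HAVING to reference a select alias, which the first query already relies on):

        SELECT points.*, category.title AS category_title,
               ( 3959 * acos( cos( radians('$search_lat') ) * cos( radians( lat ) )
               * cos( radians( longi ) - radians('$search_lng') )
               + sin( radians('$search_lat') ) * sin( radians( lat ) ) ) ) AS distance
        FROM points
        LEFT JOIN category USING ( category_id )
        WHERE (point_title LIKE '%$esc_catsearch%'
               OR category.title LIKE '%$esc_catsearch%')
        HAVING distance < '$radius'
        ORDER BY distance

    One likely reason the ORDER BY failed in the UNION attempt: a UNION takes its column names from the first SELECT, which has no distance column, so distance is unknown to the combined result.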

    Read the article
