Search Results

Search found 10815 results on 433 pages for 'stored procedure'.

Page 326/433

  • How do I upload files to Azure in the background with Delphi and OmniThread?

    - by mamcx
     I have tried to upload more than 100 files to Azure with Delphi. However, the calls block the main thread, so I want to do this with an async call or with a background thread. This is what I do now (as explained here):

       procedure TCloudManager.UploadTask(const input: TOmniValue; var output: TOmniValue);
       var
         FileTask: TFileTask;
       begin
         FileTask := input.AsRecord<TFileTask>;
         Upload(FileTask.BaseFolder, FileTask.LocalFile, FileTask.CloudFile);
       end;

       function TCloudManager.MassiveUpload(const BaseFolder: String; Files: TDictionary<String, String>): TStringList;
       var
         pipeline: IOmniPipeline;
         FileInfo: TPair<String, String>;
         FileTask: TFileTask;
       begin
         // set up pipeline
         pipeline := Parallel.Pipeline
           .Stage(UploadTask)
           .NumTasks(Environment.Process.Affinity.Count * 2)
           .Run;
         // insert URLs to be retrieved
         for FileInfo in Files do
         begin
           FileTask.LocalFile := FileInfo.Key;
           FileTask.CloudFile := FileInfo.Value;
           FileTask.BaseFolder := BaseFolder;
           pipeline.Input.Add(TOmniValue.FromRecord(FileTask));
         end; //for
         pipeline.Input.CompleteAdding;
         // wait for pipeline to complete
         pipeline.WaitFor(INFINITE);
       end;

     However, this blocks too (why? I don't understand).

    Read the article

  • JBoss Developer Studio exploded SAR/EAR/WAR deployment

    - by dr_hoppa
     I am new to JBoss DS and I could not figure out how to make a folder from my project deployable, an exploded SAR in my case. If I create a new server in the JBoss server view / JBoss AS perspective, select a SAR and make it deployable, then the SAR file can be deployed on the JBoss server. Another approach that I have tried is to mark the *-service.xml file as a deployable element (same procedure as with the SAR file) and to add the Eclipse project as a dependency to the classpath of the server ('Open launch configuration' / 'Classpath'); after doing this I got an error from JBoss telling me that the JBoss classes are not found. The compromise solution that I have found is to create a junction/symlink from the output of the Eclipse project 'bin' into the JBoss 'serve//deploy/xxx.sar'; this is a temporary solution but will not scale in the future as multiple services must be created. What I would need: the ability to mark the exploded folder as deployable, to have the classes/*service.xml files loaded from there, and to have hot-deployability for them. Any suggestions would help.

    Read the article

  • SQL Server 2008 alternative for SQL-DMO

    - by alexdelpiero
     Hi! I previously was using SQL-DMO to automatically generate scripts from the database. Now I have upgraded to SQL Server 2008 and I don't want to use this feature anymore, since Microsoft will be dropping it. Is there any other alternative I can use to connect to a server and generate scripts automatically from a database? Any answer is welcome. Thanks in advance. This is the procedure I was previously using:

       CREATE PROC GenerateSP (
           @server   varchar(30)  = null,
           @uname    varchar(30)  = null,
           @pwd      varchar(30)  = null,
           @dbname   varchar(30)  = null,
           @filename varchar(200) = 'c:\script.sql'
       )
       AS
       DECLARE @object int
       DECLARE @hr int
       DECLARE @return varchar(200)
       DECLARE @exec_str varchar(2000)
       DECLARE @spname sysname
       SET NOCOUNT ON
       -- Sets the server to the local server
       IF @server is NULL SELECT @server = @@servername
       -- Sets the database to the current database
       IF @dbname is NULL SELECT @dbname = db_name()
       -- Sets the username to the current user name
       IF @uname is NULL SELECT @uname = SYSTEM_USER
       -- Create an object that points to the SQL Server
       EXEC @hr = sp_OACreate 'SQLDMO.SQLServer', @object OUT
       IF @hr <> 0 BEGIN PRINT 'error create SQLOLE.SQLServer' RETURN END
       -- Connect to the SQL Server
       IF @pwd is NULL
       BEGIN
           EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname
           IF @hr <> 0 BEGIN PRINT 'error Connect' RETURN END
       END
       ELSE
       BEGIN
           EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname, @pwd
           IF @hr <> 0 BEGIN PRINT 'error Connect' RETURN END
       END
       -- Verify the connection
       EXEC @hr = sp_OAMethod @object, 'VerifyConnection', @return OUT
       IF @hr <> 0 BEGIN PRINT 'error VerifyConnection' RETURN END
       SET @exec_str = 'DECLARE script_cursor CURSOR FOR SELECT name FROM ' + @dbname + '..sysobjects WHERE type = ''P'' ORDER BY Name'
       EXEC (@exec_str)
       OPEN script_cursor
       FETCH NEXT FROM script_cursor INTO @spname
       WHILE (@@fetch_status <> -1)
       BEGIN
           SET @exec_str = 'Databases("' + @dbname + '").StoredProcedures("' + RTRIM(UPPER(@spname)) + '").Script(74077,"' + @filename + '")'
           EXEC @hr = sp_OAMethod @object, @exec_str, @return OUT
           IF @hr <> 0 BEGIN PRINT 'error Script' RETURN END
           FETCH NEXT FROM script_cursor INTO @spname
       END
       CLOSE script_cursor
       DEALLOCATE script_cursor
       -- Destroy the object
       EXEC @hr = sp_OADestroy @object
       IF @hr <> 0 BEGIN PRINT 'error destroy object' RETURN END
       GO

    Read the article

  • Optimizing mathematics on arrays of floats in Ada 95 with GNATC

    - by mat_geek
     Consider the code below. This code is supposed to process data at a fixed rate, in one-second batches. It is part of an overall system and can't take up too much time. When run over 100 lots of one second's worth of data, the program takes 35 seconds, or 35%. How do I improve the code to get the processing time down to a minimum? The code will be running on an Intel Pentium-M, which is a P3 with SSE2.

       package FF is new Ada.Numerics.Generic_Elementary_Functions(Float);

       N : constant Integer := 820;

       type A is array(1 .. N) of Float;
       type A3 is array(1 .. 3) of A;

       procedure F(state : in out A3; result : out A3; l : in A; r : in A) is
          s : Float;
          t : Float;
       begin
          for i in 1 .. N loop
             t := l(i) + r(i);
             t := t / 2.0;
             state(1)(i) := t;
             state(2)(i) := t * 0.25 + state(2)(i) * 0.75;
             state(3)(i) := t * 1.0 / 64.0 + state(2)(i) * 63.0 / 64.0;
             for r in 1 .. 3 loop
                s := state(r)(i);
                t := FF."**"(s, 6.0) + 14.0;
                if t > MAX then
                   t := MAX;
                elsif t < MIN then
                   t := MIN;
                end if;
                result(r)(i) := FF.Log(t, 2.0);
             end loop;
          end loop;
       end;

    Read the article

  • Using stdint.h and ANSI printf?

    - by nn
     Hi, I'm writing a bignum library, and I want to use efficient data types to represent the digits: an integer for the digit, and a long (if strictly double the size of the integer) for intermediate representations when adding and multiplying. I will be using some C99 functionality, but trying to conform to ANSI C. Currently I have the following in my bignum library:

       #include <stdint.h>

       #if defined(__LP64__) || defined(__amd64) || defined(__x86_64) || defined(__amd64__) || defined(__amd64__) || defined(_LP64)
       typedef uint64_t u_w;
       typedef uint32_t u_hw;
       #define BIGNUM_DIGITS 2048
       #define U_HW_BITS 16
       #define U_W_BITS 32
       #define U_HW_MAX UINT32_MAX
       #define U_HW_MIN UINT32_MIN
       #define U_W_MAX UINT64_MAX
       #define U_W_MIN UINT64_MIN
       #else
       typedef uint32_t u_w;
       typedef uint16_t u_hw;
       #define BIGNUM_DIGITS 4096
       #define U_HW_BITS 16
       #define U_W_BITS 32
       #define U_HW_MAX UINT16_MAX
       #define U_HW_MIN UINT16_MIN
       #define U_W_MAX UINT32_MAX
       #define U_W_MIN UINT32_MIN
       #endif

       typedef struct bn {
           int sign;
           int n_digits; // #digits should exclude carry (digits = limbs)
           int carry;
           u_hw tab[BIGNUM_DIGITS];
       } bn;

     As I haven't written a procedure to write the bignum in decimal, I have to analyze the intermediate array and printf the values of each digit. However, I don't know which conversion specifier to use with printf. Preferably I would like to write the digit to the terminal encoded in hexadecimal. The underlying issue is that I want two data types, one twice as long as the other, and to further use them with printf using standard conversion specifiers. It would be ideal if int were 32 bits and long were 64 bits, but I don't know how to guarantee this using a preprocessor, and when it comes time to use functions such as printf that rely solely on the standard types I no longer know what to use.

    Read the article

  • Hibernate: Opinions on Composite PK vs Surrogate PK

    - by Albert Kam
     As I understand it, whenever I use @Id and @GeneratedValue on a Long field inside a JPA/Hibernate entity, I'm actually using a surrogate key, and I think this is a very nice way to define a primary key, considering my not-so-good experiences with composite primary keys, where:

     - there is more than one business-value column combination that becomes a unique PK
     - the composite PK values get duplicated across the detail tables
     - the business values inside that composite PK cannot be changed

     I know Hibernate can support both types of PK, but I'm left wondering because of my previous chats with experienced colleagues, where they said that a composite PK is easier to deal with when doing complex SQL queries and stored procedure processes. They went on to say that using surrogate keys will complicate things when doing joins, and that there are several conditions in which it's impossible to do some things with surrogate keys. I'm sorry I can't explain the details here, since I wasn't clear enough when they explained it. Maybe I'll put more details next time. I'm currently trying to do a project and want to try out surrogate keys, since they don't get duplicated across tables and we can change the business-column values. And when some business-value combination needs to be unique, I can use something like:

       @Table(name="MY_TABLE", uniqueConstraints={
           @UniqueConstraint(columnNames={"FIRST_NAME", "LAST_NAME"}) // name + lastName combination must be unique
       })

     But I'm still in doubt because of the previous discussion about composite keys. Could you share your experiences in this matter? Thank you!

    Read the article

  • Setting Parameters from another Parameter In SSRS

    - by Mike
     I was able to get this working in SSRS 2008, but due to the fact that my company only has 2005 servers I need to downgrade the report to 2005. The idea is that for a given person name there are two key fields, EntityType and EntityId. So I have a parameter from a dataset where the Label is the Name and the Value is EntityType_EntityId, and I use a split function to take the left and right sides of the _. In 2008, I set the query parameters of the dataset to the split function and it works. In 2005, I set the Default Value of each Report Parameter instead. Now when I run the report and put textboxes showing the values of the parameters, the values are shown correctly but the query does not run. I am guessing that this is a lifecycle issue: Get Name Parameter, Run Report, THEN Set Parameters = Split of Name. But the problem with that is that the second time I run the report I should get results, and I do not. Does anyone know what I am doing wrong? I guess I can pass in the underscore-delimited string to the stored procedure and parse it there, but my question is: can this be done in the report? Reason being, other callers will pass in the parameters as two separate values.
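
     If the report-side split can't be made to work in 2005, the fallback the poster mentions (parsing the combined value inside the stored procedure) is straightforward. A minimal T-SQL sketch, with a hypothetical procedure name and parameters that are not taken from the original report:

       -- Hypothetical procedure: splits an 'EntityType_EntityId' value passed from SSRS.
       CREATE PROCEDURE dbo.GetPersonDetails
           @Entity varchar(100)   -- e.g. 'Person_12345'
       AS
       BEGIN
           DECLARE @EntityType varchar(50), @EntityId int;

           SELECT @EntityType = LEFT(@Entity, CHARINDEX('_', @Entity) - 1),
                  @EntityId   = CAST(SUBSTRING(@Entity, CHARINDEX('_', @Entity) + 1, 50) AS int);

           -- ... the real report query would use @EntityType and @EntityId here ...
           SELECT @EntityType AS EntityType, @EntityId AS EntityId;
       END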

    Read the article

  • google static maps via TIdHTTP

    - by cloudstrif3
     Hi all. I'm trying to return content from maps.google.com from within Delphi 2006 using the TIdHTTP component. My code is as follows:

       procedure TForm1.GetGoogleMap();
       var
         t_GetRequest: String;
         t_Source: TStringList;
         t_Stream: TMemoryStream;
       begin
         t_Source := TStringList.Create;
         try
           t_Stream := TMemoryStream.Create;
           try
             t_GetRequest := 'http://maps.google.com/maps/api/staticmap?' +
               'center=Brooklyn+Bridge,New+York,NY' +
               '&zoom=14' +
               '&size=512x512' +
               '&maptype=roadmap' +
               '&markers=color:blue|label:S|40.702147,-74.015794' +
               '&markers=color:green|label:G|40.711614,-74.012318' +
               '&markers=color:red|color:red|label:C|40.718217,-73.998284' +
               '&sensor=false';
             IdHTTP1.Post(t_GetRequest, t_Source, t_Stream);
             t_Stream.SaveToFile('google.html');
           finally
             t_Stream.Free;
           end;
         finally
           t_Source.Free;
         end;
       end;

     However, I keep getting the response HTTP/1.0 403 Forbidden. I assume this means that I don't have permission to make this request, but if I copy the URL into my web browser (IE 8) it works fine. Is there some header information that I need, or something else? Thanks in advance.

    Read the article

  • ARMv6 FIQ, acknowledge interrupt

    - by fastmonkeywheels
     I'm working with an i.MX35 ARMv6 core processor. I have interrupt 62 configured as a FIQ with my handler installed and being called. My handler at the moment just toggles an output pin so I can test latency with a scope. With the code below, once I trigger the FIQ it continues forever as fast as it can, apparently not being acknowledged. I'm triggering the FIQ by means of the Interrupt Force Register, so I'm assured that the source isn't triggering it this fast. If I disable interrupt 62 in the AVIC in my FIQ routine, the interrupt only triggers once. I have read the sections on the VIC Port in the ARM1136JF-S and ARM1136J-S Technical Reference Manual, which cover the proper exit procedure. I'm only having one FIQ handler, so I have no need to branch. The line that I don't understand is:

       STR R0, [R8,#AckFinished]

     I'm not sure what AckFinished is supposed to be or what this command is supposed to do. My FIQ handler is below:

       ldr  r9, IOMUX_ADDR12
       ldr  r8, [r9]
       orr  r8, #0x08          @ top LED
       str  r8, [r9]           @ turn on LED
       bic  r8, #0x08          @ top LED
       str  r8, [r9]           @ turn off LED
       subs pc, r14, #4

       IOMUX_ADDR12: .word 0xFC2A4000   @ remapped IOMUX addr

     My handler returns just fine and normal system operation resumes if I disable it after the first go; otherwise it triggers constantly and the system appears to hang. Do you think my assumption is right that the core isn't acknowledging the AVIC, or could there be another cause of this FIQ triggering?

    Read the article

  • Converting table columns to key value pairs

    - by TomD1
     I am writing a PL/SQL procedure that loads some data from Schema A into Schema B. They are both very different schemas and I can't change the structure of Schema B. Columns in various tables in Schema A (joined together in a view) need to be inserted into Schema B as key=value pairs in two columns of a table, each on a separate row. For example, an employee's first name might be present as employee.firstname in Schema A, but would need to be entered in Schema B as: id=>1, key=>'A123', value=>'Smith'. There are almost 100 keys, with the potential for more to be added in future. This means I don't really want to hardcode any of these keys. Sample code:

       create table schema_a_employees (
         emp_id    number(8,0),
         firstname varchar2(50),
         surname   varchar2(50)
       );

       insert into schema_a_employees values ( 1, 'James', 'Smith' );
       insert into schema_a_employees values ( 2, 'Fred', 'Jones' );

       create table schema_b_values (
         emp_id    number(8,0),
         the_key   varchar2(5),
         the_value varchar2(200)
       );

     I thought an elegant solution would most likely involve a lookup table to determine what value to insert for each key, and would not involve effectively hardcoding dozens of similar statements like:

       insert into schema_b_values ( 1, 'A123', v_firstname );
       insert into schema_b_values ( 1, 'B123', v_surname );

     What I'd like to be able to do is have a local lookup table in Schema A that lists all the keys from Schema B, along with a column that gives the name of the column in the table in Schema A that should be used to populate it, e.g. key "A123" in Schema B should be populated with the value of the column "firstname" in Schema A:

       create table schema_a_lookup (
         the_key              varchar2(5),
         the_local_field_name varchar2(50)
       );

       insert into schema_a_lookup values ( 'A123', 'firstname' );
       insert into schema_a_lookup values ( 'B123', 'surname' );

     But I'm not sure how I could dynamically use values from the lookup table to tell Oracle which columns to use. So my question is: is there an elegant solution to populate the schema_b_values table with the data from schema_a_employees without hardcoding for every possible key (i.e. A123, B123, etc)? Cheers.

    Read the article

  • How to create a console application that does not terminate?

    - by John
     Hello. In C++, a console application can have a message handler in its WinMain procedure, like this:

       int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
       {
           HWND hwnd;
           MSG msg;

       #ifdef _DEBUG
           CreateConsole("Title");
       #endif

           hwnd = CreateDialog(hInstance, MAKEINTRESOURCE(IDD_DIALOG1), NULL, DlgProc);
           PeekMessage(&msg, NULL, 0, 0, PM_NOREMOVE);
           while (msg.message != WM_QUIT)
           {
               if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
               {
                   if (IsDialogMessage(hwnd, &msg))
                       continue;
                   TranslateMessage(&msg);
                   DispatchMessage(&msg);
               }
           }
           return 0;
       }

     This makes the process not close until the console window has received a WM_QUIT message. I don't know how to do something similar in Delphi. My need is not exactly for a message handler, but for a lightweight "trick" to make the console application work like a GUI application using threads, so that, for example, two Indy TCP servers could be handled without the console application terminating the process. My question: how could this be accomplished?

    Read the article

  • How do I fetch a set of rows from a DataTable

    - by cmrhema
     Hi, I have a dataset that has two datatables. In the first datatable I have EmpNo, EmpName and EmpAddress. In the second datatable I have EmpNo, EmpJoinDate and EmpSalary. I want a result where I show EmpName as the label and his/her details in a gridview. I populate a datalist with the first table and have EmpNo as the data keys. Then I populate the gridview inside the datalist with the second table (EmpNo, EmpJoinDate and EmpSalary). My code is somewhat as below:

       Datalist1.DataSource = dt;
       Datalist1.DataBind();

       for (int i = 0; i < Datalist1.Items.Count; i++)
       {
           int EmpNo = Convert.ToInt32(Datalist1.DataKeys[i]);
           GridView gv = (GridView)Datalist1.FindControl("gv");
           gv.DataSource = dt2;
           gv.DataBind();
       }

     Now I have a problem: I have to bind the details of the corresponding employee to the gridview, whereas the above code displays all the details of all employees in the gridview. If we used IEnumerable we could give a condition such as Where(a => a.eno == EmpNo) and bind that list to the gridview. How do I do this with a DataTable? Kindly do not give me suggestions to alter the stored procedure that returns the values in the two tables, because that cannot be altered. I have to find a solution within the existing objects I have. Regards, Hema

    Read the article

  • SQL Server: Is it possible to prevent SQL Agent from failing a step on error?

    - by Kenneth
     I have a stored procedure that runs custom backups for around 60 SQL Servers (a mix of 2000 through 2008 R2). Occasionally, due to issues outside of my control (backup device inaccessible, network error, etc.), an individual backup on one or two databases will fail. This causes the entire step to fail, which means any subsequent backup commands are not executed and half of the databases on a given server may not be backed up. On the 2005+ boxes I am using TRY/CATCH blocks to manage these problems and continue backing up the remaining databases. On a 2000 server, however, I have no way to prevent an error like this from failing the entire step:

       Msg 3201, Level 16, State 1, Line 1
       Cannot open backup device 'db-diff(\PATH\DB-DIFF-03-16-2010.DIF)'. Operating system error 5(Access is denied.).
       Msg 3013, Level 16, State 1, Line 1
       BACKUP DATABASE is terminating abnormally.

     I am simply asking if anything like TRY/CATCH is possible in SQL 2000. I realize there are no built-in methods for this, so I guess I am looking for some creativity. Even when wrapping each backup (or any failing statement) via sp_executesql, the job fails instantly. Example:

       DECLARE @x INT, @iReturn INT
       PRINT 'Executing statement that will fail with 208.'
       EXEC @iReturn = sp_executesql N'SELECT * from TABLETHATDOESNTEXIST;'
       PRINT Cast(@iReturn AS NVARCHAR) -- In SSMS this return code prints. Executed as a job it fails and aborts before this statement.
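
     One workaround sometimes used on SQL Server 2000, where TRY/CATCH is unavailable and batch-aborting errors kill the whole job step, is to run each backup in its own client batch via osql from xp_cmdshell, so a failure only terminates that child process. This is a rough sketch under assumed paths and a trusted connection, not the poster's procedure:

       DECLARE @db sysname, @cmd varchar(1000), @rc int

       DECLARE dbs CURSOR FOR
           SELECT name FROM master.dbo.sysdatabases WHERE name <> 'tempdb'
       OPEN dbs
       FETCH NEXT FROM dbs INTO @db
       WHILE @@FETCH_STATUS = 0
       BEGIN
           -- -b makes osql return a non-zero exit code when the backup fails
           SET @cmd = 'osql -E -b -Q "BACKUP DATABASE [' + @db + '] TO DISK = ''D:\Backups\' + @db + '.bak''"'
           EXEC @rc = master.dbo.xp_cmdshell @cmd, no_output
           IF @rc <> 0
               PRINT 'Backup failed for ' + @db + ' - continuing with the next database.'
           FETCH NEXT FROM dbs INTO @db
       END
       CLOSE dbs
       DEALLOCATE dbs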

    Read the article

  • tikz: set appropriate x value for a node

    - by basweber
     This question resulted from the question here. I want to produce a curly brace which spans some lines of text. The problem is that I have to align the x coordinate manually, which is not a clean solution. Currently I use:

       \begin{frame}{Example}
         \begin{itemize}
           \item The long Issue 1 \tikz[remember picture] \node[coordinate,yshift=0.7em] (n1) {}; \\
                 spanning 2 lines
           \item Issue 2 \tikz[remember picture] \node[coordinate, xshift=1.597cm] (n2) {};
           \item Issue 3
         \end{itemize}
         \visible<2->{
           \begin{tikzpicture}[overlay,remember picture]
             \draw[thick,decorate,decoration={brace,amplitude=5pt}]
               (n1) -- (n2) node[midway, right=4pt] {One and two are cool};
           \end{tikzpicture}
         } % end visible
       \end{frame}

     which produces the desired result. The unsatisfying thing is that I had to figure out the xshift value of 1.597cm by trial and error (more or less); without the xshift argument the brace is not vertical. I guess there is an elegant way to avoid the explicit xshift value. The best way would IMHO be to calculate the maximum x value of the two nodes and use that (as already suggested by Geoff). But it would already be very handy to be able to explicitly define the absolute x values of both nodes while keeping their current y values. This would avoid the fiddly procedure of adapting the third decimal place to ensure that the brace looks vertical.

    Read the article

  • Entity Framework self referencing entity deletion.

    - by Viktor
     Hello. I have a structure of folders like this:

       Folder1
         Folder1.1
         Folder1.2
       Folder2
         Folder2.1
           Folder2.1.1

     and so on. The question is how to cascade-delete them (i.e. when Folder2 is removed, all of its children are also deleted). I can't set an ON DELETE action because MSSQL does not allow it. Can you give some suggestions? UPDATE: I wrote this stored proc; can I just leave it like this, or does it need some modifications?

       SET ANSI_NULLS ON
       GO
       SET QUOTED_IDENTIFIER ON
       GO
       CREATE PROCEDURE sp_DeleteFoldersRecursive
           @parent_folder_id int
       AS
       BEGIN
           SET NOCOUNT ON;

           IF @parent_folder_id = 0 RETURN;

           CREATE TABLE #temp(fid INT);
           DECLARE @Count INT;

           INSERT INTO #temp(fid)
           SELECT FolderId FROM Folders WHERE FolderId = @parent_folder_id;
           SET @Count = @@ROWCOUNT;

           WHILE @Count > 0
           BEGIN
               INSERT INTO #temp(fid)
               SELECT FolderId FROM Folders
               WHERE EXISTS (SELECT FolderId FROM #temp WHERE Folders.ParentId = #temp.fid)
                 AND NOT EXISTS (SELECT FolderId FROM #temp WHERE Folders.FolderId = #temp.fid);
               SET @Count = @@ROWCOUNT;
           END

           DELETE Folders FROM Folders INNER JOIN #temp ON Folders.FolderId = #temp.fid;

           DROP TABLE #temp;
       END
       GO
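
     On SQL Server 2005 and later, the same descendant set can also be collected with a recursive CTE, which avoids the manual WHILE loop. A sketch using the table and column names from the question (everything else is illustrative); deep trees may need OPTION (MAXRECURSION 0):

       CREATE PROCEDURE dbo.DeleteFolderTree
           @parent_folder_id int
       AS
       BEGIN
           SET NOCOUNT ON;

           WITH tree AS (
               -- start at the folder being removed...
               SELECT FolderId
               FROM   Folders
               WHERE  FolderId = @parent_folder_id
               UNION ALL
               -- ...and walk down to every descendant
               SELECT f.FolderId
               FROM   Folders f
               JOIN   tree t ON f.ParentId = t.FolderId
           )
           DELETE FROM Folders
           WHERE FolderId IN (SELECT FolderId FROM tree);
       END
       GO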

    Read the article

  • Strange execution times in T-SQL

    - by TonyP
     Hi all. I have two stored procedures; the first one calls the second. If I execute the second one alone it takes over 5 minutes to complete, but when executed within the first one it takes a little over 1 minute. What is the reason? Here is the first one:

       ALTER procedure [dbo].[schRefreshPriceListItemGroups]
       as
       begin tran

       delete from PriceListItemGroups
       if @@error != 0 goto rolback

       Insert PriceListItemGroups(comno, t$cuno, t$cpls, t$cpgs, t$dsca, t$cpru)
       SELECT distinct c.comno, c.t$cuno, c.t$cpls, I.t$cpgs, g.t$dsca, g.t$cpru
       FROM TTCCOM010nnn C
       JOIN TTDSLS032nnn PL ON PL.comno = c.Comno and PL.t$cpls = c.t$cpls
       JOIN TTIITM001nnn I ON I.t$item = pl.t$item AND I.comno = pl.comNo
       JOIN TTCMCS024nnn G ON g.T$cprg = I.t$cpgs AND g.comno = I.Comno
       WHERE c.t$cpls != ''
       order by comno desc, t$cuno, t$cpgs
       if @@error != 0 goto rolback

       -----------------------------------------------------
       Exec scrRefreshCustomersCatalogs
       -----------------------------------------------------

       commit tran
       return

       rolback:
       Rollback tran

     And the second one:

       Alter proc scrRefreshCustomersCatalogs
       as
       declare @baanIds table(id int identity(1,1), baanId varchar(12))
       declare @baanId varchar(12), @i int, @n int

       Insert @baanIds(BaanId) select baanId from ftElBaanIds()
       SELECT @I = 1, @n = max(id) from @baanIds
       select @i, @n

       Begin tran
       if @@error != 0 goto xRollBack

       WHILE @I <= @n
       Begin
           select @baanId = baanId from @baanIds where id = @i
           if @@error != 0 goto xRollBack

           Delete from customersCatalogs where comno + '-' + t$cuno = @baanId
           print Convert(varchar, @i) + ' baanId=' + @baanId

           Insert customersCatalogs exec customersCatalog @baanId
           if @@error != 0 goto xRollBack

           set @i = @i + 1;
       end

       Commit Tran
       Update statistics customersCatalogs with fullscan
       Return

       xRollBack:
       Print '*****Rolling back*************'
       Rollback tran
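
     Timing differences like this are usually easier to reason about with actual measurements of both runs than with wall-clock impressions. A minimal measurement sketch (not a fix): capture CPU, I/O and elapsed time for the standalone call and for the call made through the outer procedure, then compare the two execution plans.

       SET STATISTICS TIME ON;
       SET STATISTICS IO ON;

       EXEC scrRefreshCustomersCatalogs;              -- standalone run

       EXEC [dbo].[schRefreshPriceListItemGroups];    -- run that calls it via the outer procedure

       SET STATISTICS TIME OFF;
       SET STATISTICS IO OFF;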

    Read the article

  • Unregistering COM dll with a C# Setup Project

    - by lb
     Hi all. I've been stuck on this one for a while. I'll try to explain it in the simplest terms and to the best of my knowledge. I will honour any help. I've got a C# project which uses a VB6-compiled ActiveX DLL that I'm constantly updating. I compile the setup project, send it to the client and they run the setup. When building the updated setup project, I would increase the 'Version' of the setup project so it wouldn't bother with 'Another version is already installed'. After a few updates I began to notice that the DLL would not be updated to the new version by the installer. The client computer had the original DLL both installed and registered. First symptom: 'method not found' exceptions from the client C# code. This is not a shared DLL and only this application needs it. I've noticed that when uninstalling the application (through the usual procedure) the DLL is also not removed from the application folder, although I set this file's 'Permanent' property to false. The registration entries in the registry are maintained as well. I do update the version of the DLL in VS 6.0 (usually increasing the build number) before building it. Then in VS2008 I remove it from the References and add it again from the 'Browse' tab, without re-registering it on my dev machine and adding it from the COM tab. I've thought of these options:

     - a custom step in the Setup project to run regsvr32.exe /u 'hardcoded path of my dll' at uninstall (ugly)
     - somehow find out how the 'Isolate' property can work for me without registering
     - find out how to use setup project 'Conditions' that would actually check the version of the library and update the file accordingly at every install

     Any help would be incredibly welcome.

    Read the article

  • How can I test this SQL Server performance Utility?

    - by Martin Smith
     As part of my MSc I need to do a three-month project later this year. I have decided to do something which will likely be useful for me in the workplace and to spend the time getting to understand SQL Server internals. The deliverable for this project will be a performance advisor that looks at a variety of different rules: some static, such as finding redundant indexes; some more dynamic, such as using XEvents to find outlying stored procedure execution times when certain parameters are passed. I am struggling to come up with a good way of testing this, though. I can obviously design a "bad" database and a synthetic workload that my tool will pick up issues on, but I also need to demonstrate that it has real-world utility. Looking at the self-tuning database literature, it is common to use TPC benchmarks, but I've had a look at the TPC-C site and it looks very time-consuming to implement and not that good a fit for my project's testing needs in any event (I would still be able to "rig" it through the decisions I made on indexing or physical architecture). Plan A would be to find willing beta tester(s), but in the event that isn't possible I will need a fallback plan. The best idea I have come up with so far is to use the various MS sample applications as examples of real-world applications, e.g. http://msftdpprodsamples.codeplex.com/ and http://www.asp.net/community/projects/ . Does anyone have any better suggestions?
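
     For the "static" class of rule mentioned above, a rough T-SQL sketch of a redundant-index check gives a feel for what such an advisor might do: flag indexes whose key columns are a leading prefix of another index on the same table. This is a simplified illustration (it ignores included columns, uniqueness and filtered indexes), not part of the project described:

       ;WITH keys AS (
           SELECT i.object_id, i.index_id, i.name,
                  -- comma-separated key column ids in key order, with a trailing comma
                  keycols = STUFF((
                      SELECT ',' + CAST(ic.column_id AS varchar(10))
                      FROM sys.index_columns ic
                      WHERE ic.object_id = i.object_id
                        AND ic.index_id  = i.index_id
                        AND ic.is_included_column = 0
                      ORDER BY ic.key_ordinal
                      FOR XML PATH('')), 1, 1, '') + ','
           FROM sys.indexes i
           WHERE i.index_id > 0
       )
       SELECT t.name AS table_name,
              a.name AS narrower_index,
              b.name AS wider_index
       FROM keys a
       JOIN keys b
         ON a.object_id = b.object_id
        AND a.index_id <> b.index_id
        AND b.keycols LIKE a.keycols + '%'     -- a's keys are a leading prefix of b's
       JOIN sys.tables t ON t.object_id = a.object_id;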

    Read the article

  • Why might stable_sort be affecting my hashtable values?

    - by zebraman
     I have defined a struct ABC to contain an int ID, string NAME and string LAST_NAME. My procedure is this: read in a line from an input file, parse each line into first name and last name, and insert them into the ABC struct; the struct's ID is given by the number of the input line. Then, push_back the struct into a vector masterlist. I am also hashing these into a hashtable defined as vector< vector<ABC> >, using first name and last name as keywords. That is, if my data is:

       Garfield Cat
       Snoopy Dog
       Cat Man

     then the keyword Cat hashes to a vector containing Garfield Cat and Cat Man. I insert the structs into the hashtable using push_back again. The problem is, when I call stable_sort on my masterlist, my hashtable is affected for some reason. I thought it might be happening because the entries are being ordered differently, so I tried making a copy of the masterlist and sorting that instead, and it still affects the hashtable even though the original masterlist is unaffected. Any ideas why this might be happening?

    Read the article

  • Version control - stubs and mocks

    - by Tesserex
    For the sake of this question, I don't care about the difference between stubs, mocks, dummies, fakes, etc. Let's say I'm working on a project with one other person. I'm working on component A and he is working on component B. They work together, so I stub out B for testing, and he stubs out A. We're working in a DVCS, let's say Git, because that's actually the case here. When it comes time to merge our components together, we need to get the "real" files from my A and his B, but throw away all the fake stuff. During development, it's likely (unless I need to learn how to properly stub things) that the fakes have the same file names and class names as the real thing. So my question is: what is the proper procedure for doing version control on the fakes, and how are the components correctly merged, making sure to grab the real thing and not the fake? I would guess that one way is just do the merge, expect it to say CONFLICT, and then manually delete all the fake code out of the half-merged files. But this sounds tedious and inefficient. Should the fake things not go under VC at all? Should they be ripped out just before merging? Sorry if the answer to this should be obvious or trivial, I'm just looking for a "suggested practice" here.

    Read the article

  • Best approach, Dynamic OpenXML in T-SQL

    - by Martin Ongtangco
     Hello, I'm storing XML values in an entry in my database. Originally, I extract the xml datatype into my business logic and then fill the XML data into a DataSet. I want to improve this process by loading the XML right in the T-SQL, instead of getting the XML as a string and converting it in the BL. My issue is this: each XML entry is dynamic, meaning it can contain any columns created by the user. I tried using this approach, but it's giving me an error:

       CREATE PROCEDURE spXMLtoDataSet
           @id uniqueidentifier,
           @columns varchar(max)
       AS
       BEGIN
           -- SET NOCOUNT ON added to prevent extra result sets from
           -- interfering with SELECT statements.
           SET NOCOUNT ON;

           DECLARE @name varchar(300);
           DECLARE @i int;
           DECLARE @xmlData xml;

           (SELECT @xmlData = data, @name = name
            FROM XmlTABLES
            WHERE (tableID = ISNULL(@id, tableID)));

           EXEC sp_xml_preparedocument @i OUTPUT, @xmlData

           DECLARE @tag varchar(1000);
           SET @tag = '/NewDataSet/' + @name;

           DECLARE @statement varchar(max)
           SET @statement = 'SELECT * FROM OpenXML(@i, @tag, 2) WITH (' + @columns + ')';
           EXEC (@statement);

           EXEC sp_xml_removedocument @i
       END

     where I pass a dynamically written @columns. For example:

       spXMLtoDataSet 'bda32dd7-0439-4f97-bc96-50cdacbb1518', 'ID int, TypeOfAccident int, Major bit, Number_of_Persons int, Notes varchar(max)'

     but it kept on throwing me this exception:

       Msg 137, Level 15, State 2, Line 1
       Must declare the scalar variable "@i".
       Msg 319, Level 15, State 1, Line 1
       Incorrect syntax near the keyword 'with'. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.
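
     The first error comes from the fact that the string handed to EXEC() runs as a separate batch, so it cannot see the local variables @i and @tag; only @columns has been concatenated in as text. One way around this is to embed the document handle and path as literals (or to use sp_executesql with parameters). A standalone sketch with made-up element names, not the poster's schema:

       DECLARE @xmlData xml, @i int, @tag varchar(1000), @columns varchar(max), @statement nvarchar(max);

       SET @xmlData = N'<NewDataSet><Accident><ID>1</ID><Major>1</Major></Accident></NewDataSet>';
       SET @tag     = '/NewDataSet/Accident';
       SET @columns = 'ID int, Major bit';

       EXEC sp_xml_preparedocument @i OUTPUT, @xmlData;

       -- Embed the handle and the path as literals; only the column list stays dynamic.
       SET @statement = N'SELECT * FROM OPENXML(' + CAST(@i AS nvarchar(20)) + N', '''
                      + @tag + N''', 2) WITH (' + @columns + N')';
       EXEC (@statement);

       EXEC sp_xml_removedocument @i;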

    Read the article

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
     Hello, I have a relatively complex query, with several self joins, which works on a rather large table. For that query to perform faster, I need to work with only a subset of the data. Said subset can range between 12,000 and 120,000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow . As you can see, I was using a CTE to return the data subset before, which caused some performance problems, as SQL Server was re-running the SELECT statement in the CTE for every join instead of running it once and reusing its data set. The alternative, using temporary tables, worked much faster (when testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason... UDFs do allow table variables, however, so I tried that, but the performance is absolutely horrible, as it takes 1m40 for my query to complete, whereas the CTE version only took 40 minutes. I believe the table variables are slow for the reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure . The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that the CTE and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options for my UDF to perform quickly? Thanks a lot in advance.
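
     A workaround often suggested in this situation is to turn the multi-statement UDF into a stored procedure, since procedures can use #temp tables, and have the caller capture the result set. A minimal sketch with assumed table and column names (not the poster's schema):

       CREATE PROCEDURE dbo.GetDataSubset
           @param1 int
       AS
       BEGIN
           SET NOCOUNT ON;

           -- materialize the expensive subset once
           SELECT *
           INTO   #subset
           FROM   dbo.BigTable
           WHERE  SomeColumn = @param1;

           -- the complex self-joining query now runs against the small #subset
           SELECT a.Id, b.Id AS RelatedId
           FROM   #subset a
           JOIN   #subset b ON b.ParentId = a.Id;
       END
       GO

       -- Caller side: capture the rows instead of selecting from a UDF
       CREATE TABLE #result (Id int, RelatedId int);
       INSERT INTO #result EXEC dbo.GetDataSubset @param1 = 42;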

    Read the article

  • scheme struct question

    - by qzar
       ;; definition of the structure "book"
       ;; author: string - the author of the book
       ;; title: string - the title of the book
       ;; genre: symbol - the genre
       (define-struct book (author title genre))

       (define lotr1 (make-book "John R. R. Tolkien" "The Fellowship of the Ring" 'Fantasy))
       (define glory (make-book "David Brin" "Glory Season" 'ScienceFiction))
       (define firstFamily (make-book "David Baldacci" "First Family" 'Thriller))

       (define some-books (list lotr1 glory firstFamily))

       ;; count-books-for-genre: symbol (list of books) -> number
       ;; the procedure takes a symbol and a list of books and produces the number
       ;; of books of the given genre
       ;; example: (count-books-for-genre 'Fantasy some-books) should produce 1
       (define (count-books-for-genre genre lob)
         (if (empty? lob)
             0
             (if (symbol=? (book-genre (first lob)) genre)
                 (+ 1 (count-books-for-genre (rest lob) genre))
                 (count-books-for-genre (rest lob) genre))))

       (count-books-for-genre 'Fantasy some-books)

     This produces the following exception: first: expected argument of type non-empty list; given 'Fantasy. I don't understand what the problem is. Can somebody give me an explanation? Thank you very much!

    Read the article

  • Differences in asynchronous VB.NET and C#?

    - by Jim Beam
     So I've been posting this week for help with an API that has asynchronous calls. You can view the CODE here: http://stackoverflow.com/questions/2638920/c-asynchronous-event-procedure-does-not-fire . With a little more digging, I found out that the API is written in VB.NET, so I created a VB.NET example and guess what . . . the asynchronous calls work like a charm. So now I need to find out why the calls are not firing in the C# code I have. The API being written in VB really shouldn't matter, but again, the VB.NET code works and my C# does not. Is there a problem with the event handler and how it's being declared that causes it not to fire? UPDATE: VB code added.

       Imports ClientSocketServices
       Imports DHS_Commands
       Imports DHS
       Imports Utility
       Imports SocketServices

       Class Window1

           Public WithEvents AppServer As New ClientAppServer
           Public Token As LoginToken

           Private Sub login()
               Dim handler As New LoginHandler
               Token = handler.RequestLogin("admin", "admin", localPort:=12000, serverAddress:="127.0.0.1", serverLoginPort:=11000, clienttype:=LoginToken.eClientType.Client_Admin, timeoutInSeconds:=20)
               If Token.Authenticated Then
                   AppServer = New ClientAppServer(Token, True)
                   AppServer.RetrieveCollection(GetType(Gateways))
               End If
           End Sub

           Private Sub ReceiveMessage(ByVal rr As RemoteRequest) Handles AppServer.ReceiveRequest
               If TypeOf (rr.TransferObject) Is Gateways Then
                   MsgBox("dd")
               End If
           End Sub

           Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs) Handles Button1.Click
               login()
           End Sub

       End Class

    Read the article

  • Running SQLite 3 in MicroApache

    - by Alien426
     I'm running MicroApache (http://microapache.amadis.sytes.net) on Windows XP and would like to use SQLite 3 databases. The PHP version is 5.2.9-2. My MicroApache version has SQLite 2 support through two lines in the php.ini:

       extension=php_pdo.dll
       extension=php_sqlite.dll

     I test whether the extension works in 3 ways:
     1. phpinfo()
     2. extension_loaded() and get_loaded_extensions()
     3. using sample code that var_dump()s the constant SQLITE3_NUM (should be the integer 2) and tries to create a database (error: class 'SQLite3' does not exist)

     Things I have tried (if I can remember them all):
     1. copied php_sqlite3.dll from a full installation of PHP and added "extension=php_sqlite3.dll" to php.ini - error "Procedure entry point gc_remove_zval_from_buffer was not found in php5ts.dll"
     2. compressed the DLL with UPX (like the other DLLs of MicroApache seem to be) - does not display an error on start, nor in the log file, but does not work
     3. tried various things with php.ini - created a section "[sqlite3]", prefixed "sqlite3." to "extension_dir=." and "extension=php_sqlite3.dll", ...
     4. tried to use PDO - it says it 'could not find driver'

     Who can help me get SQLite 3 to work?

    Read the article
