Search Results


  • Accounting System for Winforms / SQL Server applications

    - by Craig L
    If you were going to write a vertical-market C# / WinForms / SQL Server application and needed an accounting "engine" for it, what software package would you choose? By vertical market, I mean the application is intended to solve a particular set of business problems, not to be a generic accounting application. Thus the value-add of the program is the 70% of non-accounting functionality in the finished product; the 30% that is accounting merely covers the basic accounting needs of the business.

    I said all that to lead up to this: the accounting engine needs a royalty-free runtime license and must not be super expensive. I've found a couple of C#/SQL Server accounting apps that can be had with source code and a royalty-free runtime for $150k+, and that would be fine for greenfield development funded by a large bankroll, but for smaller apps that sort of capital outlay isn't feasible. Something along the lines of $5k to $15k for a royalty-free runtime would be more reasonable. Open source would be even better.

    By accounting engine, I mean something that takes care of, at a minimum:

    - General Ledger
    - Invoices
    - Statements
    - Accounts Receivable
    - Payments / Credits

    Basically, an accounting engine should let the developer concentrate on the value-added part of the solution (industry-specific business practices and processes) rather than worry about how to implement the low-level details of a double-entry accounting system. Ideally, the accounting engine would be licensed on a royalty-free runtime basis. Suggestions, please?

  • Accommodating hierarchical data in SQL Server 2005 database design

    - by Remnant
    Context: I am fairly new to database design (= know the basics) and am grappling with how best to design my database for a project I am currently working on. In short, my database will keep a log of which employees have attended certain health and safety courses throughout the year. There are multiple types of course, e.g. moving objects, fire safety, hygiene, etc. In terms of my database design I need to accommodate the following:

    - Each location can have multiple divisions
    - Each division can have multiple departments
    - Each department can have multiple functions
    - Each function can have multiple job roles
    - Each job role can have different course requirements

    Also note that the structure at each location may not be the same, e.g. the departments within divisions are not the same across locations, and the functions within departments may also differ.

    Edit - updated to better articulate the problem. Let's assume I am just looking at Location, Division and Department, and I have my database as follows:

        LocationTable        DivisionTable        DepartmentTable
        LocationID (PK)      DivisionID (PK)      DepartmentID (PK)
        LocationName         DivisionName         DepartmentName

    There is a many-to-many relationship between Locations and Divisions and also between Departments and Divisions. Suppose I set up a 'Junction Table' as follows:

        Location_Division
        LocationID (FK)
        DivisionID (FK)

    Using Location_Division I could easily pull back the Divisions for any Location. However, suppose I want to pull back all departments for a given Division in a given Location. If I set up another 'Junction Table' for Division and Department, then I can't see how I would differentiate Division by Location:

        Division_Department
        DivisionID (FK)
        DepartmentID (FK)

        Location_Division         Division_Department
        LocationID  DivisionID    DivisionID  DepartmentID
        1           1             1           1
        1           2             1           2
        2           1             2           1
        2           2             2           2

    Do I need to expand the number of columns in my 'Junction Table', e.g.:

        Location_Division_Department
        LocationID (FK)
        DivisionID (FK)
        DepartmentID (FK)

        Location_Division_Department
        LocationID  DivisionID  DepartmentID
        1           1           1
        1           1           2
        1           1           3
        2           1           1
        2           1           2
        2           1           3
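    A minimal T-SQL sketch of that three-column junction approach, using the table names from the question (the foreign-key targets are assumptions about the base tables):

        CREATE TABLE Location_Division_Department (
            LocationID   int NOT NULL REFERENCES LocationTable (LocationID),
            DivisionID   int NOT NULL REFERENCES DivisionTable (DivisionID),
            DepartmentID int NOT NULL REFERENCES DepartmentTable (DepartmentID),
            PRIMARY KEY (LocationID, DivisionID, DepartmentID)
        );

        -- All departments for division 1 at location 2:
        SELECT d.DepartmentName
        FROM Location_Division_Department ldd
        JOIN DepartmentTable d ON d.DepartmentID = ldd.DepartmentID
        WHERE ldd.LocationID = 2
          AND ldd.DivisionID = 1;

    With this shape, Location_Division is derivable (SELECT DISTINCT LocationID, DivisionID), so the three-column table can replace the two-column junction tables rather than sit alongside them.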

  • How to build a GridView like DataBound Templated Custom ASP.NET Server Control

    - by Deyan
    I am trying to develop a very simple templated custom server control that resembles GridView. Basically, I want the control to be added to the .aspx page like this:

        <cc:SimpleGrid ID="SimpleGrid1" runat="server">
            <TemplatedColumn>ID: <%# Eval("ID") %></TemplatedColumn>
            <TemplatedColumn>Name: <%# Eval("Name") %></TemplatedColumn>
            <TemplatedColumn>Age: <%# Eval("Age") %></TemplatedColumn>
        </cc:SimpleGrid>

    and when providing the following data source:

        DataTable table = new DataTable();
        table.Columns.Add("ID", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Age", typeof(int));
        table.Rows.Add(1, "Bob", 35);
        table.Rows.Add(2, "Sam", 44);
        table.Rows.Add(3, "Ann", 26);
        SimpleGrid1.DataSource = table;
        SimpleGrid1.DataBind();

    the result should be an HTML table like this one:

        -------------------------------
        | ID: 1 | Name: Bob | Age: 35 |
        -------------------------------
        | ID: 2 | Name: Sam | Age: 44 |
        -------------------------------
        | ID: 3 | Name: Ann | Age: 26 |
        -------------------------------

    The main problem is that I cannot define the TemplatedColumn. When I tried to do it like this ...

        private ITemplate _TemplatedColumn;

        [PersistenceMode(PersistenceMode.InnerProperty)]
        [TemplateContainer(typeof(TemplatedColumnItem))]
        [TemplateInstance(TemplateInstance.Multiple)]
        public ITemplate TemplatedColumn
        {
            get { return _TemplatedColumn; }
            set { _TemplatedColumn = value; }
        }

    ... and subsequently instantiated the template in CreateChildControls, I got the following result, which is not what I want:

        -----------
        | Age: 35 |
        -----------
        | Age: 44 |
        -----------
        | Age: 26 |
        -----------

    I know that what I want to achieve is pointless and that I can use DataGrid to achieve it, but I am giving this very simple example because if I know how to do this I will be able to develop the control that I need. Thank you.

  • Blob in Java/Hibernate/sql-server 2005

    - by Ramy
    Hi, I'm trying to insert an HTML blob into our SQL Server 2005 database. I've been using the data type [text] for the field the blob will live in, and I've also put a '@Lob' annotation on the field in the domain model. The problem comes in when the HTML blob I'm attempting to store is larger than 65536 characters; that seems to be the character limit for a text data type when using the @Lob annotation. Ideally I'd like to keep the whole blob intact rather than chunk it up into multiple rows in the database. I appreciate any help or insight that might be provided. Thanks! _Ramy

    Allow me to clarify the annotation:

        @Lob
        @Column(length = Integer.MAX_VALUE) // per an answer on stackoverflow
        private String htmlBlob;

    Database side (sql-server-2005):

        CREATE TABLE dbo.IndustrySectorTearSheetBlob(
            ...
            htmlBlob text NULL
            ...
        )

    Still seeing truncation after 65536 characters...

    EDIT: I've printed out the contents of all possible strings (only 10 right now) that would be inserted into the database. Each string seems to contain all characters, judging by the fact that the closing html tag is present at the end of the string...
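    One avenue worth testing (an assumption about the cause, not a confirmed fix): text is deprecated in favor of varchar(max), and some drivers truncate text reads at the session's TEXTSIZE setting. A sketch of both adjustments, using the table from the question:

        -- Move the column off the deprecated text type (varchar(max) holds up to 2 GB):
        ALTER TABLE dbo.IndustrySectorTearSheetBlob
            ALTER COLUMN htmlBlob varchar(max) NULL;

        -- Or raise the per-session limit if the column must stay text:
        SET TEXTSIZE 2147483647;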

  • problem with ajax on the hosting server

    - by nelly
    When I implemented the chatting function, I used Ajax to send messages from one file to another. It works on localhost, but when I upload it to the remote server it doesn't work. Can you tell me why? Does Ajax need special configuration? Here are the files I used:

    - Ajax.js: has the "ajax_send" function that I use in chatbox.js
    - chatbox.js: the functions I use to send data from one PHP file to another; it displays the state (a user signing in, a new message being sent, and so on)
    - user.php: responsible for writing the user name to the text file usersonline.txt and then displaying the online users in the online-users column
    - send.php: writes to room1.txt
    - recive.php: reads room1.txt and then writes the content into the chat box

    I believe the problem comes from the Ajax code in Ajax.js, so please help me find the problem and how to solve it. Does Ajax need special server settings? Because it was working on localhost!

  • PHP PDO SQL Server Select statement not replacing question marks

    - by Metropolis
    A while ago I wrote a database class which uses PDO in order to connect to SQL Server databases and also to MySQL databases. It has always replaced the question marks fine when using it on the MySQL databases, but for the SQL Server databases I had to create a workaround which basically replaces the question marks manually. Here is the code for that:

        if($this->getPDODriver() == 'odbc' && !empty($values_a) && substr_count($query_s, "?") > 0) {
            $query_s = preg_replace(array_fill(0, substr_count($query_s, "?"), '/\?/'), $values_a, $query_s, 1);
            $values_a = NULL;
        }

    Now, I understand that this completely defeats the purpose of the question marks and PDO, but it has been working fine for me. What I would like to do now, though, is find out why the question marks are not getting replaced in the first place, and remove this workaround. Suppose I have a select statement like the following:

        SELECT * FROM database WHERE value = ?

    That is what the query looks like when I go to prepare it, but when I display the query results, it is a blank array. Just remember, this class is working fine with MySQL, and it is working fine with the workaround above. So I know it has something to do with the question marks.

  • Java Stop Server Thread

    - by ikurtz
    The following code is server code in my app:

        private int serverPort;
        private Thread serverThread = null;

        public void networkListen(int port) {
            serverPort = port;
            if (serverThread == null) {
                Runnable serverRunnable = new ServerRunnable();
                serverThread = new Thread(serverRunnable);
                serverThread.start();
            } else {
            }
        }

        public class ServerRunnable implements Runnable {
            public void run() {
                try {
                    //networkConnected = false;
                    //netMessage = "Listening for Connection";
                    //networkMessage = new NetworkMessage(networkConnected, netMessage);
                    //setChanged();
                    //notifyObservers(networkMessage);

                    ServerSocket serverSocket = new ServerSocket(serverPort, backlog);
                    commSocket = serverSocket.accept();
                    serverSocket.close();
                    serverSocket = null;

                    //networkConnected = true;
                    //netMessage = "Connected: " + commSocket.getInetAddress().getHostAddress() + ":" + commSocket.getPort();
                    //networkMessage = new NetworkMessage(networkConnected, netMessage);
                    //setChanged();
                    //notifyObservers(networkMessage);
                } catch (IOException e) {
                    //networkConnected = false;
                    //netMessage = "ServerRunnable Network Unavailable";
                    //System.out.println(e.getMessage());
                    //networkMessage = new NetworkMessage(networkConnected, netMessage);
                    //setChanged();
                    //notifyObservers(networkMessage);
                }
            }
        }

    The code sort of works, i.e. if I'm attempting a straight connection, both ends communicate and update. The issue is that while I'm listening for a connection, if I want to quit listening, the server thread continues running and causes problems. I know I should not use .stop() on a thread, so I was wondering what the solution would look like with this in mind?

    EDIT: commented out unneeded code.

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do.

    Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int,
            col2 int,
            col3 int,
            col4 int
        PRIMARY KEY (col1, col2)

        INDEX IX_1
            col3
        INCLUDE
            col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1 = 12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1, and the query that I gave it offers no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index?

    It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in deletes and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in the details section the other indices that need to be updated, but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?
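    One rough way to compare how much write maintenance each index absorbs versus how much read benefit it delivers (a sketch, not a substitute for testing the actual workload) is the index usage DMV, available since SQL Server 2005:

        -- Reads vs. write maintenance per index on Foo since the last restart.
        SELECT i.name                                        AS index_name,
               s.user_seeks + s.user_scans + s.user_lookups  AS reads,
               s.user_updates                                AS write_maintenance
        FROM sys.dm_db_index_usage_stats s
        JOIN sys.indexes i
          ON i.object_id = s.object_id AND i.index_id = s.index_id
        WHERE s.database_id = DB_ID()
          AND s.object_id   = OBJECT_ID('dbo.Foo');

    An index with high user_updates and near-zero reads is a candidate for removal; one caveat is that the counters reset when the service restarts.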

  • Selecting a good SQL Server 2008 spatial index with large polygons

    - by andynormancx
    I'm having some fun trying to pick a decent SQL Server 2008 spatial index setup for a data set I am dealing with. The dataset is polygons representing contours over the whole globe. There are 106,000 rows in the table; the polygons are stored in a geometry field. The issue I have is that many of the polygons cover a large portion of the globe. This seems to make it very hard to get a spatial index that will eliminate many rows in the primary filter. For example, look at the following query:

        SELECT "ID", "CODE", "geom".STAsBinary() as "geom"
        FROM "dbo"."ContA"
        WHERE "geom".Filter(
            geometry::STGeomFromText('POLYGON ((-142.03193662573682 59.53396984952896,
                -142.03193662573682 59.88928136451884, -141.32743833481925 59.88928136451884,
                -141.32743833481925 59.53396984952896, -142.03193662573682 59.53396984952896))', 4326)
        ) = 1

    This is querying an area which intersects with only two of the polygons in the table. No matter what combination of spatial index settings I chose, that Filter() always returns around 60,000 rows. Replacing Filter() with STIntersects() of course returns just the two polygons I want, but of course takes much longer (Filter() is 6 seconds, STIntersects() is 12 seconds). Can anyone give me any hints on whether there is a spatial index setup that is likely to improve on 60,000 rows, or is my dataset just not a good match for SQL Server's spatial indexing?
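    For experimenting, the knobs live on the index definition itself; here is a sketch with illustrative (not tuned) settings that bias toward coarser top-level grids, which sometimes helps when individual shapes span many cells:

        CREATE SPATIAL INDEX SIdx_ContA_geom
        ON dbo.ContA (geom)
        USING GEOMETRY_GRID
        WITH (
            BOUNDING_BOX = (-180, -90, 180, 90),   -- whole globe, per the data set
            GRIDS = (LOW, LOW, MEDIUM, HIGH),
            CELLS_PER_OBJECT = 64
        );

    That said, grid tessellation fundamentally struggles with near-global polygons: they touch most cells at every level, so the primary filter cannot rule them out regardless of the settings.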

  • Replacement for deprecated SQL Server User Defined Type with a bound Rule and Default

    - by Adam Jones
    We have a User Defined Data Type of YesNo, which is an alias for char(1). The type has a bound Rule (must be Y or N) and a Default (N). The aim of this is that when any of the development team creates a new field of type YesNo, the rule and default are automatically bound to the new column. Rules and Defaults have been deprecated and won't be available in a future version of SQL Server; is there another way to achieve the same functionality?

    I should add that I'm aware I could use CHECK and DEFAULT constraints to replicate the functionality of the bound Rule and Default objects; however, these would have to be applied at each usage of the type, rather than getting the functionality 'for free' by using a UDT which has a bound Rule and Default. The post relates to a database that backs an existing application, rather than a new development, so I'm aware that our use of UDTs is less than optimal. I suspect the answer to the question is 'No'; however, normally when features are deprecated there's an alternative syntax that can be used as a drop-in replacement, so I wanted to pose the question in case someone knew of an alternative.
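    For reference, the per-column replacement the question already anticipates looks like this (a sketch with illustrative table and column names):

        CREATE TABLE dbo.Example (
            IsActive char(1) NOT NULL
                CONSTRAINT DF_Example_IsActive DEFAULT ('N')
                CONSTRAINT CK_Example_IsActive CHECK (IsActive IN ('Y', 'N'))
        );

    Unlike a bound Rule/Default, the constraints must be repeated per column, though a code-generation or DDL-trigger step could automate that during development.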

  • SCD2 + Merge Statement + SQL Server

    - by Nev_Rahd
    I am trying to work out a MERGE statement to insert/update a dimension table of type SCD2. My source is a table variable to merge with the dimension table. My MERGE statement is throwing this error:

        The target table 'DM.DATA_ERROR.ERROR_DIMENSION' of the INSERT statement cannot be on
        either side of a (primary key, foreign key) relationship when the FROM clause contains
        a nested INSERT, UPDATE, DELETE, or MERGE statement. Found reference constraint
        'FK_ERROR_DIMENSION_to_AUDIT_CreatedBy'.

    My MERGE statement:

        DECLARE @DATAERROROBJECT AS [ERROR_DIMENSION]

        INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION
        SELECT ERROR_CODE, DATA_STREAM_ID, [ERROR_SEVERITY], DATA_QUALITY_RATING,
               ERROR_LONG_DESCRIPTION, ERROR_DESCRIPTION, VALIDATION_RULE, ERROR_TYPE,
               ERROR_CLASS, VALID_FROM, VALID_TO, CURR_FLAG, CREATED_BY_AUDIT_SK,
               UPDATED_BY_AUDIT_SK
        FROM (MERGE DM.DATA_ERROR.ERROR_DIMENSION ED
              USING @DATAERROROBJECT OBJ
                 ON (ED.ERROR_CODE = OBJ.ERROR_CODE AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID)
              WHEN NOT MATCHED THEN
                  INSERT VALUES(
                      OBJ.ERROR_CODE, OBJ.DATA_STREAM_ID, OBJ.[ERROR_SEVERITY],
                      OBJ.DATA_QUALITY_RATING, OBJ.ERROR_LONG_DESCRIPTION,
                      OBJ.ERROR_DESCRIPTION, OBJ.VALIDATION_RULE, OBJ.ERROR_TYPE,
                      OBJ.ERROR_CLASS, GETDATE(), '9999-12-13', 'Y', 1, 1)
              WHEN MATCHED AND ED.CURR_FLAG = 'Y' AND (
                      ED.[ERROR_SEVERITY] <> OBJ.[ERROR_SEVERITY]
                   OR ED.[DATA_QUALITY_RATING] <> OBJ.[DATA_QUALITY_RATING]
                   OR ED.[ERROR_LONG_DESCRIPTION] <> OBJ.[ERROR_LONG_DESCRIPTION]
                   OR ED.[ERROR_DESCRIPTION] <> OBJ.[ERROR_DESCRIPTION]
                   OR ED.[VALIDATION_RULE] <> OBJ.[VALIDATION_RULE]
                   OR ED.[ERROR_TYPE] <> OBJ.[ERROR_TYPE]
                   OR ED.[ERROR_CLASS] <> OBJ.[ERROR_CLASS]) THEN
                  UPDATE SET ED.CURR_FLAG = 'N',
                             ED.VALID_TO = GETDATE()
              OUTPUT $ACTION ACTION_OUT,
                     OBJ.ERROR_CODE ERROR_CODE,
                     OBJ.DATA_STREAM_ID DATA_STREAM_ID,
                     OBJ.[ERROR_SEVERITY] [ERROR_SEVERITY],
                     OBJ.DATA_QUALITY_RATING DATA_QUALITY_RATING,
                     OBJ.ERROR_LONG_DESCRIPTION ERROR_LONG_DESCRIPTION,
                     OBJ.ERROR_DESCRIPTION ERROR_DESCRIPTION,
                     OBJ.VALIDATION_RULE VALIDATION_RULE,
                     OBJ.ERROR_TYPE ERROR_TYPE,
                     OBJ.ERROR_CLASS ERROR_CLASS,
                     GETDATE() VALID_FROM,
                     '9999-12-31' VALID_TO,
                     'Y' CURR_FLAG,
                     555 CREATED_BY_AUDIT_SK,
                     555 UPDATED_BY_AUDIT_SK
             ) AS MERGE_OUT
        WHERE MERGE_OUT.ACTION_OUT = 'UPDATE';

    What am I doing wrong?
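    The error is SQL Server's restriction on "composable DML": a table referenced by a foreign key cannot be the target of an INSERT whose FROM clause contains a nested MERGE. A common workaround (a sketch trimmed to the key columns; untested against this schema) is to land the MERGE output in a table variable via OUTPUT ... INTO, then run a plain INSERT:

        DECLARE @MergeOut TABLE (
            ACTION_OUT     nvarchar(10),
            ERROR_CODE     int,
            DATA_STREAM_ID int
            -- ... remaining dimension columns ...
        );

        MERGE DM.DATA_ERROR.ERROR_DIMENSION ED
        USING @DATAERROROBJECT OBJ
           ON ED.ERROR_CODE = OBJ.ERROR_CODE
          AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID
        -- the WHEN NOT MATCHED THEN INSERT branch from the question stays here, unchanged
        WHEN MATCHED AND ED.CURR_FLAG = 'Y' THEN
            UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE()
        OUTPUT $action, OBJ.ERROR_CODE, OBJ.DATA_STREAM_ID
        INTO @MergeOut (ACTION_OUT, ERROR_CODE, DATA_STREAM_ID);

        INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION (ERROR_CODE, DATA_STREAM_ID /* , ... */)
        SELECT ERROR_CODE, DATA_STREAM_ID /* , ... */
        FROM @MergeOut
        WHERE ACTION_OUT = 'UPDATE';

    OUTPUT ... INTO a table variable is not subject to the composable-DML restriction, so the foreign key on the dimension table no longer blocks the statement.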

  • MsSql Server high Resource Waits and Head Blocker

    - by MartinHN
    Hi, I have an MS SQL Server 2008 Standard installation running the database for a webshop. The current size of the database is 2.5 GB, running on Windows 2008 Standard, dual Intel Xeon X5355 @ 2.00 GHz, 4 GB RAM.

    When I open the Activity Monitor, I see a Wait Time (ms/sec) of 5000 in the "Other" category. In the Processes list, for all connections from the webshop, the Head Blocker value is 1. I see every day that when I try to access the website, it can take 20-30 seconds before it even starts to respond. I know that it is not network latency (a 301 redirect from the same server is executed instantly). Once the first request has been served, it seems as if the server is no longer asleep, and every subsequent request is served instantly.

    The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK), but I still experience the problem, and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32764 rows (448576 KB). Some tables exceed 300000 rows, though they're much smaller in size than the Images table. I have only the default clustered index on each primary key column. Any ideas?
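    To identify what the "Other" wait bucket actually consists of, one starting point (a generic sketch, not specific to this workload) is the cumulative wait statistics DMV:

        -- Top waits since the last service restart, highest cumulative wait first.
        SELECT TOP (10)
               wait_type,
               wait_time_ms,
               waiting_tasks_count
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;

    The symptom described (slow first hit, fast afterwards) often points at something being warmed up rather than blocking, so comparing these counters before and after the slow morning request can help separate the two.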

  • sql-server performance optimization by removing print statements

    - by AG
    We're going through a round of SQL Server stored procedure optimizations. The one recommendation we've found that clearly applies to us is 'SET NOCOUNT ON' at the top of each procedure. (Yes, I've seen the posts that point out issues with this depending on what client objects you run the stored procedures from, but these are not issues for us.)

    So now I'm just trying to add in a bit of common sense. If the benefit of SET NOCOUNT ON is simply to reduce network traffic by some small amount every time, wouldn't it also make sense to turn off all the PRINT statements we have in the stored procedures that we only use for debugging? I can't see how it can hurt performance. On the other hand, it's a bit of a hassle to implement, because some of the PRINT statements are the only thing within else clauses, so you can't always just comment out the one line and be done. The change carries some amount of risk, so I don't want to do it if it isn't actually going to help. But I don't see eliminating PRINT statements mentioned anywhere in articles on optimization. Is that because it is so obvious that no one bothers to mention it?
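    If stripping the PRINTs outright feels risky, a middle ground (an illustrative pattern; the procedure and parameter names are assumptions) is to gate them behind a debug flag, so production calls pay only a bit test:

        CREATE PROCEDURE dbo.ExampleProc
            @Debug bit = 0
        AS
        BEGIN
            SET NOCOUNT ON;

            IF @Debug = 1
                PRINT 'ExampleProc: entering main branch';

            -- ... procedure body ...
        END

    This also sidesteps the empty-else problem: the guarded IF simply evaluates to false instead of the lone PRINT being commented out.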

  • SQL Server performance issue.

    - by Jit
    Hi friends, I have been trying to analyze a performance issue with SQL Server 2005. We have 30 jobs, one for each database (30 databases, one per client). The jobs run in the early morning at an interval of 5 minutes. When I run a job individually for testing, for most of the databases it finishes in 7 to 9 minutes. But when these jobs run in the early morning, I see a few jobs taking 2 to 3 hours to finish, while the same jobs take a few minutes, as mentioned above, if run independently. We don't have any other jobs scheduled during that time, other than these 30.

    If we restart the server, then for 2 or so days all the jobs finish in a few minutes, but over the period of time (suddenly, from the 3rd day), a few jobs start taking hours to finish. What could be the possible reason for performance degradation over the period of time? I verified all the SPs; we use temp tables, and I made sure none of the temp tables is left undropped at the end of an SP. Let me know what the possible reasons for such behavior are. I appreciate your time and help. Thanks.

  • Choose a XML node in SQL Server based on max value of a child element

    - by Jay
    I am trying to select, from a SQL Server 2005 XML datatype, some values based on the max data located in a child node. I have multiple rows with XML similar to the following stored in a field in SQL Server:

        <user>
            <name>Joe</name>
            <token>
                <id>ABC123</id>
                <endDate>2013-06-16 18:48:50.111</endDate>
            </token>
            <token>
                <id>XYX456</id>
                <endDate>2014-01-01 18:48:50.111</endDate>
            </token>
        </user>

    I want to perform a select from this XML column that determines the max date within the token elements and returns a row similar to the result below for each record:

        Joe    XYX456    2014-01-01 18:48:50.111

    I have tried to find a max function for XPath that would allow me to select the correct token element, but I couldn't find one that would work. I also tried to use the SQL MAX function, but I wasn't able to get it working with that method either. If I only have a single token it of course works fine, but when I have more than one I get a NULL, most likely because the query doesn't know which date to pull. I was hoping there would be a way to specify a where clause [max(endDate)] on the token element, but I haven't found a way to do that. Here is an example of the one that works when I only have a single token:

        SELECT XMLCOL.query('user/name').value('.','NVARCHAR(20)') as name,
               XMLCOL.query('user/token/id').value('.','NVARCHAR(20)') as id,
               XMLCOL.query('user/token/endDate').value('xs:dateTime(.)','DATETIME') as endDate
        FROM MYTABLE
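    One way to pick the token with the latest endDate (a sketch using the MYTABLE/XMLCOL names from the question) is to shred the tokens with nodes() and take the top one per row; since the timestamps are in 'yyyy-mm-dd hh:mm:ss' form, they order correctly even as strings:

        SELECT u.x.value('(name/text())[1]', 'nvarchar(20)') AS name,
               t.id,
               t.endDate
        FROM MYTABLE
        CROSS APPLY XMLCOL.nodes('/user') AS u(x)
        CROSS APPLY (
            SELECT TOP (1)
                   tok.x.value('(id/text())[1]',      'nvarchar(20)') AS id,
                   tok.x.value('(endDate/text())[1]', 'nvarchar(30)') AS endDate
            FROM u.x.nodes('token') AS tok(x)
            ORDER BY tok.x.value('(endDate/text())[1]', 'nvarchar(30)') DESC
        ) AS t;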

  • Update or Insert Row depending on whether row is present in Microsoft SQL Server 2005

    - by Srikanth
    Hi, I am passing an XML document as input to a stored procedure in Microsoft SQL Server 2005. This is a sample of the XML being passed in:

        <Strategy StrategyID="0" TOStrategyID="8" ShutdownQtySell="1" ShutdownQtyBuy="1">
            <ParameterRange ParameterSetID="6" ParameterRangeID="1" ParameterRangeFrom="0" ParameterRangeTo="20" ParameterAutoTakeOut="False"></ParameterRange>
            <ParameterRange ParameterSetID="6" ParameterRangeID="4" ParameterRangeFrom="21" ParameterRangeTo="40" ParameterAutoTakeOut="False"></ParameterRange>
            <ParameterRange ParameterSetID="6" ParameterRangeID="5" ParameterRangeFrom="41" ParameterRangeTo="60" ParameterAutoTakeOut="False"></ParameterRange>
            <ParameterRange ParameterSetID="6" ParameterRangeID="6" ParameterRangeFrom="61" ParameterRangeTo="80" ParameterAutoTakeOut="False"></ParameterRange>
            <ParameterRange ParameterSetID="6" ParameterRangeID="7" ParameterRangeFrom="81" ParameterRangeTo="100" ParameterAutoTakeOut="False"></ParameterRange>
        </Strategy>

    I am able to retrieve the data using the OPENXML functionality in SQL Server. I am using this to get the data corresponding to the ParameterRange rows:

        SELECT ParameterRangeID as iRangeID,
               ParameterSetID as iSetID,
               ParameterRangeFrom as fRangeFrom,
               ParameterRangeTo as fRangeTo,
               ParameterAutoTakeOut as bTakeoutEnabled
        FROM OPENXML(@idoc, '/Strategy/ParameterRange', 1)
        WITH (ParameterSetID int, ParameterRangeID int, ParameterRangeFrom float,
              ParameterRangeTo float, ParameterAutoTakeOut bit)

    Now I need to insert/update these rows into a table TempRanges, which has (iRangeID, iSetID) as the primary key. If there is a row with that primary key, I want to update it with the latest values; if there is no row with that primary key, I need to insert into the table. How can I accomplish this inside the stored procedure? Thanks, Sri
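    SQL Server 2005 predates MERGE, so the usual pattern is an UPDATE of the matching keys followed by an INSERT of the missing ones. A sketch (the non-key column names on TempRanges are assumptions, mirrored from the OPENXML aliases):

        DECLARE @Incoming TABLE (
            iRangeID int, iSetID int,
            fRangeFrom float, fRangeTo float, bTakeoutEnabled bit,
            PRIMARY KEY (iRangeID, iSetID)
        );

        INSERT INTO @Incoming
        SELECT ParameterRangeID, ParameterSetID, ParameterRangeFrom,
               ParameterRangeTo, ParameterAutoTakeOut
        FROM OPENXML(@idoc, '/Strategy/ParameterRange', 1)
        WITH (ParameterSetID int, ParameterRangeID int, ParameterRangeFrom float,
              ParameterRangeTo float, ParameterAutoTakeOut bit);

        UPDATE t
        SET    t.fRangeFrom = i.fRangeFrom,
               t.fRangeTo = i.fRangeTo,
               t.bTakeoutEnabled = i.bTakeoutEnabled
        FROM   TempRanges t
        JOIN   @Incoming i ON i.iRangeID = t.iRangeID AND i.iSetID = t.iSetID;

        INSERT INTO TempRanges (iRangeID, iSetID, fRangeFrom, fRangeTo, bTakeoutEnabled)
        SELECT i.iRangeID, i.iSetID, i.fRangeFrom, i.fRangeTo, i.bTakeoutEnabled
        FROM   @Incoming i
        WHERE  NOT EXISTS (SELECT 1 FROM TempRanges t
                           WHERE t.iRangeID = i.iRangeID AND t.iSetID = i.iSetID);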

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.)

    So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value.

    So here's the debate:

    1. Abandon Entity Framework and use another ORM:
       - use NHibernate and give up LINQ support
       - use linq2sql and give up future support (not to mention get bound to SQL Server on the DB)
    2. Abandon GUIDs and go with another PK strategy
    3. Devise a method to generate sequential GUIDs (COMBs?) at the application layer

    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and option 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.

  • Connecting to SQL Server 2005 via Web Service

    - by clear-cycle-corp
    Delphi 2010, dbExpress, and a SQL Server 2005 DB. I am trying to make a connection to a SQL Server 2005 DB using Delphi 2010 and dbExpress. If I create a standard Delphi application and hard-code my connection, it works:

        procedure TForm1.Button1Click(Sender: TObject);
        var
          Conn: TSQLConnection;
        begin
          Conn := TSQLConnection.Create(nil);
          Conn.ConnectionName := 'VPUCDS_VPN_SE01';
          Conn.LoadParamsOnConnect := True;
          Conn.LoginPrompt := True;
          try
            Conn.Connected := True;
            if Conn.Connected then
              ShowMessage('Connected!')
            else
              ShowMessage('NOT Connected!')
          finally
            Conn.Free;
          end;
        end;

    All the ini files and DLLs reside in the same folder as my executable, and yes, I have DBXMsSQL and MidasLib in the uses clause. Again, it works if it's not a web service! However, the problem appears if I then move the code over to a web service CGI module:

        function TTest.ConnectToDB: Boolean; stdcall;
        var
          Conn: TSQLConnection;
        begin
          Conn := TSQLConnection.Create(nil);
          Conn.ConnectionName := 'VPUCDS_VPN_SE01';
          Conn.LoadParamsOnConnect := True;
          Conn.LoginPrompt := True;
          try
            Conn.Connected := True;
            Result := Conn.Connected;
          finally
            Conn.Free;
          end;
        end;

    Thanks

  • ORDER BY in a Sql Server 2008 view

    - by eidylon
    Hi all... we have a view in our database which has an ORDER BY in it. Now, I realize views generally don't order, because different people may use them for different things and want them ordered differently. This view, however, is used for a VERY SPECIFIC use case which demands a certain order. (It is team standings for a soccer league.) The database is SQL Server 2008 Express, v10.0.1763.0, on a Windows Server 2003 R2 box. The view is defined as such:

        CREATE VIEW season.CurrentStandingsOrdered AS
            SELECT TOP 100 PERCENT *, season.GetRanking(TEAMID) RANKING
            FROM season.CurrentStandings
            ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
                     GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING

    It returns:

        GENDER, TEAMYEAR, CODE, TEAMID, CLUB, NAME, WINS, LOSSES, TIES,
        GOALS_FOR, GOALS_AGAINST, DIFFERENTIAL, POINTS, FORFEITS, RANKING

    Now, when I run a SELECT against the view, it orders the results by GENDER, TEAMYEAR, CODE, TEAMID. Notice that it is ordering by TEAMID instead of POINTS, as the ORDER BY clause specifies. However, if I copy the SQL statement and run it exactly as is in a new query window, it orders correctly, as specified by the ORDER BY clause.
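    This is documented behavior rather than a bug: since SQL Server 2005, an ORDER BY inside a view (even with TOP 100 PERCENT) only qualifies which rows TOP returns and guarantees nothing about result order. The reliable fix (a sketch against the view above) is to order at the point of use:

        SELECT *
        FROM season.CurrentStandingsOrdered
        ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
                 GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING;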

  • Windows Server 2008 R2 File Permissions

    - by Fly_Trap
    I’m having some problems understanding some particular file-permission behaviour. Here are the steps to reproduce:

    1. Log into the server using the default Administrator account
    2. Create a text file (testfile.txt) in C:\ProgramData containing some arbitrary text
    3. Create a new user account and make it a member of the Administrators group
    4. Log in using the new account and open C:\ProgramData\testfile.txt
    5. Edit the text and try to save

    Upon clicking save, I’m presented with the Save As dialog, which indicates that I do not have the necessary permissions to edit the file. This seems odd, considering that the new user account is a member of Administrators. When I view the permissions of the file, I can see that there are three groups listed: SYSTEM, Administrators, and Users. SYSTEM and Administrators have full permissions; however, Users only has the Read & Execute and Read permissions checked.

    It would appear that when I open testfile.txt from the new user account, it opens in the context of the Users group, despite my being a member of Administrators. Is this correct? It would certainly explain the behaviour. The reason this is an issue for me is that if I deploy an application via 'Run as Administrator', will normal users be able to edit the text files I install to ProgramData? Is this behaviour confined to Windows Server, or is it the same in Vista and Win7?

  • SQL Server race condition issue with range lock

    - by Freek
    I'm implementing a queue in SQL Server (please, no discussions about this) and am running into a race condition issue. The T-SQL of interest is the following:

        set transaction isolation level serializable
        begin tran

        declare @RecordId int
        declare @CurrentTS datetime2
        set @CurrentTS = CURRENT_TIMESTAMP

        select top 1 @RecordId = Id
        from QueuedImportJobs with (updlock)
        where Status = @Status
          and (LeaseTimeout is null or @CurrentTS > LeaseTimeout)
        order by Id asc

        if @@ROWCOUNT > 0
        begin
            update QueuedImportJobs
            set LeaseTimeout = DATEADD(mi, 5, @CurrentTS), LeaseTicket = newid()
            where Id = @RecordId

            select * from QueuedImportJobs where Id = @RecordId
        end

        commit tran

    RecordId is the PK, and there is also an index on (Status, LeaseTimeout). What I'm basically doing is selecting a record whose lease happens to be expired, while simultaneously extending the lease by 5 minutes and setting a new lease ticket.

    The problem is that I'm getting deadlocks when I run this code in parallel using a couple of threads. I've debugged it up to the point where I found out that the update statement sometimes gets executed twice for the same record. Now, I was under the impression that the with (updlock) should prevent this (it also happens with xlock, by the way, though not with tablockx). So it actually looks like there is a RangeS-U and a RangeX-X lock on the same range of records, which ought to be impossible. So what am I missing? I'm thinking it might have something to do with the top 1 clause, or that SQL Server does not know that where Id = @RecordId is actually in the locked range?

    (The deadlock graph and a simplified table schema were attached as images.)
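    For reference, the common dequeue idiom that avoids this window altogether (a sketch; note READPAST is not permitted under SERIALIZABLE, so it relies on READ COMMITTED) claims the row in one atomic statement and lets competing workers skip rows that are already locked:

        WITH next_job AS (
            SELECT TOP (1) *
            FROM QueuedImportJobs WITH (ROWLOCK, UPDLOCK, READPAST)
            WHERE Status = @Status
              AND (LeaseTimeout IS NULL OR @CurrentTS > LeaseTimeout)
            ORDER BY Id ASC
        )
        UPDATE next_job
        SET LeaseTimeout = DATEADD(mi, 5, @CurrentTS),
            LeaseTicket  = NEWID()
        OUTPUT inserted.*;   -- returns the claimed row to the caller

    Because the read and the lease update are a single statement, there is no gap in which two workers can both select the same row before either one updates it.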

  • Sql Server 2005 multiple insert with c#

    - by bottlenecked
    Hello. I have a class named Entry declared like this:

        class Entry {
            string Id { get; set; }
            string Name { get; set; }
        }

    and then a method that will accept multiple such Entry objects for insertion into the database using ADO.NET:

        static void InsertEntries(IEnumerable<Entry> entries) {
            //build a SqlCommand object
            using (SqlCommand cmd = new SqlCommand()) {
                ...
                const string refcmdText = "INSERT INTO Entries (id, name) VALUES (@id{0}, @name{0});";
                int count = 0;
                string query = string.Empty;
                //build a large query
                foreach (var entry in entries) {
                    query += string.Format(refcmdText, count);
                    cmd.Parameters.AddWithValue(string.Format("@id{0}", count), entry.Id);
                    cmd.Parameters.AddWithValue(string.Format("@name{0}", count), entry.Name);
                    count++;
                }
                cmd.CommandText = query;
                //and then execute the command
                ...
            }
        }

    My question is this: should I keep using the above way of sending multiple insert statements (build a giant string of insert statements and their parameters and send it over the network), or should I keep an open connection and send a single insert statement for each Entry, like this:

        using (SqlCommand cmd = new SqlCommand()) {
            using (SqlConnection conn = new SqlConnection()) {
                //assign connection string and open connection
                ...
                cmd.Connection = conn;
                foreach (var entry in entries) {
                    cmd.CommandText = "INSERT INTO Entries (id, name) VALUES (@id, @name);";
                    cmd.Parameters.Clear(); // reset the parameters from the previous iteration
                    cmd.Parameters.AddWithValue("@id", entry.Id);
                    cmd.Parameters.AddWithValue("@name", entry.Name);
                    cmd.ExecuteNonQuery();
                }
            }
        }

    What do you think? Will there be a performance difference in SQL Server between the two? Are there any other consequences I should be aware of? Thank you for your time!

  • IPad SQLite Push and Pull Data from external MS SQL Server DB

    - by MattyD
    This carries on from my previous post (http://stackoverflow.com/questions/4182664/ipad-app-pull-and-push-relational-data). My plan is that when the iPad application starts, I will pull data (config data, i.e. Departments, Types, etc.: relational data that is used across the system) from a web-hosted MS SQL Server DB via a web service and populate it into a SQLite DB on the iPad. Then, when I load a listing, I will pull the data over the line again via a web service and populate it into the SQLite DB on the iPad (and then just run select commands to populate the listing). My questions are:

    1. What is the most efficient way to transfer data across the line via the web? Everyone seems to do it a different way. My idea is that I will have a web service for each type of data pull (e.g. RetrieveContactListing) that will query the DB and then convert that data into "something" to send across the line. My question really is: what is the "something" it should be converting into?
    2. Everyone talks about OData services. Are they suited to applications where complex reads and writes are needed? I've created a simple iPhone app before that talked to a SQL Server DB (I just sent my own structured XML across the line), but now with this app the data calls are going to be a lot larger, so efficiency is key.

  • Thumbnail image saved with worse quality on Windows Server 2003

    - by Angelo
    Hello, in an ASP.NET 2.0 application I am trying to create thumbnails from uploaded images. However, when I test the application on my PC under Windows 7 it works fine, but on the real Windows 2003 Server the resized image has worse quality. Where could this difference come from? A different JPEG codec or what? If yes, how can it be updated on Windows Server 2003? Thanks! Here is the code.

    Resize of the image:

        Bitmap newBmp = new Bitmap(imgWidth, imgHeight, PixelFormat.Format24bppRgb);
        newBmp.SetResolution(inputBmp.HorizontalResolution, inputBmp.VerticalResolution);

        //Create a graphics object attached to the new bitmap
        Graphics newBmpGraphics = Graphics.FromImage(newBmp);
        newBmpGraphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
        newBmpGraphics.SmoothingMode = SmoothingMode.HighQuality;
        newBmpGraphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
        newBmpGraphics.DrawImage(inputBmp,
            new Rectangle(0, 0, imgWidth, imgHeight),
            new Rectangle(0, 0, inputBmp.Width, inputBmp.Height),
            GraphicsUnit.Pixel);

    Save of the image:

        System.IO.Stream imgStream = new System.IO.MemoryStream();

        //Get the ImageCodecInfo for the desired target format
        ImageCodecInfo destCodec = FindCodecForType(ImageMimeTypes.JPEG);
        if (destCodec == null)
        {
            //No codec available for that format
            throw new ArgumentException("The requested format image/jpeg does not have an available codec installed", "destFormat");
        }

        //Create an EncoderParameters collection to contain the
        //parameters that control the dest format's encoder
        EncoderParameters destEncParams = new EncoderParameters(1);
        EncoderParameter qualityParam = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, (long)quality);
        destEncParams.Param[0] = qualityParam;

        //Save w/ the selected codec and encoder parameters
        inputBmp.Save(imgStream, destCodec, destEncParams);
        Bitmap destBitmap = new Bitmap(imgStream);

  • How to run stored procedures and ad-hoc scripts asynchronously with "loosely" connected SQL Server 2000 instances

    - by sanga
    Is there a way to initiate a script against an instance of SQL Server while it is not connected, and then have it run on the instance the next time it connects? This needs to happen without any intervention from me.

    Background, if you are interested: we have about 120 machines, each with its own instance of SQL Server 2000. Most of them are laptops. We have merge replication set up with each one. From time to time, there is a need to delete "rogue" GUIDs from some tables in some instances that overwrite legitimate records on the main publisher, as well as to perform administrative tasks via stored procedure or ad-hoc SQL statements. The problem is there is no telling when each machine is going to be connected to the network. Some folks turn their machines completely off at the end of the day. Others disconnect their machines and take them on business trips, home for the weekend, etc. Did I mention that about 35 of these machines are in utility trucks and "attempt" to sync over a wireless connection? Thanks in advance for any assistance or suggestions. Sanga
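    One mechanism that fits "run the next time the machine comes up" (a sketch; the job name and fix-up workflow are hypothetical, and whether this meshes with the replication setup is untested) is a startup procedure: a proc in master marked with sp_procoption runs every time the instance starts:

        USE master;
        GO
        CREATE PROCEDURE dbo.usp_RunPendingFixups
        AS
            -- e.g. kick off an Agent job that applies queued fix-up scripts
            EXEC msdb.dbo.sp_start_job @job_name = N'ApplyPendingFixups';
        GO
        EXEC sp_procoption @ProcName   = N'dbo.usp_RunPendingFixups',
                           @OptionName = 'startup',
                           @OptionValue = 'true';

    This covers machines that were powered off; it does not fire when a laptop merely reconnects to the network, so a scheduled Agent job that polls for connectivity would be needed for that case.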
