Search Results

Search found 100454 results on 4019 pages for 'sql server 2011'.


  • How to communicate between Client and Server in a Client-Server Application?

    - by Sanoj
    I would like to implement a client-server application where the business logic, security validations, and a database are on the server, and the user interface is on the client. I would like to implement clients in different languages, i.e. one in WPF/.NET, one in Swing/Java, one in Android/Java, and maybe one HTML/JavaScript client. The server will be on the Internet, so I would like to be able to have encrypted communication. The client will send some lists of items to be added to the database, or update items, and do some transactions. The server will check whether the items have already been updated by another client, then update the items, add new items, or delete items. How do I solve the communication between clients and the server in such a system? I have been thinking about:

      - An HTTP/HTTPS web server, sending messages in JSON or XML, and using WebSockets for bi-directional communication. Use HTTP in a RESTful way, except where WebSockets are needed. But I guess there are better solutions for native desktop applications than HTTP?
      - CORBA - I have just heard about it; it's old and complex. Not much talk about it these days.
      - XMPP/Jabber - I have just heard about it and I don't know if it fits me at all. ejabberd seems to be a popular implementation.
      - AMQP - I have just heard about it and I don't know if it fits me at all. RabbitMQ seems to be a popular implementation.
      - Windows Communication Foundation, Java RMI, Java Message Service - but are they language independent?

    I guess some of these alternatives are on different levels; maybe I can have, for example, XMPP or AMQP in WebSockets over HTTPS? What technologies are used for this problem in companies today, and what is recommended? I have no experience of them other than web servers and HTTP. Please give me some guidance in this jungle. What are the pros and cons of these technologies in my situation?
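
    For reference, a minimal sketch of the HTTPS + JSON option, the most language-neutral entry on the list (every client platform mentioned can speak HTTP). The endpoint URL and payload shape below are illustrative assumptions, not anything from the post:

        // C# client posting a JSON message over HTTPS with WebClient.
        using System;
        using System.Net;

        class JsonClient
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    client.Headers[HttpRequestHeader.ContentType] = "application/json";
                    // HTTPS provides the encrypted channel; authentication could be
                    // a token header or a session cookie layered on top of this.
                    string payload = "{\"action\":\"addItem\",\"name\":\"Widget\",\"qty\":3}";
                    string response = client.UploadString("https://example.com/api/items", payload);
                    Console.WriteLine(response);
                }
            }
        }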

    Read the article

  • What kind of job scheduler framework or solution do you recommend on a Windows Server system?

    - by Samuel.P
    Hi guys, my company runs a lot of batch jobs to process data for partners. We have been using SQL Server Agent to execute the batch processes, but I found it very difficult to get batch process information, like logs or status, when working with SQL Server Agent. So I'd like to move my company's job scheduling to another, more stable solution, but I don't know which one to choose. I am trying to find something similar to CA AutoSys. My company uses Windows Server systems. The framework or solution should be configurable through an API, because I plan to build an ASP.NET web application to manage our job scheduler. What job scheduler frameworks or inexpensive solutions do you suggest? Regards, Park PS: I think the Quartz framework has a good reputation on J2EE systems, but I am not sure Quartz.NET is as good as the original Java version.
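
    For reference, a minimal sketch of what a job looks like in Quartz.NET, the port the post mentions. This assumes the Quartz.NET 2.x-style synchronous fluent API (JobBuilder/TriggerBuilder); the job class, identity, and cron schedule are illustrative:

        using Quartz;
        using Quartz.Impl;

        // A job class: Quartz.NET instantiates this and calls Execute on schedule.
        public class PartnerBatchJob : IJob
        {
            public void Execute(IJobExecutionContext context)
            {
                // Run the batch process here; writing log/status rows to a table
                // would let a management web app query them through an API.
            }
        }

        public static class SchedulerSetup
        {
            public static void Start()
            {
                IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
                scheduler.Start();

                IJobDetail job = JobBuilder.Create<PartnerBatchJob>()
                                           .WithIdentity("partnerBatch")
                                           .Build();
                // Fire every day at 02:00.
                ITrigger trigger = TriggerBuilder.Create()
                                                 .WithCronSchedule("0 0 2 * * ?")
                                                 .Build();
                scheduler.ScheduleJob(job, trigger);
            }
        }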

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, and that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back... Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large object heap fragmentation when we de-serialize the objects; I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.) Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines); each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time. Other related questions I have asked: How to Stream data from/to SQL Server BLOB fields? Is there a SqlFileStream-like class that works with SQL Server 2005?
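
    One pattern that avoids the single big buffer, sketched here under assumptions: wrap the chunked UPDATE ... .WRITE syntax in a write-only Stream and hand that to the BinaryFormatter, so each chunk goes to the database as it is produced. Note that .WRITE requires SQL Server 2005 or later (SQL Server 2000 would need UPDATETEXT instead), and the table/column names ("ModelState", "State", "Id") are illustrative, not from the post:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        // Write-only Stream that appends every chunk to a varbinary(max)
        // column; the full serialized graph never exists in one buffer.
        public class SqlBlobWriteStream : Stream
        {
            private readonly SqlCommand append;

            // conn is assumed to be open. The target row must already exist
            // with State initialized to 0x (an empty binary), since .WRITE
            // on a NULL value fails.
            public SqlBlobWriteStream(SqlConnection conn, int id)
            {
                // Passing NULL for the offset appends at the end of the value.
                append = new SqlCommand(
                    "UPDATE ModelState SET State.WRITE(@chunk, NULL, NULL) WHERE Id = @id",
                    conn);
                append.Parameters.Add("@id", SqlDbType.Int).Value = id;
                append.Parameters.Add("@chunk", SqlDbType.VarBinary, -1);
            }

            public override void Write(byte[] buffer, int offset, int count)
            {
                var chunk = new byte[count];
                Buffer.BlockCopy(buffer, offset, chunk, 0, count);
                append.Parameters["@chunk"].Value = chunk;
                append.ExecuteNonQuery();
            }

            public override void Flush() { }
            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length { get { throw new NotSupportedException(); } }
            public override long Position
            {
                get { throw new NotSupportedException(); }
                set { throw new NotSupportedException(); }
            }
            public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
        }

    BinaryFormatter tends to write many small pieces, so wrapping this stream in a BufferedStream before calling bin.Serialize() keeps the number of database round trips down. Reading back could use the mirror-image approach with SUBSTRING over the column.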

    Read the article

  • Linq to Sql, Repositories, and Asp.Net MVC ViewData: How to remove redundancy?

    - by Dr. Zim
    Linq to SQL creates objects which are IQueryable and full of relations. HTML helpers require specific interface objects like IEnumerable<SelectListItem>. What I think could happen:

      - Reuse the objects from Linq to SQL without all the baggage, i.e., return POCOs from the Linq to SQL objects without additional Domain Model classes?
      - Extract objects that easily convert to (or are) HTML helper objects like the SelectListItem enumeration?

    Is there any way to do this without breaking separation of concerns? Some neat OOP trick to bridge the needs? For example, if this were within a repository, the SelectListItem wouldn't be there. The select new is a nice way to cut an object out of Linq to SQL without the baggage, but it's still referencing a class that shouldn't be referenced:

        IEnumerable<SelectListItem> result =
            (from record in db.table
             select new SelectListItem
             {
                 Selected = record.selected,
                 Text = record.Text,
                 Value = record.Value
             }).AsEnumerable();
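
    A hedged sketch of one way to split this: project to a plain DTO inside the repository (no MVC reference there), then map to SelectListItem in the controller. The "OptionDto" type, "MyDataContext", and "db.Options" are illustrative names, not from the post:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        // Plain DTO: the repository layer knows nothing about MVC.
        public class OptionDto
        {
            public bool Selected { get; set; }
            public string Text { get; set; }
            public string Value { get; set; }
        }

        public class OptionRepository
        {
            // db is the generated LINQ to SQL DataContext (assumption).
            private readonly MyDataContext db = new MyDataContext();

            // Inside the repository: LINQ to SQL stays here.
            public IEnumerable<OptionDto> GetOptions()
            {
                return (from record in db.Options
                        select new OptionDto
                        {
                            Selected = record.Selected,
                            Text = record.Text,
                            Value = record.Value
                        }).ToList();
            }
        }

        // In the controller - the only layer that references SelectListItem:
        //   var items = repository.GetOptions().Select(o => new SelectListItem
        //       { Selected = o.Selected, Text = o.Text, Value = o.Value });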

    Read the article

  • Deploying service from development server to iis7 server

    - by MindWorX
    I have a service which works perfectly on the local development server, but once moved to the remote IIS7 server, it fails. I've been browsing the service in a browser manually. Here are the steps I've been taking:

      1. Open up Service.svc
      2. Open up Service.svc?wsdl
      3. Open up Service.svc?wsdl0
      4. Open up Service.svc?xsd=xsd0

    Step 4 is where it fails. If I browse on the development server, it works. If I browse on the IIS7 server, I get a connection reset error. Any help appreciated.

    Read the article

  • How to deploy SQL Reporting 2005 when Data Sources are locked?

    - by spoulson
    The DBAs here maintain all SQL Server and SQL Reporting servers. I have a custom-developed SQL Reporting 2005 project in Visual Studio that runs fine against my local SQL database and Reporting instances. I need to deploy to a production server, so I had a folder created on a SQL Reporting 2005 server with permissions to upload files. Normally, a deploy from within Visual Studio is all that is needed to upload the report files. However, for security purposes, data sources are maintained explicitly by the DBAs and stored in a separate, locked-down common folder on the reporting server. I had them create the data source for me. When I attempt to deploy from VS, it gives me the error "The item '/Data Sources' already exists." I get this whether I'm deploying the whole project or just a single report file. I have already set OverwriteDataSources=false in the project properties. The TargetServerURL and folder are verified correct. I suppose I could copy the files manually, but I'd like to be able to deploy from within VS. What could I be doing wrong?

    Read the article

  • SSIS 2008 - How to read from SQL Server Compact Edition file?

    - by Gustavo Cavalcanti
    I can see "SQL Server Compact Destination" under Data Flow Destinations, but I am looking for its source counterpart. If I choose ADO.Net source and create a new connection, there's no provider for SQL CE. What am I missing? Thanks! Update: I am able to create a "Data Source" (under "Data Sources" folder in my SSIS project") that connects to an existing Sql CE file. But how can I use this Data Source in my data flow?

    Read the article

  • Dump Linq-To-Sql now that Entity Framework 4.0 has been released?

    - by DanM
    The relative simplicity of Linq-To-Sql, as well as all the criticism leveled at version 1 of Entity Framework (especially the vote of no confidence), convinced me to go with Linq-To-Sql "for the time being". Now that EF 4.0 is out, I wonder if it's time to start migrating over to it. Questions:

      - What are the pros and cons of EF 4.0 relative to Linq-To-Sql?
      - Is EF 4.0 finally ready for prime time?
      - Is now the time to switch over?

    Read the article

  • What is the best way to configure a SQL Server for 50 developers?

    - by Lakhlani Prashant
    Hi, if I am running an organization that has 50 .NET developers, all using SQL Server, what is the best way to make a single SQL Server available to them? Here are some of the concerns I want to be careful about:

      - Should I configure database users per project, per user, or both?
      - Should I provide a single SQL Server instance?

    There are some more concerns, but I think getting answers to these two will be a good starting point.

    Read the article

  • EXPORT AS INSERT STATEMENTS: But in SQL*Plus the line exceeds 2500 characters!

    - by The chicken in the kitchen
    Hello, I have to export an Oracle table as INSERT statements, but the INSERT statements so generated exceed 2500 characters. Since I am obliged to execute them in SQL*Plus, I receive an error message. This is my Oracle table:

        CREATE TABLE SAMPLE_TABLE
        (
          C01 VARCHAR2 (5 BYTE) NOT NULL,
          C02 NUMBER (10) NOT NULL,
          C03 NUMBER (5) NOT NULL,
          C04 NUMBER (5) NOT NULL,
          C05 VARCHAR2 (20 BYTE) NOT NULL,
          c06 VARCHAR2 (200 BYTE) NOT NULL,
          c07 VARCHAR2 (200 BYTE) NOT NULL,
          c08 NUMBER (5) NOT NULL,
          c09 NUMBER (10) NOT NULL,
          c10 VARCHAR2 (80 BYTE),
          c11 VARCHAR2 (200 BYTE),
          c12 VARCHAR2 (200 BYTE),
          c13 VARCHAR2 (4000 BYTE),
          c14 VARCHAR2 (1 BYTE) DEFAULT 'N' NOT NULL,
          c15 CHAR (1 BYTE),
          c16 CHAR (1 BYTE)
        );

    ASSUMPTIONS: a) I am OBLIGED to export table data as INSERT statements; I am allowed to use UPDATE statements in order to avoid the SQL*Plus error "SP2-0027: input is too long (2499 characters)"; b) I am OBLIGED to use SQL*Plus to execute the script so generated; c) please assume that every record can contain special characters: CHR(10), CHR(13), and so on; d) I CAN'T use SQL*Loader; e) I CAN'T export and then import the table: I can only add the "delta" using INSERT / UPDATE statements through SQL*Plus.
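
    Given assumption (a), one approach that keeps every script line short, sketched here as a hypothetical generator: emit an INSERT that leaves the long c13 column NULL, then grow c13 in fixed-size pieces with UPDATE ... || chunks (in Oracle, NULL || 'x' is 'x', so the first chunk works too). Only two columns are shown; the remaining NOT NULL columns would be included in the INSERT the same way, and full handling of CHR(10)/CHR(13) is only hinted at:

        using System;
        using System.Collections.Generic;

        static class InsertScriptGenerator
        {
            // Emits one short INSERT plus as many short UPDATEs as needed,
            // so no line approaches the SQL*Plus 2499-character input limit.
            public static IEnumerable<string> EmitRow(string c01, string c13)
            {
                // Illustrative: the other columns of SAMPLE_TABLE would be
                // listed here too, with their values.
                yield return string.Format(
                    "INSERT INTO SAMPLE_TABLE (C01, c13) VALUES ('{0}', NULL);", c01);

                const int chunkSize = 1000;   // keeps each UPDATE line well under 2499 chars
                for (int i = 0; i < c13.Length; i += chunkSize)
                {
                    string piece = c13.Substring(i, Math.Min(chunkSize, c13.Length - i))
                                      .Replace("'", "''");   // escape quotes; CHR(10)/CHR(13)
                                                             // substitution would go here too
                    yield return string.Format(
                        "UPDATE SAMPLE_TABLE SET c13 = c13 || '{0}' WHERE C01 = '{1}';",
                        piece, c01);
                }
            }
        }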

    Read the article

  • How to select the nth row in a SQL database table?

    - by Charles Roper
    I'm interested in learning some (ideally) database-agnostic ways of selecting the nth row from a database table. It would also be interesting to see how this can be achieved using the native functionality of the following databases:

      - SQL Server
      - MySQL
      - PostgreSQL
      - SQLite
      - Oracle

    I am currently doing something like the following in SQL Server 2005, but I'd be interested in seeing others' more agnostic approaches:

        WITH Ordered AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY OrderID) AS RowNumber, OrderID, OrderDate
            FROM Orders
        )
        SELECT *
        FROM Ordered
        WHERE RowNumber = 1000000

    Credit for the above SQL: Firoz Ansari's weblog. Update: See Troels Arvin's answer regarding the SQL standard. Troels, have you got any links we can cite?

    Read the article

  • DDNS Not Creating Journal (Dhcpd and Named)

    - by user130094
    * EDIT 1 * After monkeying with additional debug logging I see some log entries of interest:

        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: journal rollforward failed: no more
        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: not loaded due to errors.

    ^^^ If I can remedy the above messages I think I'll be good to go ^^^

    * EDIT 2 * Grasping at straws, I touched a forward and a reverse zone journal file and restarted named. Boom! Works. Despite documentation stating the files are created automatically, and despite what I have seen before... dunno why, but that did the trick. Also re-checked perms on the dir the files live in. As certain as I was, they were correct, with named having rw.

    CentOS 6 (final), dhcpd 4.1.1-P1, named BIND 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6. Basic DHCP and DNS functionality are in place on 192.168.111.2. Clients are assigned addresses as intended and can resolve local DNS names as well as Internet names. My problem is that named's zone journal files are not created. The chroot is /var/named/chroot. I tried placing the zone files in various directories (/var/named/data, /var/named, /var/named/dynamic - no matter which dir, with named owning and wide-open perms, I now get nowhere). Along the way I, at one point, got a permission denied when named tried to create the journal. Resolved the issue by:

        chown --recursive named:named /var/named
        chmod --recursive 777 /var/named

    The journal was then created, and here's where things fell apart. I attempted to tame permissions to something more sane and broke it. Once changed, and having restarted named, it threw an error indicating the journal was out of sync (or something to that effect)... didn't matter, since this is a new setup, so I deleted it, and now it is not recreated. Now, though, I see no errors in /var/log/messages, my chrooted /var/log/named.log, or chrooted /var/log/named.debug. I increased the debug level with 'rndc trace' - no love. Increased trace to 10, still nothing. SELinux is disabled:

        [root@server temp]# sestatus
        SELinux status: disabled

    dhcpd.conf:

        allow client-updates;
        ddns-update-style interim;

        subnet 192.168.111.0 netmask 255.255.255.224 {
            ...
            key dhcpudpate {
                algorithm hmac-md5;
                secret LDJMdPdEZED+/nN/AGO9ZA==;
            }
            zone example.lan. {
                primary 192.168.111.2;
                key dhcpudpate;
            }
        }

    named.conf:

        key dhcpudpate {
            algorithm hmac-md5;
            secret "LDJMdPdEZED+/nN/AGO9ZA==";
        };

        zone "example.lan" {
            type master;
            file "/var/named/dynamic/example.lan.db";
            allow-transfer { none; };
            allow-update { key dhcpudpate; };
            notify false;
            check-names ignore;
        };

    The following shows /var/log/named.log output of named starting up - no errors:

        27-Jul-2012 21:33:39.349 general: info: zone 111.168.192.in-addr.arpa/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.349 general: info: zone example.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example2.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example3.lan/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.350 general: info: zone example4.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: zone example5.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: managed-keys-zone ./IN/internal: loaded serial 0
        27-Jul-2012 21:33:39.351 general: info: zone example.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example1.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example2.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example3.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.353 general: info: managed-keys-zone ./IN/external: loaded serial 0
        27-Jul-2012 21:33:39.353 general: notice: running
        27-Jul-2012 21:34:03.825 general: info: received control channel command 'trace 10'
        27-Jul-2012 21:34:03.825 general: info: debug level is now 10

    ...and /var/log/messages for a named start:

        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: BIND 9 is maintained by Internet Systems Consortium,
        Jul 27 23:02:04 server named[9124]: Inc. (ISC), a non-profit 501(c)(3) public-benefit
        Jul 27 23:02:04 server named[9124]: corporation. Support and training for BIND 9 are
        Jul 27 23:02:04 server named[9124]: available at https://www.isc.org/support
        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: adjusted limit on open files from 4096 to 1048576
        Jul 27 23:02:04 server named[9124]: found 2 CPUs, using 2 worker threads
        Jul 27 23:02:04 server named[9124]: using up to 4096 sockets
        Jul 27 23:02:04 server named[9124]: loading configuration from '/etc/named.conf'
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv4 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv6 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: listening on IPv4 interface eth0, 192.168.111.2#53
        Jul 27 23:02:04 server named[9124]: generating session key for dynamic DNS
        Jul 27 23:02:04 server named[9124]: sizing zone task pool based on 12 zones
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view internal, file 'dynamic/3bed2cb3a3acf7b6a8ef408420cc682d5520e26976d354254f528c965612054f.mkeys'
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view external, file 'dynamic/3c4623849a49a53911c4a3e48d8cead8a1858960bccdea7a1b978d73ec2f06d7.mkeys'
        Jul 27 23:02:04 server named[9124]: command channel listening on 127.0.0.1#953

    What can I do to troubleshoot this further? It almost seems as though dhcpd is not triggering the update. Maybe I should troubleshoot there and, if so, how? Many thanks.

    Read the article

  • Good SMTP server on Windows for a production server

    - by Waleed Eissa
    I'm going to have my website hosted soon on a VPS or dedicated server (with Windows 2008), so I'm trying to plan ahead. I wonder whether the built-in SMTP server that comes with IIS7 is reliable enough for a production server, or should I look for an alternative? I've heard good things about hMailServer, and best of all it's free. Do you have any experience with using the built-in SMTP server on a high-traffic website? Thanks a lot for any suggestions.

    Read the article

  • What SQL Server isolation level should I choose to prevent concurrent reads?

    - by Brian Bolton
    I have the following transaction:

      1. SQL inserts 1 new record into a table called tbl_document
      2. SQL deletes all records matching a criterion in another table called tbl_attachment
      3. SQL inserts multiple records into tbl_attachment

    Until this transaction finishes, I don't want other users to be aware of the (1) new records in tbl_document, (2) deleted records in tbl_attachment, and (3) modified records in tbl_attachment. Would Read Committed isolation be the correct isolation level?
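
    For reference, a hedged sketch of pinning the isolation level from .NET with System.Transactions; under Read Committed, readers on other connections will not see the uncommitted inserts and deletes. The method name and the SQL placeholders are illustrative:

        using System.Transactions;

        static void SaveDocumentWithAttachments()
        {
            var options = new TransactionOptions
            {
                IsolationLevel = IsolationLevel.ReadCommitted
            };
            using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
            {
                // 1. INSERT INTO tbl_document ...
                // 2. DELETE FROM tbl_attachment WHERE ...
                // 3. INSERT INTO tbl_attachment ...
                scope.Complete();   // commit; other users see the changes only after this
            }
        }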

    Read the article

  • How to make Emacs sql-mode recognize MySQL #-style comments?

    - by Ken
    I'm reading a bunch of MySQL files that use # (to end-of-line) comments, but my sql-mode doesn't support them. I found the syntax-table part of sql.el that defines /**/ and -- comments, but according to this, Emacs syntax tables support only 2 comment styles. Is there a way to add support for # comments in sql.el easily?

    Read the article

  • Uploading a CSV file to SQL Server - identity problem

    - by Doozer1979
    Given a column structure in a CSV file of:

        First_Name, Last_Name, Date_Of_Birth

    and a SQL Server table with a structure of:

        ID (PK) | First_Name | Last_Name | Date_Of_Birth

    (field ID is an identity with an auto-increment of 1), how do I arrange it so that SQL Server does not attempt to insert the First_Name column from the CSV file into the ID field? For info, the CSV is loaded into a DataTable and then copied to SQL Server using SqlBulkCopy. Should I be modifying the CSV file before the import to add the ID column? (The destination table is truncated prior to import, so there is no need to worry about duplicate key values.) Or perhaps adding an ID column to the DataTable? Or is there a setting in SQL Server that I may have missed?
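
    A hedged sketch of a third option: leave both the CSV and the table alone and give SqlBulkCopy explicit column mappings, so the identity column is never a target. The table and connection names below are illustrative:

        using System.Data;
        using System.Data.SqlClient;

        static void BulkLoad(DataTable csvTable, string connectionString)
        {
            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "dbo.People";   // hypothetical table name
                // Map only the three CSV columns; ID is omitted, so SQL Server
                // assigns the identity values itself.
                bulk.ColumnMappings.Add("First_Name", "First_Name");
                bulk.ColumnMappings.Add("Last_Name", "Last_Name");
                bulk.ColumnMappings.Add("Date_Of_Birth", "Date_Of_Birth");
                bulk.WriteToServer(csvTable);
            }
        }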

    Read the article

  • What can map database tables the way LINQ to SQL did?

    - by trnTash
    A good thing about LINQ to SQL was that it offered a fast and reliable way to map database tables and convert them into classes accessible from a C# project. However, it is no longer recommended to create projects using LINQ to SQL. What is its substitute? What kind of tool should I use in VS 2010 today if I want the same functionality I had with LINQ to SQL?
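
    For context, a minimal hedged sketch of how the usual VS 2010-era answer looks in code: the Entity Framework 4 designer generates an ObjectContext much as LINQ to SQL generated a DataContext, and queries read the same way. "MyEntities", the Customers set, and the City/Name properties are assumed names from a generated model:

        using System.Linq;

        class Program
        {
            static void Main()
            {
                // MyEntities is the designer-generated ObjectContext (assumption).
                using (var context = new MyEntities())
                {
                    var londonCustomers = from c in context.Customers
                                          where c.City == "London"
                                          select c;
                    foreach (var customer in londonCustomers)
                        System.Console.WriteLine(customer.Name);
                }
            }
        }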

    Read the article

  • Microsoft SQL Server 2008 Web Edition: is it suitable for "closed" websites?

    - by micha12
    Can Microsoft SQL Server 2008 Web Edition be used for "closed" websites, which are hosted on the Internet but require users to log in? We are developing a web application for banks: a website for clients of the bank that allows clients to log in and view information on their personal banking accounts, stock portfolios, etc. Can this web app use SQL Server 2008 Web Edition? Here is information on this edition of SQL Server: http://www.microsoft.com/sqlserver/2008/en/us/web.aspx It is said on this page that Web Edition can be used only on "public and Internet accessible ... Web applications". Technically, the web app we are developing is public and Internet accessible - although it requires authentication. Would using Web Edition in our web app violate the SQL Server license terms? Thank you.

    Read the article

  • Best Ruby ORM for Wrapping around Legacy SQL Server Database?

    - by Technocrat
    Hi. I found this answer and it sounds like almost exactly what I'm doing. I have heard mixed answers about whether or not DataMapper can support SQL Server through DataObjects. Basically, we have an app that uses a consistently structured database - consistently named tables, etc. - in SQL Server. We're making all kinds of tools and stuff that have to interact with it, some of them remotely, so I decided that we need to create some common, simple access point to do read/write operations on the SQL Server app, since its API is all C# and other things I despise. Now my question is whether anyone has any examples or projects they know of where a Ruby ORM can essentially create models for another application's legacy database by defining the conventions of each model's primary keys, foreign keys, table names, etc. Sequel is the only ORM I've used with SQL Server, but never to do anything quite like this. Any suggestions?

    Read the article
