Search Results


  • Transferring an SQL Processor License to a virtual hosted environment

    - by Andrew Shepherd
    My company is currently hosting a service in-house, and we want to move to an externally hosted environment, where we would be using a virtual server. I understand that this might be spread across multiple machines, but from my perspective as a customer, this layer is abstracted away; I shouldn't know or care about the hardware the OS is hosted on. We have a licensed edition of SQL Server 2008, with one Processor license. Would it be a violation of the licensing agreement to use this in a virtual environment? The reference guide says: "When licensed Per Processor: With Workgroup, Web, and Standard editions, for each server to which you have assigned the required number of per processor licenses, you may run, at any one time, any number of instances of the server software in physical and virtual operating system environments on the licensed server. However, the total number of physical and virtual processors used by those operating system environments cannot exceed the number of software licenses assigned to that server." For Enterprise edition there is an added option: if all physical processors in a machine have been licensed, then you may run unlimited instances of SQL Server 2008 in one physical and an unlimited number of virtual operating environments on that same machine. I'm having trouble getting my head around this. Would I theoretically have to get a license for every processor in this virtual environment (which is effectively impossible, because I have no way of knowing how many processors there actually are)? Or can I just say that it's hosted on one "virtual" server, so that's OK?

    Read the article

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Hi, recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts. I suspect that it is in fact only 32-bit SQL; however, I'd like to verify this. Here are some details: the OS is definitely 64-bit; xp_msver shows Platform as NT INTEL X86; SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)... However, sqlservr.exe is not shown with '*32' in Task Manager; does anyone know why that is, if it is in fact 32-bit as claimed? Despite this, it does seem to be running out of the x86 Program Files folder. If I do the same checks on a confirmed 64-bit installation, I get back the expected 64-bit readings, which strongly suggests that the server in question really is running 32-bit SQL. That being the case, the question arises of how much memory this 32-bit install can use. Task Manager reports about 3.5GB memory usage for sqlservr.exe (the server has 16GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64-bit) if SQL is simply using a 32-bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64-bit in order to fully utilise the hardware platform, but it is currently heavily in production, so this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague; I'm no SQL expert, just trying to get a handle on what's going on here.
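
    A quick way to cross-check the bitness from T-SQL itself, as a minimal sketch (on 64-bit installations the Edition property ends in "(64-bit)", and @@VERSION reports X64 rather than Intel X86):

        -- Edition (including bitness), product version, and full version string
        SELECT SERVERPROPERTY('Edition')        AS Edition,
               SERVERPROPERTY('ProductVersion') AS ProductVersion,
               @@VERSION                        AS VersionString;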

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v9.7.1, 64-bit. During certain special workloads (e.g. parallel REORGs/RUNSTATS), I've seen the server transporting 450MB/s at 25,000 IO/s (yes, there is probably some storage-system caching happening here) while all CPU cores were happily working in an even mix of usermode/wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single, rather complex query with at least one large table scan. db2's "list applications" reports that the query is Executing (not locked). IO: at most 10MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables the query reads from have been altered to have LOCKSIZE=TABLE, so I would think that lock-list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight into such a case?
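
    A few standard places to dig, as a sketch (DB2 9.7 tooling; the database name and agent ID below are placeholders):

        # Which engine dispatchable units are burning CPU vs. waiting
        db2pd -edus

        # Table-level access counters (scans, rows read) for the tables in the query
        db2pd -db MYDB -tcbstats

        # Per-application snapshot: buffer pool activity, sort overflows, wait details
        db2 "get snapshot for application agentid 123"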

    Read the article

  • Error when mounting the database in Exchange 2010 SP1

    - by user64060
    Hi, my company has two Exchange 2010 SP1 servers in a DAG configuration, running Windows Server 2008 R2, in a testing environment. Today I wanted to test my backup, so I restored the backup data to another location, not the original one. I dismounted the database and deleted all the files under the database location; lastly, I copied the files back from the backup location to the database location. When I try to mount the database, the error below comes up:

        Microsoft Exchange Error
        --------------------------------------------------------
        Failed to mount database 'mail2'.

        mail2
        Failed
        Error: Couldn't mount the database that you specified. Specified database: mail2; Error code: An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn].

        An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn]

        An Active Manager operation failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Server: mail2.e0594.cn]

        MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011)

    Any suggestions? Thanks!
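
    A minimal sketch of the checks typically involved here: a restored copy that was not cleanly detached is usually in a Dirty Shutdown state, and the database may also need the "can be overwritten by a restore" flag before it will mount (paths and the E00 log prefix below are assumptions for this environment):

        REM Check the state of the restored copy: look for "State: Clean Shutdown"
        eseutil /mh D:\Databases\mail2\mail2.edb

        REM If it reports Dirty Shutdown, replay the logs (soft recovery)
        eseutil /R E00 /l D:\Databases\mail2 /d D:\Databases\mail2

    Then, from the Exchange Management Shell:

        Set-MailboxDatabase mail2 -AllowFileRestore $true
        Mount-Database mail2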

    Read the article

  • Mirroring svn repository

    - by cardy
    I have an SVN repository and I'd like to have it duplicated over multiple machines for availability purposes. Right now, when my VPS goes down I'm unable to connect to the repository, and this is very annoying. The easiest (and most expensive) solution is to set up two identical machines and make them work like clones. I'd like to know if there are any alternatives (involving two machines). Ideally I would have two VPSes in different datacenters, so if one goes down I can rely on the other. Thanks. Note: I need a mirror for both read and write, not only for read, and the SVN repositories are Berkeley DB-based.
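
    For the read side, svnsync (bundled with Subversion) can keep a second repository in step; note that the replica must be treated as read-only, so commits still go to the master. A minimal sketch (host names and paths are placeholders):

        # On the mirror host: create an empty repository and allow svnsync
        # to manage revision properties (the hook just has to exit 0)
        svnadmin create /srv/svn-mirror
        echo '#!/bin/sh' > /srv/svn-mirror/hooks/pre-revprop-change
        chmod +x /srv/svn-mirror/hooks/pre-revprop-change

        # Initialise against the master, then pull everything across
        svnsync init file:///srv/svn-mirror http://master.example.com/svn/repo
        svnsync sync file:///srv/svn-mirror   # re-run from cron or a post-commit hook

    A true multi-master (read/write on both ends) setup isn't something stock Subversion provides, which is worth knowing before comparing products; converting the Berkeley DB repositories to FSFS is also commonly recommended before any replication work.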

    Read the article

  • SQL Server Unattended Install through SSH

    - by Samuel
    I'm trying to install SQL Server from the command line through Cygwin OpenSSH. The install works when I log onto the server as Administrator and execute the script through a Cygwin shell, but it doesn't work when I SSH into the machine using Administrator's credentials and run the exact same command. I've already verified that the sshd process is running as Administrator, and I've verified that the install script is indeed starting under Administrator. Is there something different about the terminal in SSH vs. the Cygwin terminal on the machine that would cause this problem? Specifically, what's failing: the SQL Server install runs for a while, then hangs with MSI error 1622, "Error opening installation log file. Verify that the specified log file location exists and is writable." If I run both installs, I've noticed that they have different authentication IDs in ProcMon, but they have the exact same command-line parameters. There has to be something in SSH that is causing permissions issues... Any ideas?
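
    One thing worth checking, since MSI error 1622 is specifically about the installer's log location: a non-interactive SSH session can inherit a different (or nonexistent) %TEMP% than an interactive logon, and Windows Installer derives its log path from it. A minimal sketch of a workaround to try from the SSH session (paths are assumptions; the final line stands in for your existing install command):

        # Point TEMP/TMP at a location that exists and is writable
        mkdir -p /cygdrive/c/temp
        export TEMP='C:\temp'
        export TMP='C:\temp'

        # Then launch the same unattended setup command as before
        cmd /c 'C:\installers\SQLSetup\setup.exe ...'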

    Read the article

  • Port mirroring on multiple switches

    - by Matt
    So here is the deal: I have a server on switch A, where port 3 is monitoring traffic for most of the ports on switch A. However, I have other users on switch B that port 3 on switch A needs to monitor as well. Is this possible? I have been reading about RSPAN but it doesn't seem to work. Switch A:

        monitor session 1 source interface fast0/1 - 2
        monitor session 1 source interface fast0/4 - 46
        monitor session 1 destination interface fast0/3

    (This works great for switch A; I need a solution to get switch B to also have some ports sent to port 3 on switch A for monitoring.) All the traffic on switch A is fine; there will be about 10-15 ports on switch B that I need to send to fa0/3 on switch A as the destination. The switches are connected with an Ethernet cable, with a trunk port on port 48 of both switch B and switch A, and port 47 on A connects to our SonicWall. So I am assuming they are daisy-chained? What if I did the following on switch A:

        monitor session 1 source interface fast0/1 - 2
        monitor session 1 source interface fast0/4 - 46
        monitor session 1 destination interface fast0/3

    and put all of the ports on VLAN 10, because I made an RSPAN VLAN 10. On switch B, monitor the ports I need, say 1-10:

        monitor session 1 source interface fast0/1 - 10
        monitor session 1 destination remote vlan 10

    As a prerequisite I would have created VLAN 10 as an RSPAN VLAN on switch B. Then on switch A:

        monitor session 1 destination remote vlan 10

    Would this work? By the way, I am working with Cisco Catalyst 3560 switches.
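
    For reference, a minimal RSPAN sketch for this topology (the VLAN number and port ranges are assumptions, and the trunk between the switches must carry the RSPAN VLAN):

        ! On both switches: define the RSPAN VLAN
        vlan 10
         remote-span

        ! Switch B: reflect the monitored ports into the RSPAN VLAN
        monitor session 1 source interface fastethernet0/1 - 10
        monitor session 1 destination remote vlan 10

        ! Switch A: pick the RSPAN VLAN up and deliver it to the sniffer port
        monitor session 2 source remote vlan 10
        monitor session 2 destination interface fastethernet0/3

    One caveat: a SPAN destination port belongs to exactly one session, so fa0/3 can't serve the existing local session and the RSPAN session at the same time. A workaround that fits the 3560's two-session limit is to send switch A's local sources into the RSPAN VLAN as well (destination remote vlan 10), then use a single session (source remote vlan 10, destination fa0/3) to deliver everything to the server.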

    Read the article

  • Possible attack on my SQL server?

    - by erizias
    Checking my SQL Server log, I see several entries like this:

        Date: 08-11-2011 11:40:42  Source: Logon
        Message: Login failed for user 'sa'. Reason: Password did not match for the login provided. [CLIENT: 56.60.156.50]

        Date: 08-11-2011 11:40:42  Source: Logon
        Message: Error: 18456. Severity: 14. State: 8.

        Date: 08-11-2011 11:40:41  Source: Logon
        Message: Login failed for user 'sa'. Reason: Password did not match for the login provided. [CLIENT: 56.60.156.50]

        Date: 08-11-2011 11:40:41  Source: Logon
        Message: Error: 18456. Severity: 14. State: 8.

    And so on. Is this an attack on my SQL Server? I looked up the IP address at ip-lookup.net, which placed it in China. What should I do? Block the IP address in the firewall? Delete the sa user? And how do I best protect my web server? :) Thanks in advance!
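
    A minimal hardening sketch for the sa brute-force part (standard T-SQL; renaming is optional, but it removes the well-known target name; the new name below is a placeholder):

        -- Disable the sa login rather than deleting it
        ALTER LOGIN sa DISABLE;

        -- Optionally rename it so automated attacks miss the target entirely
        ALTER LOGIN sa WITH NAME = [srv_admin_disabled];

    Beyond that, blocking port 1433 at the firewall for all but known hosts stops these attempts before they ever reach the server.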

    Read the article

  • Realtime file-level mirroring from local NTFS to network drive

    - by hurfdurf
    We have some data collection machines running WinXP. After a new file is written, we would like to immediately copy the new file to network storage (a NetApp CIFS share) automagically. We need realtime or near realtime copies generated (copy upon filehandle close would be fine -- these are not long-running system logs). Two commercial applications I've found so far are MirrorFile and IBM's Tivoli CDP. Are there any reliable open source programs or simple ways to get Shadow Copy to do something similar? Bonus points if it runs as a service.
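
    Not open source, but a low-effort baseline to test against: robocopy's monitoring mode re-runs a mirror pass whenever it detects changes (the XP010 build of robocopy ships in the Windows Server 2003 Resource Kit Tools and runs on XP; the share path below is a placeholder):

        robocopy C:\CollectedData \\netapp01\share /MIR /MON:1 /MOT:1 /R:2 /W:5

    This is near-real-time (it batches by change count and by minutes) rather than strictly copy-on-close, so whether it qualifies depends on how tight the latency requirement is; it can also be wrapped as a service (e.g. with srvany) for the bonus points.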

    Read the article

  • DBCC CHECKDB fails and quits job, ambiguous error message.

    - by ddono25
    I received a notice that one of our servers' DBCC CHECKDB for all databases has been failing the past four times it has been run. We don't have any data prior to that, but it doesn't look like it has been succeeding for a while. There are no errors in the log file, only:

        DBCC results for 'sys.sysxmlfacet'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server. Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 112 rows in 1 pages for object "sys.sysxmlfacet". [SQLSTATE 01000]

    I ran a DBCC CHECKDB using sp_MSForEachDB to get more accurate results and got the same error on the same DB, but at a different point:

        DBCC results for 'NameValuePair_Greek_CI_AS'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server. Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 0 rows in 0 pages for object "NameValuePair_Greek_CI_AS". [SQLSTATE 01000]

    Also, the error log states that the DBCC completed without errors for this database. I can't figure out how to track down this ambiguous issue, which only happens on this one database out of the dozens on this server. Any help is appreciated!
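
    A diagnostic sketch worth trying against just that database (the database name below is a placeholder):

        -- Full error output for the one suspect database, without the noise
        DBCC CHECKDB (N'ProblemDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;

        -- Any page-level corruption SQL Server has already recorded
        SELECT * FROM msdb.dbo.suspect_pages;

    Running the check directly from a SQLCMD session also rules out the job's own connection being the thing that drops mid-run.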

    Read the article

  • Distributed filesystem for automated offline data mirroring

    - by Petr Pudlák
    I'd like to achieve the following setup: every time I connect my laptop to a local network, my partition gets automatically mirrored to a partition on my local server. I only want to mirror what has changed since the last time. (I understand that this is not a proper backup solution, since there is no history of the changes; it'd be more like a non-persistent network RAID.) Is there a distributed file system that allows such a setup? I've done some searching, and it seems to me that most distributed file systems are focused on data availability and distribution, not on this kind of duplication. I'd be thankful for suggestions. Edit: Sorry, I forgot to mention: I'm using Linux.
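
    Since the machine runs Linux and the requirement is one-way sync on connect, a plain rsync triggered on network-up may be all that's needed rather than a distributed file system. A minimal sketch as a NetworkManager dispatcher script (interface name, user, server, and paths are assumptions):

        #!/bin/sh
        # /etc/NetworkManager/dispatcher.d/50-mirror  (make executable)
        # NetworkManager passes the interface as $1 and the event as $2.
        IFACE="$1"
        ACTION="$2"
        if [ "$ACTION" = "up" ] && [ "$IFACE" = "eth0" ]; then
            # -a preserves metadata; --delete makes the mirror match exactly;
            # only changed files are transferred.
            rsync -a --delete /home/petr/ server.local:/srv/mirror/petr/ &
        fi

    This transfers only deltas, like the non-persistent mirror described, though it is not a file system, so there is no consistency guarantee for files being written at sync time.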

    Read the article

  • Efficient mirroring of directories using hardlinks

    - by zoqaeski
    I'm backing up my music collection on to a number of NTFS-formatted external hard-drives; however, as I store my main collection in FLAC and have my library on my laptop as MP3s to save space, I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I backup my laptop's library to the "mp3s" directory, I want to only copy across MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, discnumber if applicable, etc, and I use a tagging editor to ensure consistency across all these locations. I'm also using a Linux computer, but keeping the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows. At the moment, I use the following command to perform the backups, but this is time-consuming due to the expensive nature of finding hard links. rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?
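
    One variation that may shave time off, as a sketch (same intent as the command above; paths are assumptions): feeding rsync the whole tree instead of the shell glob **/*.mp3 avoids expanding thousands of arguments and lets rsync walk the hierarchy itself, and an absolute --link-dest removes any ambiguity about what the relative path resolves against:

        cd /media/laptop/mp3s
        rsync -au --progress --ignore-existing \
              --link-dest=/media/ntfspocket/music/ \
              ./ /media/ntfspocket/mp3s/

    With -a in effect, --link-dest hard-links files whose size and modification time match, so keeping tags and timestamps consistent across the trees, as described, is what keeps the check cheap.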

    Read the article

  • Database implementation question?

    - by gundam
    Consider a disk with a sector size of 512 bytes, 2000 tracks/surface, 50 sectors/track, 5 double-sided platters, and an average seek time of 10 msec. Assume a block size of 1024 bytes is selected, and that a file containing 100,000 records of 100 bytes each is to be stored on the disk, where NONE of the records can span 2 blocks. How many blocks are needed to store the entire file? If the file is arranged sequentially on disk, how many surfaces are required? Now, I have calculated that 10,000 blocks are needed to store 100,000 records, but I am not sure how to find the number of surfaces required. I have only calculated that the capacity of a track is 25KB and the capacity of a surface is 50,000KB, but I don't know how to calculate the number of surfaces... Could anyone help me get the answer? Thanks a lot!!
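
    A worked sketch of the arithmetic (the second part hinges on what "sequentially" is taken to mean):

        block = 1024 B = 2 sectors; records per block = floor(1024/100) = 10
        blocks needed = 100,000 / 10 = 10,000   (matches the figure above)
        blocks per track = 50 sectors / 2 = 25
        tracks needed = 10,000 / 25 = 400

    If "sequential" means filling one surface track by track, 400 tracks fit easily on a single surface (2000 tracks/surface), so 1 surface suffices. If it means cylinder order (the usual textbook convention, since it avoids seeks), each cylinder holds 10 surfaces x 25 blocks = 250 blocks, so the file occupies 10,000 / 250 = 40 cylinders and touches all 10 surfaces.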

    Read the article

  • Installing grub2 on ubuntu with software raid mirroring

    - by Marko
    Hi guys, can someone help me out with this? I accidentally installed GRUB on a USB flash drive during Ubuntu Server installation, and now I can't boot the system without that drive attached to the server. I want to install GRUB on the hard drive with grub-install, but I don't know what to set as the location for the boot loader. My fstab looks like this:

        # file system               mount point  type  options              dump  pass
        proc                        /proc        proc  nodev,noexec,nosuid  0     0
        /dev/mapper/pdc_jdbeghhjg1  /            ext4  errors=remount-ro    0     1
        /dev/mapper/pdc_jdbeghhjg5  none         swap  sw                   0     0

    and the partition tables for the hard drives look like this:

        Device     Boot  Start       End         Blocks     Id  System
        /dev/sda1        2048        1215662079  607830016  83  Linux
        /dev/sda2        1215664126  1249998847  17167361    5  Extended
        /dev/sda5        1215664128  1249998847  17167360   82  Linux swap / Solaris

        Device     Boot  Start       End         Blocks     Id  System
        /dev/sdb1        1           75672       607830016  83  Linux
        /dev/sdb2        75672       77809       17167361    5  Extended
        /dev/sdb5        75672       77809       17167360   82  Linux swap / Solaris
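
    A minimal sketch of what usually works for a mirrored pair: install to the MBR of both disks so either one can boot. Note that the /dev/mapper/pdc_* devices above suggest a dmraid/fakeraid set, in which case grub-install may need to target the mapper device (/dev/mapper/pdc_jdbeghhjg) instead of the raw disks:

        sudo grub-install /dev/sda
        sudo grub-install /dev/sdb
        sudo update-grub        # regenerates /boot/grub/grub.cfg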

    Read the article

  • Mirroring a drive in Debian

    - by James Willson
    I have a drive with data on it. I don't want to use RAID; instead, I want to do hourly backups to a second drive. I basically want to mirror the data drive and resync every hour. It is inefficient to re-copy all the data each time, so really I only want to move across what changes; i.e. if I add a new file to the data drive, only that file will be moved across. What tools are there for doing this on the command line? I used to use luckyBackup on Ubuntu, but now I'm on command-line Debian.
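
    A minimal sketch with rsync (packaged in Debian; the mount points are assumptions):

        # Copy only what changed; --delete keeps the mirror exact by removing
        # files that no longer exist on the source
        rsync -a --delete /mnt/data/ /mnt/mirror/

        # Hourly resync via cron (add with: crontab -e)
        0 * * * * rsync -a --delete /mnt/data/ /mnt/mirror/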

    Read the article

  • Using SQLXML Bulk Load in .NET Environment - Error with One to Many relationship on Complex Type

    - by user331111
    Hi, I get an error when importing an XML file using SQLXML Bulk Load, and I'm wondering if anyone could help. Error: "Data mapping to column 'Attribute' was already found in the data. Make sure that no two schema definitions map to the same column." Full files and details can be found here: http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SQL-Server-2005/Q_26102239.html Excerpt from the XSD:

        <sql:relationship name="EnvironmentDECAttributes" parent="Environment" parent-key="intEnvironmentID" child="DECAttributes" child-key="intEnvironmentID"/>
        <complexType name="Environment">
          <sequence>
            <element name="ESANumber" minOccurs="0">
              <annotation>
                <documentation>Environmentally Sensitive Area Number</documentation>
              </annotation>
              <simpleType>
                <restriction base="string">
                  <maxLength value="15"/>
                  <whiteSpace value="collapse"/>
                </restriction>
              </simpleType>
            </element>
            <element name="Conditions" minOccurs="0" sql:relation="Conditions" sql:relationship="EnvironmentConditions">
              <complexType>
                <sequence>
                  <element name="Condition" type="vms:EnvironmentalConditions" minOccurs="0" maxOccurs="5"/>
                </sequence>
              </complexType>
            </element>
            <element name="DECDistrict" minOccurs="0">
              <annotation>
                <documentation>Department of Environment &amp; Conservation District</documentation>
              </annotation>
              <simpleType>
                <restriction base="string">
                  <maxLength value="31"/>
                  <whiteSpace value="collapse"/>
                </restriction>
              </simpleType>
            </element>
            <element name="DECAttributes" minOccurs="0" maxOccurs="1" sql:relation="DECAttributes" sql:relationship="EnvironmentDECAttributes">
              <complexType>
                <sequence>
                  <element name="Attribute" type="vms:DECAttributes" minOccurs="0" maxOccurs="unbounded" sql:field="Attribute">
                    <annotation>
                      <documentation>Department of Environment &amp; Conservation attributes.</documentation>
                    </annotation>
                  </element>
                </sequence>
              </complexType>
            </element>
          </sequence>
        </complexType>

    Excerpt from the XML:

        <Environment>
          <DECAttributes>
            <Attribute>WA</Attribute>
            <Attribute>SA</Attribute>
          </DECAttributes>
        </Environment>

    Any help/comments would be appreciated. Thanks, C

    Read the article

  • SQL CLR Assembly Error 80131051 when late binding to a registered C# COM .dll

    - by Shanubus
    I must have hit an unusual one, because I can't find any reference to this specific failure anywhere...

    Scenario: I have a legacy SQL function used to transform (encrypt) data. This function is called from within many stored procedures used by multiple applications. I say this because the obvious answer of 'just call it from your code' is not really an option (or at least one I'd prefer not to explore). The legacy function used sp_OA calls with an ActiveX DLL on SQL 2000 to perform its work. The new function is targeted at SQL 2008 x64. I am ditching the sp_OA calls in favor of a CLR assembly, and getting rid of the ActiveX DLL in favor of a COM+ DLL (3rd party) that performs the same work. This 3rd-party COM+ component is required by the spec given to me, so I can't get rid of that piece either.

    Problem: After multiple attempts at getting this to work, I have eliminated the following approaches:

    1) Create a SQL assembly to call the local COM+ directly -- Can't do this, as it requires a reference to System.EnterpriseServices. Including this requires that a whole slew of unsupported assemblies be registered, which I don't want. The COM+ requires its methods to be accessed via an interface, so my attempts at late binding to it directly have not been successful (late binding would allow me to drop the unsupported references).

    2) Create a SQL assembly which references a C# class library that then calls the COM+ -- Same issue as #1; since the referenced DLL uses System.EnterpriseServices and will be added as a dependency when referenced in the SQL assembly, it again tries to load all the unsupported libraries.

    3) Create a SQL assembly which late binds to an ActiveX COM DLL that calls the COM+ -- Worked in my dev environment, but I can't go to x64 in production with ActiveX DLLs written in VB6 (not to mention I hate backtracking anyway)... again, failure.

    I am now onto an approach that is almost working, with of course one last hangup. I now have a SQL assembly that late binds to a C# COM DLL, eliminating the need to include System.EnterpriseServices and eliminating the need to reference the C# COM in the SQL assembly itself. The C# COM does reference System.EnterpriseServices to call the COM+, but since I am late binding to it from the SQL assembly, I bypass the need for SQL to actually load them as referenced assemblies. It works in the debugger; it works on my dev box when the SqlAssembly DLL is referenced in a test console app and called directly; and it installs to SQL 2008 just fine. Executing the actual UDF works but returns no data, due to a failure reported from the late-bound DLL! So the SqlAssembly is instantiated just fine. It actually fails on its late binding to the C# COM, which works from a test console app on the same machine. It appears to be a difference in behavior based on whether it is called from within the SQL UDF or not. Since it works on the same box from my console app, I am assuming it's on the SQL side. My steps to install were:

    - Install the COM+ DLL and ensure it can be called successfully (as from within the console app).
    - Register the C# COM DLL (which calls the COM+) and get it into the GAC (again proven to be working from the console app).
    - Create my asymmetric key:

        CREATE ASYMMETRIC KEY SqlCryptoKey FROM EXECUTABLE FILE = 'D:\SqlEx.dll'
        CREATE LOGIN SqlExLogin FROM ASYMMETRIC KEY SqlExKey
        GRANT UNSAFE ASSEMBLY TO SqlExLogin
        GO

    - Add the assembly:

        CREATE ASSEMBLY SqlEx FROM 'D:\SqlEx.dll'
        WITH PERMISSION_SET = UNSAFE;
        GO

    - Create the function:

        CREATE FUNCTION dbo.f_SqlEx( @clearText [nvarchar](512) )
        RETURNS nvarchar(512)
        WITH EXECUTE AS CALLER
        AS EXTERNAL NAME SqlEx.[SqlEx.SqlEx].Ex
        GO

    With all that done, I can now call my function:

        SELECT dbo.f_SqlEx('test')

    But I get this error in the event log... "Retrieving the COM class factory for component with CLSID {F69D6320-5884-323F-936A-7657946604BE} failed due to the following error: 80131051." I can't really provide direct code samples due to internal security implications, but all the code itself seems to work; I am suspecting perms or something of the like... I just find it odd that I can't find any reference to error 80131051. If someone out there believes some 'indirect' code samples would help, I will be happy to provide them. Any assistance is appreciated.
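
    A diagnostic sketch that may help narrow it down (standard DMVs; this only shows what the SQL-hosted CLR actually loaded when the UDF ran, since the hosted AppDomain resolves dependencies differently from a console app's default domain):

        -- Which assemblies the in-SQL CLR AppDomain has loaded
        SELECT a.name, al.load_time
        FROM sys.dm_clr_loaded_assemblies AS al
        JOIN sys.assemblies AS a
            ON al.assembly_id = a.assembly_id;

        -- State of the CLR-hosted AppDomains themselves
        SELECT appdomain_name, state FROM sys.dm_clr_appdomains;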

    Read the article

  • Using Excel VBA to Create SQL Tables

    - by user307655
    Hi all, I am trying to use Excel VBA to automate the creation of a SQL table in an existing SQL database. I have come across the following code on this site:

        Private Sub CreateDatabaseFromExcel()
            Dim dbConnectStr As String
            Dim Catalog As Object
            Dim cnt As ADODB.Connection
            Dim dbPath As String
            Dim tblName As String

            'Set database name in the Excel Sheet
            dbPath = ActiveSheet.Range("B1").Value   'Database Name
            tblName = ActiveSheet.Range("B2").Value  'Table Name

            dbConnectStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & dbPath & ";"

            'Create new database using name entered in Excel Cell ("B1")
            Set Catalog = CreateObject("ADOX.Catalog")
            Catalog.Create dbConnectStr
            Set Catalog = Nothing

            'Connect to database and insert a new table
            Set cnt = New ADODB.Connection
            With cnt
                .Open dbConnectStr
                .Execute "CREATE TABLE tblName ([BankName] text(50) WITH Compression, " & _
                    "[RTNumber] text(9) WITH Compression, " & _
                    "[AccountNumber] text(10) WITH Compression, " & _
                    "[Address] text(150) WITH Compression, " & _
                    "[City] text(50) WITH Compression, " & _
                    "[ProvinceState] text(2) WITH Compression, " & _
                    "[Postal] text(6) WITH Compression, " & _
                    "[AccountAmount] decimal(6))"
            End With
            Set cnt = Nothing
        End Sub

    However, I can't successfully get it to work. What I am trying to do is use Excel to create a table, not a database; the database already exists, and I would just like to create a new table. The name of the table will be referenced from cell A1 in Sheet 1. Can somebody please help? Thanks and regards, Gerard
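
    Since the target is an existing SQL Server database, the Jet/ADOX parts aren't needed; an ADO connection with a SQL Server provider can issue CREATE TABLE directly. A minimal sketch under those assumptions (server, database, security settings, and column list are placeholders, and it assumes the same ADO reference the code above already uses). Note also that "CREATE TABLE tblName ..." in the original creates a table literally named tblName; the variable has to be concatenated into the SQL string:

        Private Sub CreateSqlServerTable()
            Dim cnt As ADODB.Connection
            Dim tblName As String

            'Table name from Sheet1!A1
            tblName = Worksheets("Sheet1").Range("A1").Value

            Set cnt = New ADODB.Connection
            'SQL Server OLE DB provider; adjust server/database/credentials
            cnt.Open "Provider=SQLOLEDB;Data Source=MYSERVER;" & _
                     "Initial Catalog=MyDatabase;Integrated Security=SSPI;"

            'Concatenate the variable so its value is used, not the literal name
            cnt.Execute "CREATE TABLE [" & tblName & "] (" & _
                        "[BankName] varchar(50), " & _
                        "[RTNumber] varchar(9), " & _
                        "[AccountAmount] decimal(10,2))"

            cnt.Close
            Set cnt = Nothing
        End Sub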

    Read the article

  • Sql Server Replication: Snapshot vs Merge

    - by Zyphrax
    Background information: let's say I have two database servers, both SQL Server 2008. One is on my LAN (ServerLocal); the other is in a remote hosting environment (ServerRemote). I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application, and I would like to keep its data up to date with the data in the database on ServerLocal. ServerLocal is able to communicate with ServerRemote; this is one-way traffic. Communication from ServerRemote to ServerLocal isn't available. Current solution: I thought replication would be a nice fit, so I've made ServerLocal a publisher, and subscriptions are pushed to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote, the existing data is purged and the ServerRemote database is once again an exact replica of the database on ServerLocal. The problem: records that exist on ServerRemote but don't exist on ServerLocal are removed. This doesn't matter for most of my tables, but in some of my tables (aspnet_users, for instance) I'd like to keep the existing data, and update the records if necessary. What kind of replication fits my problem?
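
    One pointer worth noting: the purge behaviour is a per-article setting, so with transactional replication the snapshot agent's pre-creation command can be switched off for chosen tables. A minimal sketch (publication and table names are placeholders):

        EXEC sp_addarticle
            @publication      = N'MyPublication',
            @article          = N'aspnet_users',
            @source_object    = N'aspnet_users',
            @pre_creation_cmd = N'none';  -- leave existing subscriber rows in place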

    Read the article

  • Something wrong in my very first LINQ to SQL C# code

    - by user334813
        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;

        namespace Advanced_LinQ_Query
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                private DataClasses1DataContext database = new DataClasses1DataContext();

                private void Form1_Load(object sender, EventArgs e)
                {
                    database.Log = Console.Out;
                    comboBox.SelectedIndex = 0;
                }

                private void titleBindingNavigatorSaveItem_Click(object sender, EventArgs e)
                {
                    Validate();
                    titleBindingSource.EndEdit();
                    database.SubmitChanges();
                    comboBox.SelectedIndex = 0;
                }

                private void comboBox_SelectedIndexChanged(object sender, EventArgs e)
                {
                    switch (comboBox.SelectedIndex)
                    {
                        case 0:
                            titleBindingSource.DataSource =
                                from Title in database.Titles
                                orderby Title.BookTitle
                                select Title;
                            break;
                        case 1:
                            titleBindingSource.DataSource =
                                from Title in database.Titles
                                where Title.Copyright == "2008"
                                orderby Title.BookTitle
                                select Title;
                            break;
                        case 2:
                            titleBindingSource.DataSource =
                                from Title in database.Titles
                                where Title.BookTitle.EndsWith("How to Program")
                                orderby Title.BookTitle
                                select Title;
                            break;
                    }
                    titleBindingSource.MoveFirst();
                }
            }
        }

    After debugging, no connection seems to be built between the Title table in my database (book.mdf) and titleBindingSource! Where is the problem?

    Read the article

  • Using jQuery and OData to Insert a Database Record

    - by Stephen Walther
    In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData.

    Introduction to OData

    The Open Data Protocol (OData) was developed by Microsoft to be an open standard for communicating data across the Internet. Because the protocol is compatible with standards such as REST and JSON, it is particularly well suited for Ajax. OData has undergone several name changes: it was previously referred to as Astoria and ADO.NET Data Services. OData is used by SharePoint Server 2010, Azure Storage Services, Excel 2010, SQL Server 2008, and project code name "Dallas." Because OData is being adopted as the public interface of so many important Microsoft technologies, it is a good protocol to learn. You can learn more about OData by visiting the following websites: http://www.odata.org and http://msdn.microsoft.com/en-us/data/bb931106.aspx

    When using the .NET Framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table.

    Create the Database and Data Model

    The MoviesDB database is a simple database that contains a Movies table. You need to create a data model to represent the MoviesDB database. In this blog entry, I use the ADO.NET Entity Framework to create my data model. However, WCF Data Services and OData are not tied to any particular OR/M framework such as the ADO.NET Entity Framework. For details on creating the Entity Framework data model for the MoviesDB database, see the previous blog entry.

    Create a WCF Data Service

    You create a new WCF Data Service by selecting the menu option Project, Add New Item and selecting the WCF Data Service item template (see Figure 1). Name the new WCF Data Service MovieService.svc.

    Figure 1 – Adding a WCF Data Service

    Listing 1 contains the default code that you get when you create a new WCF Data Service. There are two things that you need to modify.

    Listing 1 – New WCF Data Service File

        using System;
        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;
        using System.ServiceModel.Web;
        using System.Web;

        namespace WebApplication1
        {
            public class MovieService : DataService< /* TODO: put your data source class name here */ >
            {
                // This method is called only once to initialize service-wide policies.
                public static void InitializeService(DataServiceConfiguration config)
                {
                    // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
                    // Examples:
                    // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
                    // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }
            }
        }

    First, you need to replace the comment /* TODO: put your data source class name here */ with a class that represents the data that you want to expose from the service. In our case, we need to replace the comment with a reference to the MoviesDBEntities class generated by the Entity Framework. Next, you need to configure the security for the WCF Data Service. By default, you cannot query or modify the movie data. We need to update the Entity Set Access Rule to enable us to insert a new database record. The updated MovieService.svc is contained in Listing 2:

    Listing 2 – MovieService.svc

        using System.Data.Services;
        using System.Data.Services.Common;

        namespace WebApplication1
        {
            public class MovieService : DataService<MoviesDBEntities>
            {
                public static void InitializeService(DataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("Movies", EntitySetRights.AllWrite);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }
            }
        }

    That's all we have to do. We can now insert a new Movie into the Movies database table by posting a new Movie to the following URL: /MovieService.svc/Movies. The request must be a POST request, and the Movie must be represented as JSON.

    Using jQuery with OData

    The HTML page in Listing 3 illustrates how you can use jQuery to insert a new Movie into the Movies database table using the OData protocol.

    Listing 3 – Default.htm

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>jQuery OData Insert</title>
            <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
            <script src="Scripts/json2.js" type="text/javascript"></script>
        </head>
        <body>
            <form>
                <label>Title:</label>
                <input id="title" />
                <br />
                <label>Director:</label>
                <input id="director" />
            </form>
            <button id="btnAdd">Add Movie</button>

            <script type="text/javascript">
                $("#btnAdd").click(function () {
                    // Convert the form into an object
                    var data = {
                        Title: $("#title").val(),
                        Director: $("#director").val()
                    };
                    // JSONify the data
                    var data = JSON.stringify(data);
                    // Post it
                    $.ajax({
                        type: "POST",
                        contentType: "application/json; charset=utf-8",
                        url: "MovieService.svc/Movies",
                        data: data,
                        dataType: "json",
                        success: insertCallback
                    });
                });

                function insertCallback(result) {
                    // unwrap result
                    var newMovie = result["d"];
                    // Show primary key
                    alert("Movie added with primary key " + newMovie.Id);
                }
            </script>
        </body>
        </html>

    jQuery does not include a JSON serializer. Therefore, we need to include the JSON2 library to serialize the new Movie that we wish to create. The Movie is serialized by calling the JSON.stringify() method: var data = JSON.stringify(data);. You can download the JSON2 library from the following website: http://www.json.org/js.html

    The jQuery ajax() method is called to insert the new Movie. Notice that both the contentType and dataType are set to use JSON. The jQuery ajax() method is used to perform a POST operation against the URL MovieService.svc/Movies. Because the POST payload contains a JSON representation of a new Movie, a new Movie is added to the database table of Movies. When the POST completes successfully, the insertCallback() method is called, and the new Movie is passed to this method. The method simply displays the primary key of the new Movie.

    Summary

    The OData protocol (and its enabling technology named WCF Data Services) works very nicely with Ajax. By creating a WCF Data Service, you can quickly expose your database data to an Ajax application by taking advantage of open standards such as REST, JSON, and OData. In the next blog entry, I want to take a closer look at how the OData protocol supports different methods of querying data.
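
    For reference, the wire-level request that the ajax() call produces looks roughly like this (a sketch; exact headers and the payload values will vary):

        POST /MovieService.svc/Movies HTTP/1.1
        Host: localhost
        Content-Type: application/json; charset=utf-8

        {"Title":"King Kong","Director":"Peter Jackson"}

    The response wraps the inserted entity, including its server-assigned primary key, in a "d" object, which is why insertCallback() unwraps result["d"] before reading newMovie.Id.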

    Read the article

  • Product Naming Conventions - Does it make sense

    - by NeilHambly
    Maybe it's just me, but with some of the MS products being released in 2010 carrying "2010" in their product names, has the SQL Server product suite ended up with a release name that doesn't make sense? Our latest SQL Server release, which is just about to ship, is "SQL Server 2008 R2". My question is: do you think this product name is good, bad, or just plain confusing? IMHO we would have been better placed if this had been named "SQL Server 2010"... (read more)

    Read the article
