Search Results

Search found 14708 results on 589 pages for 'sqlserver 2008'.

Page 322/589 | < Previous Page | 318 319 320 321 322 323 324 325 326 327 328 329  | Next Page >

  • Copying 6000 tables and data from sqlserver to oracle ==> fastest method?

    - by nazer555
    I need to copy the tables and data (about 5 years of data, 6200 tables) stored in SQL Server. I am using DataStage with an ODBC connection; DataStage automatically creates each table with its data, but it is taking 2-3 hours per table because the tables are very large (0.5 GB, 300+ columns and about 400k rows). How can I achieve this fastest? At this rate I am only able to copy 5 tables per day, but I need to move all 6000+ tables within 30 days.
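
    One commonly suggested alternative (a hedged sketch, not from the thread): bulk-export each table to a flat file with bcp and bulk-load it into Oracle with SQL*Loader, avoiding row-by-row ODBC inserts. Server, database, and path names below are placeholders.

        # Hedged sketch: export every base table in the database to a delimited flat file.
        $tables = sqlcmd -S "SQLSRV01" -d "SourceDb" -h -1 -W `
            -Q "SET NOCOUNT ON; SELECT TABLE_SCHEMA + '.' + TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'"
        foreach ($t in $tables) {
            # -c = character format, -T = trusted connection, -t = field terminator
            bcp "SourceDb.$t" out "D:\export\$t.dat" -S "SQLSRV01" -T -c -t "|"
        }
        # Each .dat file can then be loaded into Oracle with SQL*Loader (sqlldr)
        # using a per-table control file, typically with DIRECT=TRUE for speed.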

    Read the article

  • Which command would replace IDENTITY INSERT ON/OFF from SQLServer in Oracle?

    - by rodrigoq
    Hello, I have to migrate this query (simplified here) from T-SQL to Oracle: SET IDENTITY_INSERT table ON; INSERT INTO table (id, value) VALUES (1, 2); SET IDENTITY_INSERT table OFF — id being an identity field in SQL Server. I have the same table with a sequence in Oracle, but I couldn't find a snippet that shows how to disable the sequence and set it to start again at MAX(id) + 1. Can any Oracle expert help me with this? Thanks, Rodrigo.
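
    One common approach (a hedged sketch, not from the thread): Oracle has no direct IDENTITY_INSERT equivalent, so you insert the explicit ids and then rebuild the sequence so it resumes at MAX(id) + 1. The table and sequence names below are placeholders.

        INSERT INTO my_table (id, value) VALUES (1, 2);  -- explicit id, bypassing the sequence

        -- Re-sync the sequence afterwards (drop/recreate is the simplest way):
        DECLARE
          v_next NUMBER;
        BEGIN
          SELECT NVL(MAX(id), 0) + 1 INTO v_next FROM my_table;
          EXECUTE IMMEDIATE 'DROP SEQUENCE my_seq';
          EXECUTE IMMEDIATE 'CREATE SEQUENCE my_seq START WITH ' || v_next;
        END;
        /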

    Read the article

  • Setup MSSQL replication with peer to peer topology: problem setting up Conflict Detection

    - by Roel
    Hi, I'm setting up a SQL replication strategy, using MSSQL 2008 with peer-to-peer publications (2 servers, each one subscribes to the other). I followed this HOWTO from MSDN, and the setup seems to be working fine: add a record to one table on server A, and a query on server B shows the new record. So far, so good. So far I only have one table, 'Templates': Id PK (calculated field); NodeId int, default 1/2 (Server A = 1, Server B = 2); LocalId int, autoid; Name nvarchar(100). Now, I would like to enable 'Conflict detection', which should be enabled by default. But every time I try to save the 'Conflict Detection' feature in the Publication Properties I get the following error: Cannot save Peer conflict detection properties. An exception occurred while executing a Transact-SQL statement or batch.(Microsoft.SqlServer.ConnectionInfo) Program Location: at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType) at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand) at Microsoft.SqlServer.Replication.ReplicationObject.ExecCommand(String commandIn) at Microsoft.SqlServer.Replication.TransPublication.SetPeerConflictDetection(Boolean enablePeerConflictDetection, Int32 peerOriginatorID) at Microsoft.SqlServer.Management.UI.PubPropSubscriptionOptions.SaveP2PConflictDetection() at Microsoft.SqlServer.Management.UI.PubPropSubscriptionOptions.SaveProperties(ExecutionMode& executionResult) Column name 'Id' does not exist in the target table or view. Changed database context to 'TestDB'. (.Net SqlClient Data Provider) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.00.2531&EvtSrc=MSSQLServer&EvtID=1911&LinkId=20476 Server Name: SERVER_A Error Number: 1911 Severity: 16 State: 1 Line Number: 2 Program Location: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType) Now, I googled the hell out of this error, and nothing shows up. I also can't seem to find out what the exact target table of the error "Column name 'Id' does not exist..." is. Has anyone ever done this successfully? Am I missing something? Having this setup without conflict detection feels pretty useless... EDIT: OK, so after some more research and setting up with different databases etc., I found out that the calculated 'Id' column of the Templates table is the culprit. I don't know why, but the replication doesn't seem to allow calculated columns (which are also the primary key). It works now too, without the 'Id' column, using NodeId and LocalId as a combined PK. So now the question is: why isn't a calculated column allowed as the PK for replication with conflict detection?
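
    As a minimal sketch (column types and constraint names are assumptions reconstructed from the description, not the poster's actual DDL), the revised table that finally worked looks something like this, with the computed 'Id' column dropped and the composite primary key in its place:

        -- Hedged sketch: types and constraint names are assumed.
        CREATE TABLE dbo.Templates
        (
            NodeId  int NOT NULL CONSTRAINT DF_Templates_NodeId DEFAULT (1), -- 1 on Server A, 2 on Server B
            LocalId int IDENTITY(1,1) NOT NULL,
            Name    nvarchar(100) NOT NULL,
            CONSTRAINT PK_Templates PRIMARY KEY (NodeId, LocalId)
        );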

    Read the article

  • TFS 2010 Basic Concepts

    - by jehan
    Here, I'm going to discuss some key architectural changes and concepts introduced in TFS 2010 compared to TFS 2008. In TFS 2010, you first perform the installation and then configure the installed features from the available options. This is a bit similar to a SharePoint installation, where you first do the installation and then configure the SharePoint farms.

    1) Installation features available in TFS 2010: a) Basic: the most compact TFS installation possible. It installs and configures Source Control, Work Item Tracking and Build Services only (SharePoint and Reporting integration will not be possible). b) Standard Single Server: suitable for a single-server deployment of TFS. It installs and configures Windows SharePoint Services for you and uses the default instance of SQL Server. c) Advanced: suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services. d) Application Tier Only: for configuring high availability for Team Foundation Server in a load-balanced environment (NLB), moving Team Foundation Server from one server to another, or restoring TFS. e) Upgrade: for upgrading from a prior version of TFS. Note: one more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP2), whereas earlier versions of TFS (2005 and 2008) could not be installed on a client OS.

    2) Team Project Collections: In TFS 2008, the TFS server contains a set of team projects; each project may or may not be independent of the others, every check-in gets an ever-increasing changeset ID irrespective of the team project in which it is checked in, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that certain things required by the application development process were impossible to do: a) If something has gone wrong in one team project and you want to restore it to an earlier state where it was working properly, you have to restore the Team Foundation Server database from the backups taken per your maintenance plans, and because of this the other team projects may lose any work that was not backed up. b) Your company merged with another company and you now have two TFS servers: the one you are working on and the one the other company was working on. After the merge you want to integrate the team projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. Though you can manually create the team projects you want to integrate from the other TFS server (in Source Control), you will lose the history of changesets, work items and more, which is very important. There were a few more issues of this sort which were difficult to resolve in TFS 2008. To resolve issues in these kinds of scenarios, which mainly concern TFS maintenance, integration, migration and security, Microsoft came up with the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint site collections, and if you are familiar with SharePoint architecture it will help you understand TFS 2010 architecture easily. In the Connect to TFS dialog box in TFS 2010 (shown as a screenshot in the original post), there are two team project collections; each collection can contain any number of team projects, and the right side shows the two team projects in the collection I have chosen (Default Collection). Note: you can connect to only one team project collection at a time using an instance of TFS Team Explorer.

    How does it work? To introduce team project collections, the TFS databases were reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for Version Control, Work Item Tracking, Build, Integration, Project Management...). The new TFS 2010 database architecture: TFS_Config: the root database, containing centralized TFS configuration data, including the list of all team project collections that exist on the TFS server. TFS_Warehouse: the data warehouse, containing all the reporting data served by this server (farm). TFS_*: individual team project collection data; each such database contains all the operational data of one team project collection regardless of subsystem. In addition to these, you will have databases for SharePoint and Report Server.

    3) TFS Farms: As TFS 2010 can be configured with multiple application tiers and multiple database tiers, it is more appropriate to speak of a TFS farm if you are going for a multi-server installation of TFS. NLB support for TFS application tiers: with TFS 2010 you can configure multiple TFS application-tier machines to serve the same set of team project collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008. If any application tier in the farm fails, the farm automatically continues to work with hardly any indication to end users of a problem. SQL data tiers: with 2010 you can configure many SQL Servers. Each database can be placed on any SQL Server, because each team project collection is an independent database. This feature can also be used to load-balance databases across SQL Servers. These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With team project collections and TFS farms, you can create a single, arbitrarily large TFS installation and grow it incrementally by adding application tiers and SQL Servers as needed.

    Read the article

  • How To Configure Remote Desktop To Hyper-V Guest Virtual Machines

    - by Brian Jackett
    Configuring Remote Desktop (RDP) from a host Hyper-V machine to a guest virtual machine can be tricky, so this post is dedicated to the issues and resolution steps I went through to allow RDP. Cutting to the point, below are the things to look for, followed by some explanation about my scenario if you care to read. This is not an exhaustive list of what is required, just the items that were causing problems for my particular scenario.

    Requirements
    - Allow Remote Desktop connections in the guest OS.
    - The network adapter type must allow communication with the host machine (e.g. use an "Internal" virtual adapter).
    - If running Server 2008 R2 on the guest, network discovery mode must be turned on.
    - If running Server 2008 R2 on the guest, the services supporting network discovery mode must be running: DNS Client, Function Discovery Resource Publication, SSDP Discovery, UPnP Device Host.

    My Environment
    A quick word about my environment. I am running Windows Server 2008 R2 with Hyper-V on my laptop and numerous guest VMs running Windows Server 2003 R2 or Windows Server 2008 R2. I run a domain controller VM and then 1 or 2 SharePoint servers depending on my work needs. I've found this setup to work well except when it comes to the display window for my VMs.

    The Issue
    Ever since I began running Hyper-V I haven't been able to RDP to my guest VMs, which means the resolution for my connection windows has been limited to what the native Hyper-V connections allow. During personal use I can put the resolution up to 1152 x 864, but during presentations I am usually limited to a measly 800 x 600. That is, until today, when I decided to fully investigate why I couldn't connect via RDP. First, a thank you to John Ross (@johnrossjr), Christina Wheeler (@cwheeler76) and Clayton Cobb (@warrtalon) for various suggestions while I was researching tonight. As it turns out I had not 1, not 2, but 3 items preventing me from using RDP. Let's dig into the requirements above.

    Allow RDP Connection
    This item I had previously taken care of, but it bears repeating because by default Windows Server 2008 R2 does not allow RDP connections. Change the setting from "Don't allow..." to whichever "Allow connections..." setting suits your needs. I chose the less secure option as this is just my dev laptop.

    Network Adapter Type
    When I originally configured my VMs I gave each one 2 network adapters: one using the physical ethernet adapter for internet use and a virtual private adapter for communication between the VMs. The connection for the ethernet adapter is an "External" adapter and thus doesn't connect between the host and guest. The virtual private adapter allowed communication ONLY between the VMs and not to my host. There is a third option, "Internal", which allows communication between VMs as well as to the host. After finding out this distinction I promptly created an Internal network adapter and assigned it to my VMs.

    Turn On Network Discovery
    Seems like a pretty common-sense thing, but in order to allow remote desktop connections the target computer must be able to be found by the source computer (explained here). One of the settings that controls whether a computer can be found on the network is aptly named Network Discovery. By default Windows Server 2008 R2 turns Network Discovery off for security purposes. To enable it, open up the Network and Sharing Center and click "Change Advanced Sharing Settings" on the left. On the following screen select "Turn on network discovery" for the currently used profile and click Save Settings. You may notice, though, that your selection to turn on network discovery doesn't save. If this is the case then you most likely don't have the supporting services running (as was my case).

    Network Discovery Supporting Services
    There are a total of 4 services (listed again below) that need to be running before you can turn on network discovery (explained here); the images in the original post highlight these services. In my guest VM I found that I had DNS Client already running while the other 3 were disabled. I set them all to enabled and started the ones that were stopped. After this change I returned to the sharing settings screen and found that Network Discovery was turned on. I'm not sure whether this was picking up my attempt to turn it on previously or if starting those services turned it on. Either way the end result was a success.
    - DNS Client
    - Function Discovery Resource Publication
    - SSDP Discovery
    - UPnP Device Host

    Before and After Results
    The original post includes two screenshots: the first is the smaller, square viewing window used by the native Hyper-V connection; the second is the full-screen RDP connection in all its widescreen glory.

    Conclusion
    Over the past few months I've found Hyper-V to be very useful for virtualizing my development environments, but I've also had a steep learning curve to get various items configured just right. Allowing RDP connections to guest VMs was one area that I hadn't been able to get right for the longest time. Now that I resolved these issues I hope that others can avoid the pitfalls that I ran into. If you know of any other items I left off feel free to let me know.

    -Frog Out

    Links
    Turning on Network Discovery: http://sqlblog.com/blogs/john_paul_cook/archive/2009/08/15/remote-desktop-connection-on-windows-server-2008-r2.aspx
    Services required for Network Discovery: http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/2e1fea01-3f2b-4c46-a631-a8db34ed4f84
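
    For reference, a hedged sketch of making those service changes from an elevated PowerShell prompt on the guest (the short names below are the standard service names for the display names listed above):

        # Set the four supporting services to start automatically, then start them.
        $services = 'Dnscache',   # DNS Client
                    'FDResPub',   # Function Discovery Resource Publication
                    'SSDPSRV',    # SSDP Discovery
                    'upnphost'    # UPnP Device Host
        foreach ($svc in $services) {
            Set-Service -Name $svc -StartupType Automatic
            Start-Service -Name $svc
        }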

    Read the article

  • Unable to set TestContext property

    - by Brandon
    I have a Visual Studio 2008 unit test and I'm getting the following runtime error: Unable to set TestContext property for the class JMPS.PlannerSuite.DataServices.MyUnitTest. Error: System.ArgumentException: Object of type 'Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestAdapterContext' cannot be converted to type 'Microsoft.VisualStudio.TestTools.UnitTesting.TestContext'. I have read that VS 2008 does not properly update the references to the UnitTestFramework when converting 2005 projects. My unit test was created in 2008, but it inherits from a base class built in VS 2005. Is this where my problem is coming from? Does my base class have to be rebuilt in 2008? I would rather not do this as it will affect other projects. In other derived unit tests built in 2005, all we needed to do was comment out the TestContext property in the derived unit test. I have tried this in the VS 2008 unit test with no luck. I have also tried to "new" the TestContext property, which gives me a different runtime error. Any ideas?
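
    Since the error is a type mismatch between the 2005 and 2008 TestContext types, the usual suspect is the framework reference. A hedged sketch of what the VS 2008 test project (and ideally the base class's project) should reference in the .csproj — version 9.0.0.0 is the VS 2008 test framework, while 8.0.0.0 is VS 2005:

        <!-- Hedged sketch: both projects should agree on the 9.0 framework. -->
        <Reference Include="Microsoft.VisualStudio.QualityTools.UnitTestFramework, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL" />

    If the base class's project still references the 8.0 assembly, both TestContext types end up in play and the 2008 adapter cannot assign its context to the 2005-typed property.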

    Read the article

  • GDI+ is giving me errors

    - by user146780
    I want to use GDI+ just to load a PNG. I included the headers and lib file, then I do: Bitmap b; b.fromfile(filename); but I get this from the compiler:
    Error 1 error C2146: syntax error : missing ';' before identifier 'b' c:\users\josh\documents\visual studio 2008\projects\vectorizer project\vectorizer project\vectorizer project.cpp 23
    Error 2 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\users\josh\documents\visual studio 2008\projects\vectorizer project\vectorizer project\vectorizer project.cpp 23
    Error 3 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\users\josh\documents\visual studio 2008\projects\vectorizer project\vectorizer project\vectorizer project.cpp 23
    Error 4 error C2440: '=' : cannot convert from 'const char [3]' to 'WCHAR *' c:\users\josh\documents\visual studio 2008\projects\vectorizer project\vectorizer project\vectorizer project.cpp 172
    Error 5 error C2228: left of '.FromFile' must have class/struct/union c:\users\josh\documents\visual studio 2008\projects\vectorizer project\vectorizer project\vectorizer project.cpp 179
    What is the correct way to do this? Thanks
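
    A hedged sketch of the usual fixes: qualify the Gdiplus namespace, initialize GDI+ before use, call the static Bitmap::FromFile with a wide-string path, and work through the returned pointer ("image.png" is a placeholder path):

        // Hedged sketch of loading a PNG with GDI+.
        #include <windows.h>
        #include <gdiplus.h>
        #pragma comment(lib, "gdiplus.lib")

        int main()
        {
            // GDI+ must be started before any Bitmap call.
            Gdiplus::GdiplusStartupInput input;
            ULONG_PTR token;
            Gdiplus::GdiplusStartup(&token, &input, NULL);

            // FromFile is static, returns a pointer, and takes a wide string.
            Gdiplus::Bitmap* b = Gdiplus::Bitmap::FromFile(L"image.png");
            if (b && b->GetLastStatus() == Gdiplus::Ok)
            {
                // ... use the bitmap ...
            }
            delete b;

            Gdiplus::GdiplusShutdown(token);
            return 0;
        }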

    Read the article

  • How to structure a Visual Studio project for the data access layer

    - by Akk
    I currently have a project that uses various DB access technologies, mainly for showcasing or for demos. Currently we have:
    Namespace App.Data (App.Data.dll): Folder NHibernate, Folder EntityFramework, Folder LinqToSql
    The above structure is OK as we only use SQL Server as the DB. But going forward we will be including Oracle, MySql etc. So what would be a better structure with this in mind? I thought about:
    Namespace App.Data.SqlServer (App.Data.SqlServer.dll): Folder NHibernate, Folder EntityFramework, Folder LinqToSql
    Or would it just be better to have separate assemblies for each database and access technology?
    Namespace App.Data.SqlServer.NHibernate (App.Data.SqlServer.NHibernate.dll)
    Namespace App.Data.SqlServer.EntityFramework (App.Data.SqlServer.EntityFramework.dll)
    Namespace App.Data.Oracle.NHibernate (App.Data.Oracle.NHibernate.dll)
    Namespace App.Data.MySql.NHibernate (App.Data.MySql.NHibernate.dll)

    Read the article

  • Clustering for Mere Mortals (Pt2)

    - by Geoff N. Hiten
    Planning. I could stop there and let that be the entirety of post #2 in this series. Planning is the single most important element in building a cluster, and the Laptop Demo Cluster is no exception. One of the more awkward parts of actually creating a cluster is coordinating information between Windows Clustering and SQL Clustering. The dialog boxes show up hours apart, but still have to have matching and consistent information. Excel seems to be a good tool for tracking these settings. My workbook has four pages: Systems, Storage, Network, and Service Accounts. The Systems page looks like this:

    Name | Role | Software | Location
    East | Physical Cluster Node 1 | Windows Server 2008 R2 Enterprise | Laptop VM
    West | Physical Cluster Node 2 | Windows Server 2008 R2 Enterprise | Laptop VM
    North | Physical Cluster Node 3 (Future Reserved) | Windows Server 2008 R2 Enterprise | Laptop VM
    MicroCluster | Cluster Management Interface | N/A | Laptop VM
    SQL01 | High-Performance High-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM
    SQL02 | High-Performance Standard-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM
    SQL03 | Standard-Performance High-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM

    Note that everything that has a computer name is listed here, whether physical or virtual. Storage looks like this:

    Storage Name | Instance | Purpose | Volume | Path | Size (GB) | LUN ID | Speed
    Quorum | MicroCluster | Cluster Quorum | Quorum | Q: | 2 | |
    SQL01Anchor | SQL01 | Instance Anchor | SQL01Anchor | L: | 2 | |
    SQL02Anchor | SQL02 | Instance Anchor | SQL02Anchor | M: | 2 | |
    SQL01Data1 | SQL01 | SQL Data | SQL01Data1 | L:\MountPoints\SQL01Data1 | 2 | |
    SQL02Data1 | SQL02 | SQL Data | SQL02Data1 | M:\MountPoints\SQL02Data1 | | |

    Starting at the left is the name used in the storage array. It is important to rename resources at each level, whether it is Storage, LUN, Volume, or disk folder. Otherwise, troubleshooting things gets complex and difficult. You want to be able to glance at a resource at any level and see where it comes from and what it is connected to. Networking is the same way:

    System | Network | VLAN | IP | Subnet Mask | Gateway | DNS1 | DNS2
    East | Public | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
    East | Heartbeat | Cluster2 | | 255.255.255.0 | | |
    West | Public | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
    West | Heartbeat | Cluster2 | | 255.255.255.0 | | |
    North | Public | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
    North | Heartbeat | Cluster2 | | 255.255.255.0 | | |
    SQL01 | Public | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | | |
    SQL02 | Public | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | | |

    One hallmark of a poorly planned and implemented cluster is a bunch of "Local Network Connection #n" entries in the network settings page. That lets me know that somebody didn't care about the long-term supportability of the cluster. This can be critically important with Hyper-V clusters and their high NIC counts. Final page:

    Instance | Service Name | Account | Password | Domain | OU
    SQL01 | SQL Server | SVCSQL01 | Baseline22 | MicroAD | Service Accounts
    SQL01 | SQL Agent | SVCSQL01 | Baseline22 | MicroAD | Service Accounts
    SQL02 | SQL Server | SVC_SQL02 | Baseline22 | MicroAD | Service Accounts
    SQL02 | SQL Agent | SVC_SQL02 | Baseline22 | MicroAD | Service Accounts
    SQL03 (Future) | SQL Server | SVC_SQL03 | Baseline22 | MicroAD | Service Accounts
    SQL03 (Future) | SQL Agent | SVC_SQL03 | Baseline22 | MicroAD | Service Accounts
    Installation Account | | administrator | | |

    Yes, I write down the account information. I secure the file via NTFS, but I don't want to fumble around looking for passwords when it comes time to rebuild a node. Always fill out the workbook COMPLETELY before installing anything. The whole point is to have everything you need at your fingertips before you begin. The install experience is so much better and more productive with this information in place.

    Read the article

  • is it possible to have an sqlite database in a sqlserver field?

    - by Behrooz
    I think my question may seem vague. I am trying to save user settings in SQL Server, but the problem can be expressed in these terms: "it needs 20 tables with circular dependencies, and I have enough tables to fill 3 database diagrams". So the best way my brain has come up with is to save the settings as a SQLite database in a field, like this:
    Index | Name    | Data
    1     | Behrooz | *sqlite database here*
    2     | User1   | *sqlite database here*
    ...
    Is this the right way? Is it stupid? Should I create more tables instead of doing all this? Does it increase database fragmentation?
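
    A hedged sketch of what that one-blob-per-user design could look like (the column names are the poster's; the varbinary(max) type is an assumption, since a SQLite file is binary data):

        -- Hedged sketch: store each user's settings as a serialized SQLite file.
        CREATE TABLE dbo.UserSettings
        (
            [Index] int IDENTITY(1,1) PRIMARY KEY,
            Name    nvarchar(100) NOT NULL,
            Data    varbinary(max) NULL  -- raw bytes of the per-user SQLite database file
        );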

    Read the article

  • How to Generate a Create Table DDL Script Along With Its Related Tables

    - by Compudicted
    Have you ever wondered, when creating table diagrams in SQL Server Management Studio (SSMS), how slickly you can add related tables to a diagram by just right-clicking on the interesting table name? Have you also ever needed to script those related tables, including the master one? And then discovered you have dozens of related tables? Or maybe there is no SSMS at your disposal? That was me one day. Well, creativity to the rescue! I Binged and Googled around until I found more or less what I wanted, but it all involved T-SQL: long and convoluted CROSS APPLYs. Then I saw a PowerShell solution that I quickly adapted to my needs (I am not referencing any particular author because it was a mashup):

        ###########################################################################################################
        # Created by: Arthur Zubarev on Oct 14, 2012                                                              #
        # Synopsis: Generate file containing the root table CREATE (DDL) script along with all its related tables #
        ###########################################################################################################

        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

        $RootTableName = "TableName" # The table name, no schema name needed

        $srv = new-Object Microsoft.SqlServer.Management.Smo.Server("TargetSQLServerName")
        $conContext = $srv.ConnectionContext
        $conContext.LoginSecure = $True
        # In case integrated security is not used, uncomment below
        #$conContext.Login = "sa"
        #$conContext.Password = "sapassword"
        $db = New-Object Microsoft.SqlServer.Management.Smo.Database
        $db = $srv.Databases.Item("TargetDatabase")

        $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
        $scrp.Options.NoFileGroup = $True
        $scrp.Options.AppendToFile = $False
        $scrp.Options.ClusteredIndexes = $False
        $scrp.Options.DriAll = $False
        $scrp.Options.ScriptDrops = $False
        $scrp.Options.IncludeHeaders = $True
        $scrp.Options.ToFileOnly = $True
        $scrp.Options.Indexes = $False
        $scrp.Options.WithDependencies = $True
        $scrp.Options.FileName = 'C:\TEMP\TargetFileName.SQL'

        $smoObjects = New-Object Microsoft.SqlServer.Management.Smo.UrnCollection
        Foreach ($tb in $db.Tables)
        {
            Write-Host -foregroundcolor yellow "Table name being processed" $tb.Name

            If ($tb.IsSystemObject -eq $FALSE -and $tb.Name -eq $RootTableName) # feel free to customize the selection condition
            {
                Write-Host -foregroundcolor magenta $tb.Name "table and its related tables added to be scripted."
                $smoObjects.Add($tb.Urn)
            }
        }

        # The actual act of scripting
        $sc = $scrp.Script($smoObjects)

        Write-host -foregroundcolor green $RootTableName "and its related tables have been scripted to the target file."

    Enjoy!

    Read the article

  • Backup database from default (unnamed) sql server instance with powershell.

    - by sparks
    Trying to connect to an instance of SQL Server 2008 on a server we'll call Sputnik. There are no firewalls in between the two devices. Right now I'm just trying to list databases [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null $servername = "Sputnik" $remoteServer = New-Object("Microsoft.SqlServer.Management.Smo.Server") $servername $remoteServer.databases The following error message occurs: The following exception was thrown when trying to enumerate the collection: "Failed to connect to server Sputnik.". At line:1 char:15 + $remoteServer. <<<< databases + CategoryInfo : NotSpecified: (:) [], ExtendedTypeSystemException + FullyQualifiedErrorId : ExceptionInGetEnumerator
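
    One way (a hedged suggestion, not from the thread) to surface the underlying failure is to connect explicitly through a ServerConnection, whose Connect() call throws the real exception instead of SMO's lazy-enumeration wrapper; the assemblies loaded above already include it:

        # Hedged sketch: force an explicit connection so the root error is visible.
        $conn = New-Object Microsoft.SqlServer.Management.Common.ServerConnection("Sputnik")
        $conn.LoginSecure = $true   # integrated security; set .Login/.Password otherwise
        $conn.Connect()             # throws the underlying error (e.g. remote connections or TCP/IP disabled)
        $remoteServer = New-Object Microsoft.SqlServer.Management.Smo.Server($conn)
        $remoteServer.Databases | Select-Object Name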

    Read the article

  • Sampling SQL server batch activity

    - by extended_events
    Recently I was troubleshooting a performance issue on an internal tracking workload and needed to collect some very low-level events over a period of 3-4 hours. During analysis of the data I found that a common pattern I was using was to find a batch with a duration longer than average and follow all the events it produced. This pattern got me thinking that I was discarding a substantial amount of the event data being collected, and that it would be great to reduce the collection overhead on the server if I could still get all activity from some batches. In the past I've used a sampling technique based on the counter predicate to build a baseline of overall activity (see Mike's post here). That isn't exactly what I want though, as there would certainly be events from a particular batch that wouldn't pass the predicate. What I need is a way to identify streams of work and select, say, one in ten of them to watch, and SQL Server provides just such a mechanism: session_id. Session_id is a server-assigned integer that is bound to a connection at login and lasts until logout. So by combining the session_id predicate source and the divides_by_uint64 predicate comparator we can limit collection and still get all the events in the sampled batches:

        CREATE EVENT SESSION session_10_percent ON SERVER
        ADD EVENT sqlserver.sql_statement_starting(
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
        ADD EVENT sqlos.wait_info(
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
        ADD EVENT sqlos.wait_info_external(
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
        ADD EVENT sqlserver.sql_statement_completed(
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10)))
        ADD TARGET ring_buffer
        WITH (MAX_DISPATCH_LATENCY=30 SECONDS, TRACK_CAUSALITY=ON)
        GO

    There we go; event collection is reduced while still providing enough information to find the root of the problem. By the way, the performance issue turned out to be an IO issue, and the session definition above was more than enough to show long waits on PAGEIOLATCH*.
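
    A hedged usage sketch: starting the session and pulling the collected events back out of the ring buffer target via the standard Extended Events DMVs:

        -- Start the session, read the ring buffer as XML, then stop it.
        ALTER EVENT SESSION session_10_percent ON SERVER STATE = START;

        SELECT CAST(t.target_data AS xml) AS captured_events
        FROM sys.dm_xe_sessions AS s
        JOIN sys.dm_xe_session_targets AS t
            ON s.address = t.event_session_address
        WHERE s.name = 'session_10_percent'
          AND t.target_name = 'ring_buffer';

        ALTER EVENT SESSION session_10_percent ON SERVER STATE = STOP;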

    Read the article

  • Is it possible to use WinMerge inside Visual Studio 2008 and Visual Source Safe 6.0?

    - by Tim Santeford
    I would like to use WinMerge as the default diff tool inside VS 2008 from the Solution Explorer, in place of the "Compare..." context menu item. Is this possible? I'm looking for a quick replacement of the current bland diff tool, and I just like WinMerge better. I'm not interested in going to the folder explorer and doing the comparison there. I'm using Visual SourceSafe 6.0, not TFS. Thanks in advance!

    Read the article

  • Share code between projects in a solution in Visual Studio 2008, when building a common assembly is not possible

    - by Binary255
    Hi, I create an add-on for the product Foo. There are different versions of Foo, namely versions 1, 2, 3 and 4. These versions have mostly, but not fully, compatible APIs. I currently have 5 projects: DotNetCommon (the common methods which could be used for an add-on or for something other than the Foo product), FooOne, FooTwo, FooThree and FooFour. The Foo* projects contain the add-in for versions 1-4 of Foo. There are a lot of duplicated files in the Foo* projects, as there are a lot of things in the API which are identical for all versions of Foo. It would be nice to separate out everything which is common to all Foo versions. Why not just create a common assembly for all versions of Foo called FooCommon? Because if I put all classes which are common to all versions of Foo into a new library project, I would still have to choose which version of Foo the new FooCommon should reference. As said, they are not identical.
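
    One common workaround (a hedged sketch, not from the thread) is to keep a single copy of each shared source file, add it to FooOne..FooFour as a linked file (Add Existing Item > Add As Link), and fence the few version-specific lines with per-project conditional-compilation symbols. IFooHost, the registration calls, and the FOO1..FOO4 symbols below are hypothetical names for illustration:

        // Hedged sketch: this file is linked into all four Foo* projects; each
        // project defines its own symbol (FOO1..FOO4) under Project Properties > Build.
        public class AddInCore
        {
            public void Register(IFooHost host)   // IFooHost: hypothetical Foo API type
            {
        #if FOO4
                host.RegisterAddIn(this);          // hypothetical: Foo 4 renamed the call
        #else
                host.Register(this);
        #endif
            }
        }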

    Read the article

  • How to view Session Variables in Visual Studio 2008 Debugger?

    - by davemackey
    Usually using Visual Studio's debugger is a breeze. Scanning through Locals quickly shows the values of variables, etc. However, I'm at a loss how to find the values contained in session state variables. Can anyone give me a hand? Let's say I put a breakpoint right after: Session("first_name") = "Rob Roy" — how do I view the value contained in Session("first_name") from Locals?
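
    A hedged tip (not from the thread): session entries don't appear under Locals because they live in HttpContext rather than as local variables, but you can inspect them directly from a Watch or the Immediate window at the breakpoint:

        ' In the Immediate window:
        ? Session("first_name")
        ' Or add a Watch on the expression:
        Session("first_name")
        ' Expanding Me.Session in the Watch window also lists all keys.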

    Read the article

  • How to backup a database (MS SQL Server 2008) in C# without using SMO (having problems)?

    - by SzamDev
    Hi, I have this code and it is not working, but I don't know why:

        try
        {
            saveFileDialog1.Filter = "SQL Server database backup files|*.bak";
            saveFileDialog1.Title = "Database Backup";
            if (saveFileDialog1.ShowDialog() == DialogResult.OK)
            {
                SqlCommand bu2 = new SqlCommand();
                SqlConnection s = new SqlConnection("Data Source=M1-PC;Initial Catalog=master;Integrated Security=True;Pooling=False");
                // Note: this assignment is absent from the original snippet; without it
                // ExecuteNonQuery throws "Connection property has not been initialized."
                bu2.Connection = s;
                bu2.CommandText = String.Format("BACKUP DATABASE LA TO DISK='{0}'", saveFileDialog1.FileName);
                s.Open();
                bu2.ExecuteNonQuery();
                s.Close();
                MessageBox.Show("ok");
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.ToString());
        }

    and I get this error (screenshot not included in the excerpt): What is the problem?

    Read the article

  • How to make a web browser with tabbed browsing with vb 2008?

    - by felixd68
    I've tried multiple times to create a web browser with tabbed browsing. I know that I have to use the TabControl. I have succeeded in creating semi-tabbed browsing: people are able to add new tabs, but the WebBrowser component only appears in one tab page. Coding:

    Form1_Load:

        Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            Dim Browse As New WebBrowser
            TabControl1.TabPages.Add(1, "TabPage" & i)
            TabControl1.SelectTab(1 - 1)
            Browse.Name = "wb"
            Browse.Dock = DockStyle.Fill
            TabControl1.SelectedTab.Controls.Add(Browse)
            i = i + 1
        End Sub

    Web browser component:

        Private Sub WebBrowser1_DocumentCompleted(ByVal sender As System.Object, ByVal e As System.Windows.Forms.WebBrowserDocumentCompletedEventArgs)
            url.Text = e.Url.ToString() 'the url text box shows the actual url after the page is fully loaded
            Me.Text = CType(TabControl1.SelectedTab.Controls.Item(0), WebBrowser).DocumentTitle & " - Webbrowser's name"
            TabControl1.SelectedTab.Text = CType(TabControl1.SelectedTab.Controls.Item(0), WebBrowser).DocumentTitle
        End Sub
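
    A hedged sketch (not from the thread) of the usual fix: create a new WebBrowser for every tab rather than once at form load, and wire its events with AddHandler so each browser updates its own tab; "url" is the poster's TextBox:

        ' Call AddTab whenever the user asks for a new tab.
        Private Sub AddTab(ByVal startUrl As String)
            Dim page As New TabPage("New Tab")
            Dim browser As New WebBrowser()
            browser.Dock = DockStyle.Fill
            AddHandler browser.DocumentCompleted, AddressOf Browser_DocumentCompleted
            page.Controls.Add(browser)
            TabControl1.TabPages.Add(page)
            TabControl1.SelectedTab = page
            browser.Navigate(startUrl)
        End Sub

        Private Sub Browser_DocumentCompleted(ByVal sender As Object, ByVal e As WebBrowserDocumentCompletedEventArgs)
            ' The sender is the browser that finished loading; its parent is its tab.
            Dim browser As WebBrowser = CType(sender, WebBrowser)
            Dim page As TabPage = CType(browser.Parent, TabPage)
            page.Text = browser.DocumentTitle
            url.Text = e.Url.ToString()
        End Sub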

    Read the article

  • How do I delay email delivery using Entourage 2008 with Exchange, e.g. using the X.400 Deferred-Delivery header?

    - by Matt McClure
    I'd like to delay the delivery of email that I send so that I can time delivery when the recipient is unlikely to be reading email and I can reduce the likelihood of getting into a chat-like conversation. I'm using Entourage 2008 and Exchange hosted by Rackspace. I tried naively adding a Deferred-Delivery header after reading http://www.faqs.org/rfcs/rfc2156.html and www.itu.int/rec/T-REC-F.400-199906-I/en , but my mail was delivered immediately. Ideally the delay would occur on the MTA instead of my MUA so that delivery would still occur even if my laptop were disconnected from the network at the delivery time I specify. My best workaround at the moment is to habitually use Entourage's Send Later button when composing mail and then click Send/Receive at the end of the day. This is less than ideal because recipients are often reading mail at the end of my day, and I often get immediate replies. Matt

    Read the article
