Search Results

Search found 100454 results on 4019 pages for 'sql server 2011'.


  • How to install DBD::mysql on OS X Server 10.6?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (Mac mini server), but I'm apparently missing the MySQL headers. Since MySQL is already part of OS X Server 10.6, I would like NOT to install anything else (no Fink or Darwin Ports installs), just whatever is needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere, and if so, where? (Again: I don't want to install another version of MySQL on the box; I want to use the version it came with.) Is there a way to install DBD::mysql without compiling any C files?

    This is the error I get (the actual output is much longer, but these are the most meaningful bits; this is the first error reported):

      Checking if your kit is complete... Looks good
      Unrecognized argument in LIBS ignored: '-pipe'
      Note (probably harmless): No library found for -lmysqlclient
      Multiple copies of Driver.xst found in:
        /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
        /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/
      at Makefile.PL line 907
      Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level)
        installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
      Writing Makefile for DBD::mysql
      cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
      cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
      cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
      cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
      gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c
      In file included from dbdimp.c:20:
      dbdimp.h:22:49: error: mysql.h: No such file or directory
      dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
      dbdimp.h:25:49: error: errmsg.h: No such file or directory


  • Get IP or MAC addresses of Windows MultiPoint Server 2012 stations?

    - by user1454265
    Is it possible to programmatically retrieve the IP or MAC address of a station assigned to a Windows MultiPoint Server 2012 host, using PowerShell or any other .NET or Windows API?

    Background: I'm developing an application to help set up USB-over-Ethernet zero clients in a WMS 2012 setup, bridging the PowerShell "WmsCmdlets" module (Microsoft.WindowsServerSolutions.MultipointServer.PowerShell.Commands.Library.WmsStation) and a third-party vendor API for configuring the zero client IP address, etc. So far, I know of no means of matching up the "stations" of the WmsCmdlets with the zero client objects in the vendor's API. Finding out the IP or MAC associated with a WMS station would do nicely, since I have this on the zero client API side. However, I haven't found any information I could use in the PowerShell WmsCmdlets module, such as Get-WmsStation, which returns the following:

      Id                         : 1
      Name                       : <my station name>
      IsAutoLogOn                : False
      IsSplit                    : False
      CollabId                   : 0
      RemoteConnectionServerName :
      VirtualMachineName         :
      VirtualMachineId           :
      AutoLogOnUserName          :
      AutoLogOnPassword          :
      DeviceTypes                : {DT_Mouse, DT_Keyboard, DT_Audio, DT_MassStorage...}
      DeviceCounts               : {2, 2, 0, 0...}
      ComputerName               : <my WMS host server name>
      SessionId                  : 4294967295
      SessionHostServer          : <my WMS host server name>


  • How to deal with the extremely big *.ost files in a Terminal Server environment which is running out of space

    - by Wolfgang Kuehne
    Our Terminal Server is running out of hard disk space, and the files occupying most of it are the Outlook *.ost files belonging to the users who work on the Terminal Server all day through Remote Desktop. Outlook is installed on the Terminal Server and various users can use it. What would be a solution in this case? Is there a way to limit the size of the *.ost files? I read in forums that having Outlook 2010 set up in Cached Exchange Mode isn't best practice for an environment where disk space is a major constraint.

    The first thing that came to my mind was folder redirection: place the OST files (together with the AppData folder) on a network share. But this does not help, because the OST files are kept in the local AppData folder, which cannot be redirected. Then I wondered whether it is possible to limit the size of the OST file, or to limit how long it keeps emails cached - say, just emails from the last 6 months.

    Another solution that came to mind was moving the OST files somewhere else. This requires the old OST file to be removed and a new one created, and I am not quite sure whether the new OST file will still have the emails that were cached in the old one, or whether it will start caching from where the other one left off. What do you suggest?


  • Windows Server 2008 backup VHD's - is it possible to mount/open in Windows 7?

    - by Simon
    Hi all, is it possible to mount the VHD files created by the Windows Server 2008 backup utility onto a Windows 7 (release) client?

    Following an array failure, I was very worried that there was a problem with both backup sets on different USB drives, as attaching the VHD to a Win 7 box did not show the expected structure (instead the disks behaved like unformatted space). Subsequently, I attached the backup drive to a 2008 R2 machine that I'd intended to be the replacement, and the backup set can seemingly be browsed without issue. When the new disks arrive I'll go through the recovery process and see where we are, but it looks promising so far. Is it simply the case that you can't take server-created VHDs and mount them on desktop machines? (Rather than hyperventilating at the thought of years of lost photos and email, I'm now just mildly curious.)

    Edit: One thing that has confused matters is that the backup utility on Win 7 is more restrictive about restoring from external devices than the equivalent on 2008 R2. With R2, I can restore files 'from another server' and browse to external storage; Win 7 only allows the backup to be located on a network share. Once my box of new disks arrives and I've got something to restore onto, I'll move the smaller of the backup VHDs onto network storage reachable by Win 7 and see if the VHD is readable. I haven't read up on the VHD process used by the backup app - I'm assuming it's a base VHD plus differencing files for incremental backups, and that the restore app understands this.

    Finally: in retrospect the question should have been, 'Can I restore a 2008 R2 backup set via a Win 7 client?' Thanks


  • How to set up port forwarding on a dedicated server running CentOS 5.4 to use Ubuntu 9.04

    - by mairtinh
    The basic situation: I have a dedicated server running CentOS 5.4. At the moment I have one VM running Ubuntu 9.04. Later on I will want to add another VM running Windows Server 2003, but for now I am focusing on getting Ubuntu up and running. The Ubuntu installation works fine, but I'm seriously struggling to get port forwarding working so that I can reach websites hosted on the Ubuntu VM. As a newbie to Linux, I am confused about the relationship between iptables and VMware's own port forwarding.

    Here's what I've tried so far. The IP of my server is xxx.xxx.xxx.xxx, and the provider's support have told me that the subnet mask is 255.255.255.0, the gateway address is xxx.xxx.xxx.1 and the network address is xxx.xxx.xxx.0. (Those latter two surprise me a bit; I expected private gateway/network addresses rather than public ones.)

    First of all I tried bridged networking, but had no success at all in communicating with the machine other than through the VMware console. I tried pinging it from the host (using ssh into the host) but no joy, and there was no Internet access from the VM either. I changed the interfaces configuration from DHCP to static, using a static address of 192.168.1.100 and setting the gateway to xxx.xxx.xxx.1 as advised by the provider. No real difference: still cannot ping the guest from the host or vice versa, and no Internet access from the guest.

    Then I tried NAT. The host automatically set the IP address to 192.168.132.128 with a gateway of 192.168.132.2. Now the guest has outbound Internet access, and when I VNC to the host and open Firefox with 192.168.132.128 I can see the hosted website - but I still cannot get to it from outside. When I said I'm confused about iptables and VMware port forwarding, what I meant is that I'm not sure whether iptables forwarding should point to the IP address of the guest interface (192.168.132.128 in this case) or to the gateway address 192.168.132.2. I have a feeling that I'm missing something very simple here; can anybody tell me what it is?


  • Windows Server 2008 can't start postgresql-x64-9.0 service: could not create any TCP/IP sockets

    - by Rob
    After rebooting a Windows Server 2008 machine to apply system updates, we recently began having some issues running PostgreSQL 9.0. When we noticed the problem, we reverted the Windows updates, but the issue persists.

    From services.msc, attempting to start the postgresql-x64-9.0 service fails. Halfway through starting, the progress bar becomes very slow, and it eventually responds with error 1053: "the service did not respond in a timely fashion." Interestingly enough, bringing up Task Manager shows that multiple instances of postgres.exe have been started, and looking at the log file shows:

      2011-02-10 14:44:02 ESTLOG: database system is ready to accept connections

    I then tried killing the processes and starting via the command line (as the user postgres), but I receive a different error:

      C:/Program Files/PostgreSQL/9.0/bin/pg_ctl.exe start -N "postgresql-x64-9.0" -D "F:/SHARE/postgres" -w
      waiting for server to start...............................................................
      pg_ctl: could not start server
      ESTWARNING: could not create listen socket for "192.168.0.101"
      ESTFATAL: could not create any TCP/IP sockets

    The log file again indicates that the database is ready to accept connections. Also, netstat indicates that no other processes are using port 5432; I can't think of any other obvious reason that opening the listen socket might fail. Any help would be greatly appreciated.


  • How do I upgrade Windows Server 2008 R2 Standard (OEM Key) to Enterprise (MSDN Key) using DISM?

    - by Tom Crane
    (Originally asked as "After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB", but now I know what the question really is...)

    My Dell server came preinstalled with 2008 R2 Standard. I upgraded to Enterprise to take advantage of more than 32GB RAM. This server is purely for dev and testing, so I want to use my MSDN product key for the upgrade. I originally tried to upgrade using the MSDN Enterprise key, but it wouldn't have it:

      dism /online /Set-Edition:ServerEnterprise /ProductKey:[MSDN key]
      => Error DISM DISM Transmog Provider: PID=5728 Product key is keyed to [],
         but user requested transmog to [ServerEnterprise] - CTransmogManager::ValidateTransmogrify

    I tried several things, including changing the current product key to the MSDN one. Eventually I used a generic KMS key, which can be found in several TechNet forum posts:

      dism /online /Set-Edition:ServerEnterprise /ProductKey:[KMS generic key]

    ... and this appeared to work. I then changed the product key again (using the Control Panel) to the MSDN key, thinking that was the end of the matter. Only later, when I tried to start up VMs, did I realise I only had 4GB of usable RAM. I didn't make the connection with the licensing changes at that point and went off on a wild goose chase of BIOS settings, memory configurations and the like. Only when I saw this thread...

      http://social.technet.microsoft.com/Forums/en/winserverTS/thread/6debc586-0977-4731-b418-ca1edb34fe8b

    ...did I make the connection and reapply the generic KMS key - which gave me all the RAM back. But now I have a system that isn't properly licensed, and presumably I won't be able to activate it as it is, so I've got 2 days to enjoy it. With the MSDN key applied, only 4GB of RAM is usable. Is there a way around this without a) rebuilding the server from scratch with the MSDN key from the start, or b) buying a retail Enterprise license?


  • Server 2003 and XP client: why are HTTP connections being silently dropped?

    - by Asa Yeamans
    On my network, my edge router - a Windows 2003 R2 server router with all the latest updates - drops packets, but only under specific circumstances. I have troubleshot and isolated it down to the simplest configuration I can. There is NO NAT involved, only fully public IP addresses. No firewalls are running either; all have been disabled. There are no packet filters on any interfaces anywhere, either.

    I have a single Windows XP virtual machine and my edge router (the Windows 2003 R2 server, also a virtual machine) running on a Windows 2008 R2 x64 system (running Virtual Server 2005, as I don't have an Intel VT-compatible chip yet). The edge router can access any external HTTP site just fine, no issues. However, the Windows XP machine is only able to access certain sites.

    These work: www.google.com, www.txstate.edu, www.workintexas.com, www.thedailywtf.com.

    These don't: www.yahoo.com, www.utexas.edu, en.wikipedia.org, slashdot.org, www.bing.com.

    I have removed all possibility of DNS issues by connecting with netcat from the XP box and sending GET /\r\nHost: \r\n\r\n, and that connection replicates the issue as well.

    The network setup - my statically assigned IP block is x.x.x.168/29:

      DSL Modem ----PPPoE Connection---- x.x.x.169 [EdgeRouter]
      [EdgeRouter] x.x.x.170 ----Virtual Ethernet---- x.x.x.174 [Test2]

    Test2's default gateway is x.x.x.170, and Test2 can ping any and every valid, accessible, public IP address with no packet loss whatsoever. If I connect directly over PPPoE from Test2 (the XP box), everything works just fine... I'm at my wits' end; I have NO IDEA what's causing this.


  • What can I do to determine the root cause of a Windows server hanging/freezing?

    - by Aaronaught
    We set up a new server here a few weeks ago that I am informally responsible for managing. Almost everything works perfectly except for one thing: every so often it hangs without warning.

    To clarify: when I say hangs, I mean completely. None of the services respond and I'm unable to even get onto a local console - the display acts as though there's no VGA signal. One time the server actually responded to pings, another time I got the "destination host unreachable" response, but most of the time the pings just time out, as one would expect for a hung server.

    Event logs don't show anything after a reboot. I don't mean that they don't show anything interesting; I mean that they don't show anything at all from before the failure occurs to after the reboot. And there are never any performance problems, strange errors, or other obvious signs of impending doom before it happens.

    I don't expect any easy answers here. What I'd like to know is how I can methodically determine the root cause of this problem, be it a misbehaving service, defective hardware, or something else. Is there any kind of logging I can set up that will help me get to the bottom of this? Any hardware diagnostics or remote monitoring? Anything else I can do to help me discover what's actually happening, or at least be able to eliminate what isn't wrong?

    Just to reiterate, I really don't want to start speculating about possible causes and take a trial-and-error approach, because it would be at least several days at a time before I'd have conclusive results. I'm looking for ways to reliably trace the problem to its source.


  • Active Directory: trouble adding new DC

    - by ethrbunny
    I have a domain with 3 DCs. One is starting to fail, so I brought up a new one. All are running Win 2003. Problem: there appear to be replication issues between the 4 machines, but I can't figure out what's causing this. All are registered with the DNS as identically as I can make them.

    How do I know there is a problem? Nagios is telling me that the other 3 DCs are having KCCEvent errors and the new machine is reporting "failed connectivity" errors. Running dcdiag on the new machine reports that the host could not be resolved to an IP address. This seems crazy, as I log into it using the DNS name, and I can ping it from the other three machines using that DNS name as well. repadmin /showreps from the new machine shows it seeing the other 3 machines; doing the same from one of the older machines doesn't show the new machine. I've tried netdiag /repair numerous times - no luck. There are no firewalls running on any of the machines. If I look at the domain info via MMC (on the new machine), all the information appears to be current: users, computers, DCs - it's all there. I'm puzzled as to what step(s) I've missed in adding this new machine. Suggestions?

    EDIT: dcdiag from the non-working DC:

      C:\Documents and Settings\Administrator.BME>dcdiag

      Domain Controller Diagnosis

      Performing initial setup:
         Done gathering initial info.

      Doing initial required tests

         Testing server: Default-First-Site-Name\YELLOW
            Starting test: Connectivity
               The host 312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu could not
               be resolved to an IP address. Check the DNS server, DHCP, server name, etc.
               Although the Guid DNS name (312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu)
               couldn't be resolved, the server name (yellow.server.edu) resolved to the
               IP address (10.127.24.79) and was pingable. Check that the IP address is
               registered correctly with the DNS server.
               ......................... YELLOW failed test Connectivity

      Doing primary tests

         Testing server: Default-First-Site-Name\YELLOW
            Skipping all tests, because server YELLOW is not responding to directory
            service requests

      Running partition tests on : Schema
         Starting test: CrossRefValidation
            ......................... Schema passed test CrossRefValidation
         Starting test: CheckSDRefDom
            ......................... Schema passed test CheckSDRefDom

      Running partition tests on : Configuration
         Starting test: CrossRefValidation
            ......................... Configuration passed test CrossRefValidation
         Starting test: CheckSDRefDom
            ......................... Configuration passed test CheckSDRefDom

      Running partition tests on : bme
         Starting test: CrossRefValidation
            ......................... bme passed test CrossRefValidation
         Starting test: CheckSDRefDom
            ......................... bme passed test CheckSDRefDom

      Running enterprise tests on : server.edu
         Starting test: Intersite
            ......................... server.edu passed test Intersite
         Starting test: FsmoCheck
            ......................... server.edu passed test FsmoCheck

    dcdiag from a working DC:

      P:\>dcdiag

      Domain Controller Diagnosis

      Performing initial setup:
         Done gathering initial info.

      Doing initial required tests

         Testing server: Default-First-Site-Name\AD1
            Starting test: Connectivity
               ......................... AD1 passed test Connectivity

      Doing primary tests

         Testing server: Default-First-Site-Name\AD1
            Starting test: Replications
               ......................... AD1 passed test Replications
            Starting test: NCSecDesc
               ......................... AD1 passed test NCSecDesc
            Starting test: NetLogons
               ......................... AD1 passed test NetLogons
            Starting test: Advertising
               ......................... AD1 passed test Advertising
            Starting test: KnowsOfRoleHolders
               ......................... AD1 passed test KnowsOfRoleHolders
            Starting test: RidManager
               ......................... AD1 passed test RidManager
            Starting test: MachineAccount
               ......................... AD1 passed test MachineAccount
            Starting test: Services
               ......................... AD1 passed test Services
            Starting test: ObjectsReplicated
               ......................... AD1 passed test ObjectsReplicated
            Starting test: frssysvol
               ......................... AD1 passed test frssysvol
            Starting test: frsevent
               ......................... AD1 passed test frsevent
            Starting test: kccevent
               ......................... AD1 passed test kccevent
            Starting test: systemlog
               ......................... AD1 passed test systemlog
            Starting test: VerifyReferences
               ......................... AD1 passed test VerifyReferences

      Running partition tests on : Schema
         Starting test: CrossRefValidation
            ......................... Schema passed test CrossRefValidation
         Starting test: CheckSDRefDom
            ......................... Schema passed test CheckSDRefDom

      Running partition tests on : Configuration
         Starting test: CrossRefValidation
            ......................... Configuration passed test CrossRefValidation
         Starting test: CheckSDRefDom
            ......................... Configuration passed test CheckSDRefDom

      Running partition tests on : bme
         Starting test: CrossRefValidation
            ......................... bme passed test CrossRefValidation
         Starting test: CheckSDRefDom
            ......................... bme passed test CheckSDRefDom

      Running enterprise tests on : server.edu
         Starting test: Intersite
            ......................... server.edu passed test Intersite
         Starting test: FsmoCheck
            ......................... server.edu passed test FsmoCheck

      P:\>


  • Why am I missing 4GB of RAM on Windows Server 2008 R2 64bit?

    - by Nick G
    I noticed today that a server was very low on memory. It physically has 8GB installed and runs Windows 2008 R2 Standard 64-bit. It also hosts 2 virtual machines using Hyper-V. The server is a Dell PowerEdge R510. However, the host OS reports in Task Manager that it only has 4GB of RAM, despite actually having 8GB and being a 64-bit OS. Computer properties shows "Installed memory: 8.00GB (3.99GB usable)". Why would "usable" be half the real RAM installed under a 64-bit OS?

    Additionally, nearly all of the 4GB of visible RAM on the host OS is being used by something without anything showing up in Task Manager (presumably Hyper-V, as it has allocated 3.6GB to the virtual machines it's hosting). However, that doesn't explain where the other 4GB has gone, which Windows can't even see. Where is my missing 4GB of RAM?

    Update: Dell OpenManage says this:

      Total Installed Capacity                      8192 MB
      Total Installed Capacity Available to the OS  4096 MB

    So it looks like Nathan's suggestion of memory mirroring might be correct. I'll have to reboot to check this (I think?).

    Update 2: OK, so I rebooted and got a message saying "the amount of system memory has changed" (despite not having touched the hardware in a year). Once Windows had booted, all 8GB was visible again. It looks like I probably have a hardware RAM issue (I'll perhaps try reseating the modules whenever I can next chuck everyone off the server). Thanks for your answers and comments. I was hoping it was going to be the mirrored-RAM option, but it seems not - that's not even mentioned in the BIOS.


  • What are secure ways of sharing a server (ssh+LAMP) with friends?

    - by Bran the Blessed
    What is the best way to share a virtual server with friends? More precisely, I have the following assets:

      - A virtual private server (Debian Lenny) with root access for myself, running SSH, apache2 and mysql
      - Some unused disk space
      - Some friends in need of hosting

    The problem: I would now like to do the following:

      - Host one or several domains per friend
      - My friends should have full access to their domains, including running PHP scripts, for example
      - My friends should not be able to poke around in other directories
      - The security of my server should not be compromised by faulty PHP scripts

    To clarify: I do trust my friends in the sense that they are not trying to do something evil with their access; I just do not trust the programs they are going to run. So, what are your recommendations for establishing such a scenario?

    Partial solution: I already came up with the following plan:

      - Add chrooted SSH users for my friends
      - Add Apache vhosts per user (pointing the directories to subdirectories of the home directories, i.e. /home/alice/example.com, /home/bob/example.net, etc.)

    But how can I enforce a chroot-like environment for the scripts they are running within these vhosts? Any pointers would be appreciated.


  • 20 1TB drives vs. 10 2TB drives in RAID5/6 server

    - by Hunter
    Hi everyone, I will be setting up a server at work and I need some advice on the details. The setup will be one blade-type server (8-core, 16GB RAM) with two subsystems: one for the main storage, the other to back it up. I'm shooting for a 20TB array (I know it'll be less after formatting and parity drives).

    So is there any advantage one way or the other between 20 1TB drives and 10 2TB drives? I'm also not sure right now how many controllers I should have (the quote I have includes a dual-port controller). I would think two controllers for a server of this size would be a better choice than one dual-port controller (but I really don't know). And would an array of this size have any performance issues in RAID 5 or 6 (I know RAID 5 and 6 are "slower" because of all the parity calculations)? The drives will be either WD RE3 (1TB) or RE4 (2TB). Oh, and for the backup array, would it be OK to use the WD 2TB Green drives (also in RAID 5 or 6)?
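
    For what it's worth, the raw (pre-formatting) capacity arithmetic for the two layouts follows directly from the usual parity formula, usable capacity = (n - p) x drive size, where p is 1 for RAID 5 and 2 for RAID 6 - a back-of-the-envelope check that ignores hot spares and filesystem overhead:

      RAID 5: (20 - 1) x 1 TB = 19 TB   vs.   (10 - 1) x 2 TB = 18 TB
      RAID 6: (20 - 2) x 1 TB = 18 TB   vs.   (10 - 2) x 2 TB = 16 TB

    So the 20-drive layout yields slightly more usable space for the same raw total, at the cost of more drives that can fail and more controller ports in use.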


  • Things to consider when building a continuous integration server?

    - by Dave
    I'm new to continuous integration, but I immediately realized its value, and I want to get a server set up right away. I have played with TeamCity and have it working great in a VM. Now, I don't want to spend money on another system, so I was planning on just running the VM again on a faster machine (i.e. my dev system). A few questions come to mind with this:

      - Hard disk allocation: how big should it be? Sure, 60GB seems like more than enough, but people also used to think that we'd never need more than 64KB of RAM.
      - Backups: is it even important to back up the integration server? I guess it's nice so that one doesn't have to go through the entire configuration process again, but I would think that's about it. I could snapshot my VM every time I make a configuration change, and then back up applications only (ignoring the buildAgent stuff).
      - Migration: if I want to move away from a VM on my dev system to a new server, which might even run Windows Server 2003, is it easy enough? Perhaps this particular point is best suited for Stack Overflow.


  • Adding a 2008 server to a 2003 Domain with DNS devolution?

    - by mvdwege
    I'm running into a problem adding a 2008 server to our existing 2003 domain, and as I am not a Windows admin, I'm not getting the problem here. Some reading around on TechNet seems to indicate that DNS devolution is the issue. Here's the setup: DNS for the entire company is hosted on a Unix server running BIND, including the service records for the Windows domain. Our top level is company.local, and functional domains are in subdomains, such as mgt.company.local (our management servers). Our Windows servers live mostly in office.company.local, but some of them live in .mgt.company.local and .customers.company.local. The 2003 servers all successfully authenticate against company.local as the Windows domain. Their position in the infrastructure is set by setting the primary DNS suffix under the network settings and the computer name dialog.

    Trying to do the same with a brand new 2008 install throws an error, though: "Changing the Primary Domain DNS name of this computer to office.company.local failed [...] The specified server cannot perform the requested operation."

    I tried googling, but the closest I came was the TechNet article on DNS devolution, and I can't make heads nor tails of how to apply it to my case.

    Addendum 2012-10-23: The problem is not joining the domain - that works - the problem is that the machine joins with the wrong name: .company.local instead of .office.company.local. So far everything works, but I'm rather afraid to run production like this, because sooner or later something is going to complain about the AD name not matching DNS.


  • Transfer database from local machine to hosting server

    - by c11ada
    Hey all, I'm trying to transfer my database from my local machine to the server. I'm using the "Publish to Provider" wizard in Visual Web Developer to generate a script, and then running the generated script against the server database. I keep getting the following errors; can someone please tell me where I'm going wrong?

      Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_RemoveUsersFromRoles, Line 53
      Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
      Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_RemoveUsersFromRoles, Line 58
      Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
      Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_RemoveUsersFromRoles, Line 87
      Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
      Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_RemoveUsersFromRoles, Line 92
      Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
      Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_AddUsersToRoles, Line 48
      Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
      Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_AddUsersToRoles, Line 52
      Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
      Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_AddUsersToRoles, Line 79
      Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
      Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_AddUsersToRoles, Line 83
      Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
      Msg 468, Level 16, State 9, Procedure aspnet_UsersInRoles_AddUsersToRoles, Line 93
      Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
      Msg 15151, Level 16, State 1, Line 1
      Cannot find the object 'aspnet_UsersInRoles_AddUsersToRoles', because it does not exist or you do not have permission.
      Msg 15151, Level 16, State 1, Line 1
      Cannot find the object 'aspnet_UsersInRoles_RemoveUsersFromRoles', because it does not exist or you do not have permission.

    Thanks
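
    The errors suggest the hosted database was created with a different default collation (Latin1_General_CI_AS) than the local one (SQL_Latin1_General_CP1_CI_AS), so the string comparisons inside the aspnet_UsersInRoles_* procedures fail and the procedures never get created. A hedged sketch of the two usual fixes - the database and column names below are illustrative, not taken from the actual generated script:

      -- Option 1: if the host allows it, create the target database with the
      -- collation the generated script expects, then re-run the script.
      CREATE DATABASE MyAppDb COLLATE SQL_Latin1_General_CP1_CI_AS;

      -- Option 2: edit each failing comparison in the generated script to
      -- force both sides into one collation, e.g.:
      WHERE r.RoleName = u.RoleName COLLATE DATABASE_DEFAULT

    Either approach makes the equality comparisons use a single collation.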


  • Enable Automatic Code First Migrations On SQL Database in Azure Web Sites

    - by Steve Michelotti
    Now that Azure supports .NET Framework 4.5, you can use all the latest and greatest available features. A common scenario is to be able to use Entity Framework Code First Migrations with a SQL Database in Azure. Prior to Code First Migrations, Entity Framework provided database initializers. While convenient for demos and prototypes, database initializers weren't useful for much beyond that because, if you delete and re-create your entire database when the schema changes, you lose all of your operational data. This is the void that Migrations are meant to fill. For example, if you add a column to your model, Migrations will alter the database to add the column rather than blowing away the entire database and re-creating it from scratch. Azure is becoming increasingly easier to use - especially with features like Azure Web Sites. Being able to use Entity Framework Migrations in Azure makes deployment easier than ever. In this blog post, I'll walk through enabling Automatic Code First Migrations on Azure. I'll use the Simple Membership provider for my example.

    First, we'll create a new Azure web site called "migrationstest", including creating a new SQL Database along with it. Next we'll go to the web site and download the publish profile.

    In the meantime, we've created a new MVC 4 website in Visual Studio 2012 using the "Internet Application" template. This template is automatically configured to use the Simple Membership provider. We'll do our initial publish to Azure by right-clicking our project and selecting "Publish...". From the "Publish Web" dialog, we'll import the publish profile that we downloaded in the previous step.

    Once the site is published, we'll just click the "Register" link from the default site. Since the AccountController is decorated with the [InitializeSimpleMembership] attribute, the initializer will be called and the initial database is created. We can verify this by connecting to our SQL Database on Azure with SQL Management Studio (after making sure that our local IP address is added to the list of allowed IP addresses in Azure).

    One interesting note is that these tables got created with the default Entity Framework initializer - which is to create the database if it doesn't already exist. However, our database did already exist! This is because there is a new feature of Entity Framework 5 where Code First will add tables to an existing database as long as the target database doesn't contain any of the tables from the model.

    At this point, it's time to enable Migrations. We'll open the Package Manager Console and execute the command:

      PM> Enable-Migrations -EnableAutomaticMigrations

    This will enable automatic migrations for our project. Because we used the "-EnableAutomaticMigrations" switch, it will create our Configuration class with a constructor that sets the AutomaticMigrationsEnabled property to true:

      public Configuration()
      {
          AutomaticMigrationsEnabled = true;
      }

    We'll now add our initial migration:

      PM> Add-Migration Initial

    This will create a migration class called "Initial" that contains the entire model. But we need to remove all of this code because our database already exists, so we are just left with empty Up() and Down() methods:

      public partial class Initial : DbMigration
      {
          public override void Up()
          {
          }

          public override void Down()
          {
          }
      }

    If we don't remove this code, we'll get an exception the first time we attempt to run migrations that tells us: "There is already an object named 'UserProfile' in the database". This blog post by Julie Lerman fully describes this scenario (i.e., enabling migrations on an existing database).

    Our next step is to add the Entity Framework initializer that will automatically use Migrations to update the database to the latest version. We will add these 2 lines of code to the Application_Start of the Global.asax:

      Database.SetInitializer(new MigrateDatabaseToLatestVersion<UsersContext, Configuration>());
      new UsersContext().Database.Initialize(false);

    Note the Initialize() call will force the initializer to run if it has not been run before. At this point, we can publish again to make sure everything is still working as we are expecting. This time we're going to specify in our publish profile that Code First Migrations should be executed. Once we have re-published, we can once again navigate to the Register page. At this point the database has not been changed, but Migrations is now enabled on our SQL Database in Azure.

    We can now customize our model. Let's add 2 new properties to the UserProfile class - Email and DateOfBirth:

      [Table("UserProfile")]
      public class UserProfile
      {
          [Key]
          [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
          public int UserId { get; set; }
          public string UserName { get; set; }
          public string Email { get; set; }
          public DateTime DateOfBirth { get; set; }
      }

    At this point all we need to do is simply re-publish. We'll once again navigate to the Registration page and, because we had Automatic Migrations enabled, the database has been altered (*not* recreated) to add our 2 new columns. We can verify this by once again looking at SQL Management Studio.

    Automatic Migrations provide a quick and easy way to keep your database in sync with your model without the worry of having to re-create your entire database and lose data. With Azure Web Sites you can set up automatic deployment with Git or TFS and automate the entire process to make it dead simple.
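
    Under the covers, that automatic migration issues ALTER statements against the live database rather than dropping it. Conceptually, the generated change for the two new properties looks something like the following T-SQL sketch (the exact types, nullability and default that EF emits may differ; this is an illustration, not the literal generated script):

      ALTER TABLE [dbo].[UserProfile] ADD [Email] nvarchar(max) NULL;
      -- DateOfBirth is a non-nullable DateTime in the model, so any existing
      -- rows need a default value when the column is added:
      ALTER TABLE [dbo].[UserProfile] ADD [DateOfBirth] datetime NOT NULL
          CONSTRAINT [DF_UserProfile_DateOfBirth] DEFAULT ('1900-01-01T00:00:00.000');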


  • Visio 2010 forward engineer add-in for Office 2010

    - by Ryan Ternier
    I have been scouring the internet for ages trying to see if there was a usable add-on for Visio 2010 that could export SQL scripts. Microsoft hasn't shipped that functionality in Visio since 2003 - which is a huge shame. Today I found an open source project from Alberto Ferrari: an add-in for Visio 2010 that allows you to generate SQL scripts from your DB diagram. It's still in beta, and the source is available.

    Check it out here: http://sqlblog.com/blogs/alberto_ferrari/archive/2010/04/16/visio-forward-engineer-addin-for-office-2010.aspx

    This saves me from having to do all my diagramming in SQL Server / VS 2010, and brings back much-needed functionality that had been lost.


  • Ten Things I Wish I’d Known When I Started Using tSQLt and SQL Test

    The open-source unit test framework tSQLt is a great way of writing unit tests in the same language as the one being tested. In retrospect, after using tSQLt for a while, what are the 'gotchas' - those things that you'd have been better off knowing about before you got started? David Green lists a few tips he wishes he'd read beforehand.
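
    For readers who haven't seen the framework, here is a minimal sketch of what a tSQLt test looks like - the table, function and expected values below are invented for illustration, not taken from the article:

      -- Create a test class (a schema that groups related tests)
      EXEC tSQLt.NewTestClass 'PricingTests';
      GO
      CREATE PROCEDURE PricingTests.[test discount applies to orders over 100]
      AS
      BEGIN
          -- Replace the real table with an empty fake so the test is isolated
          EXEC tSQLt.FakeTable 'dbo.Orders';
          INSERT INTO dbo.Orders (OrderId, Amount) VALUES (1, 150.00);

          DECLARE @expected money = 135.00;
          DECLARE @actual money = dbo.fn_DiscountedTotal(1);  -- hypothetical function under test

          EXEC tSQLt.AssertEquals @expected, @actual;
      END;
      GO
      -- Run every test in the class
      EXEC tSQLt.Run 'PricingTests';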


  • Get non-overlapping date ranges for price history data

    - by Anonymouse
    Hello, let's assume that I have the following table:

      CREATE TABLE [dbo].[PricesHist](
          [Product]   varchar NOT NULL,
          [Price]     [float] NOT NULL,
          [StartDate] [datetime] NOT NULL,
          [EndDate]   [datetime] NOT NULL
      )

      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D2C00000000 AS DateTime), CAST(0x00009D2C00000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D2D00000000 AS DateTime), CAST(0x00009D2D00000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D2E00000000 AS DateTime), CAST(0x00009D2E00000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3000000000 AS DateTime), CAST(0x00009D3000000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3100000000 AS DateTime), CAST(0x00009D3100000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3400000000 AS DateTime), CAST(0x00009D3400000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D3500000000 AS DateTime), CAST(0x00009D3500000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3600000000 AS DateTime), CAST(0x00009D3600000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3700000000 AS DateTime), CAST(0x00009D3700000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3800000000 AS DateTime), CAST(0x00009D3800000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3A00000000 AS DateTime), CAST(0x00009D3A00000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3B00000000 AS DateTime), CAST(0x00009D3B00000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D3C00000000 AS DateTime), CAST(0x00009D3C00000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3D00000000 AS DateTime), CAST(0x00009D3D00000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3E00000000 AS DateTime), CAST(0x00009D3E00000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3F00000000 AS DateTime), CAST(0x00009D3F00000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4100000000 AS DateTime), CAST(0x00009D4100000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4200000000 AS DateTime), CAST(0x00009D4200000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D4300000000 AS DateTime), CAST(0x00009D4300000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4400000000 AS DateTime), CAST(0x00009D4400000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4500000000 AS DateTime), CAST(0x00009D4500000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4600000000 AS DateTime), CAST(0x00009D4600000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4800000000 AS DateTime), CAST(0x00009D4800000000 AS DateTime))
      INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D4A00000000 AS DateTime), CAST(0x00009D4A00000000 AS DateTime))

    As you can see, there are two prices for Apples during that month: 4.90 and 2.50. In order to tidy this table up, I need this information as date ranges rather than one row per day, as it currently is. I can obviously do this easily with Min and Max aggregates, but then the ranges overlap, and other business code expects non-overlapping ranges. I also tried to achieve this with self joins and row_number(), but without much success... Here is what I'm trying to achieve as the output:

      Product | StartDate   | EndDate     | Price
      -------------------------------------------
      Apples  | 01 Mar 2010 | 02 Mar 2010 | 4.90
      Apples  | 03 Mar 2010 | 03 Mar 2010 | 2.50
      Apples  | 05 Mar 2010 | 09 Mar 2010 | 4.90
      Apples  | 10 Mar 2010 | 10 Mar 2010 | 2.50
      Apples  | 11 Mar 2010 | 16 Mar 2010 | 4.90
      Apples  | 17 Mar 2010 | 17 Mar 2010 | 2.50
      Apples  | 18 Mar 2010 | 23 Mar 2010 | 4.90
      Apples  | 24 Mar 2010 | 24 Mar 2010 | 2.50
      Apples  | 25 Mar 2010 | 30 Mar 2010 | 4.90
      Apples  | 31 Mar 2010 | 31 Mar 2010 | 2.50

    What would be the best approach to get this done? Thanks a lot in advance.
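
    One standard way to collapse a day-per-row history into non-overlapping ranges is the 'islands' technique: number the rows per product in date order, and again per product and price; the difference between the two row numbers is constant within each consecutive same-price run, so it can serve as a grouping key. A sketch against the table above (assuming at most one row per product per day, as in the sample data):

      ;WITH Runs AS (
          SELECT Product, Price, StartDate, EndDate,
                 ROW_NUMBER() OVER (PARTITION BY Product ORDER BY StartDate)
               - ROW_NUMBER() OVER (PARTITION BY Product, Price ORDER BY StartDate) AS Grp
          FROM dbo.PricesHist
      )
      SELECT Product,
             MIN(StartDate) AS StartDate,
             MAX(EndDate)   AS EndDate,
             Price
      FROM Runs
      GROUP BY Product, Price, Grp
      ORDER BY Product, MIN(StartDate);

    Because the grouping key is based on row position rather than calendar arithmetic, runs are merged across missing days, matching the desired output above.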


  • Scheduling parameterized reports in Crystal Reports Server

    - by SarekOfVulcan
    I'm trying to set up a report in Crystal Reports Server 2008 that runs monthly and gives me the next month's Affordable Care Plan termination dates. However, as far as I can tell, I can only give the date parameter a particular date string, not something relative like "7 days after the report is scheduled". How do I do this? (Same question for CR 2008 itself, actually, but the server is the one I'm interested in right now.) Thanks!


  • Report Model: problem regarding many-to-many relations

    - by Koen
    I'm having trouble setting up a report model to create reports with Report Builder. I guess I'm doing something wrong when configuring the report model, but it might also be due to a change of primary entity in Report Builder.

    I have 3 tables: Client, Address and Product. Client has the primary key ClientNumber. Address and Product both have a FK relation on ClientNumber. The relation between Client and Address is 1-to-many, as is the one between Client and Product: Product-(many:1)-Client-(1:many)-Address. A sketch of the schema follows below.

    I've created a report model (mostly auto-generated) with these 3 tables, making an entity for each table. On the Client entity I have 2 roles, Address and Product, both with a cardinality of 'OptionalMany', because a Client can have multiple Addresses or Products. On both Address and Product I have a Client role with cardinality 'One', because every Address or Product must have a Client (I tried 'OptionalOne' as well...).

    Now I'm trying to create a report in Report Builder (2.0) where I select fields from these three entities. I'd like an overview of Clients with their main address and their products, but I don't seem to be able to create a report with fields from both Address and Product in it. I start by selecting attributes from Client, and as soon as I add Product, for example, the primary entity changes as if I'm selecting Products (instead of Clients). This is a basic example of a problem I'm facing in a much more complex model. I've tried lots of different things for 2 days, but I can't get it to work. Does anyone have an idea how to cope with this? (Using SSRS 2008)
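
    To make the shape of the model concrete, the underlying tables look roughly like this (a sketch: only ClientNumber and the two FK relations come from the question; the other keys and columns are invented for illustration):

      CREATE TABLE dbo.Client (
          ClientNumber int NOT NULL PRIMARY KEY,
          ClientName   nvarchar(100) NOT NULL           -- illustrative column
      );

      CREATE TABLE dbo.Address (
          AddressId    int NOT NULL PRIMARY KEY,        -- illustrative key
          ClientNumber int NOT NULL
              REFERENCES dbo.Client (ClientNumber)      -- many addresses per client
      );

      CREATE TABLE dbo.Product (
          ProductId    int NOT NULL PRIMARY KEY,        -- illustrative key
          ClientNumber int NOT NULL
              REFERENCES dbo.Client (ClientNumber)      -- many products per client
      );

    The desired report is effectively Client joined out to Address and to Product: two independent one-to-many fan-outs from Client, which may be why Report Builder re-anchors the primary entity when fields from the second many-role are added.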


  • PostgreSQL server doesn't start

    - by Jan-Frederik Carl
    Hello, I would like to use PostgreSQL locally on my computer and have installed it. I use Windows 7. However, I am not able to start the PostgreSQL server. When using the "Start Server" program, I get the following output in the DOS command window:

      Start DoCmd(net start postgresql-8.4)...
      System error 2 (my translation)
      The system cannot find the specified file. (my translation)

    Please ask if I should provide additional information.


  • Visual Studio 2010 and discovery of Advantage server error

    - by Tina Nipe
    I installed VS 2010 on a Windows 7 64-bit machine. When I try to connect to an Advantage database through the Server Explorer using the Advantage OLE DB driver, I get a "cannot discover Advantage Database Server" error. I can connect to the database using ARC just fine, and I was able to connect in VS 2008 just fine. Any ideas on why I can't connect in VS 2010?

