Search Results

Search found 31483 results on 1260 pages for 'database migration'.

Page 570/1260 | < Previous Page | 566 567 568 569 570 571 572 573 574 575 576 577  | Next Page >

  • Problems executing check_mysql plugin

    - by Brian
    I'm attempting to test out the check_mysql plugin for Nagios using the command line. It works great when I'm just checking localhost, but when I try to specify a different server (using the -H argument) I keep getting the following error message:

        CRITICAL - Unable to connect to mysql://nagios_user@{remoteServerHostname}/ - Access denied for user 'nagios_user'@'{localServerHostName}' (using password: YES)

    It looks like it's trying to connect to the database using the localhost server name even though I've specified a different host name. I've already set up this user on the remote database and they have the correct permissions. I'm just trying to figure out why the script keeps inserting the localhost server name instead of the remote one.
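
    A likely direction (an assumption on my part, not something confirmed above): MySQL privileges are host-specific, and the host shown after the @ in the error is the client the connection arrives from, so the remote server needs a grant for nagios_user connecting from the Nagios host. A minimal sketch, run on the remote MySQL server, with the hostname and password as placeholders:

        mysql -u root -p -e "GRANT SELECT ON *.* TO 'nagios_user'@'nagios-host.example.com' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"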

    Read the article

  • Deleted one membership table. Possible to import without breaking relationship?

    - by superexsl
    Hey, I hope this isn't going to be tricky or time-consuming, so fingers crossed. I'm working with the ASP.NET membership tables, and quite a few other tables that I've built have a relationship with the dbo.aspnet_Membership table. I've accidentally deleted the dbo.aspnet_Membership table and can't get it back. There was no major data in it (it's on my local machine), but I would really like to copy that one table over from another database I have, mainly for the sake of not breaking the schema. Is this possible? I'm worried that if I run the Aspnet_regsql.exe tool, it's going to break the schema and remove all data from the tables as well as the relationships (which would take a while to re-establish). Is there any way I can import just the dbo.aspnet_Membership table into my current database? Thanks for any advice!
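
    One hedged approach (a sketch only, assuming both databases live on the same local SQL Server instance and have matching schema versions; the server and database names below are placeholders): script the table definition out of the other database in SSMS (right-click the table, Script Table As, CREATE To), run that script against the current database, then copy the rows across:

        sqlcmd -S . -Q "INSERT INTO MyAppDb.dbo.aspnet_Membership SELECT * FROM OtherDb.dbo.aspnet_Membership"

    The standard membership schema has foreign keys from aspnet_Membership to aspnet_Users and aspnet_Applications, so this is only safe if those surrounding tables survived intact.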

    Read the article

  • PostgreSQL RAID configuration

    - by Yoldar-Zi
    I'm stuck on how best to configure a disk array. We have an HP P2000 G3 disk array with 24 SAS physical disks, 300 GB each. We need to configure this array to hold 2 copies of PostgreSQL 9.2, because there are two different systems. As we know, it's recommended to store the database and the transaction logs (pg_xlog) on separate disks. So we would set up 4 logical disks: 2 for transaction logs with RAID 1, and 2 for the databases with RAID 10. Is this the right distribution scheme? Or would it be better to just make one big RAID 10 carved into 4 logical disks?

    Read the article

  • Concrete uses of LDAP?

    - by ajsie
    I'm new to LDAP, and I wonder what some concrete examples of using it are: things that are MUCH easier to do when you have 3-7 Linux computers in a small company network. One use that seems very important to me is configuring LDAP to handle system authentication, so you don't have to create the same accounts on every computer. Are there other things that are a MUST DO for a small network, to save more time? My small network is for Apache servers and database servers. Also, should LDAP run on its own machine? I guess it's not good to put it on the Apache or database servers, since those are performance-critical.

    Read the article

  • What are some good, IP Address Management solutions for IPv6? [closed]

    - by Russell Heilling
    There are a number of open source IPAM tools available for IPv4 address management; however, there seems to be a distinct lack of actively updated tools for IPv6, other than FreeIPdb (code no longer maintained) or the RIPE Database (I have seen some customisations to the RIPE db that allow for enterprise/ISP IPAM, but it seems like overkill for a system that will probably only ever handle one /32 worth of space). Are there any other options that I'm missing? (Database-backed only, please. I know vi can be used for flat-text IPAM; that's how I'm handling our /32 at the moment, but I don't see it scaling for much longer.) It doesn't have to be open source. What are folks doing to manage IPv6 in a dual-stack environment?

    Read the article

  • Shared resources in Windows Server 2008 were lost

    - by user316687
    We have an Oracle database on Windows Server 2003 which has its archived redo logs stored on a shared resource on a Windows Server 2008 machine: \\192.168.1.189\d$\folder_for_archivedlogs. However, according to Oracle's alert.log, at 10:01 p.m. that shared resource was lost and the database became inaccessible. From my Windows Server 2003 machine I couldn't access that shared resource in Windows Explorer, but I did get a response when I ran ping 192.168.1.189. I reviewed all the Event Logs on that Windows 2008 machine, but there is no error around 10:00 p.m. or 11:00 p.m. Has anyone seen a similar case before, where shared resources are lost but you can still ping the server and there are no error events in the Event Logs?

    Read the article

  • Using sed to convert hex characters in postgresql dump file

    - by Bernt
    I am working on moving several databases from a PostgreSQL 8.3 server to a PostgreSQL 8.4 server. It has worked fine so far, but one database has given me some trouble. The database is listed as unicode-encoded on the 8.3 server, but somehow a client program has managed to inject some invalid unicode data into it. When I do a normal dump and restore using postgres' custom format, the new server won't accept it, complaining about unicode errors. My plan is to do a plain-text dump of the database, then use sed to replace the invalid characters with nothing (they are not needed). But how do you make sed work on hex/binary values in a file?
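
    A minimal sketch (assuming GNU sed, with 0xE4 standing in for whatever the offending byte actually turns out to be; inspect the dump with a hex viewer first to find it). Prefixing LANG=C keeps sed treating the file as raw bytes rather than multibyte characters:

        pg_dump -Fp baddb > baddb.sql
        LANG=C sed -i 's/\xe4//g' baddb.sql
        psql -d newdb -f baddb.sql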

    Read the article

  • Why is my DB read-only when attached to SQL Express, but not with SQL Web?

    - by David Rubin
    I have an .mdf/.ldf pair, originally created in 2008 R2 Standard and well under 10 GB, with these ACLs:

        d:\db snapshot\DB_NAME.mdf
            SERVERNAME\SQLServerMSSQLUser$ACCOUNT$MSSQLSERVER:F
            OWNER RIGHTS:F
            BUILTIN\Administrators:F

        d:\db snapshot\DB_NAME_log.ldf
            SERVERNAME\SQLServerMSSQLUser$ACCOUNT$MSSQLSERVER:F
            OWNER RIGHTS:F
            BUILTIN\Administrators:F

    When I attach the database to an instance of SQL Express 2008 R2, it comes up as read-only. When exactly the same ACLs, user accounts, and SQLCMD statements are used with SQL Web 2008 R2, it comes up writable. I looked at MSDN's comparison page but nothing jumped out at me. Why on earth is this happening? Thanks!

    UPDATE: I just noticed that the names of the attached databases are different. On SQL Express (read-only) it matches the filename (e.g. DB_NAME); on SQL Web (writable) it matches the CUSTOM_NAME that I gave it in the attach command:

        CREATE DATABASE [CUSTOM_NAME] ON (FILENAME = 'PATH_TO_MDF'), (FILENAME = 'PATH_TO_LDF') FOR ATTACH
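
    For reference, a hedged workaround rather than a diagnosis: if the attached copy simply carries a read-only flag, it can be cleared in one statement, provided the service account has write access to both files (the instance name below is a placeholder):

        sqlcmd -S .\SQLEXPRESS -Q "ALTER DATABASE [CUSTOM_NAME] SET READ_WRITE"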

    Read the article

  • MySQL Servers for an Attendance System

    - by foo
    I'm building an attendance system. There are about 20 places where people will check in and check out using a Mifare 1K card. It will use MySQL as the database. The system will display something like "#ID IN: 800AM" the first time the user checks in, and "#ID OUT: 400PM" when the user checks out. For this to work, all the databases need to be synchronized with each other at all times. For example, if user A checks in at location #1, but by the time he wants to return home the server at location #1 has gone down, he needs to go to location #2 or the nearest server to check out. The server at location #2 should display "#ID OUT: 400PM" and not "#ID IN: 400PM", since he has already checked in. So, what should I use to make this idea work? My main concern is the network (another department manages it), which is very unpredictable. It just loves to go down whenever it wants to.

    Update: LOL, I didn't realize my question was unclear until you guys pointed it out, sorry about that. My real question is: how can I configure my MySQL servers to be synchronized with each other (20 servers)? MySQL Cluster? (I tried reading about it, but I'm not sure it's the right thing to do.) My current setup (first phase):

    - A local database on each server (OS: Slackware)
    - A main server that keeps track of which staff member is at which server
    - A web-based front end for users to see their history (which connects to the server holding their records)

    Main pro: no worries about network problems, since each database is local.

    Main cons: a user can only check in and out at the same server; the databases/servers are not connected to each other. I have to add the user to each server where they want to check in, which means that if he wants to go to location B, he must first check out at location A and then check in at location B; the server at location B doesn't know the user already checked in at A.

    By the way, I've already pointed NTP at a centralized local server. About the network, let's just say I don't have the authority to make changes that would improve it. Network trouble doesn't affect all 20 servers at once; usually just a few of them, several times a week. If there is anything else you would like me to answer, please just ask.
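
    One hedged direction, sketched only (a replication ring across 20 writers is fragile, and MySQL Cluster or a store-and-forward design may serve better): classic MySQL multi-master replication is driven by per-server my.cnf settings like these, with server-id unique on every box and the auto-increment settings preventing key collisions between writers (all values here are hypothetical):

        # /etc/my.cnf fragment on server 1
        server-id                = 1
        log-bin                  = mysql-bin
        # space out generated keys so 20 writers never collide
        auto_increment_increment = 20
        auto_increment_offset    = 1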

    Read the article

  • Criteria strings: how many different criteria can be entered to retrieve specific data?

    - by Janet
    For our membership database we are currently using an old DOS program, Arclist. The program is old, but the one feature we desperately need in a database program is the ability to enter multiple criteria at one time, for a one-time extraction of the data meeting all the criteria entered in what I call a "criteria string". An example might be extracting only those records with zip codes matching (67893, 54235, 54323, 54201, 54302, 54303, 54301, 67894, 67895). Another set of criteria might omit records not equal to one value in one field while also extracting records matching a criterion in another field. So we would want records "not equal to" a value in one field, but whose information equals the requested information in another field.
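
    In SQL terms (a hypothetical sketch; the table and column names are invented for illustration), such a criteria string maps onto a single WHERE clause combining IN, not-equal, and equality tests:

        SELECT *
          FROM members
         WHERE zip IN ('67893','54235','54323','54201','54302','54303','54301','67894','67895')
           AND member_type <> 'lapsed'
           AND newsletter = 'Y';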

    Read the article

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:

    - Backbone.js frontend
    - Rails 3.2
    - PostgreSQL
    - Resque
    - S3 for storage

    The flow of the app is as follows: 1) request from the frontend: upload a video; 2) store the video; 3) query external APIs; 4) process/encode the video; 5) post back to the frontend. I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating the app across several instances), but since I don't really have expertise in backend system administration, there could be some fundamental mistakes. I would also rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

    A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts.
    B) Backend server with the main database. Gets requests from A), saves uploads to C).
    C) S3 storage.
    D) Server for querying the external APIs. Basically just Resque workers that post info back to B).
    E) Server for video encoding. Processes videos uploaded to C) and uploads them back.

    So I will have:

        A) frontend
             \
              \
        B) MAIN_APP/DB ----- C) S3 Storage (Files)
            /  \            /
           /    \          /
        D) ExternalAPI_queries     E) Video_Processing
           (redundant DB)             (redundant DB)

    All of this will supposedly talk via HTTP requests. My reason for this split is that the video-processing part is by far the most resource-intensive, and I would run just a barebones application there that accepts requests and starts processing them. Questions:

    1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and also keep duplicates of the database, I guess, for safety reasons). Is that the right approach, or should I have one database that everyone connects to (and if so, how)?
    2) Is it a good idea to separate the API queries from the video processing? Logically they are very close (processing is determined by the result of the API queries), but resource-wise video processing is far more intensive.
    3) What should I use to distribute calls between the backend apps based on load?

    Read the article

  • Writing data onto a Linux live DVD

    - by stanleyxu2005
    I have a server machine with a DVD writer. I want to burn a Linux live DVD (openSUSE preferred) with a pre-configured web server, so that after booting, the web server is ready to serve. The web server has a SQLite database (with very little data), but after rebooting the system, all data in the database is lost. Is it possible to store the necessary data on the live DVD as well? If it were a USB drive, I would create two partitions and mount the second one read-write. But I have no idea how to create two partitions on a DVD. Any hint is appreciated.

    Read the article

  • Clone MySQL DB - errors with CREATE VIEW/SHOW VIEW privileges

    - by user43537
    Running MySQL 5.0.32 on Debian 4.0 (Etch). I'm trying to clone a WordPress MySQL database completely (structure and data) on the same server. I tried a dump to an .sql file and an import into a new empty database from the command line, but the import fails with errors saying the user does not have the "SHOW VIEW" or "CREATE VIEW" privilege. Trying it with phpMyAdmin doesn't work either. I also tried doing this as the MySQL root user (not named "root", though) and it shows an "Access Denied" error. I'm terribly confused as to where the problem is. Any pointers on cloning a MySQL DB and granting all privileges to a user account would be great (specifically for MySQL 5.0.32). Thanks!
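
    A hedged sketch of the grant-then-clone sequence (account and database names are placeholders; it assumes an account that is allowed to GRANT, and uses MySQL 5.0's SHOW VIEW privilege on the source so the dump can read view definitions):

        mysqladmin -u admin -p create wp_copy
        mysql -u admin -p -e "GRANT SELECT, SHOW VIEW ON wordpress.* TO 'wpuser'@'localhost';"
        mysql -u admin -p -e "GRANT ALL PRIVILEGES ON wp_copy.* TO 'wpuser'@'localhost';"
        mysqldump -u wpuser -p wordpress > wp.sql
        mysql -u wpuser -p wp_copy < wp.sql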

    Read the article

  • Equivalent of phpMyAdmin for MSSQL?

    - by Tedd Hansen
    Is there any web interface for administering MSSQL similar to phpMyAdmin (for MySQL)? I want a self-service setup where developers can create a database through a web interface and upload/download backups of the database without local access. I've considered phpMSAdmin, but it hasn't had a release since 2006, so I'm not sure it's worth the effort of setting it up. If there is something else (free or not-so-free), that would be great. My question is similar to one posted 2 years ago, but no good web interface was found back then. SQL Web Data Administrator seems interesting, but it lacks a few features, most notably creating new databases (also, it hasn't been updated since 2007).

    Read the article

  • IIS Hangs on SQL Connections when running ASP.NET applications

    - by PaulWaldman
    We have a database server running SQL 2000 and two web servers hosting ASP.NET applications. All three servers run Windows Server 2003 SP2. Our issue is repeatable after about 2 weeks: IIS on one web server is no longer able to establish SQL connections. Static content loads fine. Other non-IIS applications are still able to contact the SQL database server, and ODBC functionality also still works. While running SQL Profiler, no connection is ever established from IIS when it is in this state. The only way to fix the situation is to restart the web server. There are no firewalls installed on any of the machines.

    Read the article

  • Trying to set up phpMyAdmin for use with a remote MySQL database on Scientific Linux release 6.2

    - by techsjs2012
    I am trying to set up phpMyAdmin for use with a remote MySQL database on Scientific Linux release 6.2. If I use the mysql command line to connect to the remote database, it works great, but if I use phpMyAdmin I get "#2002 Cannot log in to the MySQL server". I have found that if I do a setenforce 0, it will work from phpMyAdmin to my remote database, but once I reboot or set SELinux enforcing back to 1 it stops working again. I know setenforce 0 is not the right thing to do, but can someone please give me detailed steps on how to get this working the right way? Thanks. I am new to Scientific Linux and have been having some issues.
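
    A hedged fix along standard SELinux lines (assuming the stock httpd policy that ships with RHEL-family distros like Scientific Linux): rather than disabling enforcement, persistently enable the boolean that lets Apache open outbound database connections:

        setsebool -P httpd_can_network_connect_db 1
        # or, if the database-specific boolean is not enough:
        # setsebool -P httpd_can_network_connect 1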

    Read the article

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine:

    - 3 hard drives using RAID 5 (strip element size: 1M; read policy: adaptive read ahead; write policy: write through)
    - /boot: 200 MB, ext2
    - /: 15 GB, ext3
    - swap: 10 GB
    - LVM: rest (~500 GB)

    I installed PostgreSQL and created a big database (over 1 GB). I have an SQL request that takes a lot of time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds.

    Then I installed Xen and created a domU running another Debian distro. On this OS I also installed PostgreSQL, with the same database. The same SQL request takes only 2.5 seconds to run.

    I checked the kernel on both dom0 and domU; uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same. For those that are relevant, I changed their values to make them match on both systems using sysctl, and saw no change (the requests still take the same amount of time).

    After this, I checked the file systems. I used ext3 on domU. Still no change. I installed hdparm and ran hdparm -Tt on all my partitions on both systems, and I get similar results. Now I am stuck; I don't know what is different and what could cause such a big difference.

    Additional info: Debian runs on a Dell PowerEdge 2950 server; postgresql 9.1.9 (both dom0 and domU); xen-linux-system 3.2.0; xen-hypervisor 4.1. Thanks.

    EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and the write speed. Here is domU:

        root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
        ^C2020+0 records in
        2020+0 records out
        2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s
        root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
        2020+0 records in
        2020+0 records out
        2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s

    And here is dom0:

        root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
        ^C1693+0 records in
        1693+0 records out
        1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s
        root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
        1693+0 records in
        1693+0 records out
        1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s

    What can be the cause of this caching behaviour? And how can we "fix" it? Can we apply it to dom0?

    EDIT 2: I switched my virtual disk type, following this article. I did a

        dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M

    and then, in /etc/xen/test1.cfg, changed the disk parameter to use file: instead of phy:. That should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres).
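
    A hedged way to take the Linux page cache out of such comparisons (assuming GNU coreutils dd, run as root): use O_DIRECT and drop the caches between the write and the read pass:

        # write bypassing the page cache (sizes are arbitrary for the sketch)
        dd if=/dev/zero of=/root/dd bs=1M count=2000 oflag=direct
        # flush anything cached, then read back uncached
        sync && echo 3 > /proc/sys/vm/drop_caches
        dd if=/root/dd of=/dev/null bs=1M iflag=direct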

    Read the article

  • Openfire Installation Issue - Can't Log In to Admin Panel

    - by Lobe
    I am trying to get Openfire installed on an Ubuntu virtual machine; however, upon completing the web-based installer, I am unable to log in to the admin panel. So far I have:

    - downloaded the Debian installer
    - installed using stock options
    - added a database and built the structure using the supplied SQL file
    - completed the web-based installer

    I am now trying to log in using username admin and my password, but I constantly get a wrong username/password error. There is a record in the MySQL database showing the admin user with an encrypted password, and changing it to an unencoded password doesn't work. What is the problem here?
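
    A commonly cited reset, hedged (the table and column names below come from the stock Openfire schema, so verify them against your version before running anything): blank the encrypted password, set a plain one directly in the database, then restart Openfire and log in with it:

        mysql -u root -p openfire -e "UPDATE ofUser SET plainPassword='admin', encryptedPassword=NULL WHERE username='admin';"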

    Read the article

  • OWA no longer accessing one backend Exchange server

    - by Morchuboo
    We have IIS hosting OWA as the web front end to 3 backend Exchange servers. Yesterday we got a lot of event 9791 warnings:

        Cleanup of the DeliveredTo table for database 'Second Storage Group\Mailbox Store
        EUROPE 2' was pre-empted because the database engine's version store was growing
        too large. 0 entries were purged.

    At this point the server was crawling. Our mail admin is currently away and not contactable, so we rebooted the server. Everything seems fine when reading mail from Outlook and evolution-mapi clients, but OWA and ActiveSync connections cannot get through. When logging into OWA, users whose mailboxes are not on this backend server are fine, but users on this server can reach the OWA front end and then, once they submit their credentials, the page returns a 503 Service Unavailable error. We have since restarted the affected Exchange server and the IIS server, as well as running iisreset /noforce, but the problem persists. Can anyone suggest what we should look at?

    Read the article

  • Putting our OLTP and OLAP services on the same cluster

    - by Dynamo
    We're currently in a bit of a debate about what to do with our scattered SQL environment. We are definitely setting up a cluster for our data warehouses, and are now deciding whether our OLTP databases should go on the same one. The cluster will be active/active, with database services running on one node and reporting and analytical services on the other. From a technical standpoint I don't see an issue here: with the services running on different nodes, they shouldn't compete too heavily for resources. The only physical resource that may be an issue is the shared disk space. Our environment is also quite small: our biggest OLAP database at the moment is only about 40 GB, and our OLTP databases are all under 10 GB. I see a potential political issue here, as different groups are involved, but strictly speaking I'm wondering whether any major technical issues could arise from this setup.

    Read the article

  • Shared volume for data (multiple MDF) and another shared volume for logs (multiple LDF)

    - by hagensoft
    I have 3 instances of SQL Server 2008, each on a different machine, with multiple databases on each instance. I have 2 separate LUNs on my SAN for MDF and LDF files. The NDX and TempDB files run on the local drive of each machine. Is it OK for the 3 instances to share one volume for the data files and another volume for the log files? I don't have thin provisioning on the SAN, so I would rather not constrain disk space by creating multiple volumes, but I was advised to create a volume (drive letter) for each instance, if not for each database. I am aware that I should at least split my logs and data files. No instance would share the actual database files, just the space on the drive. Any help is appreciated.

    Read the article

  • MSDTC Port 135 open bi-directionally

    - by Stephen Lacy
    I have two servers: a web application server and an SQL Server database running on its own server, with a firewall between them. Do I have to open port 135 on both the SQL Server and the web application server? Does the SQL Server open its own connection to the web application server on port 135 or any other port? Do I have to point the web application server's MSDTC at the SQL database server in Component Services? If the firewall is completely open and the Component Services settings allow remote connections, remote administration, etc., are there any other settings that need to be changed to allow remote connections to the SQL Server's MSDTC?
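
    For reference, a hedged sketch of the firewall side (an assumption about the usual MSDTC requirements rather than a statement about this network: MSDTC uses TCP 135 for the RPC endpoint mapper plus a dynamic RPC port range, and equivalent inbound rules are typically needed on both machines; rule names are arbitrary, and this syntax assumes Server 2008 or later):

        netsh advfirewall firewall add rule name="RPC Endpoint Mapper" dir=in action=allow protocol=TCP localport=135
        netsh advfirewall firewall add rule name="MSDTC" dir=in action=allow program="C:\Windows\system32\msdtc.exe"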

    Read the article

  • Trying to get DNS services running on Windows Server 2008 R2, what am I getting wrong?

    - by LaserBeak
    Ok, so I am basically trying to get a home server PC up that will provide domain name services and act as a mail server and web server. I have one static IP; well, it's not officially static, but it hasn't changed in two years, so I'll call it static. I have done the following:

    - Configured the router to NAT/forward UDP/TCP port 53 to the internal IP of my server, 192.168.1.16. In the adapter settings I specified the manual settings: IP 192.168.1.16, gateway 192.168.1.1, subnet 255.255.255.0, and loopback DNS 127.0.0.1.
    - Using my public IP, checked with http://www.canyouseeme.org/ that port 53 is open and is not being blocked by my ISP. It can see services on this port.
    - Registered a domain name (mydomain.com.au).
    - Updated the whois database through the domain registrar's site and registered the nameserver names ns0.mydomain.com.au and ns2.mydomain.com.au, both associated with my single public IP. (Waited 24 hours.)
    - Updated the nameservers for mydomain.com.au: primary ns0.mydomain.com.au, secondary ns2.mydomain.com.au. (Waited 24+ hours.)
    - Installed Server 2008 R2 and added the web server and DNS roles. The web server works: when I enter my public IP into the browser of any PC or mobile, I get the IIS7 welcome page.
    - In the DNS server, created a new forward lookup zone:

        ;
        ;  Database file mydoman.com.au.dns for mydomain.com.au zone.
        ;  Zone version: 10
        ;
        @       IN  SOA mydomain.com.au. mydomain.testdomain.com. (
                        10     ; serial number
                        900    ; refresh
                        600    ; retry
                        86400  ; expire
                        3600 ) ; default TTL
        ;
        ;  Zone NS records
        ;
        @       NS  ns0.mydomain.com.au.
        @       NS  ns1.mydomain.com.au.
        ;
        ;  Zone records
        ;
        @       A   192.168.1.16
        www     A   192.168.1.16

    The domain name services will, however, not work. The whois database was updated with ns0.mydomain.com.au etc., but when I type in my site name www.mydomain.com.au from an external machine it will not open the site, and I can't even ping it (can't find host). When I check the ns0.mydomain.com.au NS record using a tool like http://www.squish.net/dnscheck/ I get:

        Security: Server ns0.mydomain.com.au (XXX.XXX.XXX.XX <- my public IP) is recursive
        Domain exists but there is no such record

    Any ideas? Thanks...
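
    A hedged way to probe this from outside (assuming any machine with dig installed; 203.0.113.10 below is a placeholder for the real public IP): query the server directly and check whether it returns an authoritative answer (the aa flag) instead of trying to recurse:

        dig @203.0.113.10 www.mydomain.com.au A +norecurse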

    Read the article

  • System Center 2012 Service Manager change request status stuck at new

    - by Chuck Herrington
    The guy who built and set up this system left rather abruptly and I've taken over. My current issues are:

    - I have several change requests that are stuck at New. They do not move to Pending or In Progress.
    - The system is not sending emails when incidents get assigned to people. This used to work on this system.

    I have done a lot of searching, and the usual solution of stopping and restarting the System Center services does not help. Can anyone give me any ideas of where else to look?

    Update: From all the searching I have done, it seemed like I was at the point of re-installing. My initial installation of SCSM 2012 was on a machine that had been upgraded from SCSM 2010 and also hosted SCCM 2007 and WSUS. We decided to give it a fresh start by installing a second instance of the SCSM server on a brand new 2008 R2 server, then promoting the new server to workflow master using the procedures outlined in the article "Dealing with Multiple Management Servers". I've gotten to the point where both the old and the new server are up and the new server has been promoted. I had hoped to suddenly get spammed by emails as the workflows took off, but no such luck. Once all the clients are reconfigured to point to the new server we still plan to decommission the old server, but at this point it seems the problem is in the database. Short of any other input from the community, my next plan is to install a 180-day trial on a test server, complete with a separate database, so that I can do a side-by-side comparison between a completely fresh install and what I have now, and see if I can find any differences. While that install is running I also plan to investigate the event logs for anything that can shed some light on what is happening on the new server.

    Update 2: I've now got a test SCSM server up with a completely fresh install, including the database, and it is able to transition change requests from New to In Progress. I'm attempting to find differences between the two. Stay tuned!

    Update 3: Looking through the event log on the new SCSM machine, I discovered:

        Log Name:      Operations Manager
        Source:        OpsMgr Root Connector
        Date:          10/9/2013 3:48:18 PM
        Event ID:      28000
        Task Category: None
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      scsm02
        Description:   The Root connector received an exception from the SDK Service while
                       submitting task status: Cannot set availability on a health service
                       that doesn't exist.

    This led me to "Event ID 28000 logged after installing secondary server for System Center 2012 Service Manager SP1". I contacted MS to obtain the hotfix. BIG warning here: it turns out the hotfix is not so "hot". In order to apply it, you have to uninstall and then reinstall using the files they supply. :( This is where I am now...

    Update 4: Not much luck after the re-install. The errors in the event log have gone away on the new server, but the workflows still aren't running, and neither the event log nor the workflow status screen seems to indicate why. I've compared the Activity and Change Request event workflows, removed everything from the production system that is not in my fresh test system (which is everything), shut down the services, cleared out the cache folders, and restarted the services. Still no joy.

    At the moment the only thing I can think to do is either a) nuke the entire system, including the database, and start over, losing all of our data in the process, or b) contact MS (which will probably cost us a boatload of money and time only for them to advise us to do the same thing). Maybe more ideas will come after coffee... No answers came after coffee. Attempting to contact MS. I managed to get through their first line of defense and gave them our SA number; someone is supposed to call me back. I am trying to log into my incident on their site to update my ticket with the link to this thread, but when I click the link in the email they sent me it goes to a "Sorry, the page you requested is not available" page... Linux is looking better and better all the time.

    Read the article

  • How can I determine what is causing SQL Server 2008 Express to hang Windows 7?

    - by thelaughingdm
    I have SQL Server Express 2008 installed on a Windows 7 (32-bit) developer workstation. Whenever I run an application that accesses SQL Server, the Windows 7 shell hangs when the application closes. Applications like Windows Explorer and Task Manager become completely unresponsive, and the task bar will not allow any interaction. The only way to recover the system is to power-cycle it. Two of the applications in use when this happens are NUnit and SQL Server Management Studio. NUnit always runs the unit tests that interact with the database fine; SQL Server Management Studio will sometimes cause the problem while exploring the database. The Windows event log does not show any events obviously connected to the problem. I have reverted and reinstalled SQL Server Express 2008 several times. What can be done to identify what is causing SQL Server Express 2008 to hang the Windows 7 shell?

    Read the article
