Search Results

Search found 55134 results on 2206 pages for 'argument error'.


  • Cannot find "IIS APPPOOL\{application pool name}" user account in Windows Server 2008

    - by MacGyver
    Normally when setting up IIS 7, I grant permissions to the user IIS APPPOOL\{application pool name} on the root folder of my web application(s). I also give permissions to IUSR (or to the IIS_IUSRS user group; note that in Windows Server 2008 I found IUSR isn't in that group by default, so I added it). In Windows Server 2008, however, I cannot find the user IIS APPPOOL\{application pool name} under Security in the Windows folder properties. I'm using Windows Authentication in ASP.NET. I'm receiving a 401.1 on the page in Internet Explorer 8 after getting the authentication prompt. Mozilla Firefox also gave me a Windows authentication prompt and got me into the site fine; same with Google Chrome. How can I solve this one?

      HTTP Error 401.1 - Unauthorized
      You do not have permission to view this directory or page using the credentials that you supplied.
      Specific page information:
        Module: WindowsAuthenticationModule
        Notification: AuthenticateRequest
        Handler: PageHandlerFactory-ISAPI-4.0_32bit
        Error Code: 0x8009030e
        Requested URL: http://.....aspx
        Physical Path: C:\.........aspx
        Logon Method: Not yet determined
        Logon User: Not yet determined
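
    As an aside (not from the original question), a minimal sketch of granting folder permissions to an application pool identity from an elevated command prompt; the pool name and site path below are hypothetical placeholders:

      rem hypothetical pool name and site path -- substitute your own; the (OI)(CI) flags make the grant inherit to files and subfolders
      icacls "D:\WebApps\MySite" /grant "IIS APPPOOL\MyAppPool":(OI)(CI)(RX)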

    Read the article

  • How can I connect an integrated webcam to VirtualBox

    - by Mike Stumpf
    I am trying to use a Windows XP VM in VirtualBox on my Windows 8.1 laptop. I have tried the usual attaching of the USB device, but I get an error saying "USB device is busy with previous request". My webcam is not active in any applications, and this happens after a clean reboot of the host, the guest, and VirtualBox. Here are the details:

    Host
    - HP Pavilion 17 Notebook PC (stock)
    - Windows 8.1
    - AMD A10-5750M APU
    - HP Truevision HD (integrated webcam)

    VM
    - I got the VM here: http://www.modern.ie/en-us/virtualization-tools

    VirtualBox
    - VirtualBox 4.3.12 installed
    - VirtualBox Extension Pack installed
    - Guest Additions are installed for 4.3.12
    - Enable USB Controller is checked
    - It does not matter if Enable USB 2.0 Controller is checked or not
    - It does not matter if a USB device filter is set up for the webcam or not

    Here is the error message:

      Failed to attach the USB device DDFEQ01G45BFBV HP Truevision HD [0004] to the virtual machine IE8 - WinXP.
      USB device 'DDFEQ01G45BFBV HP Truevision HD' with UUID {7a2e2a45-974d-482b-9b4e-9f9abbcd0ebb} is busy with a previous request. Please try again later.
      Result Code: E_INVALIDARG (0x80070057)
      Component: HostUSBDevice
      Interface: IHostUSBDevice {173b4b44-d268-4334-a00d-b6521c9a740a}
      Callee: IConsole {8ab7c520-2442-4b66-8d74-4ff1e195d2b6}

    I read on some VirtualBox forums that disabling USB 2.0 support in the host BIOS solved their issue, but I wanted to know if there were any other ideas before I muck around in there. Thanks
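
    As an aside (not from the original question), VirtualBox 4.3 with the Extension Pack also offers webcam passthrough as an alternative to raw USB attachment; a minimal sketch, reusing the VM name from the error above:

      rem list host webcams and their aliases (.1, .2, ...)
      VBoxManage list webcams
      rem attach the default host webcam (alias .0) to the running VM
      VBoxManage controlvm "IE8 - WinXP" webcam attach .0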

    Read the article

  • VNC failure on Xen

    - by BCable
    The following config works and creates a good VM in Xen:

      # Kernel Setup
      kernel = "/boot/vmlinuz-2.6.18.8-xenU"
      # Memory
      memory = "256"
      # Disk
      disk = [ "file:/opt/xen/domains/110/sda1.img,sda1,w", "file:/opt/xen/domains/110/swap.img,sda2,w" ]
      # container name
      name = "110"
      hostname = "boo"
      # Networking
      vif = ["type=ieomu, bridge=xenbr0"]
      # VNC
      vnc = 1
      #vfb = [ 'type=vnc,vncdisplay=2,vnclisten=0.0.0.0,vncpasswd=110' ]
      # Behavior Settings
      root = "/dev/sda1"
      extra = "fastboot"

    But when I uncomment the vfb line, I get the following error after it hangs for at least 30 seconds:

      [root@customer 110]# xm create boo.cfg
      Using config file "./boo.cfg".
      Error: Device 0 (vkbd) could not be connected. Hotplug scripts not working.

    Any ideas? Part two of this question: sometimes it actually works, and a port is opened. When this happens, nmap shows the VNC ports open and I can connect via the VNC client, but it just hangs at "Connection established." and no VNC display shows up. I've tried multiple VNC clients (TightVNC, TightVNC Java Console, RealVNC), but they all fail to connect. Does VNC through Xen require X to be started in order to function? I was under the impression that it would show the console screen, so I'm confused as to why all these issues are occurring. Thanks!

    Read the article

  • What permission(s) does an application pool identity require to manage other application pools?

    - by Mr Shoubs
    I have a web site (used to manage various parts of our software) that needs permission to start/stop other application pools. I've created a user and set the app pool identity to custom; however, the web app still can't start/stop the app pools. I get the following error:

      System.UnauthorizedAccessException: Filename: redirection.config
      Error: Cannot read configuration file due to insufficient permissions
        at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath)
        at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath)
        at Microsoft.Web.Administration.ServerManager.get_ApplicationPoolsSection()
        at Microsoft.Web.Administration.ServerManager.get_ApplicationPools()

    Discussion here suggests setting the application pool to Local System or an administrator account. This does work, but I don't want to do it for security reasons (external support will need access to this site). I did give the user higher permissions (as suggested here), starting by making it part of the local Administrators group, but initially this didn't work, and giving the user read/write/modify permission on C:\Windows\System32\inetsrv\config also didn't work. I must have done something wrong, as local administrator now works; however, this still isn't what I want. So can anyone suggest the permissions I need to add to this user, and how can I apply them? An answer to my problem (but a different question) is here; to clarify, I think I need to give an individual user "IIS Runtime Operation Permissions". Does anyone know how to do this, if indeed this is the permission I require?

    Read the article

  • Sun T5120: Memory issue with 8GB DIMMs, "Unsupported memory configuration"

    - by watain
    I'm trying to upgrade the RAM in a Sun T5120 server, replacing 2GB DIMMs (Sun P/N: 501-7953-01) with 8GB DIMMs (Sun P/N: 511-1262-01). When bringing up the host system, I get the following errors on the ILOM:

      -> show faulty
      Target                   | Property          | Value
      -------------------------+-------------------+--------------------------------------
      /SP/faultmgmt/0          | fru               | /SYS/MB
      /SP/faultmgmt/0/faults/0 | timestamp         | Dec 14 15:29:42
      /SP/faultmgmt/0/faults/0 | sp_detected_fault | /SYS/MB/CMP0/MCU3 Forced fail (IBIST)
      /SP/faultmgmt/0/faults/1 | timestamp         | Dec 14 15:29:28
      /SP/faultmgmt/0/faults/1 | sp_detected_fault | /SYS/MB/CMP0/MCU2 Forced fail (IBIST)
      /SP/faultmgmt/0/faults/2 | timestamp         | Dec 14 15:29:13
      /SP/faultmgmt/0/faults/2 | sp_detected_fault | /SYS/MB/CMP0/MCU1 Forced fail (IBIST)
      /SP/faultmgmt/0/faults/3 | timestamp         | Dec 14 15:28:59
      /SP/faultmgmt/0/faults/3 | sp_detected_fault | /SYS/MB/CMP0/MCU0 Forced fail (IBIST)
      /SP/faultmgmt/1          | fru               | /SYS
      /SP/faultmgmt/1/faults/0 | timestamp         | Dec 14 15:29:42
      /SP/faultmgmt/1/faults/0 | sp_detected_fault | Dec 14 15:29:42 ERROR: Unsupported memory configuration

    As you can see, the only error message I get is "Unsupported memory configuration". Note that I'm absolutely sure I placed the DIMMs in the correct slots. Might this issue be related to the voltage of the DIMMs? Any suggestions on how to troubleshoot this issue? It seems similar to the one explained at "Inserted disabled" while upgrading Sun Sparc t5120 memory. However, the given link http://docs.sun.com/source/820-4445-10/chapter1.html seems to point to a nonexistent page...

    Read the article

  • Database mirroring login failure attempts on mirror server

    - by Chandan
    I have configured database mirroring between two servers about 40 miles apart. Server specifications: SQL Server 2008, Standard Edition, 64-bit; this is the same for the principal, mirror and witness. The configuration is high safety with automatic failover. Initially we tested our .NET web application on both the principal and the mirror and made sure that the login is not orphaned. Things generally run fine, but sometimes on the mirror server I see failed login attempts:

      Login failed for user 'd0main\user'. Reason: Failed to open the explicitly specified database. [CLIENT: xx.xx.x.x]
      Error: 18456, Severity: 14, State: 38.

    This error appears 3-4 times a day, but not more than that. My question to the experts is: if the principal is alive, why does the application try to connect to the mirror? The default time-out for a .NET web page is 30 seconds, so is it possible that the application tries to connect to the principal and, after 30 seconds, assumes the principal is dead even though it is alive, and thus tries to open a connection to the mirror, where it fails? Please help me with this problem.
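
    A small aside (not in the original question): a quick way to confirm what role and state each partner reports at the moment the failed logins occur, assuming a hypothetical database name MyAppDb:

      rem mirroring_role_desc reports PRINCIPAL or MIRROR, mirroring_state_desc reports SYNCHRONIZED etc.
      sqlcmd -S principalserver -E -Q "SELECT DB_NAME(database_id) AS db, mirroring_role_desc, mirroring_state_desc FROM sys.database_mirroring WHERE database_id = DB_ID('MyAppDb')"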

    Read the article

  • SSH X11 forwarding does not work. Why?

    - by Ole Tange
    This is a debugging question. When you ask for clarification, please make sure it is not already covered below.

    I have 4 machines: Z, A, N, and M. To get to A you have to log into Z first. To get to M you have to log into N first. The following works:

      ssh -X Z xclock
      ssh -X Z ssh -X Z xclock
      ssh -X Z ssh -X A xclock
      ssh -X N xclock
      ssh -X N ssh -X N xclock

    But this does not:

      ssh -X N ssh -X M xclock
      Error: Can't open display:

    The $DISPLAY is clearly not set when logging in to M. The question is why?

    Z and A share the same NFS home dir. N and M share the same NFS home dir. N's sshd runs on a non-standard port.

      $ grep X11 <(ssh Z cat /etc/ssh/ssh_config)
      ForwardX11 yes
      #   ForwardX11Trusted yes
      $ grep X11 <(ssh N cat /etc/ssh/ssh_config)
      ForwardX11 yes
      #   ForwardX11Trusted yes

    N:/etc/ssh/ssh_config == Z:/etc/ssh/ssh_config and M:/etc/ssh/ssh_config == A:/etc/ssh/ssh_config. /etc/ssh/sshd_config is the same for all 4 machines (apart from Port and login permissions for certain groups). If I forward M's ssh port to my local machine it still does not work:

      terminal1$ ssh -L 8888:M:22 N
      terminal2$ ssh -X -p 8888 localhost xclock
      Error: Can't open display:

    A:.Xauthority contains A, but M:.Xauthority does not contain M. xauth is installed in /usr/bin/xauth on both A and M. xauth is being run when logging in to A but not when logging in to M. ssh -vvv does not complain about X11 or xauth when logging in to A or M. Both say:

      debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
      debug1: Requesting X11 forwarding with authentication spoofing.
      debug2: channel 0: request x11-req confirm 0
      debug2: client_session2_setup: id 0
      debug2: channel 0: request pty-req confirm 1
      debug1: Sending environment.

    I have a feeling the problem may be related to M missing in M:.Xauthority (caused by xauth not being run), or that $DISPLAY is somehow being unset by a login script, but I cannot figure out what is wrong.
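
    A small sketch (not from the original post) of the server-side checks implied above, run against M through the forwarded port; X11Forwarding, X11UseLocalhost and XAuthLocation are the sshd_config options most often involved:

      # confirm sshd on M allows X11 forwarding and can find xauth
      ssh -p 8888 localhost 'grep -iE "x11forwarding|x11uselocalhost|xauthlocation" /etc/ssh/sshd_config; command -v xauth'
      # check whether a login script on M clears DISPLAY
      ssh -X -p 8888 localhost 'echo DISPLAY=$DISPLAY'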

    Read the article

  • really weird DNS problem in Ubuntu {after one month, seems like ISP problem}

    - by OmniWired
    Hello everyone. I have been having this random DNS problem in Ubuntu 10.04 and 10.10; it started about 2 weeks ago after (I believe) an update. Basically, when I go to a website I randomly get a message that the website I'm visiting is not available ("Oops! Google Chrome could not connect to ..." and "This webpage is not available."). I tested with Chromium "7.0.515.0 (58587)" and with Firefox Minefield (4.0ish) and 3.6.9. I have already done these 4 things:

    - /etc/default/grub: GRUB_CMDLINE_LINUX="ipv6.disable=1"
    - /etc/sysctl.conf:
        net.ipv6.conf.all.disable_ipv6 = 1
        net.ipv6.conf.default.disable_ipv6 = 1
        net.ipv6.conf.lo.disable_ipv6 = 1
    - disabling Chromium DNS pre-fetching
    - using Google and OpenDNS servers as well as the ISP DNS servers

    But it didn't improve, and no other computers on my network have the same problem. All computers are wired to the same router. I'm a software engineer who has run out of ideas; please help me. Thanks in advance.

    UPDATE: some programs (Synaptic / Firefox update / Vuze (Azureus)) report "connection refused" for the error. Most of the time a second try will fix the "refusal".

    UPDATE2: I found out with Wireshark that every time I have this problem I get this:

      192.168.0.10  8.8.8.8  ICMP  Destination unreachable (Port unreachable)

    Confirmed as an ISP error. ISP: Speedy. Location: Argentina, Buenos Aires (Capital Federal) area.
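
    For reference (not from the original post), a quick way to test the resolvers directly from the command line, one UDP query per server with a short timeout; repeated failures here would match the Wireshark trace above:

      # query Google DNS and OpenDNS directly, bypassing the local resolver config
      dig @8.8.8.8 ubuntu.com +tries=1 +time=2
      dig @208.67.222.222 ubuntu.com +tries=1 +time=2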

    Read the article

  • Google-Bot fell in love with my 404-page

    - by 32bitfloat
    Every day my access log looks something like this:

      66.249.78.140 - - [21/Oct/2013:14:37:00 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /vuqffxiyupdh.html HTTP/1.1" 404 1189 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    or this:

      66.249.78.140 - - [20/Oct/2013:09:25:29 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      66.249.75.62 - - [20/Oct/2013:09:25:30 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      66.249.78.140 - - [20/Oct/2013:09:25:30 +0200] "GET /zjtrtxnsh.html HTTP/1.1" 404 1186 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    The bot requests robots.txt twice and after that tries to access a file (zjtrtxnsh.html, vuqffxiyupdh.html, ...) which cannot exist and must return a 404 error. The same procedure every day; only the nonexistent HTML filename changes. The content of my robots.txt:

      User-agent: *
      Disallow: /backend
      Sitemap: http://mysitesname.de/sitemap.xml

    The sitemap.xml is readable and valid, so there seems to be no reason why the bot should want to force a 404 error. How should I interpret this behaviour? Does it point to a mistake I've made, or should I ignore it?

    Read the article

  • Why is Apache serving the default vhost?

    - by Matt
    I keep adding more vhosts and enabling them, but all the sites always go to the default vhost in sites-available. Here is what the default looks like, with only the IP changed for security reasons:

      <VirtualHost 167.889.88.88:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog /var/log/apache2/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog /var/log/apache2/access.log combined
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
      </VirtualHost>

    And here is my other one, which I named some-site.net:

      <VirtualHost *:80>
          ServerName some-site.net
          DocumentRoot "/var/www/vhosts/somesite.com/http/"
          <Directory "/var/www/vhosts/somesite.com/http/">
              AllowOverride all
              Options -MultiViews
          </Directory>
      </VirtualHost>

    I enabled it with this command:

      sudo a2ensite some-site.net
      Enabling site some-site.net.
      Run '/etc/init.d/apache2 reload' to activate new configuration!

    Then I reloaded:

      /etc/init.d/apache2 reload
       * Reloading web server config apache2 ...done.

    But when I visit the URL some-site.net, I get the index page for the default vhost. What am I doing wrong?
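
    Not from the original post, but a quick way to see how Apache has actually mapped names to vhosts after a reload (note that the default above is bound to a specific IP:80 while the new site uses *:80, which is one common cause of this symptom):

      # dump the parsed virtual host configuration
      apache2ctl -S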

    Read the article

  • Exchange 2010 remove Arbitration mailbox and mailbox store db

    - by JNM
    I have a problem with Exchange 2010 which is a nightmare for me. In the Exchange Management Console I have several store databases under the Database Management tab. Only one is mounted, because I am using it. The second one shows as mounted, but it was used on another server before (now that server is dead); its mounted status is UNKNOWN. The file for that database does not exist, but it still shows there. I can't remove it from the management console because it has mailboxes. I removed all mailboxes and disabled two arbitration mailboxes. I can't delete it because I still have one arbitration mailbox left. I can't move it, because that requires a connection to the dead server. I can't disable it, because I get an error that it is the last one in the organization. Can somebody help me?

    Solved it by using this command:

      Get-Mailbox -Arbitration -Database db1 | Remove-Mailbox -Arbitration -RemoveLastArbitrationMailboxAllowed

    Now I have another problem. The Exchange Management Console shows a public folder from a different server which is dead now. That folder was copied here, but it is not needed anymore. The public folder file has been deleted, and the records have been removed from ADSI Edit too. But I can't remove that folder from the management console; I get an error:

      Exchange isn't able to check for public folder replicas for "My Public Folder Database".

    Can anybody help me with that?

    Read the article

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats. These Tomcats then connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we are receiving 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening, and we estimate that our request rate will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time from Tomcat rises above some threshold, display an error page. This means that users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense, so I was looking for suggestions on how I could achieve this. Thanks

    Read the article

  • How do you set up FTP with IIS Manager users in an NLB environment with shared IIS configs?

    - by William Jens
    I've set up a 2-node NLB cluster and used the following to share IIS configs between them: http://blogs.technet.com/b/meamcs/archive/2012/05/30/configuring-iis-7-5-shared-configuration.aspx

    The IIS configs and content are located on a network share via a UNC path. This works: updating IIS settings on one node is visible on the other node, and my website works on the individual nodes and on the cluster as a whole. I'm able to set up an FTP site and successfully connect with my Windows login. However, I want to use IIS Manager authentication as defined in http://www.iis.net/learn/publish/using-the-ftp-service/configure-ftp-with-iis-manager-authentication-in-iis-7

    I've tried using "Network Service" with the FTP COM object as well as a dedicated user account that exists on all three hosts, but every time I try to log in with an IIS user I get something like the following:

      IISWMSVC_AUTHENTICATION_UNABLE_TO_READ_CONFIG
      An unexpected error occurred while retrieving the authentication information.
      Exception: System.Runtime.InteropServices.COMException (0x8007052E): Filename: Error:
        at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath)
        at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath)
        at Microsoft.Web.Management.Server.ConfigurationAuthenticationProvider.GetSection(ServerManager serverManager)
      Process: dllhost
      User: NT AUTHORITY\NETWORK SERVICE

    Can anyone point me in the right direction here?

    Read the article

  • Postfix does not work after setting the server hostname from Plesk

    - by Michael
    I have recently changed my server hostname from localhost.localdomain to xx.mydomain.com in the Plesk control panel. However, after this change Postfix has stopped working. I tried restarting it and regenerating the config files, but to no avail. I am not familiar with Postfix, but I believe there is a setting to be changed in main.cf. Here are the relevant errors I receive:

      postfix/postfix-script: starting the Postfix mail system
      postfix/master: daemon started -- version 2.8.4, configuration /etc/postfix
      postfix/cleanup: fatal: host/service localhost/12768 not found: Name or service not known
      postfix/pickup: warning: maildrop/E5559996219: error writing 4FA2C996217: queue file write error
      postfix/master: warning: process /usr/libexec/postfix/cleanup pid 15334 exit status 1
      postfix/master: warning: /usr/libexec/postfix/cleanup: bad command startup -- throttling

    Any ideas?

    EDIT: Setting it back to localhost.localdomain makes it work again. The only references to localhost/12768 I can find in main.cf are:

      smtpd_milters = inet:localhost:12768
      non_smtpd_milters = inet:localhost:12768

    Should something be changed here? These two lines stay the same when I change the hostname.

    EDIT: If I comment out the two mail filter lines (the ones shown above) and restart, Postfix works with the new hostname. However, this is obviously not the ideal solution...
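
    Not from the original post, but since the fatal error is a name-lookup failure for "localhost", it may be worth confirming that the host still resolves its own loopback name after the hostname change; a minimal check:

      # "localhost" should resolve to 127.0.0.1 (or ::1); if it doesn't, inspect /etc/hosts
      getent hosts localhost
      # show the milter settings Postfix is actually using
      postconf smtpd_milters non_smtpd_milters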

    Read the article

  • RoboCopy errors on Windows Server 2008

    - by Steve
    I am getting a bizarre error with RoboCopy on Server 2008. It will randomly hang with a "The specified network name is no longer available." error. Once that happens, it will continue to fail on the retries. But of course the remote server IS still available on the network and can be reached with other tools. I think it must somehow be permission related, but I can't figure out what is wrong. Any ideas would be much appreciated. Other info:

    - Options: *.* /S /E /COPY:DAT /NP /R:10 /W:30
    - If I turn on the /B option it will fail 100% of the time at the very beginning (that's why I think it has to be somehow permission-related)
    - The two servers are standalone and I am doing a NET USE command prior to the robocopy
    - It does not matter which user account I use on the remote server; I tried both Administrator and another user which was also a member of the local Administrators group
    - UAC is turned off on both sides
    - It is not always the same file that hangs. Sometimes it will get through half or more, and sometimes it will fail on the first file

    Read the article

  • MySQL problem with many concurrent connections

    - by user48303
    Hi, here's a six-core machine with 32 GB RAM. I've installed MySQL 5.1.47 (backport). The config is nearly standard, except max_connections, which is set to 2000. On the other side there is PHP 5.3/FastCGI on nginx. There is a very simple PHP application which should be served. nginx can handle thousands of requests in parallel on this machine. The application accesses MySQL via mysqli. When using non-persistent connections in mysqli, there is a problem when reaching 100 concurrent connections:

      [error] 14074#0: *296 FastCGI sent in stderr: "PHP Warning: mysqli::mysqli(): [2002] Resource temporarily unavailable (trying to connect via unix:///tmp/mysqld.sock) in /var/www/libs/db.php on line 7

    I've no idea how to solve this. Connecting to MySQL via TCP is terribly slow. The interesting thing is that when using persistent connections (adding 'p:' to the hostname in mysqli), the first 5000-10000 requests fail with the same error as above until max connections (from the webserver, set to 1500) is reached. After the first requests, MySQL keeps its 1500 open connections and all is fine, so that I can make my 1500 concurrent requests. Huh? Is it possible that this is a problem with PHP FastCGI?
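
    An aside (not in the original question): when connects to the UNIX socket fail with "Resource temporarily unavailable" at around 100 concurrent clients, two server-side numbers are worth checking; a minimal sketch:

      # back_log caps the pending-connection queue; compare with the configured and observed connection counts
      mysql -e "SHOW VARIABLES LIKE 'back_log'; SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Max_used_connections';"
      # open-file limit of the environment that starts mysqld / php-fpm
      ulimit -n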

    Read the article

  • Fix for the PHP 5.3.9 libxslt security "bug" fix

    - by Question Mark
    Just this morning I updated my Debian server to PHP 5.3.9. The changelog (last item in the list) has a fix for this bug, and now when running any hosted site that uses XSL transforms I get:

      Warning: XSLTProcessor::transformToXml(): Can't set libxslt security properties, not doing transformation for security reasons

    I'm not using any <sax:output> tags in my XSLT at all. Does anybody have any information on this? Current chatter about it is thin, so I'm a little lost. Using the suggestion about switching ini settings on and off either side of ->transformToXml():

      ini_set("xsl.security_prefs", XSL_SECPREFS_NONE)
      or
      $xsl->setSecurityPreferences(XSL_SECPREFS_NONE)

    brings me back to the same error. Many thanks.

    Progress:
    - Upgrading libxml and recompiling libxslt against the new version was a good suggestion, though it has not fixed the issue.
    - Compiling the latest PHP 5.3 snapshot does not fix the issue.

    Solution: I'm unsure what actually solved this; very sorry for anyone else having the same problem. First I upgraded libxml, then applied a few patches, then went into the PHP source for the XSL parser and added some debugging and a few tweaks. After a few compiles, once the configure args were right, the error went away and wasn't reproducible. I would definitely recommend upgrading libxml as Petr suggested below and then grabbing the latest snapshot from php.net.

    Read the article

  • Where is the TFS database?

    - by Blanthor
    I've been using TFS 2010 with no problems. I tried adding a user and got the following error message:

      TF30063: You are not authorized to access <serverName>\DefaultCollection. - The remote server returned an error: (401) Unauthorized.

    I remoted into the server, <serverName>, and opened the TFS console. The logs mentioned a connection string:

      ConnectionString: Data Source=<serverName>\SS2008;Initial Catalog=Tfs_DefaultCollection;Integrated Security=True

    While remoted in, I opened SQL Server 2008 Management Studio, connecting to the (local) server with Windows Authentication. It shows the connection as (local)(SQL Server 9.04.03 - <serverName>\Admin), and there is no Tfs_DefaultCollection database. Can someone tell me what is going on? Was I wrong to connect to this instance of the database (i.e. is the log file the wrong place to find the connection string)? Is the database so corrupted that SQL Server Management Studio cannot see it anymore, although TFS could? Should I be logging into Management Studio as user SS2008? By the way, I don't know of any such credentials.
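
    For what it's worth (not from the original post), the Data Source in the log points at a named instance, <serverName>\SS2008, which is a separate instance from the default "(local)" one; a quick check of what databases that instance holds:

      rem connect to the SS2008 named instance with Windows authentication and list its databases
      sqlcmd -S .\SS2008 -E -Q "SELECT name FROM sys.databases"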

    Read the article

  • Database problem with MS JET

    - by Zimmy-DUB-Zongy-Zong-DUBBY
    I am migrating a bunch of sites which each use an Access database (or whatever an MDB file is). If I try to load a site, I get the following error:

      Microsoft OLE DB Provider for SQL Server error '80004005'
      [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied

    If I rename the MDB file, I get a complaint that the file does not exist, which makes sense. If the file is named correctly, the site tries to load for about 30 seconds or so and then just fails with the above message. During this waiting period, I can see a lock file being created (and then at some point removed). The MDB file and its parent dir have full permissions granted to all users. Given that the lock file is successfully created and removed, I don't think this is a "real" permission issue. The OS is Windows Server 2003 SP2. I am not sure about much more detail on its config with respect to Access databases. I also don't know what version it is expected to be. The VB code in question:

      set oConn=server.createobject("adodb.connection")
      DSNtemp="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\fullPathGoesHere\db\sitedb.mdb"
      oConn.Open DSNtemp

    Read the article

  • HP Pavilion DV6500 recovery disk failure

    - by Scott W
    I recently attempted to re-install Windows Vista on an HP Pavilion DV6500 using the factory recovery DVDs, but encountered a strange problem. When the recovery disk attempted to reformat the hard disk, it failed at 22%. The error message provided was not very informative, just the error code "0x400110020000 1005". A Google search turned up some people with a similar problem who asserted that HP has been known to ship corrupted recovery DVDs. The recovery disk did manage to reformat the recovery partition before failing, though, so recovering from the partition is no longer an option. It would be possible to reinstall from an off-the-shelf retail copy of Vista and then pull the drivers from HP's website, but I don't have access to a copy of Vista, and it would really be outrageous to have to purchase a new OS when I have a perfectly valid license already. I thought about biting the bullet and upgrading to Windows 7, but my understanding is that without Vista installed I'd be unable to use the upgrade version and would be forced to purchase the more expensive non-upgrade retail copy (!). Can anyone suggest a possible solution to this Catch-22? I've run out of ideas.

    Read the article

  • Linux Mounting Problem

    - by Sam
    I have an Iomega network-attached storage device on my Windows network. I am trying to use a Clonezilla Live USB flash drive to back up my netbook to the Iomega NAS. The Clonezilla USB flash drive runs Linux. I'm having trouble getting the NAS to mount using the following command:

      mount -t cifs -o username="myUsername" //192.168.1.100/backup /home/partimg

    The response from Linux is:

      [134.730738] CIFS VFS: cifs_mount failed w/return code = -6 retrying with upper case share name
      [134.788461] CIFS VFS: cifs_mount failed w/return code = -6
      mount error(6): No such device or address
      Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    I also tried adding the following to my username: username="myUsername,domain=workgroup", but that did not change the error. I am able to ping the NAS from Linux on my netbook. I also booted from a Slax Live USB flash drive, and Slax auto-mounted my NAS via Samba. Unfortunately, I don't believe I can run Clonezilla from inside the Slax installation. Does anyone have any insight into what is wrong with my mount statement? Or is there something peculiar about Iomega drives that makes this impossible?
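
    Not from the original post, but two quick checks that narrow this down: confirm the share name the NAS actually exports, and confirm the live environment has CIFS support loaded (IP and username reused from the question as placeholders):

      # list the shares the NAS exports (the share may not be called "backup")
      smbclient -L 192.168.1.100 -U myUsername
      # make sure the cifs kernel module is available in the live environment
      modprobe cifs && lsmod | grep cifs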

    Read the article

  • Robocopy failure with Windows Server 2008 Scheduled Task

    - by CC
    So I have a batch script for Robocopy. Running this from the command line does exactly what I want:

      robocopy "D:\SQL Backup" \\server1\Backup$\daily /mir /s /copyall /log:\\lmcrfs4g\NavBackup$\robocopyLog.txt /np

    Then I create a scheduled task in Windows Server 2008. If I set up the task to use my Domain Admin account, great. But I'm trying to get it to run as a separate domain account for scheduled tasks. If I use that account, folders get created, but files aren't copied. I get the following error:

      2011/02/17 15:41:48 ERROR 1307 (0x0000051B) Copying NTFS Security to Destination Directory D:\SQL Backup\folder\
      This security ID may not be assigned as the owner of this object.

    I've verified that my domain\Scheduled Tasks account has Full Control NTFS permissions on both the source and destination, and Full Control share permissions on my hidden \\server1\Backup$ share. Just for giggles, I've tried adding the domain account to the local Administrators group on both servers. This works fine, but that seems like a lot of privileges just to copy files. Any ideas on what I'm missing?
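
    An aside (not from the original post): error 1307 here typically comes from the owner/auditing part of /COPYALL, which needs the "Restore files and directories" right that Administrators hold by default; a sketch of the same job copying only data, attributes, timestamps and ACLs, if owner/auditing metadata isn't required:

      rem /copy:DATS drops the O (owner) and U (auditing) flags that /copyall implies
      robocopy "D:\SQL Backup" \\server1\Backup$\daily /mir /s /copy:DATS /log:\\lmcrfs4g\NavBackup$\robocopyLog.txt /np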

    Read the article

  • Migrating from tomcat to tc server - receiving java.sql.SQLException on startup

    - by user470184
    I'm receiving the error below when I start tc Server. I do not receive this error on the standalone version of Tomcat. Is there extra config I need to add for tc Server?

      WARNING: Unexpected exception resolving reference
      java.sql.SQLException: Io exception: The Network Adapter could not establish the connection
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255)
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:387)
        at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:441)
        at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:801)
        at org.apache.tomcat.jdbc.pool.PooledConnection.connectUsingDriver(PooledConnection.java:277)
        at org.apache.tomcat.jdbc.pool.PooledConnection.connect(PooledConnection.java:182)
        at org.apache.tomcat.jdbc.pool.ConnectionPool.createConnection(ConnectionPool.java:699)
        at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:631)
        at org.apache.tomcat.jdbc.pool.ConnectionPool.init(ConnectionPool.java:485)
        at org.apache.tomcat.jdbc.pool.ConnectionPool.<init>(ConnectionPool.java:143)
        at org.apache.tomcat.jdbc.pool.DataSourceProxy.pCreatePool(DataSourceProxy.java:116)
        at org.apache.tomcat.jdbc.pool.DataSourceProxy.createPool(DataSourceProxy.java:103)
        at org.apache.tomcat.jdbc.pool.DataSourceFactory.createDataSource(DataSourceFactory.java:539)
        at org.apache.tomcat.jdbc.pool.DataSourceFactory.getObjectInstance(DataSourceFactory.java:237)
        at org.apache.naming.factory.ResourceFactory.getObjectInstance(ResourceFactory.java:140)
        at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304)
        at org.apache.naming.NamingContext.lookup(NamingContext.java:793)
        at org.apache.naming.NamingContext.lookup(NamingContext.java:140)
        at org.apache.naming.NamingContext.lookup(NamingContext.java:781)
        at org.apache.naming.NamingContext.lookup(NamingContext.java:153)
        at org.apache.catalina.core.NamingContextListener.addResource(NamingContextListener.java:1028)
        at org.apache.catalina.core.NamingContextListener.createNamingContext(NamingContextListener.java:637)
        at org.apache.catalina.core.NamingContextListener.lifecycleEvent(NamingContextListener.java:238)
        at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
        at org.apache.catalina.core.StandardServer.start(StandardServer.java:747)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
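
    Not from the original question, but since the Oracle driver reports it cannot reach the listener at all, a first sanity check from the machine running tc Server is basic reachability; the host and port below are hypothetical placeholders for the values in the datasource's JDBC URL:

      # replace dbhost/1521 with the host and port from the JDBC URL in the tc Server datasource definition
      nc -vz dbhost 1521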

    Read the article

  • Fresh install of SQL Server 2008 doesn't install Management Studio. Help!

    - by Jordan S
    OK, I am running Windows 7, 64-bit. I cleaned SQL Server 2005 completely off my system, leaving only SQL Compact Edition. I went to http://www.microsoft.com/downloads/details.aspx?FamilyID=01af61e6-2f63-4291-bcad-fd500f6027ff&displaylang=en and installed SQL Server 2008 Express Edition Service Pack 1. After the install, under my Start menu all I have for SQL configuration tools are the Configuration Manager, Error and Usage Reporting, and the Install Center. I don't have SQL Server Management Studio. So I went to http://www.microsoft.com/downloads/details.aspx?FamilyID=08e52ac2-1d62-45f6-9a4a-4b76a8564a2b&displaylang=en and downloaded SQL Server 2008 Management Studio Express, but when I try to install it I get a warning saying this program has known compatibility issues and that I need to install SQL Server 2008 Service Pack 1. I thought that is what I had just installed. I tried to continue running the install anyway, but I then get an error message that says "Invoke or BeginInvoke cannot be called on a Form before it is opened..." How can I check whether Service Pack 1 is installed or not? What should I do?
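
    Not from the original post, but one way to answer the "is SP1 installed?" part from a command prompt, assuming the default Express instance name SQLEXPRESS:

      rem ProductLevel reports RTM, SP1, SP2, ...; ProductVersion gives the build number
      sqlcmd -S .\SQLEXPRESS -E -Q "SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel'), SERVERPROPERTY('Edition')"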

    Read the article

  • Upstart: cannot run as root

    - by Ronni Egeriis
    I have made this Upstart script, which starts a Node.js service. But all of a sudden the service has stopped, and Upstart has failed to restart it. Now that I am trying to start it manually, it fails to recognize my service:

      start: Unknown job: queue

    The script is properly placed in /etc/init and should have the correct rights:

      -rw-r--r-- 1 root root 200 Aug 7 13:30 queue.conf

    When I check the config file with init-checkconf, however, it says that it is not able to run as root:

      root@production1:~# init-checkconf /etc/init/queue.conf
      ERROR: cannot run as root

    What causes this error and how do I solve it?

    Debug info:

      Ubuntu 12.04.3 LTS
      root@production1:~# service --version
      service ver. 0.91-ubuntu1

    Edit: Here's queue.conf:

      description "Echo.it command queue"
      author "Ronni Egeriis Persson <[email protected]>"
      stop on shutdown
      respawn
      respawn 20 5
      exec sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1

    The command sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1 works fine when run manually.
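
    A small aside (not from the original post), separating the two symptoms: whether Upstart has picked the job up at all, and running init-checkconf as an unprivileged account, since the tool is meant to be run as a normal user rather than root; "someuser" below is a hypothetical placeholder:

      # ask Upstart to rescan /etc/init and see whether the job is now known
      initctl reload-configuration
      initctl list | grep queue
      # init-checkconf is intended to be run as a non-root user
      sudo -u someuser init-checkconf /etc/init/queue.conf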

    Read the article
