Search Results

Search found 68825 results on 2753 pages for 'problem'.


  • How to update-grub on a system running overlayroot?

    - by mikepurvis
    We ship boxes configured with overlayroot, using the following overlayroot.conf: overlayroot=device:dev=/dev/sda6,timeout=20,recurse=0 Which produces the following mount configuration: $ mount overlayroot on / type overlayfs (rw,errors=remount-ro) /dev/sda5 on /media/root-ro type ext3 (ro,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered) /dev/sda6 on /media/root-rw type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered) /dev/sda1 on /boot type ext3 (rw) As you can see, three key physical partitions: sda1 is /boot, sda5 is a read-only "factory" root, and sda6 is a "user" root which can be wiped at any point to restore the machine to its original factory state. Now, the problem arises when update-grub is run for any reason: $ sudo update-grub [sudo] password for administrator: /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?). Understandable, since / is an overlayfs. The contents of /usr/sbin/update-grub are: #!/bin/sh set -e exec grub-mkconfig -o /boot/grub/grub.cfg "$@" With /usr/sbin/grub-mkconfig being the business part of things. But the actual problem is in /usr/sbin/grub-probe, called by grub-mkconfig, and grub-probe is a binary. So my question is, is there a parameter or whatever which can make grub-probe do the right thing in the face of / being an overlayfs? And secondly, is there a way to hack/patch that in so that the update-grub script just does the right thing? Thanks.

    Read the article

  • How to scope access to a service to a set of users, using OpenLDAP and only OUs

    - by JDS
    Okay, here goes. Solving this will solve several problems for me (as I can reapply this knowledge to several extant, similar problems), but luckily I have a very specific, concise problem to describe. Enough preamble. Our hosting partner is setting up VPN access for us and is connecting it to our LDAP server. They are using Cisco VPN, the docs on setting this up are here: http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00808c3c45.shtml#maintask1 Specifically, note the screenshot in (5), under "ASDM" Now, I do NOT want to provide access to all of our users. I only want to provide access to our IT group. But I do not see a configuration option for LDAP groups on that web reference for the Cisco VPN. We are using: OpenLDAP 2.4 Static groups (i.e. "Group has the following members...") Single user OU, "ou=users,dc=mycompany,dc=com" Is it possible to provide an alias of some kind in OpenLDAP that creates another OU, "itusers", say, and lets me alias the members of that OU somehow? Something like: "cn=Jeff Silverman,ou=itusers,dc=mycompany,dc=com" is an alias for "cn=Jeff Silverman,ou=users,dc=mycompany,dc=com" And is NOT a separate, unique user account. Alternatively, should I just create a separate OU and manage it separately? It is a pain, but only 12-15 users will have to be managed that way, with two separate user accounts. But I hate this option - messy, unmanageable, unscalable. You know what I mean. I am open to any options. I've searched and read all over but I can't quite find an directly analagous example. I can't possibly be the only one who's had this problem! Thanks!

    Read the article

  • PECL install error after upgrading to OSX 10.8

    - by Clive
    I've just upgraded my OS to Mountain Lion and PECL is no longer working (it's on a test drive so no drama, but I'd like to get it working so I can upgrade the OS on my shiny new SSD as well). I'm using the native PHP installation, no macports/homebrew or anything like that. Running sudo pecl install uploadprogress (for example) produces the following terminal output: downloading uploadprogress-1.0.3.1.tgz ... Starting to download uploadprogress-1.0.3.1.tgz (9,040 bytes) .....done: 9,040 bytes 4 source files, building running: phpize grep: /usr/include/php/main/php.h: No such file or directory grep: /usr/include/php/Zend/zend_modules.h: No such file or directory grep: /usr/include/php/Zend/zend_extensions.h: No such file or directory Configuring for: PHP Api Version: Zend Module Api No: Zend Extension Api No: autom4te: need GNU m4 1.4 or later: /usr/bin/m4 ERROR: `phpize' failed I'm guessing the problem is the 3 grep lines. I've found several threads that suggest this is caused by XCode not being installed...but XCode is installed, and updated to the latest version (4.4). All the relevant symlinks to /Developer/usr/bin/* also exist as they should. m4 is currently at version: m4 (GNU M4) 1.4.13, so even though the output above contains a line pertaining to it, I don't think that can be the problem. I'm sure it's just a simple issue, anyone got any clues?

    Read the article

  • Outlook 2007 OST File Indexing and OneNote 2007 Indexing are Broken

    - by Matt
    I'm running Outlook 2007 under Windows 7 Home Premium RTM. My OST file was previously being properly indexed but eventually searches significantly slowed down so I suspected a problem. Searching and indexing appears broken in OneNote 2007 as well as search time is now significantly longer. I brought up the Outlook 2007 Search Options dialog and noticed that my mailbox (running from an Exchange 2007 server) wasn't listed in the "Index messages in these data files:" list box. Next I ran the Windows "Find and fix problem with windows search" wizard which reported no errors. Then I brought up the Windows Indexing Options dialog which shows Outlook listed (as shown here): then clicked Advanced and Rebuilt the index. No dice - the listbox in the Outlook 2007 dialog still didn't show my mailbox. When I clicked the Modify button in the Indexing Options dialog I see the following: When I hover over the "oneindex://..." entry, the alt text indicates "This location is currently unavailable". When I delete it and rebuild the index, this entry returns. UPDATE: Comparison of the last screenshot above with a working PC shows that on the broken PC, the lower half of the dialog lists Outlook but neither Outlook or OneNote are showing in the upper half. The working PC has Outlook and OneNote in both parts of the dialog.

    Read the article

  • Problems configuring a DB2 CLI/ODBC System DSN in the ODBC Data Source Administrator

    - by Komyg
    I am trying to create a System DSN ODBC connection to a DB2 9.5 database, but I am getting a very strange problem. I've looked through the internet and found the following page that has some instructions on how I should proceed: http://www.ryslander.com/how-to-install-and-configure-db2-odbc-driver/. I followed these instructions and I am able to create a new System DSN, however when I try to configure it it seems as if my configurations don't work at all. For example, when I click on the "Configure" button on my System DSN and I add a TCP/IP protocol configuration on the "Advanced Settings" tab and click "Ok", no errors appear, but when I click on "Configure" again my TCP/IP setting has vanished. This happened to all my other configurations, such as database name, username, password etc. Could you help me figure out what I am doing wrong? Note: my user is in the administrator group and I am using a Windows Server 2008 R2 Enterprise x64. UPDATE I managed to create a User DSN and connect to the database. However the problem with the System DSN remains.

    Read the article

  • TFS 2010: Unable to add a project to a collection

    - by Scott
    This morning I'm trying to setup Team Foundation Server 2010 to demo for my team. As this is just a demo, I thought I would install it on my Windows 7 machine which also serves as my development machine. My development machine uses Visual Studio 2008 Team Suite. I installed Team Explorer 2008 and then reapplied SP1. Finally I installed and setup TFS 2010. TFS by default gave me administrator privileges. I started up Visual Studios, and connected up to the Collection just fine. However, I'm unable to create a new project and get the follow error message: "TF30172: You are trying to create a team project either without required permissions or with an older version of team Explorer. Contact your project admin..." To check to permissions, I used my home computer which is running Visual Studio 2010. On this machine I was able to connect up to the same TFS instance and create a project no problem. So it looks as though it is a team explorer problem, but everywhere on the web people are saying not only am what I'm trying to do possible, but they have done it themselves. What am I missing to add a project to TFS 2010 under Visual Studio 2008?

    Read the article

  • PXE boot and DHCP server configuration Failing Auto Installation

    - by Harihara Vinayakaram
    I have a ISC DHCP Server installed on Ubuntu 9.10 . I have managed to successfully boot a PXE client , obtain a DHCP address and load the initrd.gz file. But I am facing a vague problem when the debian installer starts up and tries to get a DHCP server The client send a DHCP request and I verified that is the same MAC Address. But I get a DHCP DECLINE (The client declines the address ). It offers all the address in the pool and then there is a DHCP NAK (no more free leases ) I tried using the Option no-ping, and also option one-client-one-lease but it does not help . If I set the client to use a fixed-address then the above problem is not there and the installation proceeds smoothly Can you give me any clues on what should be the DHCP server configuration My dhcpd.conf looks like this { ddns-update-style none; option domain-name "hadoop-myorg.org"; option domain-name-servers 192.168.3.5; default-lease-time 600; max-lease-time 7200; group { filename "pxelinux.0"; next-server 192.168.13.184; host hadoop1 { hardware ethernet 90:e6:ba:d5:53:f8; } } subnet 192.168.13.0 netmask 255.255.255.0 { option routers 10.0.0.254; pool { option domain-name-servers 192.168.3.5; max-lease-time 3000; range 192.168.13.55 192.168.13.65; deny unknown-clients; } } }

    Read the article

  • Redmine subversion won't ignore certificate error even if told

    - by Pekka
    I have set up a copy of Redmine through the Bitnami Redmine Stack and am having trouble accessing a remote SVN repository through https. The trouble seems to be related to the fact that I don't have a signed certificate, and the certificate provided doesn't match the host name (I am accessing the same server through a number of host names). I am new to Ruby, Mongrel, Rails and Redmine. Following the advice in this forum thread, I changed the path Redmine uses to invoke the svn client in \apps\redmine\lib\ redmine\scm\adapters\subversion_adapter.rb from SVN_BIN = "svn" to SVN_BIN = "svn --trust-server-cert --non-interactive --config-dir c:/user/temp" I was hoping that the --trust-server-cert option would fix the certificate problem. However, I am still getting the following error message in mongrel.log: svn: OPTIONS of 'https://server.xyz:8443/svn/reponame': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://server.xyz:8443) Does anybody know what to do about this? Additional info: I re-started the mongrel service after each change I am sure the configuration change has taken effect because subversion has created a full configuration directory in c:\user\temp I can access the remote repository using command line svn no problem The remote repository runs on a Windows box with VisualSVN

    Read the article

  • Error installing .NET Framework 3.5 SP1 on Windows 2008

    - by Shiraz Bhaiji
    Getting a really wierd error. One of the developers tried to install Windows 2008 as a Virtual PC. He has also run windows update. When he tries to install dotnet framework 3.5 SP1 he gets the following error: [09/25/09,12:48:26] Microsoft .NET Framework 2.0SP1 (CBS): [2] Error: Installation failed for component Microsoft .NET Framework 2.0SP1 (CBS). MSI returned error code 1 [09/25/09,12:48:34] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0SP1 (CBS) is not installed. I though that dotnet framework was installed automatically with windows update on windows 2008. So how could it be missing? Thanks. Shiraz EDIT We also have the same problem on a VPC that had dotnet framework 3.5 installed and working OK. I have tried removing all versions of dotnet framework, using the following clean up tool: http://blogs.msdn.com/astebner/pages/8904493.aspx I then downloaded and tried to install dotnet framework 2.0 SP1, from this location: http://www.microsoft.com/Downloads/details.aspx?familyid=79BC3B77-E02C-4AD3-AACF-A7633F706BA5&displaylang=en The error I now get is: "This product is not supported on the Vista Operating System" EDIT Thanks for the help, have given an up vote to everyone. In the end our problem was that we had installed Windows Server 2008 from an older ISO image, on this everything worked fine untill we tried to install framework 3.5 SP1. We reinstalled Windows from a new image, and it worked OK.

    Read the article

  • Exim service cPanel error

    - by Luka
    I cleaned out some logs from my cPanel dedicated server From here http://linuxhostingsupport.net/blog/log-files-on-a-cpanel-server i deleted all log listed at that link. Problem is with EXIM process it can not shut down, but it can run. When I try to send Email from roundcube, horde or via smtp it is down. 25 port is down, I can not receive, or send mails. But 1 minute before cleaning logs I received mails and I could send mails. what is problem, I just deleted logs... When I try service exim restart. I get: Shutting down clamd: [ OK ] Shutting down exim: [FAILED] Shutting down spamd: [ OK ] Starting clamd: [ OK ] Starting exim: [ OK ] 0 processes (antirelayd) sent signal 9 /usr/local/cpanel/scripts/update_sa_rules: running in background Exim log: 2012-10-20 03:06:14 cwd=/ 3 args: /usr/sbin/exim -bd -q1h 2012-10-20 03:06:24 cwd=/ 3 args: /usr/sbin/exim -bd -q1h 2012-10-20 03:06:32 cwd=/ 3 args: /usr/sbin/exim -bd -q1h 2012-10-20 03:06:34 cwd=/ 2 args: /usr/sbin/sendmail -t 2012-10-20 03:08:20 cwd=/ 3 args: /usr/sbin/exim -bd -q1h 2012-10-20 03:11:37 cwd=/ 2 args: /usr/sbin/sendmail -t 2012-10-20 03:13:45 cwd=/ 3 args: /usr/sbin/exim -bd -q1h 2012-10-20 03:14:01 cwd=/ 3 args: /usr/sbin/exim -bd -q1h 2012-10-20 03:14:28 cwd=/home/pegaz/public_html 3 args: /usr/sbin/sendmail -t -i 2012-10-20 03:21:43 cwd=/ 3 args: /usr/sbin/exim -bd -q1h

    Read the article

  • Error with Apache, Nagios and Snorby integration

    - by user1428366
    I'm trying to use apache to serve two different websites (Nagios and Snorby). The problem is that when I try to see the "/snorby" website, apache sends me the "It works" page. If I try to access to "/nagios" it works perfectly. Snorby is running under ruby passenger .This are the config files. <VirtualHost *:80> ScriptAlias /nagios/cgi-bin "/srv/nagios/sbin" <Directory "/srv/nagios/sbin"> # SSLRequireSSL Options ExecCGI AllowOverride None Order allow,deny Allow from all # Order deny,allow # Deny from all # Allow from 127.0.0.1 AuthName "Nagios Access" AuthType Basic AuthUserFile /srv/nagios/etc/htpasswd.users Require valid-user </Directory> Alias /nagios "/srv/nagios/share" <Directory "/srv/nagios/share"> # SSLRequireSSL Options None AllowOverride None Order allow,deny Allow from all # Order deny,allow # Deny from all # Allow from 127.0.0.1 AuthName "Nagios Access" AuthType Basic AuthUserFile /srv/nagios/etc/htpasswd.users Require valid-user </Directory> </VirtualHost> And the other one is this: <VirtualHost *:80> #Alias /snorby "/var/www/snorby-2.6.0/public" # !!! Be sure to point DocumentRoot to 'public'! DocumentRoot /var/www/snorby-2.6.0/public <Directory /var/www/snorby-2.6.0/public> # This relaxes Apache security settings. AllowOverride all # MultiViews must be turned off. Options -MultiViews </Directory> </VirtualHost> If I disable the Nagios webpage, the Snorby webpage works. I think the problem is Snorby because when I try to access to the Ip address with Nagios page disable, the webapplication redirects me to http:// myserverip/dashboard. Can anyone help me please? Thank you so much! Regards

    Read the article

  • Print spooler spoolsv.exe crashes

    - by MattiasSN
    We have a problem with a Windows 7 print spooler. There is a Windows 2011 Small Business Server running as print server and 2 computers in the network their print spooler keeps crashing at random. The log files says it is ntdll.dll that has a fault. Naam van toepassing met fout: spoolsv.exe, versie: 6.1.7601.17514, tijdstempel: 0x4ce7b4e7 Naam van module met fout: ntdll.dll, versie: 6.1.7601.17725, tijdstempel: 0x4ec4aa8e Uitzonderingscode: 0xc0000374 Foutoffset: 0x00000000000c40f2 Id van proces met fout: 0x55c Starttijd van toepassing met fout: 0x01cd9db324904eb1 Pad naar toepassing met fout: C:\Windows\System32\spoolsv.exe Pad naar module met fout: C:\Windows\SYSTEM32\ntdll.dll Rapport-id: 8789af0b-09a6-11e2-9d78-001c25237c45 The print spooler on the server keeps running and works fine. We can also print from other computers. But on two computers the print spooler crashes. Sometimes it crashes after a user is logged in, but it also happened multiple times after a print job. After each crash we get the same ntdll.dll error. Hopefully someone can help me with this problem. If you need more information, don't hesitate to ask.

    Read the article

  • Trouble with IIS SMTP relaying to Gmail

    - by saille
    I appreciate that similar questions have been asked about how to setup SMTP relaying with IIS's virtual SMTP server. However I'm still completely stumped on this problem. Here's the setup: IIS 6.0 SMTP server running on Win2k3 box with a NAT'ed IP. Company uses Gmail for all email services. An app on the box needs to send email, so normally we'd just set the app up to talk to smtp.gmail.com directly, but this app doesn't support TLS. Easy, we just setup a local SMTP relay right? So I thought. What we have done so far: Setup IIS SMTP server to relay to smtp.gmail.com, as per these excellent instructions: http://fmuntean.wordpress.com/2008/10/26/how-to-configure-iis-smtp-server-to-forward-emails-using-a-gmail-account/ The local SMTP relay allows anonymous access. Both the local IP and the loopback IP have been explicitly allowed in the Connection... and Relay... dialogs. Tried sending email from 2 different apps via the local SMTP server, but failed (the emails end up in the Queue folder, but never get sent). The IIS logs show the conversation with the local app, but zero conversation happening with smtp.gmail.com. The port used by gmail is open outbound, and indeed the apps we have that support TLS can send email directly via smtp.gmail.com, so there is no problem with the network. At this point I changed the smtp settings in IIS SMTP server to use a different external SMTP server and hey-presto, the local apps can send email via local IIS SMTP relay. So smtp.gmail.com fails to work with our IIS SMTP relay, but another 3rd party SMTP service works fine. We need to use smtp.gmail.com, so how to troubleshoot this one?

    Read the article

  • Modem doesn't seem to pass internet connection to router

    - by Vian Esterhuizen
    I was supplied a Cisco DPC3825 modem by my ISP and use a Cisco E4200 as my main router. Today the Internet just stopped working, just out of the blue, I can't think of anything that would have triggered it. It's been working pretty good for the past month except for a few random blips here and there. After some trouble shooting I realized that a direct wired connection to the modem would get me Internet access but if I was wired to the router as I was before I would have no connection. Assuming it was router I connected a TP-Link WR841N but it had the exact same problem. Also, connecting to the router via Wi-Fi from multiple devices will connect me to the router but I still can't get access to the Internet. From these test, it seems that the modem just won't send Internet through to the router but is clearly connecting and able to directly connect to my PC. What I've Tried A full reset and factory reset on the E4200 A full reset on the modem (but I believe my ISP has remote access to the modem because the passwords and so on are always set back). My ISP has remotely reconfigured the modem What I Should Try What else can I try? What should I do to try to narrow down the issue? Can you figure out what the problem might be based on this information?

    Read the article

  • Failed to connect from slave to master with error "error connecting to master (1045)"

    - by Victor Lin
    I try to setup replication from slave to the master. CHANGE MASTER TO MASTER_HOST = 'master', MASTER_PORT = 3306, MASTER_USER = 'repl', MASTER_PASSWORD = 'xxx'; And I did grant privileges to the user on master. I can connect with mysql command from slave machine to the master mysql -h master -u repl -p mysql> show grants; GRANT RELOAD, SUPER, REPLICATION SLAVE, CREATE USER ON *.* TO 'repl'@'xxx' IDENTIFIED BY PASSWORD 'xxx' mysql> select 1; +---+ | 1 | +---+ | 1 | +---+ 1 row in set (0.04 sec) As you can see, privileges are correct, connection works fine, but however, the connection for replication to master always failed. mysql> show slave status\G *************************** 1. row *************************** Slave_IO_State: Connecting to master Master_Host: master Master_User: repl Master_Port: 3306 Connect_Retry: 60 Master_Log_File: Read_Master_Log_Pos: 4 Relay_Log_File: slave-replay-bin.000002 Relay_Log_Pos: 4 Relay_Master_Log_File: Slave_IO_Running: Connecting Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 0 Relay_Log_Space: 107 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 1045 Last_IO_Error: error connecting to master 'repl@master:3306' - retry-time: 60 retries: 86400 Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 0 1 row in set (0.00 sec) Is this caused by different version of MySQL server? The version of master is 5.0.77, and the slave is 5.5.13. But all articles I could find tell me that it's okay to replicate from a newer slave to old master. How to solve this problem? -- Update -- I even try to upgrade the old MySQL, but still, the problem is not solved. mysql> show slave status\G *************************** 1. row *************************** Slave_IO_State: Connecting to master Master_Host: master Master_User: repl Master_Port: 3306 Connect_Retry: 60 Master_Log_File: master-bin.000007 Read_Master_Log_Pos: 107 Relay_Log_File: slave-replay-bin.000001 Relay_Log_Pos: 4 Relay_Master_Log_File: master-bin.000007 Slave_IO_Running: Connecting Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 107 Relay_Log_Space: 107 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 1045 Last_IO_Error: error connecting to master 'repl@master' - retry-time: 60 retries: 86400 Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 0 1 row in set (0.00 sec)

    Read the article

  • Subversion and Quickbooks Files

    - by Jorge Fernandez
    I currently have a large problem on one of the file servers I manage for an Accounting Firm. Quickbooks has a tendency to create multiple files of the same thing over and over to prevent data loss. This is a good thing when you handle just a few files. But at an accounting firm it becomes a problem. Some of the older clients have 5-10 files in their respective folders, each with a different cut off date. Because of user error some of these file aren't labeled properly with their correct cutoff dates. This is where Subversion came to mind. Using the revision system would allow for 1 file to be master and have all of its revisions. Has anyone ever tried this with Quickbooks files? I've only used SVN with code for applications making each file size much smaller. How does SVN stand up with larger files like 10-25MB? I'm not exactly sure how SVN handles revisions - does it keep a duplicate of the files and duplicates the disk space space needed?

    Read the article

  • Windows 7 migration led to crashdump and hibernate problems

    - by MartyMacGyver
    Note: I'm using a Samsung 830 SSD (migrated OS from defunct PC) and other than these two (interrelated?) problems it's working fine. Surprisingly well actually. Motherboard is a ASUS P8Z77-V Deluxe. Problem 1: Crashdumps are not working. volmgr throws an event 45 "The system could not sucessfully load the crash dump driver." whenever you modify crashdump settings, or if a crashdump occurs. diskpart says that "Crashdump disk = no" which is peculiar. Problem 2: Hibernation isn't working. Again, volmgr throws the same event 45 if you try to hibernate. The screen blanks, then you're at the password prompt. No sleepage occurs. (Yes, I know I should avoid hibernation on SSDs but it's enabled and the hibernation file is definitely there so I'd like to know why it's failing). Diskpart claims "Hibernation file = no" which is again peculiar... it's plainly there and getting created by the system. The common factor appears to be volmgr and/or the crashdump "service" (if that's what it is). I'd much rather get this working than spend days reinstalling and reconfiguring the entire system, especially when it's working perfectly otherwise. Sleep works as well (as long as it's not hybrid sleep). So, what defines the flags "Crashdump disk" and "Hibernation file disk" in diskpart's output? And what might be going wrong that's breaking crashdumps in particular?

    Read the article

  • Configure Plesk only for Tomcat-Java

    - by AJIT RANA
    I need to configure tomcat on Linux dedicate server only for Java project through Plesk . Following services is running on it. '1.Apache on port 80 ' '2.Tomcat on port 8080/9080' '3.Mysql on port 3306 ' Now problem is this, i need to run only java project on this server from port 80 .this time user type my site name then default page call index.html or .php file from root directory of Apache. so how it can be possible to run java project from this server default port 80 after deploye .war(java project) file to this server. Because user who wants to access my site does not know its port number for Tomcat as here is 9080 and also deploy file name. Pls look below for detail about problem Suppose my sit name is www.example.com and hosted on Linux dedicate server with Plesk install on it with Apache, Tomcat and Mysql. Now for running my java project on it, i need to enter www.example.com:9080/java_projrect_name/ in browser. So how can i run this project only from URL www.example.com and it will call default file .jsp from java_project_name directory. I do not want to enter port number and java_project_name in url and my client who wants to access this project did not know about port number as well as project name . He knows only about URL as www.example.com and when he browses it then it should call default page from java_project directory. So to implement this what should we need to do? Pls help. Thanks

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this. The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance backlog is at the disk (top shows the cp using a lot of RAM but not a lot of CPU and mostly in uninterruptible-sleep state) We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.

    Read the article

  • Oracle 10g for Windows does not start up on system boot

    - by Mike Dimmick
    We have an Oracle 10g Enterprise Edition installation (10.2.0.1.0) on a Windows Server 2003 virtual machine. It was initially created with Virtual Server 2005 R2 SP1 but has now been migrated to Windows Server 2008 Hyper-V. The services start on system boot, but the instance does not start up. This problem was actually occurring on Virtual Server after a migration from one server to another, but I managed to fix it then with: oradim -edit -sid ORCL -startmode auto However, this now has no effect. oradim.log (in %OracleHome%\database\oradim.log) says: Thu Jun 10 14:14:48 2010 C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid orcl -usrpwd * -log oradim.log -nocheck 0 Thu Jun 10 14:14:48 2010 ORA-12560: TNS:protocol adapter error sqlnet.log in the same folder has: Fatal NI connect error 12560, connecting to: (DESCRIPTION=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle)(ARGV0=oracleorcl)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=orcl)(CID=(PROGRAM=C:\oracle\product\10.2.0\db_3\bin\oradim.exe)(HOST=ORACLE-VM)(USER=SYSTEM)))) VERSION INFORMATION: TNS for 32-bit Windows: Version 10.2.0.1.0 - Production Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production Time: 10-JUN-2010 14:14:48 Tracing not turned on. Tns error struct: ns main err code: 12560 TNS-12560: TNS:protocol adapter error ns secondary err code: 0 nt main err code: 530 TNS-00530: Protocol adapter error nt secondary err code: 2 nt OS err code: 0 The ORA_ORCL_AUTOSTART registry value is set to TRUE, so it should be auto-starting - and you can see that it's trying to. The problem also occurs when stopping and restarting the OracleServiceORCL service. I've enabled SQL*Net tracing which shows: [10-JUN-2010 15:09:33.919] snlpcss: entry [10-JUN-2010 15:09:34.419] snlpcss: Unable to spawn Oracle oracle (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) orcl, error 2. [10-JUN-2010 15:09:34.419] snlpcall: exit On a hunch that error 2 is Windows error 2 (file not found) I tried restarting the service with Process Monitor watching oradim.exe, but this appears to delay things just enough that it always works. Right now I have a horrible hack where I've created a Scheduled Task to run oradim -startup -sid ORCL when the Administrator account logs on, and set the VM to auto-logon. I'd still like to work out why it's not working.

    Read the article

  • HylaFAX: "No font metric information" error when trying to send a fax

    - by Chau Chee Yang
    I am using Hylafax 6.0.5 on Fedora 13 x86_64. As there are no rpm package available for Fedora 13, I use the source tar ball to install hylafax myself. Everything seems fine during compile and install. I try to send a fax with sendfax and encounter error: # sendfax -n -d <fax-number> /etc/passwd /usr/local/sbin/textfmt: No font metric information found for "Courier-Bold". Usage: /usr/local/sbin/textfmt [-1] [-2] [-B] [-c] [-D] [-f fontname] [-F fontdir(s)] [-m N] [-o #] [-p #] [-r] [-U] [-Ml=#,r=#,t=#,b=#] [-V #] files... >out.ps Default options: -f Courier -1 -p 11bp -o 0 Error converting document; command was "/usr/local/sbin/textfmt -B -f Courier-Bold -Ml=0.4in -p 11 -s default >'/tmp//sndfaxp5GdJ9' <'/etc/passwd'" It seems like there is problem with font problem. I have ghostscript-fonts installed too. I can't find hyla.conf in path /etc/hylafax. There is no /etc/hylafax path in my file system. All configuration files seems located in /var/spool/hylafax/etc. Please advice. Thank you.

    Read the article

  • How do I make a Data Validation drop-down exclude blanks?

    - by Iszi
    Related: How can I use non-adjacent cells on another sheet for a Data Validation drop-down, and only show non-blank values? For now, I've worked around the above problem by re-arranging my sheet so all the Data Validation Source cells are in one range. I'm leaving the above question open though, because I think it still poses an interesting problem. However, the issue now is that the Data Validation drop-down isn't working in the way I expected it to (and how I believe others are telling me it should). Even though I've got everything into one named range, Excel still shows blanks in a drop-down that references that range. Setup: Sheet 1 A1= (blank) B1= Header A2= 1 B2= Value1 A3= 2 B3= Value2 A4= 3 B4= Value3 A5= 4 B5= (empty) A6= 5 B6= (empty) A7= 6 B7= (empty) Sheet1!B2:B7 is named Validation Sheet2!A1 is set to use Data Validation with a Source =Validation, and in-cell drop-down. The drop-down in Sheet2!A1 shows: Value1 Value2 Value3 . . . (Dots represent blank lines) How can I get rid of these blank lines in the in-cell drop-down, while still including Sheet1!B5:B7 in the Data Validation Source? Note: I nuked the sheet, and tried it again without column A from Sheet1 (putting values from column B in the above example into column A), and it worked fine. Adding Column A back though, brought the blanks back into the Data Validation drop-down. What do I need to do to keep column A as I want it and keep the in-cell drop-down clean?

    Read the article

  • Check for updates of a specific Debian package list

    - by Erwan Queffélec
    The setup I run a Debian Squeeze host that I use to build a multilanguage project (python, java, php...) and generate custom packages (debian and RPM) automatically (through jenkins) The problem The target distributions of those Debian packages are Etch, Lenny and Squeeze. But our project has some native dependencies that are available only through the DebianRelease + 1 repository (i.e Lenny + 1 == Squeeze, Squeeze + 1 == Wheezy). We for example, need the jetty packages from Squeeze in Lenny, and the cyrus-imapd-2.4 packages from Wheezy in Squeeze. Some additional info : Some package we can simply 'backport by hand' by mirroring the DebianRelease + 1 packages to our own repositories. For instance, the jetty package from Squeeze will run fine on Lenny because it doesn't need an s**tload of additional dependencies However we do need to rebuild some packages. For instance, cyrus-imapd-2.4 from Wheezy has a lot of unsatisfied dependencies on Squeeze. So we need to rebuild it in Squeeze and then upload it to our repo. The question I need to have a simple way of knowing if they are any updates on those extra packages (both "normal" and "security" updates). I could write a simple script that runs weekly, get some parameters from a file, and generate an update report. Let's say the file looks like this: jetty:squeeze cyrus-imapd-2.4:wheezy The script should run as normal user not to mess up the system apt configuration and issue the appropriate commands to generate that report. Does Debian has some built-in apt-* commands/options dedicated to that kind of problem I could use to write this script ? If not, can someone think of another clean solution to achieve what I need ?

    Read the article

  • Apache mod-pagespeed installation affects mod-spdy?

    - by tim peterson
    Recently my site (an https connection, running on an Amazon EC2 ubuntu apache2.2) has this issue where I need to load the page several times (3-4) before it will load normally without issue. It will then load normally as long as I keep loading pages regularly (every couple seconds). It will stall again if I don't load pages for a few minutes. It has nothing to do with my application because I don't have this problem with the exact same app codebase on my Apache installation on my laptop. The only things to my knowledge that I've changed is that I recently installed mod_spdy and then a few weeks later I installed mod_pagespeed, https://developers.google.com/speed/pagespeed/mod. However, I have since turned mod_pagespeed off by setting its pagespeed.conf to mod_pagespeed off. Unfortunately, that didn't solve the problem. The line below is how every of last 10 lines of my error.log look: # tail -f /var/log/apache2/error.log ... [32728:32729:ERROR:mod_spdy.cc(162)] request->chunked == 1 in request GET / HTTP/1.1 [Sat Jun 02 04:50:08 2012] [warn] [client 50.136.93.153] [stream 5] [32728:32729:WARNING:http_to_spdy_filter.cc(113)] HttpToSpdyFilter is not the last filter in the chain: chunk any thoughts? thank you, tim

    Read the article

  • MacBook Pro screen flickers, and lines appear after using it a while

    - by Adam L. S.
    My aunt gave me her old MacBook Pro, but unfortunately it has a few issues. The main problem is that lines appears after I use it with applications that heavily depend on graphics, but they don't necessarily appear in that window. I also figured out that these lines appear with the windows, so the cursor goes over the lines, and menus and other windows also overlap them. If I take a screenshot of the window by itself, no lines appear. It seems obvious to me that this is a driver-related problem. Unfortunately, there are no updates or anything for the driver. At first I tried asking for a solution at Apple's forum, but they were only able to figure out that "something is wrong with the video card". I checked the MacBook with an Ubuntu disk, and the screen seemed OK. I've uploaded some photos to my Picasa account to show the symptoms. Recently, I've also noticed the screen flickering when using applications that need high performance graphics, like games. Also, the MacBook Pro came with Mac OS X Tiger, but she upgraded it to Leopard (she only brought the Tiger disk with her.) What can I do with this?

    Read the article
