Search Results

Search found 21319 results on 853 pages for 'state management'.


  • SNMPD running but not listening for connections at random

    - by Lukasz
    OS: CentOS release 5.7 (Final). Net-SNMP: net-snmp-5.3.2.2-14.el5_7.1 (from RPM). Periodically my NMS notifies me that SNMP has gone down on this machine. The service is restored within 10 to 30 minutes. My NMS also pings and checks SSH, and those services are not affected during the SNMP outage. The SNMPD log file shows that it is working and apparently receiving packets (either from local agents at 127.0.0.1 or from my NMS at 172.16.37.37), yet attempting to snmpwalk locally or from the NMS fails with a timeout. I have 7 of these servers running a mixture of CentOS 5.7 and RHEL 5.7 with this specific version of Net-SNMP installed from RPM - none of them have this issue except this one. 5 of the machines (including the NMS and this problem server) are in the same rack connected through one switch. Restarting SNMPD does not fix the issue - it clears up by itself eventually. Any suggestions on where I can begin diagnosing the issue? It's a closed subnet, so IPTables is not used. SNMPD config below:

      # Following entries were added by HP Insight Management Agents at
      #      Tue May 15 10:58:17 CLT 2012
      dlmod cmaX /usr/lib64/libcmaX64.so
      rwcommunity public 127.0.0.1
      rocommunity public 127.0.0.1
      rwcommunity 3adRabRu 172.16.37.37
      rocommunity 3adRabRu 172.16.37.37
      rwcommunity 3adRabRu 172.16.37.36
      rocommunity 3adRabRu 172.16.37.36
      trapcommunity callmetraps
      trapsink 172.16.37.37 callmetraps
      trapsink 172.16.37.36 callmetraps
      syscontact Lukasz Piwowarek
      syslocation Santiago, Chile
      # ---------------------- END --------------------
      agentAddress udp:161
      com2sec rwlocal default public
      com2sec rolocal default public
      com2sec subnet  default 3adRabRu
      group rwv2c v2c rwlocal
      group rov2c v2c rolocal
      group rov2c v2c subnet
      view all included .1
      access rwv2c "" any noauth exact all all  none
      access rov2c "" any noauth exact all none none
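    Since restarting snmpd doesn't help and the daemon believes it is receiving packets, it may be worth watching the wire and the UDP counters during an outage. A minimal diagnostic sketch using standard CentOS 5 tools (the interface name eth0 is an assumption):

      # watch whether queries from the NMS actually reach the box during an outage
      tcpdump -ni eth0 udp port 161 and host 172.16.37.37

      # in another shell, walk the agent locally with a generous timeout
      snmpwalk -v2c -c public -t 5 -r 1 127.0.0.1 system

      # look for UDP receive errors / dropped datagrams piling up
      netstat -su

    If tcpdump shows requests arriving but no replies leaving, the problem is inside snmpd (or something it loads, such as the HP cmaX dlmod); if the requests never arrive, look at the switch or ARP instead.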

    Read the article

  • How do I locate the app generating this network traffic?

    - by Christopher Bartels
    I don't know what this process is doing on my computer. I run Windows 7 Professional with all its updates and a current non-free antivirus. I only see it in Resource Monitor, where the Network Service process shows a connection to bitum.nnov.ru. When my PC's network-traffic-generating apps are idle, this process generates more traffic than anything else on the machine. Screenshot hosted here: http://sss.proinbox.com/bitum-nnov-ru.jpg Does anyone recognize this? The page source mentions a control port and a stream port. Page source for http://bitum.nnov.ru :

      <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
      <html>
      <head>
      <title>DVR WebViewer</title>
      <meta http-equiv="Content-Type" content="text/html; charset=euc-kr">
      </head>
      <body topmargin="0" leftmargin="0">
      <OBJECT classid="clsid:EE479A40-C128-40DD-93DA-000556AF9607"
              codebase="CtrWeb.cab#version=1,0,2,2"
              width=875 height=585 align=center hspace=0 vspace=0>
      <param name="CmdPort" value="5920">
      <param name="StreamPort" value="5921">
      </body>
      </html>

    When I google this page's title, I see a number of other domains that host the same page. Whois:

      domain:        NNOV.RU
      nserver:       ns.kis.ru.
      nserver:       ns.nnov.ru. 78.25.80.210
      nserver:       ns1.kis.ru.
      nserver:       ns2.kis.ru.
      state:         REGISTERED, DELEGATED, VERIFIED
      org:           "Agentstvo Delovoj Svjazi", Ltd
      registrar:     RU-CENTER-REG-RIPN
      admin-contact: https://www.nic.ru/whois
      created:       1996.10.23
      paid-till:     2012.11.01
      free-date:     2012.12.02
      source:        TCI
      Last updated on 2012.06.16 04:20:46 MSK
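    One way to pin down which executable and service own that connection is to correlate the remote address with a PID and then ask which services live in that process. A rough sketch from an elevated command prompt (the PID 1234 is a placeholder):

      rem list connections with owning PIDs and, where possible, the executable
      netstat -ano -b

      rem once you have the PID of the bitum.nnov.ru connection, see which
      rem services are hosted inside that process (svchost can hold several)
      tasklist /svc /fi "PID eq 1234"

    Since the remote page is a DVR ActiveX viewer, it may also be worth checking for DVR or IP-camera client software and browser plugins installed on the machine.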

    Read the article

  • "Windows detected a hard drive" issue in Windows 7 x64

    - by Jasiu
    I upgraded to the OCZ-Agility3 120GB from a 60GB OCZ Vertex2 SSD. I cloned the drive from the Vertex to the new Agility. Everything seemed to have gone well and I have not had any problems. In the past month, however, I have started getting the "Windows detected a hard drive problem" warning. I downloaded the OCZToolboxMP and ran the SMART utility and don't see anything wrong:

      SMART READ DATA
      ModelNumber : OCZ-AGILITY3
      Serial Number : OCZ-Y1945X77438P4NU6
      WWN : 5-e8-3a-97 ebea5ba76
      Revision: 10
      Attributes List
        1: SSD Raw Read Error Rate                 Normalized Rate: 70 total ECC and RAISE errors
        5: SSD Retired Block Count                 Reserve blocks remaining: 100%
        9: SSD Power-On Hours                      Total hours power on: 968
       12: SSD Power Cycle Count                   Count of power on/off cycles: 28
      171: SSD Program Fail Count                  Total number of Flash program operation failures: 0
      172: SSD Erase Fail Count                    Total number of Flash erase operation failures: 0
      174: SSD Unexpected power loss count         Total number of unexpected power loss: 11
      177: SSD Wear Range Delta                    Delta between most-worn and least-worn Flash blocks: 0
      181: SSD Program Fail Count                  Total number of Flash program operation failures: 0
      182: SSD Erase Fail Count                    Total number of Flash erase operation failures: 0
      187: SSD Reported Uncorrectable Errors       Uncorrectable RAISE errors reported to the host for all data access: 4145
      194: SSD Temperature Monitoring              Current: 30  High: 30  Low: 30
      195: SSD ECC On-the-fly Count                Normalized Rate: 120
      196: SSD Reallocation Event Count            Total number of reallocated Flash blocks: 100
      201: SSD Uncorrectable Soft Read Error Rate  Normalized Rate: 120
      204: SSD Soft ECC Correction Rate (RAISE)    Normalized Rate: 120
      230: SSD Life Curve Status                   Current state of drive operation based upon the Life Curve: 100
      231: SSD Life Left                           Approximate SSD life remaining: 100%
      241: SSD Lifetime writes from host           Lifetime writes: 893 GB
      242: SSD Lifetime reads from host            Lifetime reads: 968 GB

    Does anyone have any idea what might be wrong, or how I can go about fixing this? Please let me know if there is other information I can provide. Thanks for your help. Windows 7 x64 SP1, AMD Phenom II X4 940, 8GB RAM.

    Read the article

  • How to update-grub on a system running overlayroot?

    - by mikepurvis
    We ship boxes configured with overlayroot, using the following overlayroot.conf:

      overlayroot=device:dev=/dev/sda6,timeout=20,recurse=0

    Which produces the following mount configuration:

      $ mount
      overlayroot on / type overlayfs (rw,errors=remount-ro)
      /dev/sda5 on /media/root-ro type ext3 (ro,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered)
      /dev/sda6 on /media/root-rw type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered)
      /dev/sda1 on /boot type ext3 (rw)

    As you can see, three key physical partitions: sda1 is /boot, sda5 is a read-only "factory" root, and sda6 is a "user" root which can be wiped at any point to restore the machine to its original factory state. Now, the problem arises when update-grub is run for any reason:

      $ sudo update-grub
      [sudo] password for administrator:
      /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?).

    Understandable, since / is an overlayfs. The contents of /usr/sbin/update-grub are:

      #!/bin/sh
      set -e
      exec grub-mkconfig -o /boot/grub/grub.cfg "$@"

    With /usr/sbin/grub-mkconfig being the business part of things. But the actual problem is in /usr/sbin/grub-probe, called by grub-mkconfig, and grub-probe is a binary. So my question is: is there a parameter or whatever which can make grub-probe do the right thing in the face of / being an overlayfs? And secondly, is there a way to hack/patch that in so that the update-grub script just does the right thing? Thanks.
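    One workaround that is sometimes used on overlayroot/union-mount systems is to run update-grub from a chroot on the real root partition rather than on the overlay. A rough sketch assuming the layout above (untested against this exact overlayroot setup; note the factory root has to be made temporarily writable if GRUB needs to update anything under it):

      sudo mount -o remount,rw /media/root-ro
      sudo mount --bind /boot /media/root-ro/boot
      sudo mount --bind /dev  /media/root-ro/dev
      sudo mount --bind /proc /media/root-ro/proc
      sudo mount --bind /sys  /media/root-ro/sys
      sudo chroot /media/root-ro update-grub
      sudo umount /media/root-ro/sys /media/root-ro/proc /media/root-ro/dev /media/root-ro/boot
      sudo mount -o remount,ro /media/root-ro

    Wrapping those steps in a small script and calling it instead of the stock update-grub is one way to make the "just does the right thing" part true for other admins.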

    Read the article

  • Unable to connect Xend with virt-manager

    - by Majid Azimi
    I have installed Debian 6.0.1a and installed all the Xen pieces, including the Xen kernel, libvirtd, etc., but when I want to connect to xend, virt-manager shows me this:

      Verify that:
      A Xen host kernel was booted
      The Xen service has been started

      details: Unable to open connection to hypervisor URI 'xen:///':
      unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied
      Traceback (most recent call last):
        File "/usr/share/virt-manager/virtManager/connection.py", line 971, in _try_open
          None], flags)
        File "/usr/lib/python2.6/dist-packages/libvirt.py", line 111, in openAuth
          if ret is None:raise libvirtError('virConnectOpenAuth() failed')
      libvirtError: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied

    Here is the uname output:

      Linux debian 2.6.32-5-xen-amd64 #1 SMP Tue Mar 8 00:01:30 UTC 2011 x86_64 GNU/Linux

    xend and libvirtd are both running:

      root@debian:/home/mazimi# /etc/init.d/libvirt-bin status
      Checking status of libvirt management daemon: libvirtd running.
      root@debian:/home/mazimi# /etc/init.d/xend start
      Starting Xen daemons: xenstored xenconsoled xend.

    Permissions for libvirt-sock:

      root@debian:/home/mazimi# ls -alih /var/run/libvirt/
      total 12K
      671017 drwxr-xr-x  3 root root    4.0K Apr 15 13:54 .
      654083 drwxr-xr-x 18 root root    4.0K Apr 15 13:54 ..
      670901 srwxrwx---  1 root libvirt    0 Apr 15 13:54 libvirt-sock
      670928 srwxrwxrwx  1 root libvirt    0 Apr 15 13:54 libvirt-sock-ro
      670870 drwxr-xr-x  2 root root    4.0K Apr 15 02:34 qemu

    We also have a group named libvirt in /etc/group. When running libvirtd in verbose mode it behaves kind of strangely:

      root@debian:/var/log/libvirt# /usr/sbin/libvirtd --verbose
      17:26:55.841: warning : qemudStartup:1832 : Unable to create cgroup for driver: No such device or address
      17:26:56.128: warning : lxcStartup:1900 : Unable to create cgroup for driver: No such device or address

    and waits indefinitely.
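    The "Permission denied" on libvirt-sock usually means the user running virt-manager is not a member of the libvirt group (the socket above is root:libvirt, mode 770). A quick sketch of things to check, assuming virt-manager is launched as the user mazimi:

      # is the user actually in the libvirt group?
      id mazimi

      # if not, add them and start a fresh login session (group changes do not
      # apply to already-running sessions)
      usermod -a -G libvirt mazimi

      # then test the connection from the command line before trying the GUI
      virsh -c xen:/// list

    If virsh connects but virt-manager still fails, the problem is on the GUI side rather than in socket permissions. The cgroup warnings from libvirtd are hedged guesses at best, but they concern the qemu/lxc drivers and are likely unrelated to the Xen connection.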

    Read the article

  • Hide account from login screen but can be used in UAC

    - by tvanover
    So I have a Windows 7 home machine with 2 user accounts. One is a standard user account and one is an administrator account. This machine is going to be put in the hands of a very low-tech user, so I don't want them to be able to see the administrator account at logon, but they want to have a password to prevent someone else from using the machine. My goal is that when the user turns on the computer, they are presented with their login. After logging in to their non-administrator account, if something needs to be installed then the administrator account can be used through UAC. I have tried creating the reg key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList and adding a DWORD named after the account, set to 0. It succeeded in hiding the account from the login screen - as well as hiding it from UAC. So it fails the second requirement of being able to run things as administrator through UAC. Also, since I didn't set an administrator password (left it blank), it seems that I have completely locked myself out of the machine, since runas doesn't accept blank passwords. So I also cannot undo it, and have quite effectively bricked the install, prompting an OS reinstall. This is Windows 7 Home, so there is no Users management console.
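    For reference, a sketch of the registry change described above (HiddenAdmin is a placeholder for the actual administrator account name); this reproduces the behaviour reported here, i.e. the account disappears from both the logon screen and the UAC prompt:

      reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v HiddenAdmin /t REG_DWORD /d 0 /f

    Undoing it means deleting that value (or setting it to 1) from a context that still has administrative rights, which is exactly what was lost here because the hidden account had a blank password.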

    Read the article

  • ASP.NET Website Administration Tool: Unable to connect to SQL Server database

    - by MedicineMan
    I am trying to get authentication and authorization working with my ASP.NET MVC project. I've run the aspnet_regsql.exe tool without any problem and can see the aspnetdb database on my server (using Management Studio). My connection string in web.config is:

      <connectionStrings>
        <add name="ApplicationServices"
             connectionString="data source=MYSERVERNAME;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
             providerName="System.Data.SqlClient" />
      </connectionStrings>

    The error I get is:

      There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store.
      The following message may help in diagnosing the problem: Unable to connect to SQL Server database.

    In the past, I have had trouble connecting to my database because I've needed to add users. Do I have to do something similar here?
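    One thing worth noting is that this connection string mixes two models: it names a real server (MYSERVERNAME) but also asks for a User Instance attach of a local aspnetdb.mdf file, which only works against a local SQL Server Express instance. A hedged sketch of a connection string that points straight at the aspnetdb database created by aspnet_regsql (server and database names are assumptions to adapt):

      <add name="ApplicationServices"
           connectionString="Data Source=MYSERVERNAME;Initial Catalog=aspnetdb;Integrated Security=SSPI"
           providerName="System.Data.SqlClient" />

    With integrated security, the Windows account the site (or the Web Site Administration Tool) runs as still needs a login on the server and appropriate rights in aspnetdb, which matches the "add users" experience mentioned above.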

    Read the article

  • How do I identify and fix the cause of transaction log growth on SIMPLE recovery model databases?

    - by Stuart B
    I recently upgraded our SQL Server 2008 installations to Service Pack 2. One of our databases is on the simple recovery model, but its transaction log is growing extremely fast. The path I'm currently investigating is that we have a transaction somewhere out there stuck in an active state. Here is why:

      select name, recovery_model_desc, log_reuse_wait_desc
      from sys.databases where name in ('SimpleDB')

      name      recovery_model_desc  log_reuse_wait_desc
      SimpleDB  SIMPLE               ACTIVE_TRANSACTION

    When I check my active transactions, I get the following. Note that I installed SP2 and restarted our server on 12/25 at around noonish.

      select transaction_id, name, transaction_begin_time, transaction_type
      from sys.dm_tran_active_transactions

      transaction_id  name                          transaction_begin_time    transaction_type
      233             worktable                     2010-12-25 12:44:29.283   2
      236             worktable                     2010-12-25 12:44:29.283   2
      238             worktable                     2010-12-25 12:44:29.283   2
      240             worktable                     2010-12-25 12:44:29.283   2
      243             worktable                     2010-12-25 12:44:29.283   2
      245             worktable                     2010-12-25 12:44:29.283   2
      62210           tran_sp_MScreate_peer_tables  2010-12-25 12:45:00.880   1
      55422856        user_transaction              2010-12-28 16:41:56.703   1
      55422889        SELECT                        2010-12-28 16:41:57.303   2
      470             LobStorageProviderSession     2010-12-25 12:44:30.510   2

    Note that according to the documentation a transaction_type of 1 means read/write, and 2 means read-only. So my line of thinking is that the tran_sp_MScreate_peer_tables transaction is stuck for some reason and holding up transaction log truncation. Is this a plausible scenario? Correct me if my line of thinking is off, as I'm not a SQL Server expert. If this is correct, how do I clear that transaction so that my transaction log is truncated as usual?
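    A diagnostic sketch (generic, not specific to this installation) for tying the suspect transaction back to a session so it can be investigated or, as a last resort, killed:

      -- oldest active transaction holding up log truncation in this database
      DBCC OPENTRAN('SimpleDB');

      -- map active transactions to the sessions that own them
      SELECT st.session_id, at.transaction_id, at.name, at.transaction_begin_time
      FROM sys.dm_tran_session_transactions AS st
      JOIN sys.dm_tran_active_transactions  AS at
        ON st.transaction_id = at.transaction_id;

      -- if the owning session turns out to be safe to terminate:
      -- KILL <session_id>;

    Note that tran_sp_MScreate_peer_tables looks replication-related, so if the database has ever been configured for replication or change tracking, checking that configuration first is probably safer than killing sessions.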

    Read the article

  • MySQL remote access not working - Port closed?

    - by dave.zap
    I am not able to get a remote connection established to MySQL. From my PC I am able to telnet to port 3306 on the existing server, but when I try the same with the new server it hangs for a few minutes and then returns:

      # mysql -utest3 -h [server ip] -p
      Enter password:
      ERROR 2003 (HY000): Can't connect to MySQL server on '[server ip]' (110)

    Here is some output from the server:

      # nmap -sT -O localhost -p 3306
      ...
      PORT     STATE  SERVICE
      3306/tcp closed mysql
      ...

      # netstat -anp | grep mysql
      tcp   0   0 [server ip]:3306   0.0.0.0:*   LISTEN   6349/mysqld
      unix  2   [ ACC ]   STREAM   LISTENING   12286   6349/mysqld   /DATA/mysql/mysql.sock

      # netstat -anp | grep 3306
      tcp   0   0 [server ip]:3306   0.0.0.0:*   LISTEN   6349/mysqld
      unix  3   [ ]   STREAM   CONNECTED   3306   1411/audispd

      # lsof -i TCP:3306
      COMMAND  PID   USER   FD   TYPE  DEVICE  SIZE/OFF  NODE  NAME
      mysqld   6349  mysql  10u  IPv4  12285   0t0       TCP   [domain]:mysql (LISTEN)

    I am running CentOS release 5.8 (Final) and MySQL 5.5.28 (Remi). Note: internal connections to MySQL work fine. I have disabled iptables, the box has no other firewall, and it runs Apache on port 80 and SSH with no problem. I followed this tutorial: http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html I have bound the IP address in my.cnf:

      user=mysql
      bind-address = [server ip]
      port=3306

    I even started over by deleting the mysql folder in my datastore and running:

      mysql_install_db --datadir=/DATA/mysql --force

    Then I recreated all the users as per the manual (http://dev.mysql.com/doc/refman/5.5/en/adding-users.html) and created one test user:

      CREATE USER 'test'@'%' IDENTIFIED BY '[password]';
      GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION;
      FLUSH PRIVILEGES;

    So all I can see is that the port is not really open. Where else might I look? Thanks.
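    One detail that stands out: the local nmap of "localhost" reports 3306 closed, which is consistent with mysqld being bound only to [server ip] and not to 127.0.0.1, so that by itself is not proof of a problem. A short sketch for narrowing down where the connection actually dies (eth0 is an assumption):

      # from the server itself, connect to the bound address rather than localhost
      mysql -utest3 -h [server ip] -P 3306 -p

      # while retrying from the remote PC, watch whether its SYN packets arrive
      tcpdump -ni eth0 tcp port 3306

    If the SYNs never show up, something upstream (a firewall between the two hosts or at the hosting provider) is dropping them before they reach the box, which also fits the error 110 timeout; if they arrive but get no response, the problem is local after all.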

    Read the article

  • AsteriskNow Migration / Shared Extension Space

    - by Aaron C. de Bruyn
    I am testing the possibility of migrating from an old Avaya phone system to AsteriskNOW. The migration would cover several hundred phones - but spread out over several years. (Management wants to move buildings to the new phone system one by one as cables get cut or time permits.) Two other directives are that extensions must not change and that there must be a GUI that other admins (non-Linux geeks) can manage. They currently use 9XXX for all extensions. We linked the Avaya and Asterisk boxes via a PRI card and they are communicating. From the Avaya side, if we move (for example) extension 9001 to Asterisk, we forward the call over the PRI to the AsteriskNOW box and the SIP phone rings. In AsteriskNOW we have an outgoing rule '_9XXX' that routes all 4-digit extensions starting with 9 back to the Avaya. Here's the trouble: dialing 9001 (the extension moved over to AsteriskNOW) causes the call to be routed out over the PRI to the Avaya box, then the Avaya box routes the call back to Asterisk, and Asterisk routes it to the SIP phone. As we get more and more users switched over, this will use up more and more channels on the PRI card. Is there a way I can ask Asterisk to check its local extensions first - then forward off to the Avaya system if it starts with '_9XXX'? (I know how I can do it when editing the raw config files; I'm just looking for a way to do it in the GUI so other admins can manage it if necessary.) As a last-ditch plan, I know I can specifically add '_9001' as an outgoing call rule and send it directly to extension 9001 - but I'd really hate to do that for several hundred phones.
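    For what it's worth, the raw-dialplan version of "local extensions win, everything else goes to the Avaya" relies on the fact that an exact extension match beats a pattern match in the same context. An illustrative sketch only (the AsteriskNOW/FreePBX GUI generates its own contexts, and DAHDI/g0 is an assumed name for the PRI group):

      [from-internal-custom]
      ; a migrated extension matches exactly and rings its SIP phone directly
      exten => 9001,1,Dial(SIP/9001,20)
      ; anything else in 9XXX falls through to the PRI toward the Avaya
      exten => _9XXX,1,Dial(DAHDI/g0/${EXTEN})

    In the GUI world this usually maps to "extensions are tried before outbound routes", so as long as each migrated phone exists as an extension in AsteriskNOW, the '_9XXX' outbound route should only catch the ones that have not been migrated yet - it may be worth testing whether your GUI already behaves that way before adding per-extension rules.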

    Read the article

  • What device can create a wireless network while connected to an ethernet router

    - by Nicolo
    Hi, I have access to an ethernet port on a wireless router. I simply connect my laptop to it via an ethernet cable. There are a total of four such ports on the wireless router. Now I want to connect a device (a wireless access point? wireless bridge? wireless switch?) via an ethernet cable to one of the other ethernet ports of the router. I want this device to act as a kind of wireless switch - it should "split" the ethernet connection coming from the router to two or more computers that connect to this device wirelessly. Basically, I have a wireless router with its wireless function switched off. I don't know the password for that router, so I can't activate the wireless function. I don't know the password of the ISP either. The only thing I can do is connect via ethernet cable to the wireless router, and this does not require a password. Now I want to use that connection and build a wireless network on top of it. What kind of device do I need? I am not really well informed about network management and find the descriptions "wireless access point", "wireless bridge", and "wireless switch" confusing. I know what an ethernet switch is - what I need is a device which would do the same, but allow the clients to connect to it wirelessly. What kind of device would do that? Any recommendations for specific products?

    Read the article

  • Azure can't ping or telnet VM from client

    - by Raif
    I have a VM on Azure with an instance of SQL Server 2012 running on it. From my work computer and my home computer I can't get SQL Server Management Studio to connect to it. I have looked at ALL the settings recommended in numerous articles; everything is set up correctly:

      - endpoint 1433, private and public
      - SQL Server TCP enabled
      - SQL Server TCP listening on the right port
      - SQL Server using mixed authentication
      - Windows firewall: holes poked, then disabled entirely on both client and VM
      - can log in from the VM using the credentials that I'm trying to use remotely

    Furthermore, I can't ping the DNS name or IP, or telnet to the address, from my local machines. I can however hit IIS from a browser using the IP, which is strange. CS asked me to download MS Network Monitor, which I did, and pinged and telnetted. I have the results saved but can't really make heads or tails of them. CS hasn't responded yet. I can post some info here that would help.

    EDIT: Never one to shrink from a challenge, I deleted my VM and re-did everything. Now it works, although my confidence in Azure is somewhat shaken.

    Read the article

  • Validating signature trust with gpg?

    - by larsks
    We would like to use gpg signatures to verify some aspects of our system configuration management tools. Additionally, we would like to use a "trust" model where individual sysadmin keys are signed with a master signing key, and then our systems trust that master key (and use the "web of trust" to validate signatures by our sysadmins). This gives us a lot of flexibility, such as the ability to easily revoke the trust on a key when someone leaves, but we've run into a problem. While the gpg command will tell you if a key is untrusted, it doesn't appear to return an exit code indicating this fact. For example:

      # gpg -v < foo.asc
      Version: GnuPG v1.4.11 (GNU/Linux)
      gpg: armor header:
      gpg: original file name=''
      this is a test
      gpg: Signature made Fri 22 Jul 2011 11:34:02 AM EDT using RSA key ID ABCD00B0
      gpg: using PGP trust model
      gpg: Good signature from "Testing Key <[email protected]>"
      gpg: WARNING: This key is not certified with a trusted signature!
      gpg:          There is no indication that the signature belongs to the owner.
      Primary key fingerprint: ABCD 1234 0527 9D0C 3C4A CAFE BABE DEAD BEEF 00B0
      gpg: binary signature, digest algorithm SHA1

    The part we care about is this:

      gpg: WARNING: This key is not certified with a trusted signature!
      gpg: There is no indication that the signature belongs to the owner.

    The exit code returned by gpg in this case is 0, despite the trust failure:

      # echo $?
      0

    How do we get gpg to fail in the event that something is signed with an untrusted signature? I've seen some suggestions that the gpgv command will return a proper exit code, but unfortunately gpgv doesn't know how to fetch keys from keyservers. I guess we can parse the status output (using --status-fd) from gpg, but is there a better way?
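    If the --status-fd route turns out to be the answer, a minimal sketch of what that parsing might look like (the status keywords GOODSIG and TRUST_* are part of GnuPG's machine-readable status output; foo.asc is the file from the example above):

      status=$(gpg --status-fd 1 --verify foo.asc 2>/dev/null)
      if echo "$status" | grep -q '^\[GNUPG:\] GOODSIG' &&
         echo "$status" | grep -Eq '^\[GNUPG:\] TRUST_(FULLY|ULTIMATE)'; then
          echo "good signature from a trusted key"
      else
          echo "bad signature or untrusted key" >&2
          exit 1
      fi

    The idea is to require both a valid signature (GOODSIG) and an acceptable ownertrust level (TRUST_FULLY or TRUST_ULTIMATE) before treating the file as verified, instead of relying on gpg's exit code alone.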

    Read the article

  • MS SQL - Problem running SQL Server Agent Job via service account credentials

    - by molecule
    There are 5 steps in this job. The first step is an SSIS package store step; the second to fifth are file system steps. We configured all steps to use Windows Authentication. Under Run As, we specified a user account which was created under Security > Credentials and SQL Server Agent > Proxies > SSIS Package Execution. The job runs without any problems with this user account. We then proceeded to configure the job to use a service account instead. The service account was specified under Security > Credentials and SQL Server Agent > Proxies > SSIS Package Execution. The job fails with this error:

      Executed as user: domain\serviceaccount. ....00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 3:37:57 PM
      Error: 2010-03-09 15:37:57.95 Code: 0xC0016016 Source: Description: Failed to decrypt protected XML node "DTS:Password" with error 0x8009000B "Key not valid for use in specified state.". You may not be authorized to access this information. This error occurs when there is a cryptographic error. Verify that the correct key is available. End Error
      Error: 2010-03-09 15:38:01.19 Code: 0xC0047062 Source: Get CONT_VIEW_LADDER in latest 45days OracleFMDatabase [1] Description: System.Data.OracleClient.OracleException: ORA-01005: null password given; logon denied at System.Data.OracleClient.OracleException.Check(OciErrorHandle errorHandle, Int32 rc) at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boo... The package execution fa... The step failed.

    Based on some research, I then went into MS Visual Studio and opened the project. I changed the package's ProtectionLevel property from "EncryptSensitiveWithUserKey" to "DontSaveSensitive", but I still get the above error. I am new to this, so any help will be very much appreciated. Thanks in advance.

    Read the article

  • Exchange Server 2010: move mailboxes from recovered and mounted edb to user’s mailbox

    - by user36090
    One of our Exchange servers crashed, and I am trying to recover the mailboxes. We had one Exchange 2003 server named "apex" and one Exchange 2010 server named "2008Enterprise". The Exchange 2010 server named "2008Enterprise" crashed. I created a new Exchange 2010 server named "Providence" and ran this command on Providence:

      New-MailboxDatabase -Recovery -Name JBCMail -Server Providence -EdbFilePath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147\Mailbox Database 0579285147.edb" -LogFolderPath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147"

    This command executed and finished without error. I then ran:

      eseutil /p E00

    This command was executed from the directory c:\data\Exchange\Mailbox\Mailbox Database 0579285147. I then mounted JBCMail with the mount command (note: I do not have my full typed command). Inside my Exchange Management Console (EMC) I can view the new mailbox database named JBCMail. The JBCMail database is shown as mounted on the Exchange server named Providence. I can see the crashed Exchange server named 2008Exchange. In the EMC, the crashed server's Copy Status under Server Configuration > Mailbox shows ServiceDown. From here I need to recover three mailboxes. The mailboxes are on the apex server. How do I move the mailboxes from apex to Providence? How do I restore the mailboxes from the JBCMail mounted database to the user's mailbox? I do not fully understand how to use the Restore-Mailbox command, because when I use this command it tries to restore the mailbox to the dead apex server.
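    A hedged sketch of the recovery-database side (names taken from the post; 'Jason Young' is used as the example mailbox). On Exchange 2010 the restore is pointed at the live target mailbox with -Identity, and the source mailbox inside the recovery database is named with -RecoveryMailbox, which is what keeps it from trying to touch the dead server:

      Restore-Mailbox -Identity "Jason Young" -RecoveryDatabase JBCMail -RecoveryMailbox "Jason Young" -TargetFolder "Recovered Items"

    This assumes "Jason Young" already has (or is given) an active mailbox on Providence to restore into; depending on the service pack level, New-MailboxRestoreRequest may be the preferred replacement for Restore-Mailbox.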

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync + cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this. The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance bottleneck is the disk (top shows the cp using a lot of RAM but not a lot of CPU, and mostly in uninterruptible-sleep state). We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of a way to hint ext3 like that.
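    One restructuring that is often suggested for this workload is to drop the separate cp -al pass entirely and let rsync build the hard-link snapshot itself with --link-dest, so the metadata-heavy tree walk happens once instead of twice. A sketch under assumed paths (daily.1 being yesterday's snapshot, daily.0 the one being created):

      rsync -a --delete \
            --link-dest=/backup/volume1/daily.1 \
            server:/export/volume1/ \
            /backup/volume1/daily.0/

    Unchanged files become hard links against daily.1 and only changed files are transferred and written, which tends to move the bottleneck back to the network and the source servers rather than the backup disks.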

    Read the article

  • RAID 5 - DELL 2850 and others

    - by Kiara
    I have installed Ubuntu on a DELL 2850 and I have configured an array of 5 disks (SCSI 73GB 10K) in RAID 5. I wanted to simulate a drive error, so in the middle of something I just took one of the drives out and put it back again after a bit. The drive then shows an orange light and seems to be rebuilding, but it has been taking hours and hours with no results. So I went to the PERC utility (Ctrl+M) and the disk shows "REBLD", but it never gets to an online state. I then went to Objects > Physical drives > Rebuilding > View rebuild process. Here I can see a bar moving from 0%... but if I reboot before it finishes and get back into the PERC utility, it seems to start rebuilding again from 0% - so it is not rebuilding automatically. My concern is: what would happen in a real situation? Do I have to switch the server off and go into the PERC utility to start the rebuild manually? I thought the whole point was to have this done automatically and without the need to stop the server. Or does it perhaps rebuild automatically indeed, but needs enough time without a reboot, because otherwise the rebuild process starts from scratch? It seems to take more than 3 hours for a 73 GB disk! My second question is: can I mix the hard drives? So if I have a RAID of 5x73GB 10K, can I use a different size (146GB) or speed (15K)? Apparently someone said it is OK here: Poweredge 2850: replace disk with larger in RAID?

    Read the article

  • How can I map a Windows group login to the dbo schema in a database?

    - by Christian Hayter
    I have a database for which I want to restrict access to 3 named individuals. I thought I could do the following:

      1. Create a local Windows group on the database server and add the named individuals to it.
      2. Create a Windows login in SQL Server mapped to the local Windows group.
      3. Map the login to the "dbo" schema in the database, so that the users can access all objects without having to qualify them with the schema name.

    When I try to do step 3, I get the following error:

      Msg 15353, Level 16, State 1, Line 1
      An entity of type database cannot be owned by a role, a group, an approle, or by principals mapped to certificates or asymmetric keys.

    I have tried to do this via the IDE, the sp_changedbowner sproc, and the ALTER AUTHORIZATION command, and I get the same error each time. After searching MSDN and Google, I find that this restriction is by design. Great, that's useful. Can anyone tell me: Why does this restriction exist? It seems very arbitrary. More importantly, can I accomplish my requirement some other way? Other info that might be pertinent:

      - The server is fully up to date with service packs and hotfixes.
      - All objects in the database are owned by the "dbo" schema, and it's not feasible to change that.
      - The database is running in compatibility level 80, and it's not feasible to change that to 90 yet.
      - I am free to make any other changes (within reason, depending on what they are).
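    A possible workaround, sketched with placeholder names (MYSERVER\DbUsers for the group, MyDatabase for the database): rather than making the group own the database/dbo, map the group login to a database user and grant it rights through a role. On this version of SQL Server a user based on a Windows group cannot be given a default schema, and in that case unqualified names fall back to the dbo schema anyway, so the schema-qualification goal should still be met:

      USE MyDatabase;
      CREATE USER [MYSERVER\DbUsers] FOR LOGIN [MYSERVER\DbUsers];
      EXEC sp_addrolemember 'db_datareader', 'MYSERVER\DbUsers';
      EXEC sp_addrolemember 'db_datawriter', 'MYSERVER\DbUsers';
      -- or, if the three users genuinely need everything:
      -- EXEC sp_addrolemember 'db_owner', 'MYSERVER\DbUsers';

    Restricting access to only those 3 individuals then becomes a matter of removing other users' access to the database, independent of who owns it.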

    Read the article

  • HP ProCurve & Cisco switches interoperability

    - by Kamil Z
    I have a couple of questions regarding Cisco and HP ProCurve interoperability. Here's a link to a pdf with my network topology. Can someone help me with basic VLAN configuration in such a topology? Below are some details of my configuration:

      # m_management_2
      interface FastEthernet0/43
       switchport access vlan 250
       switchport mode access
       spanning-tree port-priority 32
       spanning-tree cost 100

      # MTA2-swmgmt1
      vlan 1
         name "DEFAULT_VLAN"
         untagged 1-48
         ip address 10.10.249.190 255.255.255.128
         exit

      # MTA2-swtr1
      vlan 1
         name "DEFAULT_VLAN"
         untagged 1-14,16-48
         no ip address
         no untagged 15
         exit
      vlan 100
         name "MTA Mgmt"
         untagged 15
         ip address 10.10.249.188 255.255.255.128
         exit

      # MTA2-swtr2
      vlan 1
         name "DEFAULT_VLAN"
         untagged 1-14,16-48
         no ip address
         no untagged 15
         exit
      vlan 100
         name "MTA Mgmt"
         untagged 15
         ip address 10.10.249.189 255.255.255.128
         exit

    I don't post the MTA2-bcsw[12] configuration, because I haven't been successful with that one yet. Every time I configure VLANs on MTA2-bcsw[12], the Fa0/24 interface on m_management_2 goes down because it receives tagged BPDUs on an access port (there are no VLANs configured on MTA2-swmgmt1 because only VLAN 250 is allowed on that switch - is that correct?). Can someone provide me with a basic configuration for this topology? The second thing I want to ask about is the concept of the connection from MTA2-swmgmt1 to the MTA2-swtr[12] HP switches for the sake of management. How do I configure such ports on the HP switches (managed switch and manager switch)? Is my current configuration correct?
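    For the Cisco-to-ProCurve links that need to carry more than one VLAN, the usual pattern is a Cisco trunk port matched by "tagged" membership on the ProCurve side (ProCurve has no separate trunk mode; a port is simply tagged or untagged per VLAN). A sketch with assumed port and VLAN numbers - adapt to the real topology:

      ! Cisco side (e.g. MTA2-bcsw1, uplink Fa0/24)
      interface FastEthernet0/24
       switchport trunk encapsulation dot1q
       switchport mode trunk
       switchport trunk allowed vlan 100,250

      # ProCurve side (e.g. MTA2-swmgmt1, uplink port 24)
      vlan 250
         tagged 24
         exit
      vlan 100
         tagged 24
         exit

    The "tagged BPDUs on access port" errors typically go away once both ends of a link agree on whether it is an access link (one untagged VLAN) or a tagged/trunk link, and the native/untagged VLAN matches on both sides.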

    Read the article

  • Clone virtual machine with Server 2008 R2 and Hyper-V?

    - by bwerks
    Hi all, I've recently started working with Hyper-V, and so far it's quite nice. However, I've been running into problems with what seems like it should be the most basic of workflows. I've set up a baseline Server 2008 R2 configuration and exported it with the intention of using the export for cloning. I entered "C:\Exports\" as the export folder. However, I run into problems when I try to import the image. From the Hyper-V manager, I select "Import Virtual Machine" and in the resulting window I enter "C:\Exports\BuildServer\" as the folder, set the radio button to "Copy the virtual machine (create a new unique ID)" and check the checkbox for "Duplicate all files so the same virtual machine can be imported again." Doing so results in the following error:

      Import failed. Import task failed to copy file from 'H:\Exports\BuildServer\Virtual Hard Disks\BuildServer.vhd' to 'C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd': The file exists. (0x80070050)

    Have I somehow messed something up in configuration? Or is this a known issue? I've read it should be possible to clone VMs by copying them in the filesystem, but I'd prefer to keep things in the management UI if possible.

    Read the article

  • Outbound HTTP performance tuning recommendations

    - by Richard Gadsden
    I'll detail my exact setup below, but general recommendations for a better web-browsing experience will be useful. A nice checklist of things to try would be great! I have 600 users on a single site with an 8MB leased line. I get a lot of moans about the performance of "the internet" (i.e. web browsing). What recommendations does the community have for speeding things up without just throwing more bandwidth at it? I expect I will end up buying some more, but good management tips are always valuable. My setup is this: a Cisco PIX (515E) firewall on the edge of the network. It's just doing some basic NAT and opening up a handful of ports to various bastion hosts (aka DMZ servers). The DMZ is just a switch that the servers are plugged into. An ISA 2006 Enterprise array (two servers) connects the DMZ to the internal LAN, with WebSense Web Security filtering HTTP traffic so users can't look at porn or waste bandwidth on YouTube during working hours. I've done a few things already. I've just switched my internal DNS over to use root hints, which halved DNS query latency from 500ms to 250ms - well worth doing. I'm trying to cache more aggressively, but so much more of the internet is AJAXy and doesn't cache very well compared to five years ago; plus, the 70GB of cache which felt like a lot a few years ago really isn't any more. I'm getting about 45% cache hits by number of requests, but only about 22% by size, i.e. larger objects are less likely to be cached. Latency seems to be part of the problem. Is that attributable to the bandwidth problem, or are there things I can look at to try to reduce latency even on heavily-loaded bandwidth?

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

      mgorven@moab:~% sudo lvdisplay /dev/moab/backup
        --- Logical volume ---
        LV Name                /dev/moab/backup
        VG Name                moab
        LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
        LV Write Access        read/write
        LV Status              available
        # open                 1
        LV Size                500.00 GiB
        Current LE             128000
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     2048
        Block device           252:3

      mgorven@moab:~% sudo cryptsetup status backup
        /dev/mapper/backup is active and is in use.
          type:    LUKS1
          cipher:  aes-cbc-essiv:sha256
          keysize: 256 bits
          device:  /dev/mapper/moab-backup
          offset:  3072 sectors
          size:    1048572928 sectors
          mode:    read/write

      mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
      tune2fs 1.42 (29-Nov-2011)
      Filesystem volume name:   backup
      Last mounted on:          /srv/backup
      Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
      Filesystem flags:         signed_directory_hash
      Default mount options:    (none)
      Filesystem state:         clean with errors
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              32768000
      Block count:              131071616
      Reserved block count:     0
      Free blocks:              112894078
      Free inodes:              32044830
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Reserved GDT blocks:      992
      Blocks per group:         32768
      Fragments per group:      32768
      Inodes per group:         8192
      Inode blocks per group:   512
      RAID stride:              128
      RAID stripe width:        128
      Flex block group size:    16
      Filesystem created:       Sun Mar 11 19:24:53 2012
      Last mount time:          Sat May 19 13:29:27 2012
      Last write time:          Fri Jun  1 11:07:22 2012
      Mount count:              0
      Maximum mount count:      100
      Last checked:             Fri Jun  1 11:03:50 2012
      Check interval:           31104000 (12 months)
      Next check after:         Mon May 27 11:03:50 2013
      Lifetime writes:          118 GB
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:               256
      Required extra isize:     28
      Desired extra isize:      28
      Journal inode:            8
      Default directory hash:   half_md4
      Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
      Journal backup:           inode blocks
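    A sketch of the order of operations usually recommended for shrinking this kind of stack (ext4 inside LUKS1 inside an LV): shrink the filesystem first, then the LV, then let the open dm-crypt mapping follow the new device size. The numbers below assume a 100GiB target with a bit of slack; verify against backups before trying it, since miscalculating the sizes destroys data:

      umount /srv/backup
      e2fsck -f /dev/mapper/backup
      # 1. shrink the filesystem to comfortably less than the final size
      resize2fs /dev/mapper/backup 95G
      # 2. shrink the logical volume to the target size
      lvreduce -L 100G /dev/moab/backup
      # 3. shrink the open dm-crypt mapping to match the new device size
      cryptsetup resize backup
      # 4. grow the filesystem back out to fill the mapping exactly
      resize2fs /dev/mapper/backup
      e2fsck -f /dev/mapper/backup
      mount /dev/mapper/backup /srv/backup

    The LUKS header itself lives at the start of the LV (the 3072-sector offset in the cryptsetup output), so shrinking the LV from the end does not touch it.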

    Read the article

  • Solaris 10 invalid ARP requests from 0.0.0.0?

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

      xxxx:xxxx:xxxx/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]

    I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in netstat -a for SCTP:

      SCTP: Local Address   Remote Address   Swind  Send-Q  Rwind   Recv-Q  StrsI/O  State
      ------------------------------- ------------------------------- ------ ------ ------ ------ ------- -----------
      0.0.0.0               0.0.0.0          0      0       102400  0       32/32    CLOSED

    But I'm not really sure what that means, and it doesn't seem like I can disable SCTP. Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?
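    To see exactly what the datacenter is complaining about, it may help to capture the ARP frames on the box itself and inspect the sender fields. A small sketch (the interface name bge0 is an assumption; check with ifconfig -a):

      # verbose decode of ARP traffic on the suspect interface
      snoop -d bge0 -v arp

    ARP requests with a sender IP of 0.0.0.0 are typically duplicate-address-detection probes sent while an interface is (re)configuring or has no address yet, so a capture showing when they occur and from which MAC usually narrows the cause down to a specific interface or service.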

    Read the article

  • Problems installing Trac using apt-get on Ubuntu Jaunty

    - by Ben Waine
    Hi, I'm having some issues getting apt to install Trac correctly on my Ubuntu Jaunty box. Using the command 'apt-get install trac' I get the following output:

      root@myserver:~# apt-get install trac
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have
      requested an impossible situation or if you are using the unstable
      distribution that some required packages have not yet been created
      or been moved out of Incoming.
      Since you only requested a single operation it is extremely likely that
      the package is simply not installable and a bug report against that
      package should be filed.
      The following information may help to resolve the situation:

      The following packages have unmet dependencies:
        trac: Depends: python-setuptools (> 0.5) but it is not installable
              Depends: python-pysqlite2 (>= 2.3.2) but it is not going to be installed
              Depends: python-subversion but it is not installable
              Depends: libjs-jquery but it is not installable
              Recommends: python-pygments (= 0.6) but it is not installable or
                          enscript but it is not installable
              Recommends: python-tz but it is not installable
      E: Broken packages

    I have successfully used the command on my Karmic Koala desktop machine and am able to create new projects etc. I thought I might be able to solve the problem by installing all Python-related extensions; this produced a very similar output. I have main, universe and multiverse repositories enabled. It's a remote machine and I have no access to the GUI. Hope someone can help; googling failed to solve the issue or find a solution! Thanks, Ben
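    Since python-setuptools and libjs-jquery are normally available from the standard Ubuntu repositories yet are reported as "not installable" rather than merely held back, it looks like apt cannot see those repositories at all. A short sketch of checks (paths are the stock ones); note that Jaunty is an old release whose archives may have moved to old-releases.ubuntu.com, which produces exactly this kind of failure:

      # are universe/multiverse actually enabled for jaunty, and where do they point?
      grep -r "jaunty" /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null

      # does apt know any candidate version for the missing dependency?
      apt-cache policy python-setuptools

      # refresh the package lists and retry
      apt-get update
      apt-get install trac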

    Read the article

  • Problem with creation of scheduled task from IIS6 on SR2003

    - by Morten Louw Nielsen
    Hi, I have also posted this question on Stack Overflow, but will also try here, since it might be more system-related. I am writing a web application using .NET. The web app creates scheduled tasks using the System.Diagnostics.Process class, calling SCHTASKS.EXE with parameters. I have changed the identity on the app pool to a specific domain user. The domain user is a local administrator on all four web servers. From webserver01 I am creating tasks on webserver01 to webserver04. It works perfectly for 3-5 days, but then it breaks. It gives me the following error message in a message box: "The application failed to initialize properly (0xc0000142). Click on OK to terminate the application." If I have the system in the broken state and I change the identity of the app pool to Domain Administrator, it works. As I change it back to my domain user, it breaks again. If I reboot the server, it works again for the same number of days, but will break again. It seems like a permission-related problem. I just don't understand why it works sometimes and sometimes doesn't. I hope someone out there has seen this problem! Looking forward to hearing from you! Kind regards, Morten, Denmark

    Read the article
