Search Results

Search found 29627 results on 1186 pages for 'auto update'.


  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: I need a permanent link to a printer that is normally only accessible through Terminal Services printer redirection, so that Sage Line 50 layouts can see that printer persistently, even after users have disconnected from and reconnected to their Terminal Services session. Although the printer is accessible each time a user connects to the Sage server via Terminal Services, it is given a different session number each time, so the Sage layout sees it as a different printer.

    History behind the question:

    - Users connect via Terminal Services to a Sage server on a different site.
    - That server runs Sage Line 50 v15.
    - Users want to print invoices (Sage layouts) locally.
    - The Sage server cannot see the users' local printers; to get around this, they use the printer redirection feature of Terminal Services.

    The individual reports can be edited to point to a specific printer by default. This means the user just has to select an invoice and click print, then select the layout/report wanted, and it automatically prints that invoice to the specified default printer. The problem occurs because the layouts are edited to point to the user's local printer "Ricoh 1018d (session#)"; note the "(session#)", as this is the user's local printer being redirected through the Terminal Services session. Users are able to print using the Sage layouts once the default printer is set up within the layout and saved, but as soon as a user disconnects from the Terminal Services session, then reconnects in the morning and goes to print, the layout has lost its connection to that printer. I understand why it fails: the printer exists on a per-session basis, so the layout cannot hold on to a connection from a previous session. Thanks in advance for any assistance.

    Read the article

  • Windows Server 2008 network speed slow, Xen 3.4.3 HVM ISO

    - by Elliot.Bradshaw
    I've set up a VM running Windows Server 2008 on a host node running Xen 3.4.3-5 and the following kernel:

        2.6.18-308.1.1.el5xen #1 SMP Wed Mar 7 05:38:01 EST 2012 i686 i686 i386 GNU/Linux

    The network speed on the VM is very slow: using online speed tests I can only get it up to 8-9 Mbps. The line is 100 Mbps burstable, and the host node has no problem achieving those speeds. If I set up a VM running CentOS, it too has no problem achieving those speeds. I've done some pretty exhaustive troubleshooting, but nothing has helped:

    - New VM installations of Win2k8 have the same network problem.
    - Upgrading to the most recent kernel-xen did not help (2.6.18-308.1.1.el5xen).
    - Upgrading from Xen 3.4.0 to Xen 3.4.3-5 did not help.
    - Disabling the Windows firewall, etc. did not help.
    - Changing the network card device config from auto-negotiation to manual 100 Mbps full duplex did not help.
    - Changing the network receive buffer packet size did not help (tried all combos from 64k to 8k).

    At this point I'm pretty much out of ideas; any help would be appreciated!
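    One avenue I'm aware of but haven't confirmed: Windows HVM guests on Xen often suffer from checksum-offload problems on the emulated NIC's dom0 backend interface. A minimal sketch of that check, assuming the guest's backend interface is named vif1.0 (the actual name depends on the domain ID):

        # On the host node (dom0): find the guest's backend interface
        ifconfig | grep vif
        # Disable TX checksum offload on it (vif1.0 is an assumed name)
        ethtool -K vif1.0 tx off
        # Confirm the new offload settings
        ethtool -k vif1.0

    Installing paravirtual (GPLPV) network drivers inside the Windows guest, rather than relying on the emulated NIC, is the other commonly suggested fix.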

    Read the article

  • Inbound SIP calls through Cisco 881 NAT hang up after a few seconds

    - by MasterRoot24
    I've recently moved to a Cisco 881 router for my WAN link. I was previously using a Cisco Linksys WAG320N as my modem/router/WiFi AP/NAT firewall. The WAG320N is now running in bridged mode, so it's simply acting as a modem with one of its LAN ports connected to FE4 WAN on my Cisco 881. The Cisco 881 gets a DHCP-provided IP from my ISP. My LAN is part of default Vlan 1 (192.168.1.0/24). General internet connectivity is working great, and I've managed to set up static NAT rules for my HTTP/HTTPS/SMTP/etc. services which are running on my LAN. It may be worth mentioning that I've opted to use NVI NAT (ip nat enable, as opposed to the traditional ip nat outside/ip nat inside). My reason for this is that NVI allows NAT loopback from my LAN to the WAN IP and back in to the necessary server on the LAN.

    I run an Asterisk 1.8 PBX on my LAN, which connects to a SIP provider on the internet. Both inbound and outbound calls through the old setup (WAG320N providing routing/NAT) worked fine. However, since moving to the Cisco 881, inbound calls drop after around 10 seconds, whereas outbound calls work fine. The following message is logged on my Asterisk PBX:

        [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3641 retrans_pkt: Retransmission timeout reached on transmission [email protected] for seqno 1 (Critical Response) -- See https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions Packet timed out after 6528ms with no response
        [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3670 retrans_pkt: Hanging up call [email protected] - no reply to our critical packet (see https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions).

    (I know that this is quite a common issue; I've spent the best part of 2 days solid on this, trawling Google.) I've done as I am told and checked https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions. Referring to the section "Other SIP requests" in the page linked above, I believe the hangup is caused by the ACK from my SIP provider not being passed back through NAT to Asterisk on my PBX. I tried to ascertain this by dumping the packets on my WAN interface on the 881, and managed to obtain a PCAP dump of packets in/out of that interface. Here's an example of an ACK being received by the router from my provider:

        689 21.219999 193.x.x.x 188.x.x.x SIP 502 Request: ACK sip:[email protected] |

    However, a SIP trace on the Asterisk server shows that no ACKs are received in response to the 200 OK from my PBX: http://pastebin.com/wwHpLPPz

    In the past, I have been strongly advised to disable any sort of SIP ALG on routers and/or firewalls, and the many posts regarding this issue on the internet seem to support this. On Cisco IOS, I believe the config command to disable the SIP ALG is no ip nat service sip udp port 5060; however, this doesn't appear to help the situation. To confirm that the setting is in place:

        Router1#show running-config | include sip
        no ip nat service sip udp port 5060

    Another interesting twist: for a short period of time, I tried another provider. Luckily, my trial account with them is still available, so I reverted my Asterisk config back to the revision before I integrated with my current provider. I then dialled in to the DDI associated with the trial trunk, the call didn't get hung up, and I didn't get the error above! To me, this points at the provider; however I know that, like all providers, they will say "There's no issues with our SIP proxies - it's your firewall."
    I'm tempted to agree with this, as this issue was not apparent with the old WAG320N router when it was doing the NATing. I'm sure you'll want to see my running-config too:

        !
        ! Last configuration change at 15:55:07 UTC Sun Dec 9 2012 by xxx
        version 15.2
        no service pad
        service tcp-keepalives-in
        service tcp-keepalives-out
        service timestamps debug datetime msec localtime show-timezone
        service timestamps log datetime msec localtime show-timezone
        no service password-encryption
        service sequence-numbers
        !
        hostname Router1
        !
        boot-start-marker
        boot-end-marker
        !
        security authentication failure rate 10 log
        security passwords min-length 6
        logging buffered 4096
        logging console critical
        enable secret 4 xxx
        !
        aaa new-model
        !
        aaa authentication login local_auth local
        !
        aaa session-id common
        !
        memory-size iomem 10
        !
        crypto pki trustpoint TP-self-signed-xxx
         enrollment selfsigned
         subject-name cn=IOS-Self-Signed-Certificate-xxx
         revocation-check none
         rsakeypair TP-self-signed-xxx
        !
        crypto pki certificate chain TP-self-signed-xxx
         certificate self-signed 01
          quit
        no ip source-route
        no ip gratuitous-arps
        ip auth-proxy max-login-attempts 5
        ip admission max-login-attempts 5
        !
        no ip bootp server
        ip domain name dmz.merlin.local
        ip domain list dmz.merlin.local
        ip domain list merlin.local
        ip name-server x.x.x.x
        ip inspect audit-trail
        ip inspect udp idle-time 1800
        ip inspect dns-timeout 7
        ip inspect tcp idle-time 14400
        ip inspect name autosec_inspect ftp timeout 3600
        ip inspect name autosec_inspect http timeout 3600
        ip inspect name autosec_inspect rcmd timeout 3600
        ip inspect name autosec_inspect realaudio timeout 3600
        ip inspect name autosec_inspect smtp timeout 3600
        ip inspect name autosec_inspect tftp timeout 30
        ip inspect name autosec_inspect udp timeout 15
        ip inspect name autosec_inspect tcp timeout 3600
        ip cef
        login block-for 3 attempts 3 within 3
        no ipv6 cef
        !
        multilink bundle-name authenticated
        license udi pid CISCO881-SEC-K9 sn
        !
        username xxx privilege 15 secret 4 xxx
        username xxx secret 4 xxx
        !
        ip ssh time-out 60
        !
        interface FastEthernet0
         no ip address
        !
        interface FastEthernet1
         no ip address
        !
        interface FastEthernet2
         no ip address
        !
        interface FastEthernet3
         switchport access vlan 2
         no ip address
        !
        interface FastEthernet4
         ip address dhcp
         no ip redirects
         no ip unreachables
         no ip proxy-arp
         ip nat enable
         duplex auto
         speed auto
        !
        interface Vlan1
         ip address 192.168.1.1 255.255.255.0
         no ip redirects
         no ip unreachables
         no ip proxy-arp
         ip nat enable
        !
        interface Vlan2
         ip address 192.168.0.2 255.255.255.0
        !
        ip forward-protocol nd
        ip http server
        ip http access-class 1
        ip http authentication local
        ip http secure-server
        ip http timeout-policy idle 60 life 86400 requests 10000
        !
        no ip nat service sip udp port 5060
        ip nat source list 1 interface FastEthernet4 overload
        ip nat source static tcp x.x.x.x 80 interface FastEthernet4 80
        ip nat source static tcp x.x.x.x 443 interface FastEthernet4 443
        ip nat source static tcp x.x.x.x 25 interface FastEthernet4 25
        ip nat source static tcp x.x.x.x 587 interface FastEthernet4 587
        ip nat source static tcp x.x.x.x 143 interface FastEthernet4 143
        ip nat source static tcp x.x.x.x 993 interface FastEthernet4 993
        ip nat source static tcp x.x.x.x 1723 interface FastEthernet4 1723
        !
        logging trap debugging
        logging facility local2
        access-list 1 permit 192.168.1.0 0.0.0.255
        access-list 1 permit 192.168.0.0 0.0.0.255
        no cdp run
        !
        control-plane
        !
        banner motd Authorized Access only
        !
        line con 0
         login authentication local_auth
         length 0
         transport output all
        line aux 0
         exec-timeout 15 0
         login authentication local_auth
         transport output all
        line vty 0 1
         access-class 1 in
         logging synchronous
         login authentication local_auth
         length 0
         transport preferred none
         transport input telnet
         transport output all
        line vty 2 4
         access-class 1 in
         login authentication local_auth
         length 0
         transport input ssh
         transport output all
        !
        !
        end

    ...and, if it's of any use, here's my Asterisk SIP config:

        [general]
        context=default        ; Default context for calls
        allowoverlap=no        ; Disable overlap dialing support. (Default is yes)
        udpbindaddr=0.0.0.0    ; IP address to bind UDP listen socket to (0.0.0.0 binds to all)
                               ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060)
        tcpenable=no           ; Enable server for incoming TCP connections (default is no)
        tcpbindaddr=0.0.0.0    ; IP address for TCP server to bind to (0.0.0.0 binds to all interfaces)
                               ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060)
        srvlookup=yes          ; Enable DNS SRV lookups on outbound calls
                               ; Note: Asterisk only uses the first host in SRV records
                               ; Disabling DNS SRV lookups disables the ability to place SIP calls
                               ; based on domain names to some other SIP users on the Internet
                               ; Specifying a port in a SIP peer definition or when dialing
                               ; outbound calls will suppress SRV lookups for that peer or call.
        directmedia=no         ; Don't allow direct RTP media between extensions (doesn't work through NAT)
        externhost=<MY DYNDNS HOSTNAME> ; Our external hostname to resolve to IP and be used in NAT'ed packets
        localnet=192.168.1.0/24 ; Define our local network so we know which packets need NAT'ing
        qualify=yes            ; Qualify peers by default
        dtmfmode=rfc2833       ; Set the default DTMF mode
        disallow=all           ; Disallow all codecs by default
        allow=ulaw             ; Allow G.711 u-law
        allow=alaw             ; Allow G.711 a-law

        ; ----------------------
        ; SIP Trunk Registration
        ; ----------------------
        ; Orbtalk
        register => <MY SIP PROVIDER USER NAME>:[email protected]/<MY DDI> ; Main Orbtalk number

        ; ----------
        ; Trunks
        ; ----------
        [orbtalk]              ; Main Orbtalk trunk
        type=peer
        insecure=invite
        host=sipgw3.orbtalk.co.uk
        nat=yes
        username=<MY SIP PROVIDER USER NAME>
        defaultuser=<MY SIP PROVIDER USER NAME>
        fromuser=<MY SIP PROVIDER USER NAME>
        secret=xxx
        context=inbound

    I really don't know where to go with this. If anyone can help me find out why these calls are being dropped, I'd be grateful if you could chime in! Please let me know if any further info is required.
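    One thing I'm considering trying, in case the NVI UDP translation for port 5060 is being torn down or remapped mid-call: pin SIP to the PBX with a static UDP entry mirroring my existing static TCP rules, and lengthen the UDP translation timeout. This is only a sketch; 192.168.1.10 is a stand-in for my PBX's LAN address:

        ! Static UDP 5060 mapping to the Asterisk box (192.168.1.10 is an assumed address)
        ip nat source static udp 192.168.1.10 5060 interface FastEthernet4 5060
        ! Keep idle UDP translations alive longer than the default
        ip nat translation udp-timeout 300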

    Read the article

  • Tomcat dying silently on a regular basis

    - by Hendrik
    My Tomcat (6.0.32, Java Sun 1.6.0_22-b04, on Ubuntu 10.04) keeps crashing multiple times daily without any specific output in catalina.out. This usually happens under high load (see the top output below).

    Update: the pid file is properly removed when this happens.

    Update 2: no CATALINA_OPTS set; _JAVA_OPTIONS are:

        export _JAVA_OPTIONS="-Xms128m -Xmx1024m -XX:MaxPermSize=512m \
          -XX:MinHeapFreeRatio=20 \
          -XX:MaxHeapFreeRatio=40 \
          -XX:NewSize=10m \
          -XX:MaxNewSize=10m \
          -XX:SurvivorRatio=6 \
          -XX:TargetSurvivorRatio=80 \
          -XX:+CMSClassUnloadingEnabled \
          -Djava.awt.headless=true \
          -Dcom.sun.management.jmxremote \
          -Dcom.sun.management.jmxremote.port=37331 \
          -Dcom.sun.management.jmxremote.ssl=false \
          -Dcom.sun.management.jmxremote.authenticate=true \
          -Djava.rmi.server.hostname=(myhostname) \
          -Dcom.sun.management.jmxremote.password.file=/etc/java-6-sun/management/jmxremote.password \
          -Dcom.sun.management.jmxremote.access.file=/etc/java-6-sun/management/jmxremote.access"

    Top:

        top - 12:40:03 up 9 days, 12:15, 3 users, load average: 30.00, 22.39, 21.91
        Tasks: 89 total, 4 running, 85 sleeping, 0 stopped, 0 zombie
        Cpu(s): 53.2%us, 9.7%sy, 0.0%ni, 34.7%id, 1.5%wa, 0.0%hi, 0.8%si, 0.0%st
        Mem: 4194304k total, 3311304k used, 883000k free, 0k buffers
        Swap: 4194304k total, 0k used, 4194304k free, 0k cached

          PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        25850 tomcat6  20  0 1981m 1.2g  11m S  161 29.6  11:41.56 java
        12632 mysql    20  0  393m  97m 4452 S  141  2.4 1690:05   mysqld
        14932 nobody   20  0  253m  44m 9152 R   56  1.1   3:26.57 php-cgi
         7011 nobody   20  0  241m  31m 9124 S   30  0.8   1:35.96 php-cgi
        10093 nobody   20  0  228m  18m 8520 S   25  0.5   2:29.97 php-cgi
        27071 nobody   20  0  237m  28m 8640 S   11  0.7   3:13.72 php-cgi
         3306 nobody   20  0  227m  16m 6736 R    7  0.4   2:29.83 php-cgi
         7756 nobody   20  0  261m  58m  15m R    5  1.4   2:22.33 php-cgi
         7129 www-data 20  0 3646m 7228 1896 S    2  0.2   0:36.65 nginx
         2657 nobody   20  0  228m  18m 8540 S    1  0.5   1:59.51 php-cgi
         7131 www-data 20  0 3645m 6464 1960 S    1  0.2   0:34.13 nginx
         7140 www-data 20  0 3652m  12m 1896 S    1  0.3   0:35.80 nginx
          619 nobody   20  0  231m  29m  15m S    0  0.7   2:33.46 php-cgi
        16552 nobody   20  0  250m  41m 8784 S    0  1.0   2:48.12 php-cgi
        17134 nobody   20  0  239m  37m  16m S    0  0.9   2:32.86 php-cgi
        21004 nobody   20  0  243m  34m 8700 S    0  0.8   1:19.85 php-cgi
        26105 root     20  0 19220 1392 1060 R    0  0.0   0:00.82 top
        32430 nobody   20  0  256m  47m 9196 S    0  1.2   2:19.01 php-cgi
          314 nobody   20  0  256m  47m 8804 S    0  1.1   1:46.00 php-cgi
         2111 nobody   20  0  253m  44m 9196 S    0  1.1   3:01.14 php-cgi
         2142 root     20  0 26452 2564  868 S    0  0.1   0:00.56 screen
         2144 root     20  0 19484 2012 1368 S    0  0.0   0:00.00 bash
         2333 nobody   20  0  249m  41m 9160 S    0  1.0   1:10.33 php-cgi
         2552 root     20  0 19484 2260 1620 S    0  0.1   0:00.01 bash
         2587 nobody   20  0  258m  49m 9192 S    0  1.2   2:04.50 php-cgi
         2684 root     20  0  4092  652  540 S    0  0.0   0:00.00 xvfb-run
         2696 root     20  0 60720  13m 2352 S    0  0.3   0:09.12 Xvfb
         2759 root     20  0  617m  12m 4676 S    0  0.3   0:00.66 node
         3514 nobody   20  0  270m  61m 9216 S    0  1.5   3:13.69 php-cgi
         5270 root     20  0 25164 1324 1036 S    0  0.0   0:00.01 screen
         5402 nobody   20  0  227m  16m 8032 S    0  0.4   1:33.61 php-cgi
         5765 root     20  0 81180 3820 3028 S    0  0.1   0:00.31 sshd
         5798 nobody   20  0  242m  32m 9124 S    0  0.8   1:52.08 php-cgi
         5856 root     20  0 19496 2292 1636 S    0  0.1   0:00.03 bash
         6442 root     20  0 62332  20m 1960 S    0  0.5   0:30.58 mrtg
         7082 root     20  0 88992 1916 1636 S    0  0.0   0:00.00 PassengerWatchd

    I can't find any concrete reason for it: no exceptions or shutdown messages in catalina.out (and no other logs in Tomcat's log dir). I can start the service up and it will run for a few days, or just minutes, before dying again.

    Is there somewhere else I could look for output? Could the kernel be killing the process due to a lack of resources and thereby bring the VM down?
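    Since there is no JVM-side trace at all, my working theory is the kernel OOM killer; a quick, low-risk check (log paths assume Ubuntu, and the hs_err location assumes the JVM's working directory):

        # Did the kernel OOM killer take the JVM down?
        dmesg | grep -i -E "killed process|out of memory"
        grep -i -E "killed process|out of memory" /var/log/kern.log /var/log/syslog
        # A native JVM crash would instead leave hs_err_pid*.log files behind
        ls /var/lib/tomcat6 /tmp 2>/dev/null | grep hs_err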

    Read the article

  • Upgrading PHP from 5.1 to 5.2 on CentOS 5.4

    - by andufo
    I'm trying to upgrade PHP 5.1 to 5.2 on CentOS 5.4. I use:

        yum upgrade php

    The result is this (check out the last part):

        [root@mail httpd]# yum update php
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.raystedman.net
         * base: mirrors.serveraxis.net
         * centosplus: mirrors.tummy.com
         * contrib: mirror.raystedman.net
         * extras: mirror.raystedman.net
         * updates: mirrors.netdna.com
        Setting up Update Process
        Resolving Dependencies
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6-27.el5 for package: php-devel
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Processing Dependency: php-cli = 5.2.10-1.el5.centos for package: php
        --> Processing Dependency: php-common = 5.2.10-1.el5.centos for package: php
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php-cli.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-xml
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-pdo
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-gd
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-ldap
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-mbstring
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-mysql
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-imap
        ---> Package php-common.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-devel.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php-gd.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-imap.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-ldap.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-mbstring.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-mysql.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-pdo.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-xml.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Finished Dependency Resolution
        php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 from installed has depsolving problems
          --> Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed)
        Error: Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.
        [root@mail httpd]#

    What are the consequences of using --skip-broken? Any recommendations?
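    If I read the output right, the blocker is the third-party php-eaccelerator package (the .rf suffix suggests RPMForge), which requires exactly php = 5.1.6. A sketch of the usual way around this; the 5.2-compatible replacement depends on what your repos actually ship:

        # Remove the package that pins php = 5.1.6
        yum remove php-eaccelerator
        # Retry the update
        yum update php
        # Then reinstall an opcode cache built against PHP 5.2
        # (package name varies by repo, so treat that as an assumption)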

    Read the article

  • How to handle server failure in an n-tier architecture?

    - by andy
    Imagine I have an n-tier architecture in an auto-scaled cloud environment with, say:

    - a load balancer in a failover pair
    - a reverse proxy tier
    - a web app tier
    - a db tier

    Each tier needs to connect to the instances in the tier below. What are the standard ways of connecting tiers to make them resilient to failure of nodes in each tier? I.e. how does each tier get the IP addresses of each node in the tier below?

    For example, if all reverse proxies should route traffic to all web app nodes, how could they be set up so that they don't send traffic to dead web app nodes, and so that when new web app nodes are brought online they can send traffic to them? I could run an agent that updates all the configs on all the nodes, but it seems inefficient.

    I could put an LB pair between each tier, so the tier above only needs to connect to the load balancers, but how do I handle the problem of the LBs dying? This just seems to shunt the problem from "tier A needing to know the IPs of all nodes in tier B" to "all nodes in tier A needing to know the IPs of all LBs between tiers A and B".

    For some applications, they can implement retry logic if they contact a node in the tier below that doesn't respond, but is there any way that some middleware could direct traffic to only live nodes in the following tier?

    If I were hosting on AWS I could use an ELB between tiers, but I want to know how I could achieve the same functionality myself. I've read (briefly) about heartbeat and keepalived; are these relevant here? What are the virtual IPs they talk about, and how are they managed? Are there still single points of failure when using them?
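    From what I understand so far, keepalived addresses the "LBs dying" part: the LB pair shares a floating virtual IP (VIP) via VRRP, the tier above connects only to the VIP, and if the master LB dies the backup claims the VIP within a second or two, so the VIP itself is not a single point of failure. A minimal sketch of the master's side, with the interface name, secret and addresses all assumptions:

        # /etc/keepalived/keepalived.conf on the MASTER of an LB pair (sketch)
        vrrp_instance VI_1 {
            state MASTER            # the peer LB uses "state BACKUP"
            interface eth0          # assumed NIC name
            virtual_router_id 51
            priority 101            # peer uses a lower value, e.g. 100
            advert_int 1            # VRRP advertisement (heartbeat) interval, seconds
            authentication {
                auth_type PASS
                auth_pass s3cret    # shared secret (placeholder)
            }
            virtual_ipaddress {
                10.0.0.100/24       # the floating VIP the tier above targets
            }
        }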

    Read the article

  • can't get php mail() working on Ubuntu desktop version with sendmail and postfix

    - by user36428
    I'm running an Ubuntu 9.10 LAMP stack and trying to do a simple email test with PHP, and no emails get sent.

        mail("[email protected]", "eric-linux test", "test") or die("can't send mail");

    I get no errors from PHP when running that script. In my php.ini file is:

        sendmail_path = /usr/lib/sendmail -t -i

    Checking for sendmail processes:

        $ sudo ps aux | grep sendmail
        eric 2486 0.0 0.4 8368 2344 pts/0 T 14:52 0:00 sendmail -s "Hello world" [email protected]
        eric 8747 0.0 0.3 5692 1616 pts/2 T 16:18 0:00 sendmail
        eric 8749 0.0 0.3 5692 1636 pts/2 T 16:18 0:00 sendmail start
        eric 9190 0.0 0.3 5692 1636 pts/2 T 19:12 0:00 sendmail start
        eric 9192 0.0 0.3 5692 1616 pts/2 T 19:12 0:00 sendmail
        eric 9425 0.0 0.3 5692 1620 pts/1 T 19:37 0:00 sendmail
        eric 9427 0.0 0.3 6584 1636 pts/1 T 19:37 0:00 sendmail restart
        eric 9429 0.0 0.3 5692 1636 pts/1 T 19:38 0:00 /usr/lib/sendmail restart
        eric 9432 0.0 0.1 3040 804 pts/1 R+ 19:38 0:00 grep --color=auto sendmail

    When I run sendmail start, it just hangs there doing nothing. I also installed postfix to see if it would help, but it didn't. I tried port 25:

        eric@eric-linux:~$ telnet localhost 25
        Trying ::1...
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 eric-linux ESMTP Postfix (Ubuntu)

    Thanks
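    A couple of things I'd check, hedged since I can't see the box: on Ubuntu the sendmail compatibility wrapper normally lives at /usr/sbin/sendmail (postfix installs one there), so php.ini may be pointing at the wrong path; and all those "T" state entries above are stopped foreground processes, not a running daemon. A quick test outside PHP:

        # Where does the sendmail wrapper actually live?
        ls -l /usr/sbin/sendmail /usr/lib/sendmail
        # If it's /usr/sbin, set in php.ini:  sendmail_path = /usr/sbin/sendmail -t -i
        # Then test delivery directly and watch the queue and log
        printf "To: [email protected]\nSubject: test\n\ntest body\n" | /usr/sbin/sendmail -t
        mailq
        tail -n 50 /var/log/mail.log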

    Read the article

  • Does a 3ware "ECC-ERROR" matter on a JBOD when I have ZFS?

    - by Stefan Lasiewski
    I have a FreeBSD 8.x machine running ZFS with a 3ware 9690SA controller. The 3ware controller shows an ECC-ERROR on one of the disks:

        //host> /c0 show
        VPort Status     Unit Size      Type Phy Encl-Slot Model
        ------------------------------------------------------------------------------
        p0    OK         u0   279.39 GB SAS  0   -         SEAGATE ST3300657SS
        p1    OK         u0   279.39 GB SAS  1   -         SEAGATE ST3300657SS
        p2    OK         u1   931.51 GB SAS  2   -         SEAGATE ST31000640SS
        p3    ECC-ERROR  u2   931.51 GB SAS  3   -         SEAGATE ST31000640SS
        p4    OK         u3   931.51 GB SAS  4   -         SEAGATE ST31000640SS

    /c0 show events shows no ECC errors in its recent history. ZFS does not currently detect any errors; zpool status says "No known data errors".

    My question: is this ECC-ERROR something I need to be concerned about?

    According to the 3ware CLI 9.5.2 manual, an ECC-ERROR means that the 3ware controller caught a read error for one or more sectors on this drive. This sometimes occurs when a RAID array is recovering from a failed disk. I believe ECC-ERRORs can also be detected when the 3ware controller verifies each disk. None of the drives have failed, and thus there was no drive rebuild, so I assume that 3ware discovered a bad sector when it ran its weekly auto-verify scan of the disks. Is this a safe assumption? According to our logs, ZFS has not detected any bad sectors on this drive. ZFS can work around read errors: if ZFS detects a bad sector on the drive, it will simply mark that sector as bad and never use it again. From the ZFS perspective one bad sector isn't a big deal, although it might indicate that the drive is starting to go bad.
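    To test the auto-verify assumption, I could re-run a verify on that unit and scrub the pool, then compare notes. A sketch; the pool name "tank" and the smartctl disk index/device are placeholders:

        # Re-verify the unit holding p3 (u2 per the listing above) and watch its status
        tw_cli /c0/u2 start verify
        tw_cli /c0/u2 show
        # Pull SMART data for the drive behind port 3 (index and device are assumptions)
        smartctl -d 3ware,3 -a /dev/twa0
        # Make ZFS read and checksum every block; problems surface in zpool status
        zpool scrub tank
        zpool status -v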

    Read the article

  • How to enable synergy 24800 (or some other port) through firewalld

    - by ndasusers
    After upgrading to Fedora 18, Synergy, the keyboard/mouse sharing system, was blocked by default. The culprit was firewalld, which happily ignored my previous settings made in the Fedora GUI and backed by iptables.

        ~]$ ps aux | grep firewall
        root 3222 0.0 1.2 22364 12336 ? Ss 18:17 0:00 /usr/bin/python /usr/sbin/firewalld --nofork
        david 3783 0.0 0.0 4788 808 pts/0 S+ 20:08 0:00 grep --color=auto firewall
        ~]$

    OK, so how to get around this? I did sudo killall firewalld for several weeks, but that got annoying every time I rebooted, so it was time to look for some clues. There were several one-liners, but they did not work for me; they kept spitting out the help text. For example:

        ~]$ sudo firewall-cmd --zone=internal --add --port=24800/tcp
        [sudo] password for auser:
        option --add not a unique prefix

    Also, posts that claimed this command worked also stated it was temporary, unable to survive a reboot. I ended up adding a file to the config directory to be loaded in on boot. Would anyone be able to have a look at that and see if I missed something? Though Synergy works, when I run the list command, I get no result:

        ~]$ sudo firewall-cmd --zone=internal --list-services
        ipp-client mdns dhcpv6-client ssh samba-client
        ~]$ sudo firewall-cmd --zone=internal --list-ports
        ~]$
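    For what it's worth, my current understanding of the syntax (a sketch based on the man page): --add-port is a single option taking port/protocol, and --permanent is what makes the rule survive a reboot, which would also explain why --list-ports comes back empty:

        # Open Synergy's port persistently in the internal zone
        sudo firewall-cmd --permanent --zone=internal --add-port=24800/tcp
        # Apply to the running firewall without a reboot
        sudo firewall-cmd --reload
        # Confirm it now shows up
        sudo firewall-cmd --zone=internal --list-ports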

    Read the article

  • MS Security Essentials efficiency / usage, suspicious processes

    - by biggvsdiccvs
    I recently noticed that my (originally pretty fast) Windows 7 Pro laptop started getting slow and using a lot of CPU power for no apparent reason. A full scan by Microsoft Security Essentials revealed nothing. After some investigation, I found multiple instances of a strange process called urpev.exe, and a couple of similar exe files, sitting in subdirectories of Users//AppData/Roaming (this particular one was in a folder called Xyceowme). Description: "Mescrosift Visaal Studie 2010". Company name: "Mesrosift Corporatien". Is it a virus or something? :)

    Now, all of these exe files were scheduled to be started from the Task Scheduler by tasks with names like "Security Center Update - 1291373911" and similar. My user name was listed as the author of the tasks. I disabled the tasks, restarted the computer in safe mode and moved all of the exe files to quarantine for further investigation. All of this was done last night. I just scanned the files with Security Essentials again (not updated since yesterday) in the quarantine location, and this time it found PWS:Win32/Zbot.gen!plock in urpev.exe (but not in the other exe files, which are most likely viruses too). Category: Password Stealer. Description: "This program is dangerous and captures user passwords."

    Another strange process is browser.exe (not chrome.exe) by Google Inc., described as Google Chrome. I uninstalled Chrome but it's still there. It runs out of Users\\AppData\LocalLow\UIVoice\ToolMedium\browser.exe, and if I move it in safe mode, it just reappears there, and multiple instances run. Needless to say, if I kill it, it just runs again. I couldn't see anything in Task Scheduler, but found a couple of references to it in the Registry Editor:

        HKEY_CURRENT_USER/Software/Microsoft/Internet Explorer/LowRegistry/Audio/PolicyConfig/PropertyStore/
        HKEY_USERS/S-1-5-21-1685709306-872053864-2599010960-1002/Software/Microsoft/Internet Explorer/LowRegistry/Audio/PolicyConfig/PropertyStore/

    Maybe it's a legit process, but it seems kind of strange. For the time being, I suspended the process and killed all of the child processes when I booted up the laptop.

    I used Security Essentials to scan the system periodically, but obviously it's not effective against at least one virus. I had "real-time protection" turned off. Would it help if it were turned on, and how much of a nuisance would it be? I wonder if there is a better alternative to Security Essentials. Over the years I've used multiple antivirus products, at home and especially at work, and was not very happy with any of them. Apparently asking for software recommendations or comparisons is taboo here, but I will mention that I installed Malwarebytes and it was able to find and quarantine a bunch of suspicious files, at least some of which were truly infected; yet when it scans the bogus "Security Center Update" executables from "Mesrosift Corporatien", it finds nothing wrong.

    Also, any thoughts on the browser.exe mystery? Neither MS Security Essentials nor Malwarebytes found anything wrong with that file. However, after I ran a Malwarebytes scan, quarantined everything it found suspicious and rebooted the laptop, the process did not run.

    Read the article

  • Active Directory Time Synchronisation - Time-Service Event ID 50

    - by George
    I have an Active Directory domain with two DCs. The first DC in the forest/domain is Server 2012; the second is 2008 R2. The first DC holds the PDC emulator role. I sporadically receive a warning from the Time-Service source, event ID 50:

        The time service detected a time difference of greater than %1 milliseconds for %2 seconds. The time difference might be caused by synchronization with low-accuracy time sources or by suboptimal network conditions. The time service is no longer synchronized and cannot provide the time to other clients or update the system clock. When a valid time stamp is received from a time service provider, the time service will correct itself.

    Time sync in the domain is configured with the second DC synchronising via /syncfromflags:DOMHIER. The first DC is configured to sync time using /syncfromflags:MANUAL /reliable:YES, from a peer list consisting of a number of UK-based stratum 2 servers, such as ntp2d.mcc.ac.uk.

    I'm confused why I receive this event warning. It implies that my PDC emulator cannot synchronise time with a supposedly reliable external time source, and it quotes a time difference of 5 seconds for 900 seconds. It's also worth mentioning that I used to use a UK pool from ntp.org, but I would receive the warning much more often. Since moving to a number of UK-based academic time servers, it seems to be more reliable. Can someone with more experience shed some light on this? Perhaps it is purely transient? Should I disregard the warning? Is my configuration sound?

    EDIT: I should add that the DCs are virtual, installed on two separate VMware ESXi/vSphere physical hosts. I can also confirm that, as per MDMarra's comment and best practice, VMware timesync is disabled, since:

        c:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe timesync status

    returns Disabled.

    EDIT 2: Some strange new issue has cropped up, and I've noticed a pattern. Originally, the event ID 50 warnings would occur at about 12:30pm each day. This is interesting since our Veeam backup happens at 12 midday. Since I made the changes discussed here, I now receive an event ID 51 instead of 50. The new warning says:

        The time sample received from peer server.ac.uk differs from the local time by -40 seconds (or approximately 40 seconds).

    This has happened two days in a row. Now I'm even more confused. Obviously the time never updates until I manually intervene. The issue seems to be related to virtualisation and Veeam; something may be occurring when Veeam backs up the PDCe. Any suggestions?

    UPDATE & SUMMARY: msemack's excellent list of resources below (the accepted answer) provided enough information to correctly configure the time service in the domain. This should be the first port of call for anyone looking to verify their configuration. The final "40 second jump" issue I resolved (there are no more warnings) by adjusting the VMware time sync settings as noted in the Veeam knowledge base article here: http://www.veeam.com/kb1202. In any case, whether any future reader uses ESXi and Veeam or not, the resources here are an excellent source of information on the time sync topic, and msemack's answer is particularly invaluable.
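    For reference, the standard commands for (re)configuring and verifying the PDCe's manual peers, as I understand them (the peer list here is just an example):

        rem Run elevated on the PDC emulator; the peer list is an example
        w32tm /config /manualpeerlist:"ntp2d.mcc.ac.uk,0x8" /syncfromflags:manual /reliable:yes /update
        net stop w32time && net start w32time
        rem Where is time actually coming from, and what is the current offset?
        w32tm /query /source
        w32tm /query /status
        w32tm /stripchart /computer:ntp2d.mcc.ac.uk /samples:5 /dataonly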

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to Vagrant. Would someone who knows more about it (and Puppet) be able to explain how Vagrant deals with the SSL certs needed when making Vagrant testing machines that process the same node definition as the real production machines?

    I run Puppet in master/client mode, and I wish to spin up a Vagrant version of my Puppet production nodes, primarily to test new Puppet code against. If my production machine is, say, sql.domain.com, I spin up a Vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner, and give a puppet.puppet_node entry of "sql.domain.com" so it gets the same Puppet node definition. On the Puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the Vagrant machine and the real one match it. Finally, I enable auto-signing for *.vagrant.domain.com in Puppet's autosign.conf, so the Vagrant machine gets signed.

    So far, so good. However: if one machine on my network gets rooted, say unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old Puppet SSL cert off it, and re-running Puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by Puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the sql machine!

    One solution I can think of is to avoid autosigning, put the known Puppet SSL cert for the real production machine into the Vagrant shared directory, and then have a Vagrant ssh job move it into place. The downside is that I end up with the SSL certs for every production machine sitting in one git repo (my Vagrant repo) and thereby on each developer's machine, which may or may not be an issue, but it doesn't sound like the right way of doing this.

    tl;dr: How do other people deal with Vagrant & Puppet SSL certificates for development or testing clones of production machines?
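    One pattern I've read about but not yet tried (so this is a sketch, not something I can vouch for): newer Puppet versions (3.4+) support policy-based autosigning, where autosign.conf points at an executable that receives the certname as its first argument and the PEM CSR on stdin, and signs only if it exits 0. That would let the Vagrant box prove knowledge of a pre-shared secret embedded in its CSR instead of being trusted by hostname alone:

        #!/bin/bash
        # /etc/puppet/autosign-policy.sh -- hypothetical policy-based autosign check
        # (puppet.conf would need: autosign = /etc/puppet/autosign-policy.sh)
        certname="$1"
        csr=$(cat)
        # Sign only vagrant certnames whose CSR carries the expected secret
        # ("vagrant-s3cret" is a placeholder, set via csr_attributes.yaml on the VM)
        case "$certname" in
          *.vagrant.domain.com)
            echo "$csr" | openssl req -noout -text | grep -q "vagrant-s3cret" && exit 0
            ;;
        esac
        exit 1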

    Read the article

  • Installing mod_mono on Ubuntu: handler doesn't seem to get registered

    - by Trevor Johns
    I'm trying to install mod_mono on Apache 2 (prefork MPM). I'm using Ubuntu Karmic, and just want an auto-hosting setup (so that any .aspx files are executed, similar to how PHP is normally set up). I did the following to install Mono:

        $ apt-get install libapache2-mod-mono mono-apache-server2 mono-devel
        $ a2dismod mod_mono
        $ a2enmod mod_mono_auto

    I've confirmed that mod_mono is getting loaded by Apache. However, any .aspx pages I try to load are returned unprocessed and still have an application/x-asp-net MIME type. It's as if the mod_mono handler never gets registered with Apache. Here's the contents of /etc/mod_mono_auto.load:

        LoadModule mono_module /usr/lib/apache2/modules/mod_mono.so

    And here's /etc/mod_mono_auto.conf:

        MonoAutoApplication enabled
        AddType application/x-asp-net .aspx
        AddType application/x-asp-net .asmx
        AddType application/x-asp-net .ashx
        AddType application/x-asp-net .asax
        AddType application/x-asp-net .ascx
        AddType application/x-asp-net .soap
        AddType application/x-asp-net .rem
        AddType application/x-asp-net .axd
        AddType application/x-asp-net .cs
        AddType application/x-asp-net .config
        AddType application/x-asp-net .dll
        DirectoryIndex index.aspx
        DirectoryIndex Default.aspx
        DirectoryIndex default.aspx

    I've even tried setting the handler explicitly:

        AddHandler mono .aspx .ascx .asax .ashx .config .cs .asmx .asp

    Nothing seems to help. Any ideas how to get this working?
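    In case autohosting simply never engages on this package version, an explicit application mapping is the usual fallback. A sketch, under the assumption that the site root is /var/www (paths and the "default" alias are assumptions):

        # Explicit mod_mono configuration instead of autohosting (sketch)
        MonoServerPath default /usr/bin/mod-mono-server2
        MonoApplications default "/:/var/www"
        <Directory /var/www>
            SetHandler mono
        </Directory>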

    Read the article

  • Overclock Failed

    - by John
    This morning when I tried to turn on my computer, the monitor would not display anything. The computer was on, but there was no UI on the monitor and no beep. I turned the computer off and back on again. This time, the computer was on for about 2 seconds, automatically turned itself off for about 2 seconds, and then automatically turned itself back on. This happened twice without the monitor displaying any UI. On the third try, the computer did the same thing, but when it turned itself back on, the monitor displayed this:

        American Megatrends
        Asus P8P67 LE ACPI BIOS Revision 1011
        CPU: Intel Core i5 2500K CPU @ 3.30GHz  Speed: 3300MHz
        Total Memory 4096 MB (DDR3-1333)
        USB Devices Total: 0 Drive, 1 Keyboard, 1 Mouse, 2 Hubs
        Detected ATA/ATAPI Devices
        SATA6G_2  Hitachi HDS821010CLA 332
        SATA3G_1  Corsair Force 3 SSD
        SATA3G_2  HL-DT-ST DB-RE UH12CS28
        Overclocking Failed! Please enter setup to reconfigure your system.
        Press F1 to Run Setup.

    After I pressed F1, it went into the Asus EFI BIOS utility. I couldn't do anything in there, so I just exited, after which the computer turned itself off and on again, and I was able to use the PC as normal. This has never happened to this computer before, and the day before nothing seemed wrong: just regular use and some gaming.

    Read the article

  • Why are folders disappearing in Windows XP?

    - by XenoFoxx
    I am researching a problem for a friend, and unfortunately do not have direct access to his computer. I've tried to gather as much information as possible, and I have researched it on various websites; I've not found anyone having the same problem my friend is having. So here goes:

    He has a media server in his home running Microsoft Windows XP. It has 3 drives: 1 for the OS and 2 for mass storage. Not long ago he went to access one of the mass storage drives, and it was empty except for a single folder. His first assumption was that his roommate had deleted everything on the drive (excluding the remaining folder). He then checked the properties of the drive, and it still said that the hard drive was nearly full. I told him to check the recycle bin, thinking that whoever deleted the files hadn't cleared them from recycling and that they were still taking up space on the drive. My friend said the recycle bin was empty.

    So we have a drive that the Windows file management system says is empty (again, except for the remaining folder), but the properties of the drive say it's mostly full.

    Now it gets weirder. My friend tried to create a new folder on this drive, and it auto-named itself "New Folder(1)", which means Windows recognizes there is already a "New Folder" in that directory. He tried to rename it to a name that he KNEW was there previously, and Windows wouldn't allow it because it was a duplicate folder name. So now it seems the folders are there, but not displaying in Windows Explorer.

    Both of us have no idea why this is occurring, why the folders vanished, why the one remaining folder didn't vanish, or how to make them visible again. Anyone else ever experience this? I can get more details if needed.
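    One hedged guess worth ruling out: this pattern (folders invisible in Explorer but duplicate-name errors on creation) often means the folders have had the hidden/system attributes set, which some malware does. A quick check from a command prompt, assuming the affected drive is D::

        rem List everything including hidden/system entries
        dir D:\ /a
        rem If the folders show up, clear the attributes recursively
        attrib -h -s -r D:\*.* /s /d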

    Read the article

  • Why is IIS Anonymous authentication being used with administrative UNC drive access?

    - by Mark Lindell
    My account is a local administrator on my machine. If I try to browse to a non-existent drive letter on my own box using a UNC path name, \\mymachine\x$, my account gets locked out. I also get the following warning (Event ID 100, Type "Warning") 5 times under the "System" group in Event Viewer on my box:

        The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: Logon failure: unknown user name or bad password.

    I also get the following warning 3 times:

        The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: The referenced account is currently locked out and may not be logged on to.

    On the domain controller, Event ID 680 of type "Failure Audit" appears 4 times under the "Security" group in Event Viewer:

        Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
        Logon account: myaccount

    followed by Event ID 644:

        User Account Locked Out:
        Target Account Name: myaccount
        Target Account ID: OURDOMAIN\myaccount
        Caller Machine Name: MYMACHINE
        Caller User Name: STAN$
        Caller Domain: OURDOMAIN
        Caller Logon ID: (0x0,0x3E7)

    followed by another 4 errors having Event ID 680.

    Strangely, every time I tried to browse to the UNC path I would be prompted for a user name and password, the above errors would be written to the log, and my account would be locked out. When I hit "Cancel" in response to the user name/password prompt, the following message box would display:

        Windows cannot find \\mymachine\x$. Check the spelling and try again, or try searching for the item by clicking the Start button and then clicking Search.

    I checked with others in the group using XP, and they only got the above message box when browsing to a "bad" drive letter on their box. No one else was prompted for a user name/password and then locked out. So, every time I tried to browse to the "bad" drive letter, behind the scenes XP was trying to log in 8 times using bad credentials (or, at least, a bad password, as the login name was correct), causing my account to get locked out on the 4th try. Interestingly, if I tried browsing to a "good" drive such as c$, it would work fine.

    As a test, I tried logging on to my box with a different login and browsing the "bad" UNC path. Strangely, my "ourdomain\myaccount" account was getting locked out, not the one I was logged in as! I was totally confused as to why the credentials for the other login were being passed.

    After much Googling, I found a link referring to some IIS settings I was vaguely familiar with from the past but could not see how they would affect this issue. It was related to the IIS directory security setting "Anonymous access and authentication control", located under:

        Control Panel/Administrative Tools/Computer Management/Services and Applications/Internet Information Services/Web Sites/Default Web Site/Properties/Directory Security/Anonymous access and authentication control/Edit/Password

    I found no indication while scouring the Internet that this property was related to my UNC problem. But I did notice that this property was set to my domain user name and password, and while my password had aged recently, I had not updated the password for this property accordingly. Sure enough, keying in the new password corrected the problem: I am no longer prompted for a user name/password when browsing the UNC path, and the account lock-outs have ceased.

    Now, a couple of questions:

    1. Why would an IIS setting affect browsing a UNC path on the local box?
    2. Why had I not encountered this problem before? My password has aged several times and I've never hit this problem, and I can't remember the last time I updated the "Anonymous access" IIS password, it's been so long. I've run the script after a password reset before and never had my account locked out due to the UNC problem (the script accesses UNC paths as a normal part of its processing). Windows Update did install "Cumulative Security Update for Internet Explorer 7 for Windows XP (KB972260)" on my box on 7/29/2009; I wonder if this is responsible.

    Read the article

  • Debian Lenny to Debian Squeeze upgrade problems

    - by Roland Soós
    Hi! Yesterday I did a dist-upgrade on my Debian Lenny server. I thought it would be as easy as a usual upgrade, but it wasn't; I got a lot of problems after the update:

        # apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
          linux-image-2.6-amd64 : Depends: linux-image-2.6.32-5-amd64 but it is not installed
        E: Unmet dependencies. Try using -f.

    Then I tried the suggestion:

        # apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages were automatically installed and are no longer required:
          libio-compress-base-perl libatk1.0-0 libts-0.0-0 libmime-types-perl libc-client2007b
          libgtk2.0-common libxfixes3 libgsf-1-common hicolor-icon-theme libfile-remove-perl
          libxcomposite1 libltdl3-dev libneon27 libmd5-perl libwmf0.2-7 libilmbase6 libatk1.0-data
          djvulibre-desktop libdirectfb-1.0-0 fam libxinerama1 libcroco3 libopenexr6 libgsf-1-114
          libmail-box-perl libdjvulibre21 openssl-blacklist librsvg2-2 libio-compress-zlib-perl
          libsysfs2 libbeecrypt6 libxdamage1 libobject-realize-later-perl libuser-identity-perl
          libgtk2.0-bin libxi6 libxcursor1 portmap libxrandr2 libgtk2.0-0
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          linux-image-2.6.32-5-amd64
        Suggested packages:
          linux-doc-2.6.32
        The following NEW packages will be installed:
          linux-image-2.6.32-5-amd64
        0 upgraded, 1 newly installed, 0 to remove and 121 not upgraded.
        98 not fully installed or removed.
        Need to get 0 B/28.6 MB of archives.
        After this operation, 103 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LANG = "hu_HU.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár
        Preconfiguring packages ...
        (Reading database ... 37915 files and directories currently installed.)
        Unpacking linux-image-2.6.32-5-amd64 (from .../linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb) ...
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár
        dpkg: error processing /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb (--unpack):
         failed in write on buffer copy for backend dpkg-deb during `./lib/modules/2.6.32-5-amd64/kernel/sound/pci/hda/snd-hda-codec-realtek.ko': No space left on device
        configured to not write apport reports
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár
        Running postrm hook script /sbin/update-grub.
        Searching for GRUB installation directory ... found: /boot/grub
        Searching for default file ... found: /boot/grub/default
        Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
        Searching for splash image ... none found, skipping ...
        Found kernel: /boot/vmlinuz-2.6.26-2-amd64
        Updating /boot/grub/menu.lst ... done
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.32-5-amd64 /boot/vmlinuz-2.6.32-5-amd64
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

        # dpkg-reconfigure locales
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LANG = "hu_HU.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár
        /usr/sbin/dpkg-reconfigure: locales is broken or not fully installed

    Then I got stuck. Do you have any idea how I could solve this?
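    The telling line above is dpkg failing with "No space left on device" while writing kernel modules, so the root filesystem (or /boot) is likely full; freeing space should let the upgrade resume. A sketch of what I'd run (the old-kernel package name is an assumption based on the GRUB output above):

        # Check free space and inodes on / and /boot
        df -h / /boot
        df -i / /boot
        # Free up the package cache
        apt-get clean
        # List installed kernels, then purge ones no longer needed
        dpkg -l 'linux-image*' | grep ^ii
        apt-get remove --purge linux-image-2.6.26-2-amd64   # assumed old kernel
        # Resume the interrupted upgrade
        apt-get -f install
        dpkg --configure -a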

    Read the article

  • DNS lookups failing somewhere between firewall and router

    - by TessellatingHeckler
    We have a setup of: ADSL line -> Cisco 837 ADSL router -> ZyXEL ZyWall 35 firewall/NAT -> switch -> Intel load-balanced NICs in a server. It has been fine for years; suddenly DNS resolution stopped working on the server. There are no changes that I know of, so I can't work backwards from there. It was configured with the ISP's DNS servers, and neither network device does DNS relaying. Wireshark shows the request go out, but nothing comes back. The server networking stack seems OK, though, because if we query an internal DNS server on a remote site, that works.

    I can log on to the Cisco, and DNS resolves OK from the command line. I can log on to the ZyWall, and DNS does not resolve from the command line. So the problem seems to be the firewall, patch cable or router, yes?

    On the router:

        interface Ethernet0
         ip address aaa.bbb.ccc.ddd 255.255.255.ddd
         ip tcp adjust-mss 1450
         hold-queue 100 out

    On the firewall: DNS server set to 8.8.8.8 (Google's), DNS traffic allowed LAN-WAN.

    What else should I look for?

    Update: following this guide, I've got traffic logging on the Cisco. I also have access to a public DNS server on which I can run tcpdump, to see things from the other side. And, as per the comments below, I've tested with dig and found that DNS over TCP works and DNS over UDP does not. Currently:

    - A DNS request from the server using TCP shows up in the firewall log, in the Cisco log, and in tcpdump on the DNS server; the answer comes back; it works fine.
    - A DNS request from the server using UDP shows up in the firewall log and in the Cisco log, does NOT show up in tcpdump on the DNS server, and times out.
    - A DNS request from the Cisco itself (using UDP) does show up in tcpdump on the DNS server; the answer is received; it works fine.
    - Ping requests from the server and the Cisco to the DNS server show up in tcpdump on the DNS server.
    - A DNS request from the server using UDP does show up on the firewall.

    Summary: TCP seems fine throughout. UDP works over the ADSL and to the Cisco, and it works from the server to the Cisco, but it doesn't cross the Cisco properly, it seems. I did see the Cisco showing as connected at 10Mb/full-duplex internally, and the firewall showing as 100Mb/full-duplex externally. I have forced the firewall to 10Mb and rebooted both devices. That seemed to help get UDP traffic to (server-firewall-cisco) instead of (server-firewall), but did not fix it.

    Update: sanitized Cisco config:

        version 12.2
        no service pad
        service timestamps debug datetime msec
        service timestamps log datetime msec
        service password-encryption
        !
        hostname cisco
        !
        logging queue-limit 100
        enable secret 5 {password}
        enable password 7 {password}
        !
        ip subnet-zero
        ip domain name example.org
        ip name-server {nameserver_IP}
        !
        ip audit notify log
        ip audit po max-events 100
        no ftp-server write-enable
        !
        interface Ethernet0
         ip address {Inside_public_IP} 255.255.255.248
         ip tcp adjust-mss 1460
         hold-queue 100 out
        !
        interface ATM0
         no ip address
         no atm ilmi-keepalive
         pvc 0/38
          encapsulation aal5mux ppp dialer
          dialer pool-member 1
         !
         dsl operating-mode auto
        !
        interface Dialer1
         ip unnumbered Ethernet0
         encapsulation ppp
         dialer pool 1
         dialer idle-timeout 0
         dialer persistent
         no cdp enable
         ppp chap hostname {ADSL_Username}
         ppp chap password 7 {ADSL_Password}
        !
        ip classless
        ip route 0.0.0.0 0.0.0.0 Dialer1
        no ip http server
        no ip http secure-server
        !
        access-list 23 permit {IP}
        dialer-list 1 protocol ip permit
        no cdp run
        snmp-server enable traps tty
        !
        {con, vty}
        end
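    A diagnostic sketch I'd try on the 837 to see whether the UDP/53 replies ever reach it and where they disappear (debug ip packet can be very chatty, and on a busy box transit packets may only hit the debug path if fast switching is disabled on the interface):

        ! Match DNS over UDP in both directions
        access-list 150 permit udp any any eq domain
        access-list 150 permit udp any eq domain any
        ! Log only packets matching that ACL
        debug ip packet detail 150
        ! ...and turn it off again afterwards
        undebug all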

    Read the article

  • Windows XP mounting USB drive to same letter as previously mapped network drive

    - by GAThrawn
    Why does Windows always mount a USB drive as the next drive letter after the last physical drive, even when that letter is already taken by a mapped drive, and is there any way to improve this behaviour?

    What happens is that I tend to use a few different flash drives on my PC, as well as having both a Blackberry and a personal phone that mount as USB drives when I plug them in to charge. Being on a corporate PC, I also have a number of mapped network drives (some set by login script, some set as persistent mappings in my profile). When I first log in, I'll have drive letters like this:

    - C: - Local Drive
    - D: - DVD Drive
    - G: - Login script mapped drive
    - J: - Login script mapped drive

    When I plug the Blackberry in, it'll mount two drives (one for onboard storage, one for the SD card) as E: and F:. If I then plug in another USB drive, it will mount as G:, even though that's already taken by a network mapped drive. This leaves me with the following drives:

    - C: - Local Drive
    - D: - DVD Drive
    - E: - USB drive (Blackberry)
    - F: - USB drive (Blackberry)
    - G: - Login script mapped drive
    - [G: - USB drive - mounted but not visible in Explorer or command prompt]
    - J: - Login script mapped drive

    I then have to go into Disk Management, find the new USB drive that's mounted to G:, and re-assign it to another letter, e.g. Z:. Once this is done, AutoPlay detects it and throws up its normal dialog, and it's browseable in Explorer.

    While this is OK to do if you only use one or two USB drives and have admin access to your PC with your login account, it's a total pain in the proverbial if you regularly use a whole load of different USB devices, and corporate policy means you have one account for your normal login (that only has User access to workstations) but have to use a different account for any privileged action.

    I realize that one possible reason for this is the difference between hardware, which is mounted and assigned drive letters at the system level, and mapped drives, which are done at the user level. For USB devices that are already plugged in before login, obviously they're mounted before Windows knows what network drives may be mapped. However, if you plug the USB devices in after you're fully logged in and have drives mapped, then Windows must know which letters are available?
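    If it helps anyone, the Disk Management step can at least be scripted with diskpart so a given drive always gets pushed to a high letter; the volume number below is an assumption you'd confirm from the "list volume" output first:

        rem usb-to-z.txt -- diskpart script (run with: diskpart /s usb-to-z.txt)
        list volume
        rem pick the USB drive's volume number from the listing; 5 is a placeholder
        select volume 5
        assign letter=Z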

    Read the article

  • reiserfsck on lvm

    - by DaDaDom
    It seems like my filesystem got corrupted somehow during the last reboot of my server. I can't fsck some logical volumes anymore. The setup:

        root@rescue ~ # cat /mnt/rescue/etc/fstab
        proc /proc proc defaults 0 0
        /dev/md0 /boot ext3 defaults 0 2
        /dev/md1 / ext3 defaults,errors=remount-ro 0 1
        /dev/systemlvm/home /home reiserfs defaults 0 0
        /dev/systemlvm/usr /usr reiserfs defaults 0 0
        /dev/systemlvm/var /var reiserfs defaults 0 0
        /dev/systemlvm/tmp /tmp reiserfs noexec,nosuid 0 2
        /dev/sda5 none swap defaults,pri=1 0 0
        /dev/sdb5 none swap defaults,pri=1 0 0

    [UPDATE] First question: what "part" should I check for bad blocks? The logical volume, the underlying /dev/md, or the /dev/sdX below that? Is doing what I am doing the right way to go? [/UPDATE]

    The error message when checking /dev/systemlvm/usr:

        root@rescue ~ # reiserfsck /dev/systemlvm/usr
        reiserfsck 3.6.19 (2003 www.namesys.com)
        [...]
        Will read-only check consistency of the filesystem on /dev/systemlvm/usr
        Will put log info to 'stdout'
        Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
        ########### reiserfsck --check started at Wed Feb 3 07:10:55 2010 ###########
        Replaying journal..
        Reiserfs journal '/dev/systemlvm/usr' in blocks [18..8211]: 0 transactions replayed
        Checking internal tree..
        Bad root block 0. (--rebuild-tree did not complete)
        Aborted

    Well, so far so bad; let's try --rebuild-tree:

        root@rescue ~ # reiserfsck --rebuild-tree /dev/systemlvm/usr
        reiserfsck 3.6.19 (2003 www.namesys.com)
        [...]
        Will rebuild the filesystem (/dev/systemlvm/usr) tree
        Will put log info to 'stdout'
        Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
        Replaying journal..
        Reiserfs journal '/dev/systemlvm/usr' in blocks [18..8211]: 0 transactions replayed
        ########### reiserfsck --rebuild-tree started at Wed Feb 3 07:12:27 2010 ###########
        Pass 0:
        ####### Pass 0 #######
        Loading on-disk bitmap .. ok, 269716 blocks marked used
        Skipping 8250 blocks (super block, journal, bitmaps) 261466 blocks will be read
        0%....20%....40%....60%....80%....100% left 0, 11368 /sec
        52919 directory entries were hashed with "r5" hash.
        "r5" hash is selected
        Flushing..finished
        Read blocks (but not data blocks) 261466
        Leaves among those 13086
        Objectids found 53697
        Pass 1 (will try to insert 13086 leaves):
        ####### Pass 1 #######
        Looking for allocable blocks .. finished
        0% left 12675, 0 /sec
        The problem has occurred looks like a hardware problem (perhaps memory). Send us the bug report only if the second run dies at the same place with the same block number.
        mark_block_used: (39508) used already
        Aborted

    Bad. But let's do it again as instructed:

        [...]
        Flushing..finished
        Read blocks (but not data blocks) 261466
        Leaves among those 13085
        Objectids found 54305
        Pass 1 (will try to insert 13085 leaves):
        ####### Pass 1 #######
        Looking for allocable blocks .. finished
        0%... left 12127, 958 /sec
        The problem has occurred looks like a hardware problem (perhaps memory). Send us the bug report only if the second run dies at the same place with the same block number.
        build_the_tree: Nothing but leaves are expected. Block 196736 - internal
        Aborted

    The same happens every time; only the actual error message changes. Sometimes I get mark_block_used: (somenumber) used already, other times the block number changes. Seems like something is REALLY broken. Is there any chance I can somehow get the partitions to work again? It's a server to which I don't have direct physical access (hosted server). Thanks in advance!
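    Regarding my bad-blocks question above: my understanding is that surface defects live on the physical disks underneath the md arrays, so the checks would target /dev/sda and /dev/sdb. A sketch (run from the rescue system while the filesystems are unmounted; device names are per the fstab above):

        # Read-only surface scan of each member disk
        badblocks -sv /dev/sda
        badblocks -sv /dev/sdb
        # SMART long self-tests as a second opinion
        smartctl -t long /dev/sda
        smartctl -a /dev/sda
        # And schedule a memtest86+ run at next boot, since reiserfsck's
        # "looks like a hardware problem (perhaps memory)" hints at bad RAM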

    Read the article

  • Windows Share authentication from Active Directory Linux login

    - by Kenny
    I'm using Active Directory to log into RHEL. To do this, I followed the steps outlined here: http://www.markwilson.co.uk/blog/2007/05/using-active-directory-to-authenticate-users-on-a-linux-computer.htm I'd like to be able to read data from Windows servers' shared folders without being prompted for a password. On Windows I log into an AD domain, and when I access Windows file shares on a server on the LAN (also part of the AD domain) I can just access them with no authentication step. I've used smbclient on Linux to access these shares, but it asks for my password. I would like to be able to script access to the data on the shares, but I can't if there's a password prompt in the way. Well, I could, but it's not how I want to do it. Now, since I'm logged in using my Active Directory username and password, can't I just access the shares without jumping through that extra hoop? I know I can mount the share using something like:

        //192.168.0.5/share /mnt/windows cifs auto,username=steve,password=secret,rw 0 0

    but access will depend on who is logged in; each user logging in should have their own unique AD access privileges. Thanks for reading!
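
    Since the AD login already establishes a Kerberos identity, one prompt-free approach, sketched under the assumption that the krb5/winbind setup from the linked article is in place and that EXAMPLE.COM stands in for the real AD realm, is to authenticate with a ticket instead of a stored password:

        # Obtain (or renew) a Kerberos ticket for the AD account
        kinit steve@EXAMPLE.COM

        # smbclient -k authenticates with the ticket, so no password prompt
        smbclient -k //192.168.0.5/share -c 'ls'

        # A kerberized mount also works, provided cifs-utils and the
        # cifs.upcall request-key helper are installed and configured
        mount -t cifs //192.168.0.5/share /mnt/windows -o sec=krb5,user=steve

    If the AD integration uses pam_krb5, a ticket is usually acquired automatically at login, in which case the kinit step may be unnecessary.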

    Read the article

  • Problems starting autossh on boot [ubuntu]

    - by Ken
    I'm trying to automatically start an SSH tunnel to my server on boot from an Ubuntu box. I have an Ubuntu box that's mounted on an 18-wheeler and is networked behind an aircard. The box hosts a MySQL database that I'm trying to have replicated when the aircard is connected. As I can never be sure of my IP or of how many or which routers I'm behind, I connect to my replication server through an SSH tunnel. I got that working using the following command:

        ssh -R 3307:localhost:3307 [email protected]

    Now I'd like that tunnel to start whenever the box does, and stay alive all the time, so I installed autossh and set up this little script:

        ID=xkenneth
        HOST=erdosmiller.com
        AUTOSSH_POLL=15
        AUTOSSH_PORT=20000
        AUTOSSH_GATETIME=30
        AUTOSSH_DEBUG=yes
        AUTOSSH_PATH=/usr/bin/ssh
        export AUTOSSH_POLL AUTOSSH_DEBUG AUTOSSH_PATH AUTOSSH_GATETIME AUTOSSH_PORT
        autossh -2 -fN -M 20000 -R 3307:localhost:3306 ${ID}@${HOST}

    I've tried putting this script in /etc/init.d/ and using a post-up command in /etc/network/interfaces, as well as putting it in /etc/network/if-up.d/. In both situations the script starts on boot, but the tunnel doesn't appear to be correctly established. The script works when run manually.
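
    A frequent cause of exactly this symptom is that a boot-time hook runs as root, so autossh looks for keys and known_hosts under /root/.ssh instead of under the account whose key the remote host trusts, and AUTOSSH_GATETIME makes autossh give up entirely if the very first attempt fails while the aircard is still negotiating. A minimal /etc/network/if-up.d/ sketch that works around both; the path, username and option values are illustrative:

        #!/bin/sh
        # /etc/network/if-up.d/autossh-tunnel (must be executable; run-parts
        # ignores filenames containing a dot, so no .sh suffix)
        [ "$IFACE" = "lo" ] && exit 0   # skip the loopback interface

        export AUTOSSH_GATETIME=0   # treat a first-connection failure as transient
        export AUTOSSH_POLL=15

        # Run as the user whose key is authorized on the remote host so the
        # right ~/.ssh/id_rsa and known_hosts are picked up
        su - xkenneth -c "autossh -f -M 20000 -N \
            -o BatchMode=yes -o ServerAliveInterval=30 \
            -R 3307:localhost:3306 xkenneth@erdosmiller.com"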

    Read the article

  • Browser history, by window

    - by Alex
    I use Chrome, but I can switch to another browser if the feature I am about to describe is available. Browsing history in Chrome is, as far as I know, sorted exclusively chronologically. Very often, however, I will be working on a particular task on my laptop and have multiple (read: a lot of) tabs open in a single Chrome window for that task. Before finishing that task, I may need to work on something else, so I will open another window, minimize the first one, and start researching an entirely different issue. Over the course of a day, I may end up with 10-15 windows with many tabs each. This raises two issues: (a) memory usage and (b) quickly switching between the two or three most relevant windows. I solve these two problems like any regular guy probably would: by closing windows. I want to be able to reopen specific windows that I have closed, such that the tabs that were open in that window at the time it was closed will reopen. Ideally, closed windows would be sorted by the time they were closed and identified by the tabs that were open (even more ideally, I would be able to name these windows, either contemporaneously or in the history menu). Now that I describe this, what I am asking is: does any browser offer the ability to "save and close" windows? (This is distinct from an option to auto-restore tabs upon reopening the browser.) Thank you.

    Read the article

  • Turning Google Apps mail inbox into Support Ticket System

    - by Saif Bechan
    I have an account with Google Apps and most of my email goes through Gmail. At the moment I respond to support email manually. I was wondering if there is an easy way, using filters and autoresponders, to create a support ticket system. I think something like this would be great, but is it possible: Gmail would have to assign a unique ID to every email that comes in. After that, all further incoming mails should be grouped by that unique ID rather than by sender address. Something like that, or are there things I am not taking into account? I really don't want to use separate software for this, because I think with some small configuration changes this could do the trick quite well. Or are there free apps on the marketplace that do this already? I have searched for this but cannot find any. Sorry if this is not an SF question; I really didn't know where this question fits best. It can be migrated anywhere if someone thinks it would be answered better there.

    Read the article

  • Split horizon, route filtering, and having RIPv2 announce a non-attached route to host...

    - by Paul
    Routers A, B & C live at 10.1.1.1, 10.1.1.2 and 10.1.1.3 on a /24 metro Ethernet subnet. Each router also has its own private subnet on another interface. Router B's private subnet links through a firewall to a 10.20.20.0 network at another organization. Router B redistributes to A and C several static routes for hosts on 10.20.20.0. However, a new host, 10.20.20.5/32, must be reached via a different path that goes through router C. I know that C can advertise this host route with no problem, but I'd like to keep all my 10.20.20.x static routes in one place. So, how can B tell A via RIPv2 to send packets for 10.20.20.5/32 to C? So far it looks like I need no ip split-horizon on router B's 10.1.1.2 interface, perhaps because B has already learned from C other routes with a next hop of 10.1.1.3. But how does RIPv2 split horizon with no auto-summary and network 10.0.0.0 really work? If B learns a route to ANY 10.x.x.x network or host from A or C, is that enough for split horizon to keep it from redistributing ip route 10.20.20.5 255.255.255.255 10.1.1.3? And if I want to suspend split horizon only for this one new host, how do I filter out the mess of regurgitated routes that B advertises when I try no ip split-horizon? Thanks much.
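
    One way to wire this up, sketched as IOS config on router B under the assumptions that its metro Ethernet interface is FastEthernet0/0, that access-list 10 is free (both illustrative), and that the existing redistribute static stays in place, is to disable split horizon on that interface but clamp the re-advertisements with an outbound distribute-list:

        ! Host route for the exception, next hop router C
        ip route 10.20.20.5 255.255.255.255 10.1.1.3
        !
        interface FastEthernet0/0
         no ip split-horizon
        !
        ! Only let the 10.20.20.x routes out this interface, so the routes
        ! B re-learned from A and C are not regurgitated back onto the LAN
        access-list 10 permit 10.20.20.0 0.0.0.255
        router rip
         version 2
         no auto-summary
         distribute-list 10 out FastEthernet0/0

    Whether A then forwards directly to C rather than via B depends on B setting the RIPv2 next-hop field to 10.1.1.3, which IOS generally does when the next hop lies on the shared subnet; debug ip rip on router A is the quick way to verify that.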

    Read the article
