Search Results

Search found 6833 results on 274 pages for 'pure managed'.


  • Inbound SIP calls through Cisco 881 NAT hang up after a few seconds

    - by MasterRoot24
    I've recently moved to a Cisco 881 router for my WAN link. I was previously using a Cisco Linksys WAG320N as my modem/router/WiFi AP/NAT firewall. The WAG320N is now running in bridged mode, so it's simply acting as a modem with one of its LAN ports connected to FE4 WAN on my Cisco 881. The Cisco 881 gets a DHCP-provided IP from my ISP. My LAN is part of the default Vlan 1 (192.168.1.0/24). General internet connectivity is working great, and I've managed to set up static NAT rules for my HTTP/HTTPS/SMTP/etc. services which are running on my LAN. I don't know whether it's worth mentioning that I've opted for an NVI NAT setup (ip nat enable, as opposed to the traditional ip nat outside/ip nat inside). My reason for this is that NVI allows NAT loopback from my LAN to the WAN IP and back in to the necessary server on the LAN.

    I run an Asterisk 1.8 PBX on my LAN, which connects to a SIP provider on the internet. Both inbound and outbound calls through the old setup (WAG320N providing routing/NAT) worked fine. However, since moving to the Cisco 881, inbound calls drop after around 10 seconds, whereas outbound calls work fine. The following message is logged on my Asterisk PBX:

        [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3641 retrans_pkt: Retransmission timeout reached on transmission [email protected] for seqno 1 (Critical Response) -- See https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions Packet timed out after 6528ms with no response
        [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3670 retrans_pkt: Hanging up call [email protected] - no reply to our critical packet (see https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions).

    (I know that this is quite a common issue - I've spent the best part of 2 days solid on this, trawling Google.) I've done as I am told and checked https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions. Referring to the section "Other SIP requests" in the page linked above, I believe the hangup is caused by the ACK from my SIP provider not being passed back through NAT to Asterisk on my PBX. I tried to ascertain this by dumping the packets on my WAN interface on the 881, and managed to obtain a PCAP dump of packets in/out of that interface. Here's an example of an ACK being received by the router from my provider:

        689 21.219999 193.x.x.x 188.x.x.x SIP 502 Request: ACK sip:[email protected] |

    However, a SIP trace on the Asterisk server shows that there are no ACKs received in response to the 200 OK from my PBX: http://pastebin.com/wwHpLPPz

    In the past, I have been strongly advised to disable any sort of SIP ALG on routers and/or firewalls, and the many posts regarding this issue on the internet seem to support this. I believe that on Cisco IOS the config command to disable the SIP ALG is no ip nat service sip udp port 5060; however, this doesn't appear to help the situation. To confirm that the setting is in place:

        Router1#show running-config | include sip
        no ip nat service sip udp port 5060

    Another interesting twist: for a short period of time, I tried another provider. Luckily, my trial account with them is still available, so I reverted my Asterisk config back to the revision before I integrated with my current provider. I then dialled in to the DDI associated with the trial trunk and the call didn't get hung up, and I didn't get the error above! To me, this points at the provider, although I know that, like all providers, they will say "There's no issue with our SIP proxies - it's your firewall."
    I'm tempted to agree with this, as this issue was not apparent with the old WAG320N router when it was doing the NAT'ing. I'm sure you'll want to see my running-config too:

        !
        ! Last configuration change at 15:55:07 UTC Sun Dec 9 2012 by xxx
        version 15.2
        no service pad
        service tcp-keepalives-in
        service tcp-keepalives-out
        service timestamps debug datetime msec localtime show-timezone
        service timestamps log datetime msec localtime show-timezone
        no service password-encryption
        service sequence-numbers
        !
        hostname Router1
        !
        boot-start-marker
        boot-end-marker
        !
        !
        security authentication failure rate 10 log
        security passwords min-length 6
        logging buffered 4096
        logging console critical
        enable secret 4 xxx
        !
        aaa new-model
        !
        !
        aaa authentication login local_auth local
        !
        aaa session-id common
        !
        memory-size iomem 10
        !
        crypto pki trustpoint TP-self-signed-xxx
         enrollment selfsigned
         subject-name cn=IOS-Self-Signed-Certificate-xxx
         revocation-check none
         rsakeypair TP-self-signed-xxx
        !
        !
        crypto pki certificate chain TP-self-signed-xxx
         certificate self-signed 01
          quit
        no ip source-route
        no ip gratuitous-arps
        ip auth-proxy max-login-attempts 5
        ip admission max-login-attempts 5
        !
        no ip bootp server
        ip domain name dmz.merlin.local
        ip domain list dmz.merlin.local
        ip domain list merlin.local
        ip name-server x.x.x.x
        ip inspect audit-trail
        ip inspect udp idle-time 1800
        ip inspect dns-timeout 7
        ip inspect tcp idle-time 14400
        ip inspect name autosec_inspect ftp timeout 3600
        ip inspect name autosec_inspect http timeout 3600
        ip inspect name autosec_inspect rcmd timeout 3600
        ip inspect name autosec_inspect realaudio timeout 3600
        ip inspect name autosec_inspect smtp timeout 3600
        ip inspect name autosec_inspect tftp timeout 30
        ip inspect name autosec_inspect udp timeout 15
        ip inspect name autosec_inspect tcp timeout 3600
        ip cef
        login block-for 3 attempts 3 within 3
        no ipv6 cef
        !
        !
        multilink bundle-name authenticated
        license udi pid CISCO881-SEC-K9 sn
        !
        !
        username xxx privilege 15 secret 4 xxx
        username xxx secret 4 xxx
        !
        ip ssh time-out 60
        !
        interface FastEthernet0
         no ip address
        !
        interface FastEthernet1
         no ip address
        !
        interface FastEthernet2
         no ip address
        !
        interface FastEthernet3
         switchport access vlan 2
         no ip address
        !
        interface FastEthernet4
         ip address dhcp
         no ip redirects
         no ip unreachables
         no ip proxy-arp
         ip nat enable
         duplex auto
         speed auto
        !
        interface Vlan1
         ip address 192.168.1.1 255.255.255.0
         no ip redirects
         no ip unreachables
         no ip proxy-arp
         ip nat enable
        !
        interface Vlan2
         ip address 192.168.0.2 255.255.255.0
        !
        ip forward-protocol nd
        ip http server
        ip http access-class 1
        ip http authentication local
        ip http secure-server
        ip http timeout-policy idle 60 life 86400 requests 10000
        !
        !
        no ip nat service sip udp port 5060
        ip nat source list 1 interface FastEthernet4 overload
        ip nat source static tcp x.x.x.x 80 interface FastEthernet4 80
        ip nat source static tcp x.x.x.x 443 interface FastEthernet4 443
        ip nat source static tcp x.x.x.x 25 interface FastEthernet4 25
        ip nat source static tcp x.x.x.x 587 interface FastEthernet4 587
        ip nat source static tcp x.x.x.x 143 interface FastEthernet4 143
        ip nat source static tcp x.x.x.x 993 interface FastEthernet4 993
        ip nat source static tcp x.x.x.x 1723 interface FastEthernet4 1723
        !
        !
        logging trap debugging
        logging facility local2
        access-list 1 permit 192.168.1.0 0.0.0.255
        access-list 1 permit 192.168.0.0 0.0.0.255
        no cdp run
        !
        control-plane
        !
        !
        banner motd Authorized Access only
        !
        line con 0
         login authentication local_auth
         length 0
         transport output all
        line aux 0
         exec-timeout 15 0
         login authentication local_auth
         transport output all
        line vty 0 1
         access-class 1 in
         logging synchronous
         login authentication local_auth
         length 0
         transport preferred none
         transport input telnet
         transport output all
        line vty 2 4
         access-class 1 in
         login authentication local_auth
         length 0
         transport input ssh
         transport output all
        !
        !
        end

    ...and, if it's of any use, here's my Asterisk SIP config:

        [general]
        context=default                 ; Default context for calls
        allowoverlap=no                 ; Disable overlap dialing support. (Default is yes)
        udpbindaddr=0.0.0.0             ; IP address to bind UDP listen socket to (0.0.0.0 binds to all)
                                        ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060)
        tcpenable=no                    ; Enable server for incoming TCP connections (default is no)
        tcpbindaddr=0.0.0.0             ; IP address for TCP server to bind to (0.0.0.0 binds to all interfaces)
                                        ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060)
        srvlookup=yes                   ; Enable DNS SRV lookups on outbound calls
                                        ; Note: Asterisk only uses the first host
                                        ; in SRV records
                                        ; Disabling DNS SRV lookups disables the
                                        ; ability to place SIP calls based on domain
                                        ; names to some other SIP users on the Internet
                                        ; Specifying a port in a SIP peer definition or
                                        ; when dialing outbound calls will supress SRV
                                        ; lookups for that peer or call.
        directmedia=no                  ; Don't allow direct RTP media between extensions (doesn't work through NAT)
        externhost=<MY DYNDNS HOSTNAME> ; Our external hostname to resolve to IP and be used in NAT'ed packets
        localnet=192.168.1.0/24         ; Define our local network so we know which packets need NAT'ing
        qualify=yes                     ; Qualify peers by default
        dtmfmode=rfc2833                ; Set the default DTMF mode
        disallow=all                    ; Disallow all codecs by default
        allow=ulaw                      ; Allow G.711 u-law
        allow=alaw                      ; Allow G.711 a-law

        ; ----------------------
        ; SIP Trunk Registration
        ; ----------------------
        ; Orbtalk
        register => <MY SIP PROVIDER USER NAME>:[email protected]/<MY DDI>   ; Main Orbtalk number

        ; ----------
        ; Trunks
        ; ----------
        [orbtalk]                       ; Main Orbtalk trunk
        type=peer
        insecure=invite
        host=sipgw3.orbtalk.co.uk
        nat=yes
        username=<MY SIP PROVIDER USER NAME>
        defaultuser=<MY SIP PROVIDER USER NAME>
        fromuser=<MY SIP PROVIDER USER NAME>
        secret=xxx
        context=inbound

    I really don't know where to go with this. If anyone can help me find out why these calls are being dropped off, I'd be grateful if you could chime in! Please let me know if any further info is required.
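    One thing that might be worth trying, offered only as a sketch rather than a confirmed fix: with the SIP ALG disabled, the 881 has no helper tracking the SIP dialog, so the provider's ACK arriving on UDP 5060 may simply have no translation to follow back in. A static UDP entry for SIP signalling in the same NVI style as the existing TCP forwards would at least rule that out (x.x.x.x stands for the PBX's LAN address, as in the config above; RTP would still need to be handled separately):

        no ip nat service sip udp port 5060
        ip nat source static udp x.x.x.x 5060 interface FastEthernet4 5060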


  • Testlink stop working when configuring it to use LDAP

    - by YuriAlbuquerque
    I have a TestLink web service running on a server, and OpenLDAP running on another server. There are no firewall problems between them (I managed to configure Redmine, on the same server as TestLink, to use LDAP authentication). But whenever I place the configuration for LDAP in TestLink, TestLink stops working. I have no clue what is happening. This is where I define LDAP's settings in custom_config.inc.php:

        $tlCfg->authentication['method'] = 'LDAP';
        $tlCfg->authentication['ldap_server'] = 'serverip';
        $tlCfg->authentication['ldap_port'] = '389';
        $tlCfg->authentication['ldap_version'] = '2';
        $tlCfg->authentication['ldap_root_dn'] = 'dc=mycompany,dc=com,dc=br';
        $tlCfg->ldap_organization'] = '';
        $tlCfg->authentication['ldap_uid_field'] = 'uid';
        $tlCfg->authentication['ldap_bind_dn'] = 'myuser';
        //Not actual login name and password, for obvious reasons
        $tlCfg->authentication['ldap_bind_passwd'] = 'mypassword';
        $tlCfg->authentication['ldap_tls'] = false;
        $tlCfg->user_self_signup = true;

    I'm certain that OpenLDAP is 2.X. My TestLink version is 1.9.3. What could I be doing wrong?
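    For what it's worth, the line $tlCfg->ldap_organization'] = ''; as posted is not valid PHP (the opening authentication[' is missing), and a parse error in custom_config.inc.php would be enough to take the whole TestLink front end down. A minimal sketch of how that line was presumably meant to read, assuming it belongs to the same authentication array as the neighbouring settings:

        $tlCfg->authentication['ldap_organization'] = '';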


  • Routing WIFI and LAN for specific traffic

    - by jakebird451
    I have two network devices aboard my MacBook Pro:

    WiFi (en1): Used for general traffic. Gets an IP of 192.168.19.* via DHCP.
    LAN (en0): Used for specific traffic. Uses 192.168.2.10 as a static IP. Does not connect to a router, only to a switch for a direct connection.

    I have 4 IP addresses I need to access on the LAN: 192.168.2.1, 192.168.2.21, 192.168.2.20 and 192.168.2.30. The rest of the traffic needs to go over WiFi. I have tried setting up routing table entries for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I have been trying:

        sudo route add -host 192.168.2.30 -interface en0

    This command killed my ability to use ping - it told me that ping could not allocate memory (is that even possible?). It also killed my WiFi access. Logging out and back in fixed the issue. I do not need this solution to be permanent, so I am fine with temporary routing.

    EDIT: Currently I have been trying:

        sudo route flush
        sudo route add default 192.168.19.1

    This gets everything to work for about a minute, but after that minute it "forgets" the routing to WiFi while retaining the LAN's (en0) routing. If I unplug and replug my LAN (en0) cable, it works for another minute.
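    A rough sketch of one way to combine the two, assuming 192.168.19.1 really is the WiFi gateway (taken from the EDIT above) and using only the route options already shown: keep the default route on en1 and pin just the four hosts to en0, so en0 never owns the default route at all.

        sudo route flush
        sudo route add default 192.168.19.1
        sudo route add -host 192.168.2.1  -interface en0
        sudo route add -host 192.168.2.20 -interface en0
        sudo route add -host 192.168.2.21 -interface en0
        sudo route add -host 192.168.2.30 -interface en0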


  • Cannot print certain colours on Ubuntu with HP Laser Printer

    - by ILMV
    We have a load of machines running Ubuntu in our office; they are on either 8.04 or 9.10. We have a server which connects to an HP JetDirect that connects to an HP 3550 Colour Laser printer using CUPS. The problem we are having is that we cannot print red, magenta or yellow at 100%. I've got a picture of the Ubuntu test page to demonstrate my problem. This is obviously a pretty big problem, as we are constantly receiving documents with these colours and cannot successfully print them off. We cannot just switch to grayscale; our business depends on being able to print colour (it seems trivial, but we handle lots of artwork). We're using the recommended driver, HP Color LaserJet 3550 foomatic/pxljr (recommended); there is another driver in the list labelled HP Color LaserJet 3550 foomatic/hpijs. These are production printers, so I need to make sure any setting change won't kick us in the nuts. It would appear HPIJS is for HP inkjets, which makes sense I guess. The problem doesn't occur in Windows.

    RESOLVED: I've managed to solve the problem. I did indeed use the HPIJS driver (apparently for inkjets) but it seems to have worked; we're going to roll with it for now to see how we get on with it.


  • GRE Tunnel over IPsec with Loopback

    - by Alek
    I'm having a really hard time trying to establish a VPN connection using a GRE over IPsec tunnel. The problem is that it involves some sort of "loopback" connection which I don't understand, let alone know how to configure, and the only help I could find relates to configuring Cisco routers. My network is composed of a router and a single host running Debian Linux. My task is to create a GRE tunnel over an IPsec infrastructure, which is particularly intended to route multicast traffic between my network, which I am allowed to configure, and a remote network, for which I only have a form containing some setup information (IP addresses and phase information for IPsec). For now it suffices to establish communication between this single host and the remote network, but in the future it will be desirable for the traffic to be routed to other machines on my network.

    As I said, this GRE tunnel involves a "loopback" connection which I have no idea how to configure. From my previous understanding, a loopback connection is simply a local pseudo-device used mostly for testing purposes, but in this context it might be something more specific that I do not have the knowledge of. I have managed to properly establish the IPsec communication using racoon and ipsec-tools, and I believe I'm familiar with the creation of tunnels and the addition of addresses to interfaces using ip, so the focus is on the GRE step. The worst part is that the remote peers do not respond to ping requests, and debugging the general setup is very difficult due to the encrypted nature of the traffic.

    There are two pairs of IP addresses involved: one pair for the GRE tunnel peer-to-peer connection and one pair for the "loopback" part. There is also an IP range involved, which is supposed to be the final IP addresses for the hosts inside the VPN. My question is: how (or if) can this setup be done? Do I need some special software or another daemon, or does the Linux kernel handle every aspect of the GRE/IPsec tunneling? Please inform me if any extra information could be useful. Any help is greatly appreciated.
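    The Linux kernel handles GRE natively (the ip_gre module), so no extra daemon is needed for that step; the "loopback" pair is usually nothing more exotic than the pair of addresses the GRE endpoints use as tunnel source and destination, which is also what the IPsec policy has to match. A minimal sketch with invented placeholder addresses (substitute the ones from the form):

        # 10.0.0.1 = local "loopback", 10.0.0.2 = remote "loopback" (the pair protected by the IPsec SAs)
        ip addr add 10.0.0.1/32 dev lo
        ip tunnel add gre1 mode gre local 10.0.0.1 remote 10.0.0.2 ttl 255
        # 172.16.0.1/30 = the GRE peer-to-peer pair; 192.0.2.0/24 = the remote VPN range
        ip addr add 172.16.0.1/30 dev gre1
        ip link set gre1 up
        ip route add 192.0.2.0/24 dev gre1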


  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern, YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK with a service restart; however, there is a warning about having LiveUpdate Administrator installed. Firstly, I have no idea if I have this installed - how do I check? And secondly, can I just ditch these old files and restart?

    Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia

    For those trying to assist me, I thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (smc -stop) from the Run prompt, as I lack the admin creds to get in. Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470MB. I suppose my last question, for those more experienced than myself, is: can I simply remove, say, the two oldest virus definition folders without completely foobaring Endpoint Protection and the server?

    Regards, Gus
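    If the older dated folders really are safe to drop, the mechanics would look something like the following. This is a rough sketch only: it assumes admin rights you say you don't currently have, and the folder name is just an example of one of the old YYYYMMDD.xxx directories, so take a backup (or rename rather than delete) first.

        smc -stop
        rd /s /q "C:\Program Files\Common Files\Symantec Shared\VirusDefs\20110101.002"
        smc -start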


  • suggestions for firewall/router project using *BSD or Linux

    - by Adeodatus
    Hi All, I have a project in mind and I'd love to hear some ideas on open source solutions using COTS hardware. I have a few 24- and/or 48-port managed layer 2 switches with customers potentially on each port (though it's usually about 20-30). Right now each switch has a bridged network and backhauls the traffic to our core, to a centralized DHCP server. I need to move them to a NAT solution and, while doing this, I'd like to protect the customers on each port from the customer traffic on the other ports. I also need to be able to port forward from the public side of the firewall/NAT box to specific hardware on the inside of the NAT machine (easy enough, I know). My first thought is to build an appliance-like box (the fewer moving parts the better) that can do filtering and NAT, with an RFC 1918 address range being handed out via a DHCP server on the appliance. A caching DNS server on the appliance would be a plus, since we backhaul everything to the core. I'd like to run FreeBSD but I'm open. Now, to try to limit the broadcast traffic that's visible, I was thinking of making each port on the switch a different VLAN and having the switch trunk to the private NIC on the FreeBSD appliance. I'd probably need to do some magic on the FreeBSD NIC to get this working, but it should be possible. We have the parts to build these systems. So, does this make sense? Are there any other solutions out there that we don't have to spend money on but can use our parts to create? Are there any good distros that could do this already (m0n0wall)? I may or may not admin this solution, so a secure web configuration and management tool would be a plus in the other admins' minds. Thoughts?
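    A very rough sketch of what the FreeBSD side could look like, with made-up interface names and addresses: the per-port VLANs arrive on the trunk NIC as separate vlan interfaces, each with its own subnet and DHCP scope, and pf does the NAT plus any public-side port forwards (isolation rules between the customer VLANs would live in the same pf.conf).

        # /etc/rc.conf (fragment)
        gateway_enable="YES"
        pf_enable="YES"
        ifconfig_em0="DHCP"                                        # public side
        cloned_interfaces="vlan10 vlan11"
        ifconfig_vlan10="inet 192.168.10.1/24 vlan 10 vlandev em1"
        ifconfig_vlan11="inet 192.168.11.1/24 vlan 11 vlandev em1"

        # /etc/pf.conf (fragment)
        ext_if = "em0"
        nat on $ext_if inet from 192.168.0.0/16 to any -> ($ext_if)
        # example port forward from the public side to a host on one of the VLANs
        rdr on $ext_if proto tcp from any to ($ext_if) port 8080 -> 192.168.10.10 port 80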


  • Problem with wireless networking

    - by Rodnower
    Hello, I have Atheros WiFi hardware, an Intel chipset, a Gigabyte laptop and CentOS 5 installed. Now I am trying to use the wireless network and am getting problems. First of all, I should say that I have 2 OSes on my laptop, and when I boot Windows XP I can still access the wireless network. My first attempt was to activate the wlan0 interface in System - Administration - Network, but I get:

        Determining IP information for wlan0... failed.

    My second attempt was also unsuccessful:

        [root 1 network-scripts]# ifup-wireless
        Error : unrecognised wireless request "off"

    The relevant output of iwconfig is:

        Warning: Driver for device wlan0 recommend version 21 of Wireless Extension,
        but has been compiled with version 20, therefore some driver features may not be available...

        wlan0     IEEE 802.11  ESSID:""
                  Mode:Managed  Frequency:2.462 GHz  Access Point: Not-Associated
                  Tx-Power=27 dBm
                  Retry min limit:7  RTS thr:off  Fragment thr=2352 B
                  Encryption key:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    (output not in the original format)

    The same things happen even if I do modprobe wlan0 (this does not give an error). It is important to say that modprobe did not succeed in finding ath_pci, so I decided to download the latest version of the madwifi driver from http://madwifi-project.org. I extracted it, but when I run make, this is what I get:

        [root 1 madwifi-0.9.4]# make
        /bin/sh: line 0: cd: /lib/modules/2.6.18-164.el5/build: No such file or directory
        Makefile.inc:66: * /lib/modules/2.6.18-164.el5/build is missing, please set KERNELPATH. Stop.

    I tried to set KERNELPATH, but I think that it was incorrect:

        [root 1 madwifi-0.9.4]# make KERNELPATH=/lib/modules/2.6.18-164.el5/kernel/
        /bin/sh: cc: command not found
        Makefile.inc:81: * Cannot detect kernel version - please check compiler and KERNELPATH. Stop.

    Does anyone have any ideas? Thank you very much in advance.
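    Both errors in that make output point at missing build tools rather than at madwifi itself: "cc: command not found" means gcc isn't installed, and the missing /lib/modules/.../build directory means the kernel headers aren't there. A sketch of the usual CentOS 5 fix (the exact directory name under /usr/src/kernels depends on your kernel version and architecture, so adjust it):

        yum install gcc kernel-devel
        make KERNELPATH=/usr/src/kernels/2.6.18-164.el5-i686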


  • Getting a TTY in a Connectback Shell

    - by Asad R.
    I'm often asked by friends to help with small Linux problems, and more often than not I'm required to login to the remote system. Usually there are a lot of issues with making an account and logging in (sometimes the box is behind a NAT device, sometimes SSHD isn't installed, etc.) so I usually just ask them to make a connect-back shell using netcat (nc -e /bin/bash ). If they don't have netcat I can just ask them to grab a copy of a statically compiled binary which isn't that hard or time consuming to download and run. Though this works well enough for me to enter simple commands, I can't run any apps that require a tty (vi, for example) and can't use any job control functions. I managed to bypass this issue by running in.telnetd with a few arguments within the connect-back shell that would assign me a terminal and drop me to a shell. Unfortunately in.telnetd isn't usually installed by default on most systems. What's the easiest way to get a fully functional connect-back terminal shell without requiring any non-standard packages? (A small C program that does the job would be fine as well, I just can't seem to find much documentation on how a TTY is assigned/allocated. A solution that doesn't require me to plough through the source code for SSHD and TELNETD would be nice :))
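    If the target box has Python (most do), the old pty trick gets a working terminal inside the netcat shell without installing anything extra; the script utility is an alternative where it's present. These are just sketches of the usual approaches, not the only way to do it:

        python -c 'import pty; pty.spawn("/bin/bash")'
        # or, where util-linux's script is available:
        script -q /dev/null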


  • Changing the interface language in Windows 7 Home Premium

    - by Cristián Romo
    A friend of mine has recently purchased a laptop in the U.S. that has Windows 7 Home Premium on it with an English interface. Since he is not a native English speaker, I'm trying to change the interface language to Traditional Chinese for him. I've looked through the Control Panel in search of something that might let me change the interface language. Naturally, I looked at the Region and Language section and managed to change the formats the computer uses and install a working keyboard, but I haven't found a way to change the interface language. Upon doing some research, I found out that there are two kinds of interface packs: Multilingual User Interface (MUI) packs and Language Interface Packs (LIP). It seems that MUIs can only be installed through Windows Update, so I looked through the list of updates. To my dismay, the language packs are not present; the optional updates tab doesn't even show up. Many sites show a drop-down menu under the Keyboards and Languages tab in the Region and Language options, yet it doesn't show up for me. We also don't have the Windows 7 DVD, which might contain this useful file. As far as LIPs go, I can't find one in Chinese at all, let alone Traditional Chinese. Can the interface language be changed in Home Premium at all? If it can, how would I do so?


  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: The private shopping website GILT sends periodic update emails from giltgroupe.bounce.ed10.net; however, all of the mails are signed with the domain keys of giltgroupe.com:

        mailed-by giltgroupe.bounce.ed10.net
        signed-by giltgroupe.com

    My story: I couldn't manage to sign x.com with y.com's domain key using dk-filter under Debian Lenny with Postfix. If I try to init the dk-filter service with the following arguments:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    the dk-filter service signs with domain x.com (d=x.com). If I change the daemon arguments as follows:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed. The dk-keys.conf file is as follows:

        *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM and it works perfectly; however, DK doesn't seem to work. I don't have any problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience or advice on how to make it possible to sign emails from multiple domains with a single chosen domain's key?


  • How to auto-cc a system email account any time a user creates an appointment

    - by Ferdy
    I will not bother explaining my full architecture or reasons for wanting this in order to keep this question short: Is it possible to auto-cc a certain email account any time a Exchange user creates an appointment or meeting in his own calendar? Is it possible using rules? Our Exchange 2007 server is outsourced, I cannot change the configuration or install plugins server-side Preferably, it still should work server-side, because users may use the Outlook client but also Outlook Web Access Is there any other way, perhaps using group policies? My conclusion so far is that the only viable way to accomplish this is to build an Outlook add-on. The problem there is that it will need to be managed for thousands of desktop users and that the add-on will not work when using another client (OWA, mobile). An alternative architecture could be to pull the information from the user's calendar on a scheduled basis. Given that we are talking about a lot of users, scalability is a major issue, this has also been confirmed by Microsoft. Can you confirm that my thinking is correct or do you have any other solutions?


  • Mac Management Without Permission and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and asked someone some questions about usage scenarios when using the MacBooks. I asked someone more knowledgeable than I about whether it was possible for my Mac to be taken over if I were visiting another site for a conference or if I went on a wifi network at a local coffee house with policies from an OS X Server with workgroup manager (either legit for the site or someone running a version of OS X Server on hardware they have hidden somewhere on the network), which apparently could be set up to do things like limit my access to Finder or impose other neat whiz-bang management features. He said that it is indeed possible for it to happen as it would be assigned via the DHCP server and the OS X server would assume my Mac is a guest and could hand out restrictions and apparently my Mac will happily accept them without notifying me or giving me an option, unlike Windows which I believe would need to be joined to a domain before it becomes "managed" by Active Directory. So my question is as network admins and sysadmins with users traveling with MacBooks, is there a way to reasonably protect your users from having their machines hijacked without resorting to just turning off networking all the time? Or isn't this much of a security hazard? What threat does this pose to the road warriors in your businesses?


  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up OpenVPN, but now I want to integrate a user/pass authentication method. Even though I haven't added auth-nocache to the server config, whenever I try to connect it returns the following message on the client side:

        ERROR: could not read Auth username from stdin

    My server.conf file contains basic stuff; everything works until I try to implement this form of authentication.

        mode server
        dev tun
        proto tcp
        port 1194
        keepalive 10 120
        plugin /usr/lib/openvpn/openvpn-auth-pam.so login
        client-cert-not-required
        username-as-common-name
        auth-user-pass-verify /etc/openvpn/auth.pl via-env
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        user nobody
        group nogroup
        server 10.8.0.0 255.255.255.0
        persist-key
        persist-tun
        #persist-local-ip
        status openvpn-status.log
        verb 3
        client-to-client
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.8.0.1"
        log-append /var/log/openvpn
        comp-lzo

    I searched all over the net for a solution, and all the answers seem to be related to the auth-nocache parameter, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication. A failed authentication should result in an exit 1, while a successful one should result in an exit 0. For testing, that script (auth.pl) returns exit 0 no matter what the input is, but it seems that the file is not even executed before the error is raised. auth.pl file contents:

        #!/usr/bin/perl

        my $user = $ENV{username};
        my $passwd = $ENV{password};

        printf("$user : $passwd\n");
        exit 0;

    Any ideas?
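    That particular error is printed on the client when OpenVPN needs the username/password but cannot prompt for them, typically because it was started as a daemon or service with no terminal attached, so the server never even gets as far as running auth.pl. The usual ways around it are to run the client interactively, or to point auth-user-pass at a credentials file. A minimal client-side sketch, with a placeholder hostname and file path:

        client
        dev tun
        proto tcp
        remote vpn.example.com 1194
        ca ca.crt
        auth-user-pass /etc/openvpn/credentials   # file: username on line 1, password on line 2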


  • Hard drive failed, suspected filesystem corruption, still cannot salvage any data from harddrive

    - by Hippy-Head
    Firstly, I am terribly sorry if this is a duplicate, but I couldn't find a similar issue to mine, so here goes. I have a 1TB HDD, bought around 8 months ago, used as a backup hard drive. I had not used the drive for a period of time whatsoever, and when I was trying to get back to some files on it, it was completely wiped, just like that. At first it would not boot; I tried everything from command-line chkdsk and filesystem recovery software to rebuild it. After a few attempts I managed to initialize it, which at that point was an achievement in itself. The problems started when I tried to recover the data inside. I have used a LOT of software, free and commercial, on both Mac and Windows, with the help of cmd or Terminal commands, but no data of any kind was recovered, even after leaving it to scan thoroughly for around 9-10 hours all night, sometimes longer, with no results at all. I am somewhat desperate; I am usually good at retrieving data from corrupt hard drives, but this is not the case here. Call me paranoid, but I do not want to give it to someone to fix for me, as I have a lot of photos and personal stuff that I do not want anyone to see.
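    If you take another run at it, one hedged suggestion: image the drive once with GNU ddrescue and point the recovery tools at the image rather than the failing disk, so every scan isn't another 10 hours of wear on the hardware. The device and file names below are placeholders:

        ddrescue /dev/sdX /path/to/disk.img /path/to/disk.log
        photorec /path/to/disk.img     # carve files out of the image rather than the disk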


  • How to handle server failure in an n-tier architecture?

    - by andy
    Imagine I have an n-tier architecture in an auto-scaled cloud environment with say: a load balancer in a failover pair reverse proxy tier web app tier db tier Each tier needs to connect to the instances in the tier below. What are the standard ways of connecting tiers to make them resilient to failure of nodes in each tier? i.e. how does each tier get the IP addresses of each node in the tier below? For example if all reverse proxies should route traffic to all web app nodes, how could they be set up so that they don't send traffic to dead web app nodes, and so that when new web app nodes are brought online they can send traffic to it? I could run an agent that would update all the configs to all the nodes, but it seems inefficient. I could put an LB pair between each tier, so the tier above only needs to connect to the load balancers, but how do I handle the problem of the LBs dying? This just seems to shunt the problem of tier A needing to know the IPs of all nodes in tier B, to all nodes in tier A needing to know the IPs of all LBs between tiers A and B. For some applications, they can implement retry logic if they contact a node in the tier below that doesn't respond, but is there any way that some middleware could direct traffic to only live nodes in the following tier? If I was hosting on AWS I could use an ELB between tiers, but I want to know how I could achieve the same functionality myself. I've read (briefly) about heartbeat and keepalived - are these relevant here? What are the virtual IPs they talk about and how are they managed? Are there still single points of failure using them?
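    On the "virtual IPs" point: with keepalived, the pair of load balancers shares one address via VRRP. The tier above is configured only with that VIP, and whichever box currently holds the MASTER role answers for it, so a dead LB means a failover of the address rather than a config push to every node. A minimal sketch (addresses and interface names invented):

        vrrp_instance VI_1 {
            state MASTER              # BACKUP on the second box
            interface eth0
            virtual_router_id 51
            priority 100              # lower on the BACKUP box
            advert_int 1
            virtual_ipaddress {
                10.0.1.100            # the VIP the tier above points at
            }
        }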


  • Windows 7 wireless not seeing any networks

    - by jkohlhepp
    I think I have managed to confuse Windows 7. When I did the install, I had the network cable plugged in to my router, but the wireless card was also enabled. During the install, Windows 7 seemed to see my wireless network and even asked me for the WEP key. I know that it used the WEP key because I initially entered an invalid one and it gave me an error. Then the network said "SoAndSoWireless Connected". However, when I unplug or disable my wired network card, then I have no internet, and it can't see any networks. When I plug in the wired network card, it says "SoAndSoWireless Connected". Under Network and Internet Network Connections I have "Local Area Connection" and "Wireless Network Connection". The wired one's status is "SoAndSoWireless" and the wireless status is "Not Connected". Also, the wireless connection can't seem to see any other wireless networks in the area and I know there are tons. My neighbors have several. I've somehow seemingly confused Windows 7 into thinking that my wired network card is my wireless card or something. Any ideas on how to un-confuse it? This is a desktop machine by the way, if it matters. EDIT: Ah, I think part of the problem is that I named my network accidentally the same as the name of the wireless network being broadcast by the wireless router. So that might be why it says that name on the hard-wired connection. Perhaps the drivers just are completely not working for the wireless card. Thanks, ~ Justin


  • Issue with InnoDB engine while enabling and [ skip-innodb ] - [ERROR] Plugin 'InnoDB' init function returned error

    - by Ahn
    How do I enable InnoDB after it was previously disabled with the skip-innodb option?

    Case 1: InnoDB disabled with the skip-innodb option; show engines gives the following:

        Engine | Support ...
        | InnoDB | NO ......

    Case 2: As I want to enable InnoDB, I commented out the skip-innodb option (#skip-innodb) and restarted. But now show engines does not even list the InnoDB engine. Why?

    MySQL version: 5.1.57-community-log
    OS: CentOS release 5.7 (Final)

    Log:

        120622 13:06:36 InnoDB: Initializing buffer pool, size = 8.0M
        120622 13:06:36 InnoDB: Completed initialization of buffer pool
        InnoDB: No valid checkpoint found.
        InnoDB: If this error appears when you are creating an InnoDB database,
        InnoDB: the problem may be that during an earlier attempt you managed
        InnoDB: to create the InnoDB data files, but log file creation failed.
        InnoDB: If that is the case, please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/error-creating-innodb.html
        120622 13:06:36 [ERROR] Plugin 'InnoDB' init function returned error.
        120622 13:06:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        120622 13:06:36 [Note] Event Scheduler: Loaded 0 events
        120622 13:06:36 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.57-community-log'  socket: '/data/mysqlsnd/mysql.sock1'  port: 3307  MySQL Community Server (GPL)
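    The "No valid checkpoint found" block in that log usually means InnoDB is seeing leftover ib_logfile*/ibdata files that don't match each other, which is easy to end up with after a period of running with skip-innodb or after changing innodb_log_file_size. A hedged sketch of the usual first step, assuming the datadir from your my.cnf (the socket path above suggests it may not be the default /var/lib/mysql), and moving the log files aside rather than deleting them:

        service mysqld stop
        mv /path/to/datadir/ib_logfile0 /path/to/datadir/ib_logfile0.bak
        mv /path/to/datadir/ib_logfile1 /path/to/datadir/ib_logfile1.bak
        service mysqld start     # InnoDB recreates the log files if the engine loads cleanly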


  • About Load average in htop, how to decide if it's still doing ok?

    - by Joe Huang
    I use 'htop' to monitor my web server. It has been quite loaded recently, and the load average is showing something like this:

        Load average: 3.10 2.56 1.63

    I searched the web about these numbers and found an article about them: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages In the article, it says that if I have 2 CPUs, 2.0 means 100% CPU utilization. My VPS has two CPUs, so what does 3.1 mean? How can it exceed 100% CPU utilization? And from these numbers, does it mean I should be wary about the load now? The performance seems totally fine, and this is a managed VPS; the hosting company has not sent me any warning about it. During the day, the load average always shows these high numbers... here is another snapshot taken while writing:

        Load average: 3.03 2.77 1.97
        Load average: 0.41 1.29 1.60   <---- 5 more minutes later

    So I am wondering how much room is left for this site to grow with the current configuration? What kind of proactive actions should I take in advance? I don't want to wait until the server bursts. Thanks.
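    The arithmetic is per core: with 2 CPUs the "100%" mark is 2.00, so a one-minute load of 3.10 works out to 3.10 / 2 = 1.55, meaning roughly one runnable task was waiting for a core on average during that minute, while the later 0.41 means the machine was mostly idle. A quick way to confirm how many cores the numbers should be divided by:

        grep -c ^processor /proc/cpuinfo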


  • Cannot Install Bootcamp on Mac Fusion Drive

    - by user3377019
    I have two drives installed in my mid-2013 MacBook Pro, which is running OS X Mavericks. I set the two drives (an SSD and a regular HD) up as a Fusion Drive (following the DIY Fusion Drive guides out there). I went ahead and created a 40GB partition to host my Windows install. I removed the SSD, installed Windows, then reinstalled the SSD. As soon as the SSD was placed back in, the Boot Camp partition stopped booting: I get a blinking cursor on a black screen. I checked the partition info in Disk Utility and it appears that the Windows partition is not marked bootable. Below is some info I managed to gather. I am wondering if there is a way to fix the partition table so my Boot Camp partition will boot.

        /dev/disk0
           #:                   TYPE NAME           SIZE        IDENTIFIER
           0:  GUID_partition_scheme                *120.0 GB   disk0
           1:                    EFI EFI            209.7 MB    disk0s1
           2:      Apple_CoreStorage                119.7 GB    disk0s2
           3:             Apple_Boot Boot OS X      134.2 MB    disk0s3

        /dev/disk1
           #:                   TYPE NAME           SIZE        IDENTIFIER
           0:  GUID_partition_scheme                *500.1 GB   disk1
           1:                    EFI EFI            209.7 MB    disk1s1
           2:   Microsoft Basic Data BOOTCAMP       40.0 GB     disk1s2
           3:      Apple_CoreStorage                459.2 GB    disk1s3
           4:             Apple_Boot Boot OS X      650.0 MB    disk1s4

        /dev/disk2
           #:                   TYPE NAME           SIZE        IDENTIFIER
           0:              Apple_HFS Macintosh HD   *573.4 GB   disk2

        Name                        : BOOTCAMP
        Type                        : Partition
        Disk Identifier             : disk1s2
        Mount Point                 : /Volumes/BOOTCAMP
        File System                 : Windows NT File System (NTFS)
        Connection Bus              : SATA
        Device Tree                 : IODeviceTree:/PCI0@0/SATA@1F,2/PRT1@1/PMP@0
        Writable                    : No
        Universal Unique Identifier : 584BAED6-4C46-4F18-93B3-957F6E27003C
        Capacity                    : 40 GB (39,998,980,096 Bytes)
        Free Space                  : 16.34 GB (16,339,972,096 Bytes)
        Used                        : 23.66 GB (23,659,003,904 Bytes)
        Number of Files             : 86,424
        Number of Folders           : 0
        Owners Enabled              : No
        Can Turn Owners Off         : No
        Can Be Formatted            : No
        Bootable                    : No
        Supports Journaling         : No
        Journaled                   : No
        Disk Number                 : 1
        Partition Number            : 2


  • Mac OS X: which folders should ClamXav Sentry watch?

    - by trolle3000
    I'm using ClamXav on my Mac. I've read this, and I am aware of the whole Macs-need-no-AV-but-they-do-anyway discussion. I guess that's why I would feel like a real jerk if I somehow managed to compromise my system! So ClamXav has been downloaded and ClamXav Sentry set up to start on log-in, but it doesn't really do anything before you tell it to. Specifically, you have to tell it which folders to watch for viruses, so I'm wondering: where are good places to look? Currently it's been set up to watch the following places.

    In the home folder:

        ~/Downloads
        ~/Library/Caches
        ~/Library/Contextual Menu Items
        ~/Library/Cookies
        ~/Library/Internet Plug-Ins
        ~/Library/LaunchAgents

    In my system folder:

        /Library/Application Support
        /Library/Caches
        /Library/Contextual Menu Items
        /Library/Cookies
        /Library/Internet Plug-Ins
        /Library/LaunchAgents
        /Library/LaunchDaemons
        /Library/StartupItems

    Basically, this is 100% conjecture. All (or most of) the folders have something to do with the Internet and things that start up automatically, so I'm guessing that's where viruses go. But still, the question: which folders should ClamXav Sentry watch, if any? FYI, I'm not using any mail applications, but please include that in your answer for anyone who might be interested.


  • How small (spec wise) can a virtual machine be and still boot up and run some sort of OS?

    - by IllvilJa
    One of the advantages of virtual machines is that you can be very flexible with their sizes. If the host system permits it, you can have a very large virtual machine with a lot of virtual RAM and disk. You can also go the other way around, give the virtual machine a very modest amount of RAM and disk, and then choose and configure the OS appropriately. The question is: how small a virtual machine have people managed to set up (and get to both boot up and run)? Virtual machines doing something useful are preferable, even though I know "useful" in this context is awfully subjective, but laboratory cases with a configuration stripped beyond common sense could be interesting as well, just to see what people manage to boot and run. It's quite an open-ended and academic question, but think of it: an extremely small VM (which still does something useful) takes very little memory and disk and can be quite quickly saved to and restored from disk. If it's also gentle on CPU resources, one might consider having a huge number of such VMs up and running on a host. (Imagine a VM running just an old Commodore 64 or Commodore Amiga in it. OK, way wrong CPU architecture for modern virtualization software running on an x86-based PC, but still an interesting thought. You could have quite a few such small VMs running on a modern PC.)


  • Synergy, OSX client, Windows 7 server - No mouse on client

    - by Majenko
    I have the following Synergy setup:

        +------------++------------++------------++------------+
        | Mac        || Win 7      || Ubuntu 1   || Ubuntu 2   |
        | c          || s          || c          || c          |
        +------------++------------++------------++------------+

    Mac: OS X Tiger 10.4.11 (G3). Win 7: Windows 7 Ultimate x64. Ubuntu 1 & Ubuntu 2: Desktop 10.10. Now, everything works nicely between the Win 7 server and the two Ubuntu machines. What doesn't work is the Mac. I am running the very latest Synergy (1.4.2, downloaded last night). As far as the Mac is concerned, everything should be working fine:

        Synergy 1.4.2 Client on Darwin 8.11.0 Darwin Kernel Version 8.11.0: Wed Oct 10 18:26:00 PDT 2007; root:xnu-792.24.17~1/RELEASE_PPC Power Macintosh
        Unable to connect to pasteboard. Clipboard sharing disabled.
        2011-03-22 09:32:56.725 synergyc[406] Can't register screen saver connection 'com.apple.ScreenSaverDaemon'
        started client
        connecting to '192.168.0.202': 192.168.0.202:24800
        connected to server
        entering screen
        leaving screen
        entering screen
        leaving screen

    But it's just not interacting with the display at all (the mouse doesn't move, the keyboard does nothing). I have tried running ktrace on synergyc and examining the dump, and the only clue I found was that it was trying to interact with the Accessibility API, which was disabled at first. Enabling Accessibility has had no effect whatsoever (it has only stopped the failure to open /var/db/.AccessibilityAPIEnabled in the ktrace dump). Has anyone managed to get this to work on OS X Tiger yet? I used to run the server on OS X and have the Windows/Unix machines as clients, but as my Windows machine is now a laptop I'd like that to be the server.


  • Issues with ASP.NET via Apache/mod_mono on Ubuntu.

    - by Matthew Scharley
    I run an Ubuntu test server, and my deployment system is also Ubuntu. I've recently been trying to get ASP.NET to work on my test server so that we can take it live. I managed to get it installed and configured properly, and my application is installed and running, but I can't get anything to work. The error I keep receiving is below; if anyone has any clue what might be going on, help would be greatly appreciated.

        Server Error in '/' Application

        Standard output has not been redirected or process has not been started.

        Description: HTTP 500. Error processing request.

        Stack Trace:

        System.InvalidOperationException: Standard output has not been redirected or process has not been started.
          at System.Diagnostics.Process.CancelErrorRead () [0x00000]
          at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:CancelErrorRead ()
          at Mono.CSharp.CSharpCodeCompiler.CompileFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at Mono.CSharp.CSharpCodeCompiler.CompileAssemblyFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at System.CodeDom.Compiler.CodeDomProvider.CompileAssemblyFromFile (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath, System.CodeDom.Compiler.CompilerParameters options) [0x00000]
          at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath) [0x00000]
          at System.Web.Compilation.BuildManager.BuildAssembly (System.Web.VirtualPath virtualPath) [0x00000]
          at System.Web.Compilation.BuildManager.GetCompiledType (System.String virtualPath) [0x00000]
          at System.Web.HttpApplicationFactory.InitType (System.Web.HttpContext context) [0x00000]

        Version information: Mono Version: 2.0.50727.42; ASP.NET Version: 2.0.50727.42
        Apache version String: Apache/2.2.11 (Ubuntu) mod_mono/2.0 PHP/5.2.6-3ubuntu4.2 with Suhosin-Patch Server at dev Port 80

    PS: I had to add three DLLs to the /bin directory in my application, copying them from Windows, because I couldn't find them in any of Mono's packages. This might or might not be causing problems, I don't know. The list that I had to add is:

        System.Web.Abstractions
        System.Web.Routing
        System.Web.Mvc
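    One thing worth checking, offered as a guess rather than a confirmed fix: that stack trace is Mono failing to start the external C# compiler at all (CancelErrorRead on a process that never ran), which can simply mean the compiler for the 2.0 profile isn't installed. On Debian/Ubuntu systems of that era it lived in its own package, so a sketch of the check would be:

        sudo apt-get install mono-gmcs
        sudo /etc/init.d/apache2 restart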


  • Linux group permissions getting overwritten by owner

    - by Andy
    I am not a user of Linux, but I am encountering some permissions problems with it that I hope someone can shed some light on.

    Bit of background: a colleague of mine has a Linux box (running Debian, I believe) with an SVN repository on it. The owner of the repository directories and files is my colleague. We are both members of a group called 'users'. He manages several projects, both Linux and Windows apps, while I have one Windows app. For the Windows apps, we both use TortoiseSVN via an SSH link to commit/update. Running 'ls -l' shows the repository files and folders on the Linux box to have the following permissions:

        -rwxrwx--- john users

    However, when my colleague commits to the repository, the permissions change to:

        -rwxrwx--- john john

    This then means I get 'Permission denied' when trying to access the repository myself, as it appears that the group permissions have been overwritten with owner-only permissions. To fix this, a 'chown -R' command is applied to the files/folders to set the permissions back to owner/group, but each time he writes to the repository the issue repeats. I'm not sure if this is solely an SVN problem or a more fundamental owner/group issue. Does anyone have any clue how to stop this happening, or where to go and look? I'm trying to help out my colleague, who is having some trouble resolving this issue. Apologies for the vague info; I hope I have conveyed the problem clearly enough. Like I say, I am not a Linux user; I have only put down what I have managed to pick up from looking over his shoulder. Thanks for any pointers I can pass on!
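    The owner column changing is expected (whoever commits owns the files they create); what matters is that the group stays 'users' and stays group-writable. The usual fix is the setgid bit on the repository directories plus a sensible umask for the committing accounts, sketched below with a placeholder path:

        chgrp -R users /path/to/repo
        chmod -R g+rw /path/to/repo
        find /path/to/repo -type d -exec chmod g+s {} \;   # new files/dirs inherit the 'users' group
        # and make sure each committing account's umask is 002 (or 007) so group write survives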

