Search Results


  • Does any Certificate Authority support both SAN and wildcards?

    - by nicholas a. evans
    My basic quandary is that wildcard certificates don't support subdomains of subdomains, nor do they help with alternate domain names. Basically, if my CN is example.com, I want a Subject Alternative Name field that looks roughly like so:

        DNS:example.com
        DNS:*.example.com
        DNS:*.beta.example.com
        DNS:example.net
        DNS:*.example.net
        DNS:*.beta.example.net

    Using a self-signed cert, I verified that the browsers will work just fine with this. Unfortunately, none of the Certificate Authorities that I looked into (Thawte, GoDaddy, Verisign, Digicert) seemed to support both wildcard certs and Subject Alternative Names (sometimes referred to as "Multiple Domain UCC"). I even called up GoDaddy tech support to confirm. Is there a CA (trusted by 99% of browsers) that supports wildcards in the Subject Alternative Name? One little restriction: I'm saddled with Amazon EC2's single-Elastic-IP-per-instance limitation. Here are what I see as my backup plans:

    1. Set up three extra EC2 instances, each configured for a different IP address and cert, and nginx reverse proxy from them into the app server(s). This introduces latency(?), and even the cheapest EC2 instance isn't that cheap.
    2. Instead of dedicated reverse-proxy instances, set up four or more almost identical EC2 app servers, with nginx using the port to determine which cert to deliver, and use haproxy to distribute the traffic amongst them. Complicated to configure and manage? I'm not using the cheapest EC2 instance type for my app servers, so if I don't need 4+ app servers for the load, this raises the cost.
    3. Set up an external server (outside of EC2) that doesn't have EC2's Elastic IP restrictions, set up all of the alternate IP addresses and certificates on that server, and nginx reverse proxy from that server into the EC2 app servers. Extra IP addresses are almost free (you still need to pay for the server, of course), but they don't come with the robust "elasticity" that Amazon's Elastic IPs provide, and there is even more latency than in the first scenario.

    Are these approaches crazy or reasonable? Do you have another one to suggest?
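    For reference, the self-signed test described above can be reproduced in one command on newer OpenSSL releases (1.1.1 or later added the -addext flag; older releases need the same subjectAltName line in a config file instead). A minimal sketch, using only the names from the question:

        openssl req -x509 -nodes -newkey rsa:2048 \
          -keyout example.key -out example.crt -days 365 \
          -subj "/CN=example.com" \
          -addext "subjectAltName=DNS:example.com,DNS:*.example.com,DNS:*.beta.example.com,DNS:example.net,DNS:*.example.net,DNS:*.beta.example.net"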

    Read the article

  • How to stop SophosAV from scanning directories under source control

    - by user26453
    From googling it seems it's well known that SophosAV, as well as other AV programs, have issues with how they interact with and can inhibit source control utilities like TortoiseHG or TortoiseSVN. One solution is to exclude directories under source control from on-access scanning, as detailed here on Sophos's support site. There is a corollary article that mentions some issues related to this, namely the need to place multiple entries for exclusions based on the possibility of the location being accessed through the short vs. long name (e.g., Progra~1 vs. "Program Files"). One other twist is that I am using a junction to relocate my user directory, C:\Users\Username, to a second hard drive, E:. Since I am not sure how this interacts, I have included the source control directory as it is nested in both locations. As a result, I have added the two exclusions to the on-access scanning exclusions (and, to be on the safe side, to the on-demand exclusions as well, although those should only come into play when I select a parent directory of the exclusion to be scanned on-demand, but still). You'll notice I have no need to add extra exclusions for those locations based on short vs. long name distinctions. The two exclusions I have, then, for both the on-access and on-demand scanning exclusions, are:

        C:\Users\Username\source-control-directory
        E:\source-control-directory

    However, this does not seem to work: TortoiseHG still lags terribly in response to any request, as the AV software starts scanning when the directory is accessed via TortoiseHG. I can verify without a doubt that Sophos is causing the problems: if I completely disable on-access scanning, TortoiseHG responds very fast to all operations. I cannot leave this disabled, obviously, but since the exclusions don't seem to be working, what next?

    Read the article

  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests: The Flash player keeps a cross-browser history of the domain names of the Flash-sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again. Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user defined maximum lifetime can be set in the Flash preferences either. These not being HTTP cookies, they are (of course) not blocked by a browser's cookies preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: [..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player. Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default Allow third-party Flash content to store data on your computer. Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete: "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. 
PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works." When selecting None (zero KB) for Specify the amount of disk space that website websites that you haven't yet visited can use to store information on your computer, and checking Never ask again then some sites do not work. However, the same site might work when setting it to None but without selecting Never ask again, and then choose Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs. The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue. So: how to automatically delete that information? On a Mac, one can find a settings.sol file and a folder for each visited Flash-website in: $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/ Deleting the settings.sol file and all the folders in sys, removes the trail from the settings panels. However, the actual Local Shared Ojects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of: $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all others browsers as well). I don't know if that also deletes the trail of website domain names. Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash to write to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems but this may take some while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac: cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer" rm -r sys/* chmod u-w sys cd "$HOME/Library/Preferences/Macromedia/Flash Player" # preserve the randomly named subfolders (only preserving the latest would suffice; see below) rm -r \#SharedObjects/*/* chmod -R u-w \#SharedObjects I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone? Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X): When blocking the sys folder (even when leaving the #SharedObjects folder writable) then YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. 
This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again. When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write protected) is important for some sites. When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name. For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (like based on a list from, for example, AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling Allow third-party Flash content to store data on your computer might have the very same result. Any experience anyone? (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)
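    For the "remove it automatically" option above, a minimal cleanup sketch (to be run from launchd, cron, or at log-off) could be as simple as the following. The paths are the Mac OS X ones from this question; whether it is safe to run while a browser with an active Flash instance is open is exactly the open question above, so treat it as an illustration rather than a tested recommendation:

        #!/bin/bash
        # Sketch: wipe the Flash history trail and Local Shared Objects for the current user.
        FP="$HOME/Library/Preferences/Macromedia/Flash Player"
        rm -rf "$FP/macromedia.com/support/flashplayer/sys/"*   # per-site settings, incl. settings.sol
        rm -rf "$FP/#SharedObjects/"*/*                         # the LSOs themselves (keeps the random top-level folder)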

    Read the article

  • SonicOS Enhanced 5.8.1.2 L2TP VPN Authentication Failed

    - by Dean A. Vassallo
    I have a SonicWall TZ 215 running SonicOS Enhanced 5.8.1.2-6o. I have configured the L2TP VPN using the default crypto suite ESP: 3DES/HMAC SHA1 (IKE). The proposals are as follows:

        IKE (Phase 1) Proposal
          DH Group: Group 2
          Encryption: 3DES
          Authentication: SHA1
          Life Time (seconds): 28800

        IPsec (Phase 2) Proposal
          Protocol: ESP
          Encryption: 3DES
          Authentication: SHA1
          Enable Perfect Forward Secrecy: DISABLED
          Life Time (seconds): 28800

    When attempting to connect via my Mac OS X client I get an authentication error. It appears to pass the pre-authentication but fails to complete. I am at a complete loss. I reconfigured from scratch multiple times and used simple usernames and passwords to verify this wasn't a miskeyed password issue. Here are the logs (note: the IP has been removed for privacy):

        7/1/13 8:19:05.174 PM pppd[1268]: setup_security_context server port: 0x1503
        7/1/13 8:19:05.190 PM pppd[1268]: publish_entry SCDSet() failed: Success!
        7/1/13 8:19:05.191 PM pppd[1268]: publish_entry SCDSet() failed: Success!
        7/1/13 8:19:05.191 PM pppd[1268]: pppd 2.4.2 (Apple version 727.1.1) started by dean, uid 501
        7/1/13 8:19:05.192 PM pppd[1268]: L2TP connecting to server ‘0.0.0.0’ (0.0.0.0)...
        7/1/13 8:19:05.193 PM pppd[1268]: IPSec connection started
        7/1/13 8:19:05.208 PM racoon[1269]: accepted connection on vpn control socket.
        7/1/13 8:19:05.209 PM racoon[1269]: Connecting.
        7/1/13 8:19:05.209 PM racoon[1269]: IPSec Phase 1 started (Initiated by me).
        7/1/13 8:19:05.209 PM racoon[1269]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
        7/1/13 8:19:05.209 PM racoon[1269]: >>>>> phase change status = Phase 1 started by us
        7/1/13 8:19:05.231 PM racoon[1269]: >>>>> phase change status = Phase 1 started by peer
        7/1/13 8:19:05.231 PM racoon[1269]: IKE Packet: receive success. (Initiator, Main-Mode message 2).
        7/1/13 8:19:05.234 PM racoon[1269]: IKE Packet: transmit success. (Initiator, Main-Mode message 3).
        7/1/13 8:19:05.293 PM racoon[1269]: IKE Packet: receive success. (Initiator, Main-Mode message 4).
        7/1/13 8:19:05.295 PM racoon[1269]: IKE Packet: transmit success. (Initiator, Main-Mode message 5).
        7/1/13 8:19:05.315 PM racoon[1269]: IKEv1 Phase 1 AUTH: success. (Initiator, Main-Mode Message 6).
        7/1/13 8:19:05.315 PM racoon[1269]: IKE Packet: receive success. (Initiator, Main-Mode message 6).
        7/1/13 8:19:05.315 PM racoon[1269]: IKEv1 Phase 1 Initiator: success. (Initiator, Main-Mode).
        7/1/13 8:19:05.315 PM racoon[1269]: IPSec Phase 1 established (Initiated by me).
        7/1/13 8:19:06.307 PM racoon[1269]: IPSec Phase 2 started (Initiated by me).
        7/1/13 8:19:06.307 PM racoon[1269]: >>>>> phase change status = Phase 2 started
        7/1/13 8:19:06.308 PM racoon[1269]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1).
        7/1/13 8:19:06.332 PM racoon[1269]: attribute has been modified.
        7/1/13 8:19:06.332 PM racoon[1269]: IKE Packet: receive success. (Initiator, Quick-Mode message 2).
        7/1/13 8:19:06.332 PM racoon[1269]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3).
        7/1/13 8:19:06.333 PM racoon[1269]: IKEv1 Phase 2 Initiator: success. (Initiator, Quick-Mode).
        7/1/13 8:19:06.333 PM racoon[1269]: IPSec Phase 2 established (Initiated by me).
        7/1/13 8:19:06.333 PM racoon[1269]: >>>>> phase change status = Phase 2 established
        7/1/13 8:19:06.333 PM pppd[1268]: IPSec connection established
        7/1/13 8:19:07.145 PM pppd[1268]: L2TP connection established.
        7/1/13 8:19:07.000 PM kernel[0]: ppp0: is now delegating en0 (type 0x6, family 2, sub-family 3)
        7/1/13 8:19:07.146 PM pppd[1268]: Connect: ppp0 <--> socket[34:18]
        7/1/13 8:19:08.709 PM pppd[1268]: MS-CHAPv2 mutual authentication failed.
        7/1/13 8:19:08.710 PM pppd[1268]: Connection terminated.
        7/1/13 8:19:08.710 PM pppd[1268]: L2TP disconnecting...
        7/1/13 8:19:08.711 PM pppd[1268]: L2TP disconnected
        7/1/13 8:19:08.711 PM racoon[1269]: IPSec disconnecting from server 0.0.0.0
        7/1/13 8:19:08.711 PM racoon[1269]: IKE Packet: transmit success. (Information message).
        7/1/13 8:19:08.712 PM racoon[1269]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA).
        7/1/13 8:19:08.712 PM racoon[1269]: IKE Packet: transmit success. (Information message).
        7/1/13 8:19:08.712 PM racoon[1269]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA).
        7/1/13 8:19:08.713 PM racoon[1269]: glob found no matches for path "/var/run/racoon/*.conf"
        7/1/13 8:19:08.714 PM racoon[1269]: pfkey DELETE failed: No such file or directory

    Read the article

  • Unable to SSH into EC2 instance on Fedora 17

    - by abhishek
    I did following steps But I am not able to SSH to it(Same steps work fine on Fedora 14 image). I am getting Permission denied (publickey,gssapi-keyex,gssapi-with-mic) I created new instance using fedora 17 amazon community image(ami-2ea50247). I copied my ssh keys under /home/usertest/.ssh/ after creating a usertest I have SELINUX=disabled here is Debug info: $ ssh -vvv ec2-54-243-101-41.compute-1.amazonaws.com ssh -vvv ec2-54-243-101-41.compute-1.amazonaws.com OpenSSH_5.2p1, OpenSSL 1.0.0b-fips 16 Nov 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ec2-54-243-101-41.compute-1.amazonaws.com [54.243.101.41] port 22. debug1: Connection established. debug1: identity file /home/usertest/.ssh/identity type -1 debug1: identity file /home/usertest/.ssh/id_rsa type -1 debug3: Not a RSA1 key file /home/usertest/.ssh/id_dsa. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'Proc-Type:' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'DEK-Info:' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/usertest/.ssh/id_dsa type 2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9 debug1: match: OpenSSH_5.9 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 131/256 debug2: bits set: 506/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /home/usertest/.ssh/known_hosts debug3: check_host_in_hostfile: match line 17 debug3: check_host_in_hostfile: filename /home/usertest/.ssh/known_hosts debug3: check_host_in_hostfile: match line 17 debug1: Host 'ec2-54-243-101-41.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /home/usertest/.ssh/known_hosts:17 debug2: bits set: 500/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/usertest/.ssh/identity ((nil)) debug2: key: /home/usertest/.ssh/id_rsa ((nil)) debug2: key: /home/usertest/.ssh/id_dsa (0x7f904b5ae260) debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup gssapi-with-mic debug3: remaining preferred: publickey,keyboard-interactive,password debug3: authmethod_is_enabled gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug3: Trying to reverse map address 54.243.101.41. debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. 
Minor code may provide more information debug2: we did not send a packet, disable method debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Trying private key: /home/usertest/.ssh/identity debug3: no such identity: /home/usertest/.ssh/identity debug1: Trying private key: /home/usertest/.ssh/id_rsa debug3: no such identity: /home/usertest/.ssh/id_rsa debug1: Offering public key: /home/usertest/.ssh/id_dsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
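    Two generic things worth ruling out, given the setup steps described (neither is visible in the debug output, so both are assumptions): the Fedora 17 cloud image may inject the launch key for a different stock account name than the Fedora 14 image did, and sshd will silently skip an authorized_keys file whose ownership or permissions are too loose. A sketch of the checks:

        # On the instance (via the console, or by attaching its volume to a working instance):
        chmod 700 /home/usertest/.ssh
        chmod 600 /home/usertest/.ssh/authorized_keys
        chown -R usertest:usertest /home/usertest/.ssh

        # From the client: try the image's stock account names as well (a guess -- Fedora AMIs vary)
        ssh -v ec2-user@ec2-54-243-101-41.compute-1.amazonaws.com
        ssh -v fedora@ec2-54-243-101-41.compute-1.amazonaws.com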

    Read the article

  • NRPE unable to read output, but why?

    - by ticktockhouse
    I have this problem with NRPE; all the stuff I've found so far on the net seems to point me at things I've already tried.

        # /usr/local/nagios/plugins/check_nrpe -H nrpeclient

    gives NRPE v2.12 as expected. Running the command by hand (as defined in nrpe.cfg on "nrpeclient") gives the expected response:

        nrpe.cfg:
        command[check_openmanage]=/usr/lib/nagios/plugins/additional/check_openmanage -s -e -b ctrl_driver=0 bat_charge

        "Expected response"

    But if I try to run the command from the Nagios server I get the following:

        # /usr/local/nagios/plugins/check_nrpe -H comxps -c check_openmanage
        NRPE: Unable to read output

    Can anyone think of anywhere else I might have made a mistake with this? I've done the same thing on multiple other servers with no problem. The only difference I can think of is that this box is RHEL 5 based, whereas the others are RHEL 4 based. The two bits above that I've tested are what most people seem to suggest when others have had this problem. I should mention that I get a weird error in the logs when I restart nrpe:

        nrpe[14534]: Unable to open config file '/usr/local/nagios/etc/nrpe.cfg' for reading
        nrpe[14534]: Continuing with errors...
        nrpe[14535]: Starting up daemon
        nrpe[14535]: Warning: Daemon is configured to accept command arguments from clients!
        nrpe[14535]: Listening for connections on port 5666
        nrpe[14535]: Allowing connections from: bodbck,combck,nam-bck

    Even though it's plainly reading that /usr/local/nagios/etc/nrpe.cfg file to get the stuff it's talking about further down...
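    Since the plugin runs fine by hand but NRPE returns "Unable to read output", and the differing box is RHEL 5 (where SELinux is enforcing by default), two checks that are often suggested are running the exact command line as the NRPE daemon's user and looking for SELinux denials. A hedged sketch - the paths come from the question, but the daemon user name (nagios) is an assumption:

        # Run the plugin as the daemon's user and show its exit status
        sudo -u nagios /usr/lib/nagios/plugins/additional/check_openmanage -s -e -b ctrl_driver=0 bat_charge; echo "exit=$?"

        # Is SELinux enforcing, and is it logging denials for nrpe or the plugin?
        getenforce
        grep -i -e nrpe -e check_openmanage /var/log/audit/audit.log | tail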

    Read the article

  • Developer's PC - worth getting more than 8GB RAM?

    - by Borek
    I'm building a developer PC and am wondering whether to get 8GB or 12GB. It's a Core i7-860 system, i.e., a socket 1156 motherboard with 4 slots for RAM sticks, dual channel, usually up to 16GB (as opposed to socket 1366, where 6 banks / triple channel are used). 8GB would be cheaper to get, especially because the price per GB is lower with 4x2GB compared to 2x4GB. Also, the availability of 4GB DIMMs is worse here where I live; those are the main practical advantages of 8GB. (Edit: I should have stressed the price difference more - in the eshop I'm buying from, the difference between 12GB and 8GB is so big that I could almost buy a whole new netbook for it.) However, I understand that more RAM can never do harm, which is the point of this question: how much of a difference will 12GB make as opposed to 8GB? Honestly, I've always been on 3.2GB systems (4GB but a 32-bit system) and never felt much pain from having too little memory - of course there could be more, but for instance the compiler's performance was usually held back by slow I/O or by not utilizing multiple cores on my CPU. Still, I'm not questioning that 8GB will be useful; however, I'm not sure about the additional 4GB difference between 8 and 12 gig. Does anyone have experience with 8GB / 12GB systems? The software I usually run all the time:

    - Visual Studio or Eclipse (both should be fine with ~2GB RAM; after that I feel their performance is I/O bound)
    - Firefox (it can never have enough RAM, can it? :)
    - Office (~500MB RAM should be enough)
    - ... and then some smaller apps like Skype, other browsers, some background services etc.

    Read the article

  • Bash script 'while read' loop causes 'broken pipe' error when run with GNU Parallel

    - by Joe White
    According to the GNU Parallel mailing list this is not a GNU Parallel-specific problem. They suggested that I post my problem here. The error I'm getting is a "broken pipe" error, but I feel I should first explain the context of my problem and what causes this error. It happens when trying to use any bash script containing a 'while read' loop in GNU Parallel. I have a basic bash script like this:

        #!/bin/bash
        # linkcheck.sh
        while read domain
        do
            host "$domain"
        done

    Assume that I want to pipe in a large list (250 MB, say):

        cat urllist | ./linkcheck.sh

    Running the host command on 250 MB worth of URLs is rather slow. To speed things up I want to break up the input into chunks before piping it and then run multiple jobs in parallel. GNU Parallel is capable of doing this:

        cat urllist | parallel --pipe -j0 parallel ./linkcheck.sh {}

    {} is substituted by the contents of urllist line-by-line. Assume that my system's default setup is capable of running around 500 jobs per instance of parallel. To get round this limitation we can parallelize Parallel itself:

        cat urllist | parallel -j10 --pipe parallel -j0 ./linkcheck.sh {}

    This will run around 5000 jobs. It will also, sadly, cause the error "broken pipe" (bash FAQ). Yet the script starts to work if I remove the while read loop and take input directly from whatever is fed into {}, e.g.:

        #!/bin/bash
        # linkchecker.sh
        domain="$1"
        host "$1"

    Why will it not work with a while read loop? Is it safe to just turn off the SIGPIPE signal to stop the "broken pipe" message, or will that have side effects such as data corruption? Thanks for reading.
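    On the "just turn off SIGPIPE" option: in bash the signal can be ignored from inside the script itself, which is the usual way people experiment with this. Whether silently ignoring it is acceptable for this workload is exactly the judgment call being asked about, so the following is only a sketch, not a recommendation:

        #!/bin/bash
        # linkcheck-nopipe.sh -- same loop, but SIGPIPE is ignored; writes to a
        # closed pipe then fail with an error return instead of killing the script.
        trap '' PIPE
        while read -r domain
        do
            host "$domain"
        done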

    Read the article

  • Exchange 2007 two node cluster setup on Windows 2008 Enterprise, install error

    - by Shadow00Caster
    I am installing Microsoft Exchange 2007 x64 in a two-node environment using Microsoft Windows 2008 Enterprise x64. The failover cluster is all set up properly, following best practices for setting up Windows clustering for use with Exchange 2007. All the validation tests pass on the cluster and all of that portion is working fine. The problem is when I go to install the first Exchange node as an active mailbox in a two-node CCR configuration. It gets all the way through the first 3 steps (Copy Exchange Files, Management Tools, Mailbox Role) and then fails on the 4th step, 'Clustered Mailbox Server', with the following error:

        Error: The clustered mailbox server's group 'XXXX' was not found, and should already exist.

    Firewalls are all disabled, DNS is all set up properly, the environment has 3 domain controllers (all 2k8 Enterprise x64), and all replication works. The name I pick for the CCR cluster (XXX) does not exist in AD or in DNS. I have attempted this install from both of the two Exchange nodes, multiple times, and tried with different names. I have been banging my head against the wall for days working on this and would appreciate any feedback on the issue.

    Read the article

  • 426 Connection closed; transfer aborted.

    - by Jiaoziren
    Hi, I have an IIS FTP server set up on Windows 2003 SP2 (S1). Every day in the early morning, a script on another server (S2) runs and initiates an FTP transfer, pulling log files from S1 to S2. The FTP client we're using is the built-in FTP.exe in Windows 2000 on S2. Recently we replaced S1 with a new server, but we kept the IP address. There are multiple IP addresses on the new S1. Ever since the new S1 was in place, the '426 Connection closed; transfer aborted.' errors have been occurring randomly. The log indicates that the transfer starts OK, but the file cannot be transferred completely, as per the log below:

        mget access*.log
        200 Type set to A.
        200 PORT command successful.
        150 Opening ASCII mode data connection for access02232010.log(205777167 bytes).
        426 Connection closed; transfer aborted.
        ftp: 20454832 bytes received in 283.95Seconds 72.04Kbytes/sec.

    The firewall monitor suggested that the connection was set up in passive mode; however, I've been told that MS FTP.exe doesn't support passive mode, though I can see the response 'entering passive mode' from the server when typing in 'quote pasv'. My network admin has told me to try the transfer in active mode, but I don't know how to open active mode on the client side. It's getting really frustrating. I wish someone here with the right knowledge/experience could shed some light. Cheers.
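    Two notes that may help narrow this down. First, the "200 PORT command successful" line means the session above is already an active-mode transfer; quote pasv only makes the server announce passive mode, FTP.exe still opens its data connections actively. Second, "Type set to A" means the .log files are pulled in ASCII mode, so with a 200MB log file, forcing binary mode is a cheap thing to rule out. A hedged sketch of a script file to pass to ftp -s: (host, user and password are placeholders):

        open s1.example.com
        ftpuser
        ftppassword
        binary
        prompt
        mget access*.log
        bye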

    Read the article

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to AWS and Puppet. So, coming to my question: can you distinguish Puppet nodes by AWS machine tags or CNAME domains? A little background about the plan:

    - have multiple clusters of machines: one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster
    - control configuration with Puppet - still pretty new to Puppet, but as a developer I like the idea of being able to version-control the configuration of servers
    - have autoscaling enabled on those clusters - obviously the main benefit of the cloud, and what makes the much higher cost for any reasonable performance worth it (those Amazon machines are slower than my phone...)
    - deployment controlled by Capistrano, which makes things a lot easier

    So in AWS you get those super nasty public/private machine DNS names... no way you can identify machines on those. To ease the problem, it seems AWS wants you to tag everything - so I did. I found a script that makes a CNAME record for each machine from the "ShortName" tag, thanks to the Route53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster' {} in Puppet - anyone have any clue how to achieve this? Thanks
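    One pattern sometimes used here is to expose the tag as a fact and match on the fact rather than on the node name. A rough sketch of an external fact script - it assumes the EC2 API tools are installed with credentials available, and that Facter is new enough (1.7+) to read /etc/facter/facts.d; the tag name is the ShortName tag from the question, everything else is a placeholder, and the awk column may need adjusting to the tool's output format:

        #!/bin/bash
        # /etc/facter/facts.d/shortname.sh -- publish the instance's ShortName tag as a fact
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        SHORTNAME=$(ec2-describe-tags --filter "resource-id=$INSTANCE_ID" --filter "key=ShortName" | awk '{print $NF}')
        echo "shortname=$SHORTNAME"

    Manifests could then branch on $::shortname (for example in a case statement) instead of relying on node names.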

    Read the article

  • Changing externally visible IP on a multi-IP router?

    - by AlternateZ
    I work at a public library and I'm trying to configure OCLC's EzProxy software. I've run into a problem and I think it's related to our network config. I'm punching above my weight here a little, so I need some help. I think I'm trying to configure 1:1 NAT, but I'm not sure how, or whether our hardware supports it. The EzProxy machine is on an internet line which supports multiple external IPs. Our router is a Billion BiGuard30. There's another server on this line; let's say its IP is x.x.x.9. The EzProxy machine is x.x.x.11. I've set up port forwarding from x.x.x.11 on the HTTP ports to the EzProxy machine. Browsing to x.x.x.11 from an external PC works fine - we get to the EzProxy page we are serving. However, if we go to something like WhatIsMyIP from the EzProxy machine, it says that its IP is x.x.x.9. This causes problems with our user authentication software. How do we make the rest of the internet see that the machine is x.x.x.11? There doesn't seem to be any "outbound port forwarding" on the Billion router, nor are there any "1:1 NAT" options in its config webpage. The EzProxy machine is running Ubuntu 12.04, if that helps.

    Read the article

  • When using procmail with maildir, it returns error with code I found

    - by bradlis7
    I'm not an expert at procmail, but I have this code:

        DROPPRIVS=yes
        DEFAULT=$HOME/Maildir/

        :0
        * ? /usr/bin/test -d $DEFAULT || /bin/mkdir $DEFAULT
        {
        }
        :0 E
        {
          # Bail out if directory could not be created
          EXITCODE=127
          HOST=bail.out
        }

        MAILDIR=$HOME/Maildir/

    But, when the directory already exists, sometimes it will send a return email with this error:

        554 5.3.0 unknown mailer error 127.

    The email still gets delivered, mind you, but it sends back an error code. I fixed this temporarily by commenting out the EXITCODE and HOST lines, but I'd like to know if there is a better solution. I found this block of code in multiple places across the net, but couldn't really find why this error was coming back to me. It seems to happen when I send an email to a local user; sometimes the user has a .forward file to send it on to other users, sometimes not, but the result has been the same. I also tried removing DROPPRIVS, just in case it was messing up the forwarding, but it did not seem to affect it. Is the line starting with * ? /usr/bin/test a problem? The * signifies a regex, but the ? makes it return an integer value, correct? What is the integer being matched against? Or is it just comparing the integer return value? Thanks for the help.
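    On the question of what the ? condition compares: in procmail a condition of the form * ? command runs the command and succeeds when its exit status is 0, so the recipe above matches when the test -d ... || mkdir ... pipeline exits cleanly. You can reproduce exactly what that condition runs from a shell (paths copied from the recipe):

        # Sketch: run the recipe's condition by hand and inspect the exit status
        /usr/bin/test -d "$HOME/Maildir/" || /bin/mkdir "$HOME/Maildir/"
        echo "exit status: $?"    # 0 = condition matched; 127 would mean "command not found"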

    Read the article

  • Issues resolving DNS entries for multi-homed servers

    - by I.T. Support
    This is difficult to explain, so bear with me. We have 2 domain controllers, each multi-homed to straddle 2 internal subnets (subnet A and subnet B) and provide DNS, DHCP, and LDAP authentication. Each domain controller has 2 DNS entries; both entries have identical host names, but correspond to subnet A and subnet B respectively (example entries shown):

        dc1 host 192.168.8.1
        dc1 host 192.168.9.1
        dc2 host 192.168.8.2
        dc2 host 192.168.9.2

    We also have a 3rd subnet for our DMZ (subnet C), which neither domain controller has an IP address on, but our firewall/routing tables provide access to subnet A from subnet C and vice versa, while not allowing access to subnet B from subnet C. Here's my issue: how can I force/determine which DNS entry is used when a server on subnet C queries either domain controller by host name? Right now it seems to randomly pick one of the two entries, swaps out the name for the IP address, and that's that. The problem is that if it randomly selects the entry that corresponds to the 9.x subnet B (no access from subnet C), then the server fails to resolve. If it picks the entry for the 8.x subnet A, then it resolves (firewall/routing tables are defined for communication between these 2 subnets). Here's what I'd like to know:

    - What are best practices (if any) for dealing with DNS resolution on subnets that the DNS servers don't have a presence on?
    - Can I control something akin to a metric value to force an order of DNS resolution when there are multiple entries for the same host name that correspond to different IP subnets?
    - Should I even have 2 DNS host entries for the same name?

    Here's what I'd like to avoid:

    - Making edits to the HOSTS files of servers on subnet C to force DNS resolution of the hostname to the appropriate subnet
    - Adding NICs to the DCs to have them straddle the DMZ as well, thus obtaining a third DNS entry that corresponds to subnet C

    Again, my apologies if this was too verbose / unclear. Thanks!

    Read the article

  • Can't get an IBM xSeries 345 server to load Windows Server 2003 using ServerGuide utility

    - by Kyle Noland
    I have a client that has an IBM xSeries 345 eServer. Per the IBM support website, I have downloaded the ServerGuide Setup 7.4.17 installation ISO and burned a bootable CD. The CD boots fine and loads the utility. I walk through the following screens without any issue:

    - Set the date and time
    - Detect the IBM ServeRAID card and install the latest firmware
    - Clear the hard disks
    - Set up the RAID array

    The next step is to format the NOS partition. I select my partition size and the utility goes through the following steps:

    - Creating NOS partition
    - Formatting NOS partition (NTFS)
    - Copying W32 files

    The copying of W32 files takes about 10 minutes; I see the CD drive and disks working hard. When the copying is complete, I'm taken to a blank page with just "NOS Partitioning" at the top. At the bottom of the screen are the familiar Back and Exit buttons. I see the place where the Next button should be, and if I click on it I can tell there is something there, but the space is empty: no button is displayed and clicking the empty spot doesn't ever take me to the next screen. I can't load the OS until I get past this part. I have already tried:

    - Burning multiple copies and versions of the ServerGuide CD
    - Letting the final screen just sit there over the weekend, thinking it might advance after syncing the drives or something

    Has anybody else seen this? I'm really at a loss here. EDIT: I found another person who has the exact same problem as me: http://www.ibm.com/developerworks/forums/thread.jspa?messageID=14451763

    Read the article

  • How \Deleted flag can be unset for all mails in cyrus-imapd mailbox?

    - by Sachin Divekar
    I have a 5GB mailbox which I moved using imapsync. But somehow I messed up the --delete/--delete2 options and ended up with almost all the messages having the \Deleted flag set. I do not have delayed expunge enabled, so I cannot use the unexpunge utility. I am using cyrus-imapd v2.3.7. Using cyrus-imapd's debugging feature I found out that the email client (Roundcube in my case) fires the following IMAP command to unset it:

        UID STORE 179 -FLAGS.SILENT (\Deleted)

    I don't know if I can somehow fire this command for all the mails. Is there any way I can unset the \Deleted flag for all the mails in the mailbox? UPDATE: Using @geekosaur's tip of specifying a range of message ids in the above command, I could solve it for one mailbox under INBOX, like INBOX.folder1. Is there any way I can do it for multiple mailboxes under INBOX recursively? Now I am working on solving it using/creating some script, maybe using Perl's IMAP-related modules. But I still need to solve it ASAP, so inputs are welcome.
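    Building on the range tip from the update: STORE accepts a UID range, so 1:* clears the flag on every message in the selected folder with a single command. A sketch of a manual session (connect with, e.g., openssl s_client -connect imapserver:993 and then type the commands; user, password and folder name are placeholders):

        a1 LOGIN cyrususer secret
        a2 SELECT INBOX.folder1
        a3 UID STORE 1:* -FLAGS.SILENT (\Deleted)
        a4 LOGOUT

    For the recursive case, a small script can simply repeat the SELECT / UID STORE pair for every folder; Perl's Mail::IMAPClient, for instance, exposes folders() and store() calls that map directly onto this.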

    Read the article

  • Staying anonymous while hosting your site?

    - by jamesCroft
    I don't mean anonymous surfing; I mean hosting and having your own domain and such. The reason is that my blog is about religious/political topics which may cause me trouble in the future. This is the domain I am working on: www.james-croft.com. I know that using a Whois search my name can come up: http://www.networksolutions.com/whois-search/james-croft.com. The solution to that, as far as I understand, is to buy a privacy package from the domain registrar; in my case it is Lucky Register: http://i.stack.imgur.com/uvOdc.png. Hosting is also a concern. I use the same hosting service for multiple websites. My question is this: can my hosting be tracked and be used to identify me? Also: are there other methods of finding out my identity, from either the Google AdSense or Amazon affiliate programs? I couldn't find any relevant articles online. If there is anything else that is relevant, please let me know. I appreciate any response.

    Read the article

  • Online Storage and security concerns

    - by Megge
    I plan to set up a small fileserver. I already own a small server at HostEurope (VirtualServer L, 250GB space), but they don't offer enough space (there is the HostEurope Cloud, but paying for bandwidth isn't an option here, as video streaming should be possible). Requirements summarized:

    - Storage: 2TB
    - Users: ~15
    - File sizes: < 100GB
    - Should be easily reachable (mount as a network drive, or at least have solid "communication" software)

    My first question would be: where can I get halfway affordable online storage? And how should I connect it to my server? Getting an additional server is a bit overkill, as I know no hoster which allows 2 TB on a small 2 GHz dual-core 2 GB RAM box (that would be enough by far, I just need a lot of space), and connecting it via NFS or FTP over the internet seems a bit strange and cripples performance. Do you have any advice on where I could get that storage service? (I sent HostEurope a custom request today, but they haven't answered yet. If they can provide me with that space, this question will be irrelevant, but the 2nd one is the more important one anyway; don't do much more than recommend me some services based on experience, you don't have to crawl through hosting services for hours.) livedrive, for example, offers 5 TB for 17€/month; I'd be happy with 2 TB for 20€. The caveat is: it doesn't allow multiple users, which leads me to my second question: where are the security problems? Which protocol is sufficient (I want private and "public" folders etc., the usual "every user has their own and a public space" thing), secure and fast? (I'd tend towards (S)FTP; the problem with FTP is that most of those hosting services don't even allow FTP with multiple users, and single users lead me into "hacking" a solution - you could map the basic folder structure on the main server and just mount every subfolder from the storage, though things get difficult with a public folder with 644 permissions.) Is using something like PKI or 802.1X overkill for private use?

    Read the article

  • Running Mathematica-5 remotely

    - by oxinabox.ucc.asn.au
    I have Mathematica 5 - a powerful CAS. I have a cheap netbook (running Windows XP) which not only is too slow to run Mathematica on, I doubt it has the hard drive space. I do, however, have remote access to a number of very powerful computers (most of which run various Linuxes, but one of which is Windows Server 2008, though I'd rather not use that one*). Mostly over SSH, but other protocols can be arranged for some, I'm sure. So I'd like to install Mathematica onto one of these machines and then run it remotely, either from the command line via PuTTY or via some other method. I glanced through the Mathematica documentation and read something about using some MathLink program, which links the front end installed on my computer to a remote kernel. Anyone have any experience with this? I'm not sure if this belongs here or in SuperUser. * At the moment it's being tinkered with, and when the tinkering stops it'll likely be used to run multiple thin terminals. As compared to the Linux machines: I have access to a dual 2.4 Xeon with 3GB RAM, which the rest of the world seems to have completely forgotten about (runs FreeBSD!).
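    As a cruder alternative to the MathLink remote-kernel setup mentioned above, the entire program can be run on one of the Linux machines with its display forwarded over SSH - slower and uglier, but trivial to try. A sketch (the host name is a placeholder; it assumes an X server such as Xming or Cygwin/X is running on the Windows netbook, and that Mathematica 5's usual launchers, mathematica for the front end and math for the text-mode kernel, are on the remote PATH):

        # Full front end, displayed locally via X11 forwarding
        ssh -X user@bigbox.example.com mathematica

        # Or just the text-based kernel in the terminal
        ssh user@bigbox.example.com math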

    Read the article

  • Windows Terminal Server: occasional memory violation for applications

    - by syneticon-dj
    On a virtualized (ESXi 4.1) Windows Server 2008 SP2 32-bit machine which is used as a terminal server, I occasionally (approximately 1-3 event log entries a day) see applications fail with an 0xc0000005 error - apparently a memory access violation. The problem seems quite random and is only badly reproducible - applications may run for hours, fail with 0xc0000005 and restart quite fine, or just throw the access violation at startup and start flawlessly at the second attempt. The names of executables, modules and offset addresses vary, although a single executable tends to fail with the same modules and the same memory offset addresses (like "OUTLOOK.EXE" repeatedly failing on module "olmapi32.dll" with the offset "0x00044b7a") - even across multiple users' logons and with several days passing without a single failure in between. The offset addresses seem to change across reboots, however. Only certain executables seem affected by the problem, although I may simply not be seeing a sufficient number of application runs from the other ones. I first suspected a possible problem with the physical machine's RAM, but ruled this out as a rather unlikely cause - the memory comes with ECC and I've already moved the virtual machine across several times, without any perceptible change. I've seen that DEP was enabled in "OptOut" mode on this machine:

        C:\Users\administrator>wmic OS Get DataExecutionPrevention_SupportPolicy
        DataExecutionPrevention_SupportPolicy
        3

    and tried changing the policy to OptIn via startup options:

        bcdedit.exe /set {current} nx OptIn

    but have yet to see any effect - I also would expect Outlook 12 or Adobe Reader 9 (both affected applications) to play well with DEP. Any other ideas why the apps may be failing?

    Read the article

  • Getting Error while running RED5 server - class path resource [red5.xml] cannot be opened because it does not exist

    - by sunil221
    Hi, I have installed Java version "1.6.0_14" and Ant version 1.8.2 for the Red5 server. When I try to run the Red5 server I am getting the following error. Please help.

        Root: /usr/local/red5
        Deploy type: bootstrap
        Logback selector: org.red5.logging.LoggingContextSelector
        Setting default logging context: default
        11:27:39.838 [main] INFO org.red5.server.Launcher - Red5 Server 1.0.0 RC1 $Rev: 4171 $ (http://code.google.com/p/red5/)
        Red5 Server 1.0.0 RC1 $Rev: 4171 $ (http://code.google.com/p/red5/)
        SLF4J: Class path contains multiple SLF4J bindings.
        SLF4J: Found binding in [jar:file:/usr/local/red5/red5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: Found binding in [jar:file:/usr/local/red5/lib/logback-classic-0.9.26.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
        11:27:39.994 [main] INFO o.s.c.s.FileSystemXmlApplicationContext - Refreshing org.springframework.context.support.FileSystemXmlApplicationContext@39d85f79: startup date [Mon Dec 21 11:27:39 EST 2009]; root of context hierarchy
        11:27:40.149 [main] INFO o.s.b.f.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [red5.xml]
        Exception org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from class path resource [red5.xml]; nested exception is java.io.FileNotFoundException: class path resource [red5.xml] cannot be opened because it does not exist
        Bootstrap complete
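    The exception means Spring cannot find red5.xml on the classpath; in most reports of this error it comes down to Red5 being started from the wrong working directory, since its conf/ directory (which holds red5.xml) is put on the classpath relative to where the process starts. A hedged sanity check, assuming the /usr/local/red5 layout shown in the log and the stock startup script:

        # Start Red5 from its own root so conf/ ends up on the classpath
        cd /usr/local/red5
        export RED5_HOME=/usr/local/red5
        ls -l conf/red5.xml      # confirm the file exists and is readable by this user
        sh red5.sh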

    Read the article

  • Resolving "JBoss Web Console is Accessible to Unauthenticated Remote Users" vulnerability

    - by IAmJeff
    Our security team has determined there is a vulnerability in one of our systems. We are using JBoss 5.1.0GA on RHEL 5.10. Vulnerability description: "JBoss Web Console is Accessible to Unauthenticated Remote Users". Yes, this looks familiar - refer to Question 501417. I do not find the answer there complete. Can someone (or multiple someones) answer:

    - Does a newer version of JBoss fix this vulnerability?
    - Are there links describing, in more detail, the manual modification of JBoss configuration files to resolve the issue?
    - Are there other options to remediate this vulnerability?

    Why don't I find the other answer complete? I'm not at all familiar with JBoss, so this answer seems a bit too simple. The web-console.war contains commented-out templates for basic security in its WEB-INF/web.xml as well as commented-out setup for a security domain in WEB-INF/jboss-web.xml. Just uncomment those basic security blocks and restart? Is there anything else I need to include? This seems generic. Do I need to include anything about my environment, such as absolute paths, etc.? Am I making this too complicated?
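    For reference, the kind of block being referred to looks roughly like the sketch below once uncommented. These are the standard servlet security elements; the exact role, realm and security-domain names in your web-console.war templates may differ, so copy the values from the commented-out sections rather than from here:

        <!-- WEB-INF/web.xml: require an authenticated admin user for every URL -->
        <security-constraint>
          <web-resource-collection>
            <web-resource-name>Web Console</web-resource-name>
            <url-pattern>/*</url-pattern>
          </web-resource-collection>
          <auth-constraint>
            <role-name>JBossAdmin</role-name>
          </auth-constraint>
        </security-constraint>
        <login-config>
          <auth-method>BASIC</auth-method>
          <realm-name>JBoss Web Console</realm-name>
        </login-config>
        <security-role>
          <role-name>JBossAdmin</role-name>
        </security-role>

        <!-- WEB-INF/jboss-web.xml: bind the app to a security domain defined in login-config.xml -->
        <jboss-web>
          <security-domain>java:/jaas/web-console</security-domain>
        </jboss-web>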

    Read the article

  • Python easy_install confused on Mac OS X

    - by slf
    environment info: $ echo $PATH /opt/local/bin:/opt/local/sbin:/sw/bin:/sw/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin:/opt/local/bin:/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:~/.utility_scripts $ which easy_install /usr/bin/easy_install specifically, let's try the simplejson module (I know it's the same thing as import json in 2.6, but that isn't the point) $ sudo easy_install simplejson Searching for simplejson Reading http://pypi.python.org/simple/simplejson/ Reading http://undefined.org/python/#simplejson Best match: simplejson 2.1.0 Downloading http://pypi.python.org/packages/source/s/simplejson/simplejson-2.1.0.tar.gz#md5=3ea565fd1216462162c6929b264cf365 Processing simplejson-2.1.0.tar.gz Running simplejson-2.1.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Ojv_yS/simplejson-2.1.0/egg-dist-tmp-AypFWa The required version of setuptools (>=0.6c11) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U setuptools'. (Currently using setuptools 0.6c9 (/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python)) error: Setup script exited with 2 ok, so I'll update setuptools... $ sudo easy_install -U setuptools Searching for setuptools Reading http://pypi.python.org/simple/setuptools/ Best match: setuptools 0.6c11 Processing setuptools-0.6c11-py2.6.egg setuptools 0.6c11 is already the active version in easy-install.pth Installing easy_install script to /usr/local/bin Installing easy_install-2.6 script to /usr/local/bin Using /Library/Python/2.6/site-packages/setuptools-0.6c11-py2.6.egg Processing dependencies for setuptools Finished processing dependencies for setuptools I'm not going to speculate, but this could have been caused by any number of environment changes like the Leopard - Snow Leopard upgrade, MacPorts or Fink updates, or multiple Google App Engine updates.
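    One detail worth noticing in the output above: the failure quotes setuptools 0.6c9 from the system framework's Extras directory, while the upgrade reports 0.6c11 active in /Library/Python/2.6/site-packages and installs a new easy_install script into /usr/local/bin - yet /usr/bin (holding the old easy_install) comes before /usr/local/bin in the PATH shown. A couple of one-liners, as a sketch, make it easy to see which copies are actually being picked up:

        # Which easy_install binaries are on the PATH, and in what order?
        which -a easy_install

        # Which setuptools does each Python actually import?
        python -c "import setuptools; print setuptools.__version__, setuptools.__file__"
        /usr/bin/python -c "import setuptools; print setuptools.__version__, setuptools.__file__"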

    Read the article

  • Public DNS redirect subdomain to Windows Server 2003 DNS

    - by user125248
    I'm a programmer by trade but often dabble in sysadmin tasks and responsibilities. I have recently been tasked with setting up a Windows Server 2003 networking environment for a small business with multiple branches. The business already has a domain name they use to host a website at www.example.com. Currently the DNS nameservers are at Zerigo and I would very much like it to remain that way (as they specialize in just providing DNS services and they do this very well). We also have a bunch of other subdomains we use to conveniently connect to the various branches that have static IPs assigned from ISPs, so we're able to connect easily to branch1.example.com. Is it possible to 'redirect' all intranet.example.com DNS requests to a Windows box? I've been doing a little reading and I see there are NS records that might be able to do this, and the Windows DNS server could then perform all of the lookups for that subdomain, say, server1.intranet.example.com or client5.intranet.example.com. This would seem better to me than registering a new domain name for the organisation, as keeping a single domain name makes more organizational sense.
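    What's described here is a standard subdomain delegation: in the example.com zone at Zerigo you add NS records for intranet.example.com pointing at the Windows 2003 DNS server, plus an A record so that name server's name resolves. In zone-file notation it would look roughly like the sketch below (the host name and address are placeholders; note that the delegated server has to be reachable by whoever is expected to resolve those names, so a purely internal address only works for internal resolvers):

        ; delegation added to the example.com zone hosted at Zerigo
        intranet.example.com.      IN NS  ns1.intranet.example.com.
        ns1.intranet.example.com.  IN A   203.0.113.10   ; the Windows Server 2003 DNS box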

    Read the article

  • Whitelist IP from google-authenticator in sshd pam

    - by spudwaffle
    My Ubuntu 12.04 server uses the google-authenticator PAM module to provide two-step authentication for SSH. I need to make it so that a certain IP does not need to type the verification code. The /etc/pam.d/sshd file is below:

        # PAM configuration for the Secure Shell service
        # Read environment variables from /etc/environment and
        # /etc/security/pam_env.conf.
        auth required pam_env.so # [1]
        # In Debian 4.0 (etch), locale-related environment variables were moved to
        # /etc/default/locale, so read that as well.
        auth required pam_env.so envfile=/etc/default/locale
        # Standard Un*x authentication.
        @include common-auth
        # Disallow non-root logins when /etc/nologin exists.
        account required pam_nologin.so
        # Uncomment and edit /etc/security/access.conf if you need to set complex
        # access limits that are hard to express in sshd_config.
        # account required pam_access.so
        # Standard Un*x authorization.
        @include common-account
        # Standard Un*x session setup and teardown.
        @include common-session
        # Print the message of the day upon successful login.
        session optional pam_motd.so # [1]
        # Print the status of the user's mailbox upon successful login.
        session optional pam_mail.so standard noenv # [1]
        # Set up user limits from /etc/security/limits.conf.
        session required pam_limits.so
        # Set up SELinux capabilities (need modified pam)
        # session required pam_selinux.so multiple
        # Standard Un*x password updating.
        @include common-password
        auth required pam_google_authenticator.so

    I've already tried adding an "auth sufficient pam_exec.so /etc/pam.d/ip.sh" line above the google-authenticator line, but I can't understand how to check an IP address in the bash script.
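    One way to avoid writing that IP check yourself is to let pam_access skip the google-authenticator module for whitelisted addresses. A sketch (the address and the access file name are placeholders; the [success=1 default=ignore] control means "on a whitelist match, skip the next one module"):

        # In /etc/pam.d/sshd, immediately above the google-authenticator line:
        auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access-sshd-local.conf
        auth required pam_google_authenticator.so

        # /etc/security/access-sshd-local.conf
        + : ALL : 203.0.113.25
        - : ALL : ALL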

    Read the article
