Search Results

Search found 15139 results on 606 pages for 'scripting interface'.


  • Problems with IPsec between Cisco ASA 5505 and Juniper SSG5

    - by Oskar Kjellin
    I am trying to set up an IPsec tunnel between our ASA 5505 and a Juniper SSG5. The tunnel is up and running, but I cannot get any data through it. The local network I am on is 172.16.1.0 and the remote is 192.168.70.0, but I cannot ping anything on their network. I receive a "Phase 2 OK" when I set up the IPsec. I think this is the part of the config that is applicable. It seems like the data is not routed through the tunnel, but I am not sure...

        object network our-network
         subnet 172.16.1.0 255.255.255.0
        object network their-network
         subnet 192.168.70.0 255.255.255.0
        access-list outside_cryptomap extended permit ip object our-network object their-network
        crypto ipsec ikev1 transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto map outside_map 1 match address outside_cryptomap
        crypto map outside_map 1 set pfs
        crypto map outside_map 1 set peer THEIR_IP
        crypto map outside_map 1 set ikev1 phase1-mode aggressive
        crypto map outside_map 1 set ikev1 transform-set ESP-3DES-MD5
        crypto map outside_map 1 set ikev2 pre-shared-key *****
        crypto map outside_map 1 set reverse-route
        crypto map outside_map interface outside
        webvpn
        group-policy GroupPolicy_THEIR_IP internal
        group-policy GroupPolicy_THEIR_IP attributes
         vpn-filter value outside_cryptomap
         ipv6-vpn-filter none
         vpn-tunnel-protocol ikev1
        tunnel-group THEIR_IP type ipsec-l2l
        tunnel-group THEIR_IP general-attributes
         default-group-policy GroupPolicy_THEIR_IP
        tunnel-group THEIR_IP ipsec-attributes
         ikev1 pre-shared-key *****
         ikev2 remote-authentication pre-shared-key *****
         ikev2 local-authentication pre-shared-key *****
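    A quick way to test whether traffic is actually being matched into the tunnel, sketched here on the assumption that the inside interface is named "inside" and using 172.16.1.10 as a placeholder inside host:

        packet-tracer input inside icmp 172.16.1.10 8 0 192.168.70.1 detailed
        show crypto ipsec sa peer THEIR_IP

    If the encaps/decaps counters from the second command stay at zero while pings are running, the interesting-traffic ACL or the vpn-filter applied by the group policy is the usual place to look.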


  • Running the LibreOffice MSI installer in English

    - by Scott Severance
    I'm trying to install LibreOffice on a machine running a Korean version of Windows XP. I don't know Korean. I haven't used Windows with any frequency in many years, so I'm pretty lost. When I run the installer, it shows up in Korean. But I want to customize the installation, so I need the installer to be in English. Googling took me to this page, where I found an example command to run the installer in Gaelic, which I modified for my system as follows:

        msiexec /i LibO_3.6.1_Win_x86_install_multi.msi TRANSFORMS=:1084

    This works, except that I know less about Gaelic than I do about Korean. The help page provided a link to a page where I could look up the language ID codes. From that page, I determined that the correct code was 1033 for US English and 2057 for UK English. When I substituted the code, I got an error message. Here it is as translated by Google, followed by the original:

        Transform cannot be applied. Verify that the specified transform paths are valid.
        ?? ??? ??? ? ????. ??? ?? ??? ???? ?????.

    I can't very well search on a machine translation, so I don't know where to go from here. What is the problem? How can I make the installer operate in English? Alternatively, how can I change XP to display its interface in English, while keeping full functionality for typing in Korean?
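    One hedged guess about the 1033 failure: in a multi-language MSI, English is often the base language rather than an embedded transform, so a :1033 transform may simply not exist in the package. Windows Installer's verbose log will show which transforms it actually finds (the log path here is arbitrary):

        msiexec /i LibO_3.6.1_Win_x86_install_multi.msi /L*v "%TEMP%\libo_install.log"
        findstr /i "transform" "%TEMP%\libo_install.log"

    If 1033 never shows up in the log, running the installer without any TRANSFORMS argument from an English user session may be the simpler route.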


  • Cannot connect to TFS 2012 server over SSL with an invalid certificate

    - by DaveWut
    I saw this problem on some forums and even here, but not as specific as mine. So here's the thing: I've configured a TFS 2012 server on one of my personal servers at home, and now I'm trying to make it available through the internet with the help of apache2 on a different, UNIX-based physical server. The thing is working perfectly; I don't have any problem accessing the address https://tfs.something.com/tfs through my browser. The address can be pinged and I have access to the TFS control panel through it. How does it work? Well, with apache2 you can set up a virtual host with the ProxyPass and ProxyPassReverse settings, so traffic comes in externally over a secure SSL connection on a specified domain or sub-domain, but is redirected locally onto a clear HTTP session on a different port. This is my current setup. As I already said, I can access the web interface, but when I try to connect with Visual Studio 2012, it can't be done. Here's the error I receive: http://i.imgur.com/TLQIn.png The technical information tells me: "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel." My SSL certificate is invalid and was automatically generated on my UNIX server. Even if I add it to the Trusted Root Certification Authorities on my TFS server or on my local workstation, it doesn't work; I still receive the same error. Is there a way to completely ignore certificate validation? If not, what have I done wrong? I mean, I've added the certificate to the trusted root certificates; it should work, as mentioned on some forums... If you need more information, please ask me, I'll be pleased to provide more. Dave
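    On the trust side, one hedged thing to verify is that the certificate landed in the machine-wide Root store rather than the current user's, since Visual Studio's connection can validate in a different context. From an elevated prompt (tfs_cert.cer is a placeholder for the exported server certificate):

        certutil -addstore -f Root tfs_cert.cer
        certutil -verify tfs_cert.cer

    Even once trusted, validation still fails if the certificate's subject (CN or subjectAltName) doesn't match tfs.something.com exactly, which is common with auto-generated self-signed certs issued to the UNIX host's own name.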


  • Make DHCP assign the same IP and hostname to different interfaces on one machine

    - by Egeshi
    I have a feeling the question itself looks stupid, but it is not; please let me clarify. I have dynamic DNS with BIND and NIS configured on my LAN, and a laptop which I use in both wireless and wired mode. Sometimes I have to use the wired interface to achieve higher throughput, but most of the time I don't need it and use wireless. Everything works great. The issue is that I want both interfaces to get the same IP from DHCP, just for convenient firewall setup. If I add both hosts to dhcpd in this manner:

        # bt wireless
        host bt {
          hardware ethernet 00:1f:1f:62:60:28;
          fixed-address 172.16.77.110;
        }
        # bt wired
        host bt {
          hardware ethernet 00:14:22:b7:5a:de;
          fixed-address 172.16.77.110;
        }

    dhcpd logs the following message:

        dhcpd: Dynamic and static leases present for 172.16.77.110
        dhcpd: Remove host declaration bt-wired or remove 172.16.77.110
        dhcpd: from the dynamic address pool for 172.16/16

    The host records are added outside of any subnet, but it makes no difference if I put them there; the effect is still the same. This is not critical, but neither is it just my whim, because even though DHCP seems to work fine for the "bt" host, I can no longer make a connection TO it from a remote machine with this definitely incorrect DHCP config. I'd be thankful if someone spared a minute to advise on how to configure dhcpd correctly. UPDATE: I realize there's a solution to assign a different hostname per interface in the DHCP config, but I would like to keep the benefits of short host names.
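    The log is complaining that the fixed address sits inside the dynamic pool. A hedged dhcpd.conf sketch (the range boundaries are placeholders for the real pool): shrink the range so it no longer covers .110, and give the two declarations distinct names, since dhcpd requires unique host identifiers even when they pin the same address:

        subnet 172.16.0.0 netmask 255.255.0.0 {
          range 172.16.77.111 172.16.77.250;   # pool no longer overlaps .110
        }
        host bt-wireless {
          hardware ethernet 00:1f:1f:62:60:28;
          fixed-address 172.16.77.110;
        }
        host bt-wired {
          hardware ethernet 00:14:22:b7:5a:de;
          fixed-address 172.16.77.110;
        }

    This keeps one IP for both interfaces, with the caveat that it only behaves if the two interfaces are never up on the network at the same time, since they would then be fighting over one address.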


  • Routing WIFI and LAN for specific traffic

    - by jakebird451
    I have two network devices aboard my MacBook Pro:

        WIFI (en1): Used for general traffic. Connects to an IP of 192.168.19.* via DHCP.
        LAN  (en0): Used for specific traffic. Connects to an IP of 192.168.2.10 as a static IP. Does not connect to a router, only a switch, for a direct connection.

    I have 4 IP addresses I need to access on the LAN: 192.168.2.1, 192.168.2.21, 192.168.2.20, and 192.168.2.30. The rest of the traffic needs to go to WIFI. I have tried setting up a routing table for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I tried:

        sudo route add -host 192.168.2.30 -interface en0

    This command killed my ability to use ping; it told me that ping could not allocate memory (is that even possible?). It also killed my WiFi access. Logging out and back in fixed the issue. I really don't mind making this solution permanent, so I am fine with a temporary routing. EDIT: I have since been trying:

        sudo route flush
        sudo route add default 192.168.19.1

    This gets everything working for about a minute, but after that it "forgets" the routing to WiFi while retaining LAN's (en0) routing. If I unplug and replug my LAN (en0) cable, it works for another minute.
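    A hedged sketch of the narrower approach: leave the DHCP default route on WiFi untouched and pin only the four LAN hosts to en0 (the -host form installs /32 routes, so nothing else moves):

        sudo route -n add -host 192.168.2.1  -interface en0
        sudo route -n add -host 192.168.2.20 -interface en0
        sudo route -n add -host 192.168.2.21 -interface en0
        sudo route -n add -host 192.168.2.30 -interface en0
        netstat -rn | grep 192.168.2    # confirm the four host routes point at en0

    The earlier route flush is a plausible cause of the one-minute behaviour, since it also removes the DHCP-installed default on en1 until the lease renews and puts it back.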


  • Can't Login to phpPgAdmin

    - by Devin
    I'm trying to set up phpPgAdmin on my test machine so that I can interface with PostgreSQL without always having to use the psql CLI. I have PostgreSQL 9.1 installed via the RPM repository, while I installed phpPgAdmin 5.0.4 "manually" (by extracting the archive from the phpPgAdmin website). For the record, my host OS is CentOS 6.2. I made the following configuration changes already:

    PostgreSQL:
        Inside pg_hba.conf, I changed all METHODs to md5.
        I gave the postgres account a password.
        I added a new account named webuser with a password (note that I did not do anything else to the account, so I can't say exactly what permissions it has).

    phpPgAdmin (config.inc.php):
        Changed the line $conf['servers'][0]['host'] = ''; to $conf['servers'][0]['host'] = '127.0.0.1'; (I've also tried localhost as the value there).
        Set $conf['extra_login_security'] to false.

    Whenever I try to log in to phpPgAdmin, I get "Login failed", even with credentials that work in psql. I've tried to go through some of the steps noted in Question 3 of the FAQ, but it hasn't worked out so far. It likely doesn't help that this is my first day working with PostgreSQL; I'm fairly familiar with MySQL, but I have to use PostgreSQL for the project I'm working on. Could anyone offer some help for how to set up phpPgAdmin on CentOS 6.2? If I've done something terribly wrong in my configuration so far, it's no big deal to blow something/everything away, as it's not like I've stored any data there yet! I appreciate any insight you may have!
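    One hedged way to split the problem: phpPgAdmin connects over TCP, while a bare psql usually rides the local Unix socket, so the two can hit different pg_hba.conf lines. Forcing psql over TCP exercises the same path phpPgAdmin uses:

        psql -h 127.0.0.1 -U webuser -W -d postgres

    If that fails while plain psql works, the relevant "host ... md5" line in pg_hba.conf and listen_addresses in postgresql.conf are the places to look, and PostgreSQL needs a reload after pg_hba.conf edits (service name assumed from the 9.1 RPM layout):

        service postgresql-9.1 reload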


  • Create Windows AMI with instance storage

    - by Jonathan Oliver
    I have a business use case and workflow where local/instance/ephemeral storage for an EC2 instance is ideal. Unfortunately I'm coupled to a Windows platform for this particular task, and the EC2 Windows offering appears to have some deficiencies related to AMI creation. In essence, I'm trying to figure out if there's a way to attach local instance storage to a Windows EC2 instance using the typical command line interface (because the Amazon website GUI doesn't support it) and then to somehow create an AMI based upon that. I've tried creating a snapshot and then creating a Windows AMI based upon the snapshot, but of course the docs say this is unsupported and makes an unbootable AMI. In short, here's what I'm trying to do:

        Be able to run a Windows instance (EBS/S3 instance doesn't matter).
        Attach local instance storage as drive D:.
        Persist that configuration as an AMI such that I can start lots of them as necessary from either the GUI, command line, or REST API.
        Be able to take a launched instance, update software, shut down, and create another AMI based upon that. Wash, rinse, repeat.

    One other potential option which isn't horrible, but isn't ideal, is to create an AMI which has 2 EBS volumes already attached (system+apps and data). Essentially, every time I start up an instance based upon the AMI it would create 2 new EBS volumes of pre-determined size. I'm trying to avoid that scenario if possible.
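    For the instance-storage half, a hedged sketch with the classic EC2 API tools (ami-xxxxxxxx is a placeholder; the xvdb device name follows the Windows naming convention): a block-device mapping at launch is what exposes an ephemeral volume to the instance:

        ec2-run-instances ami-xxxxxxxx --instance-type m1.large \
            --block-device-mapping "xvdb=ephemeral0"

    Even then, Windows won't remember a drive letter for a disk that is recreated empty on every launch, so persisting "drive D:" usually means baking a first-boot script into the AMI that onlines, formats, and letters the volume, rather than the AMI itself carrying the attachment.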


  • DHCP forwarding behind access list on a Cisco Catalyst

    - by Ásgeir Bjarnason
    I'm having some trouble with forwarding DHCP from a subnet behind an access list on a Cisco Catalyst 4500 switch. I'm hoping somebody can see the mistake I'm making. The subnet is defined like this (first three octets of IP addresses and the VRF name anonymized):

        interface Vlan40
         ip vrf forwarding vrf_name
         ip address 10.10.10.126 255.255.255.0 secondary
         ip address 10.10.10.254 255.255.255.0
         ip access-group 100 out
         ip helper-address 10.10.20.36
         no ip redirects

    I tried turning on a VMware machine on this subnet that was configured to use DHCP, but I never got a DHCP response and the DHCP server didn't receive a request. I tried putting the following in the access list:

        access-list 100 permit udp host 10.10.10.254 host 10.10.20.36 eq bootps
        access-list 100 permit udp host 10.10.10.254 host 10.10.20.36 eq bootpc
        access-list 100 permit udp host 10.10.20.36 host 10.10.10.254 eq bootps
        access-list 100 permit udp host 10.10.20.36 host 10.10.10.254 eq bootpc

    That didn't help. Can anybody see what the problem is?

        I know that the DHCP server works; our whole network is running off of this DHCP server.
        I also know that the subnet works, because we have active servers running on the network.
        The DHCP scope is already defined on the DHCP server.
        The subnet is correctly defined on the VMware server (there are already servers running on the subnet in VMware).

    Edit 2012-10-19: This is solved! The subnet had formerly been defined as a /25 network but was then expanded into a /24. When the DHCP scope was altered after this change, it was done incorrectly: the gateway was moved to .254 and the leasable IP range was in the lower half of the /24 subnet, but we forgot to change the CIDR prefix from /25 to /24. This happened some 2 years ago, and we didn't need DHCP on this server network again until this week. Thank you MDMarra and Jason Seemann for looking at the question and trying to troubleshoot. Now I'm wondering if I should mark Jason's answer as the accepted answer (I am new to the Stack Exchange network, so I don't know the etiquette for a case like this, where I misstated the question).


  • Joining Samba to Active Directory with local user authentication

    - by Ansel Pol
    I apologise that this is somewhat incoherent, but hopefully someone will be able to make enough sense of this to understand what I'm trying to achieve and provide pointers. I have a machine with two network interfaces connected to two different networks (one of which it's providing several other services for, such as DNS), running two separate instances of Samba, one bound to each interface. One of the instances is just a workgroup-style setup using share-level authentication, which is all working fine. The problem is that I'm looking to join the other instance to an MS Active Directory domain (provided by MS Windows Small Business Server 2003) to enable a subset of the domain users to access the shares from Windows machines on the other network. The users who need access from the domain environment have accounts (whose names are all-lowercase versions of their domain usernames) on the machine running Samba, but I'm not sure about how to map the UIDs and everything I've read concerns authenticating accounts on that machine against either AD or another LDAP server. To clarify: I only want the credentials for AD users accessing the non-workgroup Samba instance to be authenticated against AD, not the accounts on the machine running Samba. I hope this is sufficiently clear. EDIT: In addition to being able to access the Samba shares from AD, I do also need to be able to access a share on the domain from the machine running Samba but would still like everything non-Samba-related to authenticate locally.


  • Using multiple computers effectively

    - by Benjamin Oakes
    I have some extra (old) Macs and PCs around the house and a MacBook that's sometimes overworked. I'm looking for tips on using multiple computers effectively. Basically, I'd like to add to the following list. Here's what I'm using so far:

        Teleport: lets you use a single mouse and keyboard to control several Macs, like Synergy
        Built-in file sharing: lets me run programs on another Mac, but only maintain one copy of the data
        Bazaar: distributed version control
        Mail.app, Thunderbird, etc.: IMAP for my mail accounts
        TuneConnect: control iTunes on another Mac with a nice interface, using the library on my MacBook (if I choose it by pressing Option at startup) over file sharing
        OmniFocus: syncs across computers pretty seamlessly
        Web browsing across computers
        VNC/Remote Desktop
        Running X-windows programs using ssh -Y hostname for headless operation (but they die when I sleep the connecting computer -- something like GNU screen would be ideal; see the sketch below)
        Plain old ssh with GNU screen

    Really, a better idea of what I do might be necessary. Generally, though, I'd like to distribute tasks across more than one computer when possible, but without much overhead in doing so. The perfect solution? An Xgrid-like program that pushes processing across multiple computers automatically and seamlessly (although that seems unlikely). Here's what I have, in case it makes a difference:

        MacBook (Dual 2.16 GHz, OS X 10.6.3)
        eMac (1.25 GHz, OS X 10.4.11, soon to be 10.5)
        Dell Dimension (800 MHz, some version of Ubuntu) -- no dedicated monitor
        PowerMac G3 (400 MHz, OS X 10.4.11) -- no dedicated monitor
        iMac G3 DV (400 MHz, OS X 10.4.11) -- currently in the kitchen for recipes, email, web browsing, music, movies (DVDs), etc.

    (Total, they cost me around $650, mostly for the MacBook. Freecycle is wonderful, just in case you haven't heard of it.) I'm really only using the MacBook and eMac at this point, but I'd like to push more onto them, and possibly the PowerMac and Dell.
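    On the X-windows-programs-dying point, a hedged sketch of the usual workaround: run the long-lived job inside GNU screen on the remote host, so it survives the connecting laptop sleeping, and reattach later (hostname and session name are placeholders):

        ssh -t othermac.local screen -DR worksession
        # inside screen: start the job, detach with Ctrl-a d, sleep the laptop freely

    screen preserves terminal programs but not X11 windows themselves; for graphical sessions the closest equivalent on machines this old is probably VNC to a persistent desktop.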


  • Add Route for machine in same DC

    - by gary
    My routing table on my machine, with an IP of 46.84.121.243, currently looks like this:

        Network Destination    Netmask          Gateway        Interface      Metric
        0.0.0.0                0.0.0.0          46.84.121.225  46.84.121.243  21
        46.84.121.224          255.255.255.224  On-link        46.84.121.243  276
        46.84.121.239          255.255.255.255  On-link        46.84.121.243  21
        46.84.121.243          255.255.255.255  On-link        46.84.121.243  276
        46.84.121.255          255.255.255.255  On-link        46.84.121.243  276

    I'm trying to access 46.84.121.239, which is my other machine in the same DC, but my guess is the first rule is blocking it, as traffic tries to go via the gateway and fails:

        Tracing route to [46.84.121.239] over a maximum of 30 hops:
          1  OWNEROR-9O83HBL [46.84.121.243]  reports: Destination host unreachable.
        Trace complete.

    I'm doing all this via RDP, and I already tried changing the metric on the persistent rule, with devastating consequences! Here's the (working) persistent rule:

        Persistent Routes:
        Network Address  Netmask  Gateway Address  Metric
        0.0.0.0          0.0.0.0  46.84.121.225    1

    Any help to be able to access 46.84.121.239 would be very helpful. Thanks very much.
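    A hedged reading of the symptom: .239 already has an On-link /32 route, so the default gateway isn't consulted for it, and "Destination host unreachable" reported by the host itself usually means ARP for .239 is failing rather than routing. Checking ARP, and re-adding the host route non-persistently so a reboot undoes any damage, would look like:

        arp -a 46.84.121.239
        route delete 46.84.121.239
        route add 46.84.121.239 mask 255.255.255.255 46.84.121.243 metric 5

    If ARP never resolves, the hosting provider may be blocking direct layer-2 traffic between machines, in which case traffic has to hairpin via the gateway instead.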


  • Software distribution from web server to client using PHP/FTP

    - by Jenolan
    I develop and maintain a number of add-ons and utilities for various widgets (mainly aMember), which generally means I need to install PHP-based code onto other people's systems. Whilst I have a VPS and have access to rsync and all sorts of yummy tools, most of the people I deal with have basic FTP access and that's all, folks. Uploading from my local system is also a problem, as I am satellite-based (two-way), so it is fairly slow and expensive, and in any case the files are already on my server. So there is no rsync, fxp, or ssh, and I can't really install anything, as it is obviously not my system; they would be justifiably miffed if I started installing file managers or other things onto their sites. What I have been trying to find is a utility that I can run on my server from the web, preferably PHP-based, that will be like a file manager but a bit different: two panels.

        LH side: the local server -- pretty much like a standard file-manager application
        RH side: the ability to log in via FTP to the client's system

    Then I can fiddle as required. The closest thing I have found is net2ftp, but it doesn't have the GUI interface. At the moment I simply ssh into my server, fire up ncftp, and work that way, but something easier to use would be mucho niceness. Thanks in advance! Larry
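    For the push-from-server half, a hedged sketch that needs nothing on the client side beyond the FTP account they already have (host, credentials, and paths are placeholders):

        curl -T addon-1.2.3.zip "ftp://clientuser:[email protected]/public_html/amember/"
        curl "ftp://clientuser:[email protected]/public_html/" --list-only

    curl ships on most VPSes and handles FTP uploads and listings, so a small PHP wrapper around calls like these (or PHP's native ftp_connect()/ftp_put()) could supply the two-panel web front end you're describing.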


  • Problem with wireless networking

    - by Rodnower
    Hello, I have Atheros wifi hardware, an Intel chipset, a Gigabyte laptop, and CentOS 5 installed. Now I try to use the wireless network and get problems. First of all I want to say that I have 2 OSes on my laptop, and when I load Windows XP I can still access the wireless network. My first try to get it working on Linux was to activate the wlan0 interface in: System -> Administration -> Network, but I get:

        Determining IP information for wlan0... failed.

    My second try was also unsuccessful:

        [root 1 network-scripts]# ifup-wireless
        Error : unrecognised wireless request "off"

    This is the relevant output of iwconfig:

        Warning: Driver for device wlan0 recommend version 21 of Wireless Extension,
        but has been compiled with version 20, therefore some driver features may not be available...
        wlan0     IEEE 802.11  ESSID:""
                  Mode:Managed  Frequency:2.462 GHz  Access Point: Not-Associated
                  Tx-Power=27 dBm
                  Retry min limit:7  RTS thr:off  Fragment thr=2352 B
                  Encryption key:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    The same things happen even if I do modprobe wlan0 (this does not give an error). It is important to say that modprobe did not succeed in finding ath_pci, therefore I decided to download the latest version of the madwifi driver from http://madwifi-project.org. I extracted it, but when I run make, this is what I get:

        [root 1 madwifi-0.9.4]# make
        /bin/sh: line 0: cd: /lib/modules/2.6.18-164.el5/build: No such file or directory
        Makefile.inc:66: *** /lib/modules/2.6.18-164.el5/build is missing, please set KERNELPATH.  Stop.

    I tried to set KERNELPATH, but I think it was incorrect:

        [root 1 madwifi-0.9.4]# make KERNELPATH=/lib/modules/2.6.18-164.el5/kernel/
        /bin/sh: cc: command not found
        Makefile.inc:81: *** Cannot detect kernel version - please check compiler and KERNELPATH.  Stop.

    Does anyone have any ideas? Thank you very much in advance.
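    Both build errors point at missing prerequisites rather than at madwifi itself: the /lib/modules/.../build symlink comes from the kernel headers package, and "cc: command not found" means no compiler is installed. A hedged sketch for CentOS 5 with the stock repos:

        yum install gcc kernel-devel-$(uname -r)
        cd madwifi-0.9.4
        make && make install
        modprobe ath_pci

    If yum can't find a kernel-devel matching uname -r, the usual workaround is yum update kernel, a reboot, and then building against the new kernel.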


  • Dual booting Linux/Win7, Grub refuses to load Win7

    - by JohnB
    Decided to give Linux Mint a try (Ubuntu's interface annoys me), so I installed it with the intention of dual booting with Windows 7. Installation went fine, but now I can only boot into Linux Mint. Grub lists two Windows 7 menu options, but selecting either of them causes an "unknown file system" error and dumps me into a Grub recovery prompt. There, I have to manually reset the root and prefix options, as they reset to hd0,msdos6 when they should be hd0,msdos5. I ran Boot Repair twice, once to fix Grub errors, once to rebuild the MBR, but it didn't fix anything. Here is the log: http://paste.ubuntu.com/1029675/

    fdisk output:

        Device Boot       Start         End      Blocks  Id  System
        /dev/sda1   *      2048      206847      102400   7  HPFS/NTFS/exFAT
        /dev/sda2        206848  1486249145   743021149   7  HPFS/NTFS/exFAT
        /dev/sda3    1486249982  1953523711   233636865   5  Extended
        /dev/sda5    1486249984  1945141247   229445632  83  Linux
        /dev/sda6    1945143296  1953523711     4190208  82  Linux swap / Solaris

    grub.cfg:

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry "Windows 7 (loader) (on /dev/sda1)" --class windows --class os {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos1)'
            search --no-floppy --fs-uuid --set=root 86184D18184D091F
            chainloader +1
        }
        menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos2)'
            search --no-floppy --fs-uuid --set=root 56D84F84D84F60FB
            chainloader +1
        }
        ### END /etc/grub.d/30_os-prober ###

    I have found a few similar troubleshooting guides so far, but no amount of updating/configuring Grub has been successful. The last resort is, I suppose, to use the W7 recovery disc and start over. Thanks in advance! Linux Mint 13 Maya, 64-bit; Windows 7 Home Edition, 64-bit
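    A hedged diagnostic from inside Mint: the menu entries locate Windows by UUID, so the first check is whether those UUIDs still match the NTFS partitions, and then let os-prober regenerate the entries:

        sudo blkid /dev/sda1 /dev/sda2
        sudo update-grub

    If blkid reports different UUIDs than the 86184D18.../56D84F84... values baked into grub.cfg, the stale search lines would explain both the "unknown file system" error and root being reset to the wrong partition.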


  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a Puppet manifest to install and configure an application on target servers. Parts of this manifest shall be re-usable, thus I used define for my re-usable functionality. Doing so, I always have the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created. These files must be placed in the same directory, and this directory must be created only once. Example, nodes.pp:

        node 'myNode.in.a.domain' {
          mymodule::addconfig {'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig {'configfile2.xml':
            param => 'someothervalue',
          }
        }

    mymodule.pp:

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that the directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because the resource file { $config_dir: is now defined twice. As far as I understood, it is required to extract these parts into a class. Then it looks like this, nodes.pp:

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig {'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig {'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use. Every user of my module has to know that there is a class which is additionally required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know: is there a better way to handle these dependencies? Ideally, classes like createConfigurationDirectory are hidden from the user of the module's API. Or are there other best practices/patterns for handling such dependencies?
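    One common pattern, sketched here on the strength of Puppet's class semantics (include is idempotent, unlike resource and class declarations): let the define pull the shared directory in itself, so module users never have to know about the helper class:

        class mymodule::config_dir {
          file { '/the/directory':
            ensure => directory,
          }
        }

        define mymodule::addconfig ($param) {
          include mymodule::config_dir    # safe to evaluate from every instance

          file { "/the/directory/${name}":
            content => template('a_template.erb'),
            require => Class['mymodule::config_dir'],
          }
        }

    Because include-ing a class any number of times evaluates it exactly once, every addconfig instance can declare the dependency on its own, and the node definition shrinks back to plain addconfig calls.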


  • Adding network share as the system account breaks Win7 backup to network

    - by ChrisBenn
    (On Win7 Ultimate x64 SP1.) I've been using Win7 backup to \\192.168.0.100\Backup\main-desktop\ for a while without issue. Yesterday I tried to set up CrashPlan to synchronize my Dropbox folder and a network share. I then found out that CrashPlan, as it runs under the system account, can't see my user-mapped drives. So I created a startup script:

        net use O: \\192.168.0.100\Documents /USER:192.168.0.100\username password

    and set it to run on startup, after the network interface is up, as the system account (the username and password are the same in the net use script above, for the locally logged-in user, and in the explicit username/password entered in Windows Backup). I woke up this morning to find error flags from Windows Backup: "Network location cannot be used" (0x800704B3). If I disable the startup task and reboot, then Windows Backup works fine. I'm not sure why having a mapped drive for another user is killing Windows Backup (same server, different folder). I can work around the issue by using another program to synchronize the two folders, but I'm completely in the dark as to why this happens (and it's 100% repeatable). Uninstalling the CrashPlan client doesn't change anything; it's the net use run under the system account that breaks Win7 backup (to a network location).
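    A hedged line of investigation: Windows permits only one set of credentials per logon session to a given server name, so the SYSTEM account's standing O: connection to 192.168.0.100 may be blocking the backup engine (also running as SYSTEM) from opening \\192.168.0.100\Backup with the credentials saved in Windows Backup. Inspecting and clearing the mapping from a SYSTEM context (psexec is the Sysinternals tool) would confirm it:

        psexec -s cmd /c "net use"
        psexec -s cmd /c "net use O: /delete"

    If that unbreaks backup, a common workaround is to address the server by IP in one place and by hostname in the other, since the one-credential rule is tracked per server name.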


  • Best Practical RT, sorting email into queues automatically using procmail

    - by user52095
    I'm trying to get incoming e-mail to automatically go directly into whichever queue/ticket it is related to, or create a new one if none exists and the right queue e-mail (set up in the web interface) is used. I will have too many queues to have two line items within mailgate per queue. A similar issue was discussed here (http://serverfault.com/questions/104779/procmail-pipe-to-program-otherwise-return-error-to-sender), but I thought it best to open a new case instead of tagging on to what appeared to be an answer to that person's query. I'm able to send and receive e-mail (via Postfix) to the default rt user, and this user successfully accepts all e-mail for the relevant domain. I have no idea where the e-mail goes; it's successfully delivered, but it does not update existing tickets (with a Subject line match) and it does not create any new ones. Here's an example from my ./procmail.log:

        procmail: [23048] Mon Aug 23 14:26:01 2010
        procmail: Assigning "MAILDOMAIN=rt.mydomain.com "
        procmail: Assigning "RT_MAILGATE=/opt/rt3/bin/rt-mailgate "
        procmail: Assigning "RT_URL=http://rt.mydomain.com/ "
        procmail: Assigning "LOGABSTRACT=all "
        procmail: Skipped " "
        procmail: Skipped " "
        procmail: Assigning "LASTFOLDER={ "
        procmail: Opening "{ "
        procmail: Acquiring kernel-lock
        procmail: Notified comsat: "rt@18337:./{ "
        From [email protected] Mon Aug 23 14:26:01 2010
         Subject: RE: [RT.mydomain.com #1] Test Ticket
          Folder: {    1616

    Does the "Notified comsat" portion mean that it notified RT? The contents of my ./procmailrc:

        # Preliminaries
        SHELL=/bin/sh              # Use the Bourne shell (check your path!)
        #MAILDIR=${HOME}           # First check what your mail directory is!
        MAILDIR="/var/mail/rt/"
        LOGFILE="home/rt//procmail.log"
        LOG="--- Logging ${LOGFILE} for ${LOGNAME}, "
        VERBOSE=yes
        MAILDOMAIN="rt.mydomain.com"
        RT_MAILGATE="/opt/rt3/bin/rt-mailgate"
        #RT_MAILGATE="/usr/local/bin/rt-mailgate"
        RT_URL="http://rt.mydomain.com/"
        LOGABSTRACT=all

        :0
        {
          # The following line extracts the recipient from Received: headers.
          # Simply using To: does not work, as tickets are often created
          # by sending a CC/BCC to RT.
          TO=`formail -c -xReceived: | grep $MAILDOMAIN | sed -e 's/.*for *<*\(.*\)>* *;.*$/\1/'`
          QUEUE=`echo $TO | $HOME/get_queue.pl`
          ACTION=`echo $TO | $HOME/get_action.pl`

          :0 h b w
          | /usr/bin/perl $RT_MAILGATE --queue $QUEUE --action $ACTION --url $RT_URL
        }

    I know that my get_queue.pl and get_action.pl scripts work, as those have been previously tested. Any help and/or guidance you can give would be greatly appreciated. Nicôle
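    A hedged way to split the problem: feed a saved message straight to rt-mailgate from the shell, bypassing procmail entirely, to see whether RT itself accepts it (the queue name and message path are placeholders):

        /usr/bin/perl /opt/rt3/bin/rt-mailgate --queue General --action correspond \
            --url http://rt.mydomain.com/ < /tmp/test-message.eml

    If that creates or updates a ticket, the fault is on the procmail side, and the log's LASTFOLDER={ / Opening "{" lines are suspicious there: procmail appears to be delivering to a mailbox literally named "{" instead of entering the brace block, which usually points at a stray character on or around the :0 line.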


  • Acer Aspire one is falling apart on Ubuntu

    - by Narcolapser
    Question: How do I make my netbook detect and install hardware? Info: I have an Acer Aspire One; it came with Windows XP and I loaded it with Win7. I decided I wanted to change to Ubuntu, so I tried Ubuntu Netbook Remix, which failed horribly, and after attempting 3 or so other OSes I ended up with Ubuntu Desktop 9.10. That worked fine for a while, but there were some minor issues, so I asked a question about it here and decided to change my OS again. This last weekend I tried Mandriva, like the guy in my other question suggested - no success. When my netbook lost the ability to use its touch pad, I didn't think much of it; I just thought it must be a driver or something. But when Mandriva failed, and I also tried Damn Small Linux and Debian, which both failed too, I decided to switch back to Ubuntu Desktop (somewhere in here my keyboard stopped working for one attempt as well). But first I gave the Netbook Remix one more try. It worked this time, with the exception that it didn't have any networking. I thought it was a driver issue again and finished the weekend with Ubuntu Desktop 9.10 again. But now things get really crazy: it doesn't know it has a wireless card or an ethernet card. It doesn't know my phone is connected trying to provide wireless broadband either. I'm clueless as to what the problem could be, and with only a minimal amount of experience with Ubuntu, I can't navigate the entire interface with only my keyboard (it doesn't detect a USB mouse when I plug it in, though it had when I installed it; in fact the network interfaces were working just fine when I live-booted Ubuntu to install it). Even so, I don't know where to go or what to do to make it recognize its hardware. I'm in a dire situation; any help is welcome.
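    A hedged first diagnostic from a terminal (reachable keyboard-only with Ctrl+Alt+T): check whether the kernel still sees the devices at all, as opposed to seeing them but lacking drivers:

        lspci | grep -i -E 'network|ethernet'
        sudo lshw -C network
        rfkill list

    If lspci lists the cards, an rfkill hard/soft block (the Aspire One's wifi toggle key state can persist across installs) is the first thing to rule out; if lspci shows nothing at all, the hardware itself is dropping off the bus and no driver work will fix it.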


  • RRAS VPN on Windows 2k3 AD, can access RRAS server only

    - by nopsax
    I'm setting up a test lab and here is the current configuration:

        192.168.86.201 - a Windows 2003 machine acting as PDC with AD/DNS/DHCP/WINS
        192.168.86.62  - a Windows 2003 machine, the RRAS server with IAS, also a file/print server
        192.168.86.6   - gateway/router to the internet
        192.168.86.21  - a Windows XP workstation

    Everything works on the internal network: file/print/AD, etc. Whenever a user connects via VPN to the RRAS server remotely using their domain credentials, they are assigned an IP address from the 192.168.86.201 machine along with the WINS server address, etc. The VPN user can then ping/access resources on the RRAS server, but cannot ping/access resources of any other machines by name or IP. However, if I ping by name, it does resolve to the correct IP address - just no replies. I did notice that on the RRAS server the 'internal' interface gets an IP address of 192.168.86.75 when a remote user connects, and the remote user is assigned, for example, 192.168.86.71. The RRAS server responds on both the .62 and .75 IP addresses. The client also unchecks the 'use remote default gateway' option. Also, I tried connecting a laptop to the physical network, joining the domain, then going remote and dialing the connection before domain login, and everything seems to work, e.g. browseable shares via Network Neighborhood. But I can't really join the domain remotely if I cannot access any other resources. I really need to monitor traffic to see what's happening to those packets, but won't be able to until this weekend. Any help is appreciated; I will provide whatever configurations are needed.


  • Configuring port forwarding for SSH - no response outside LAN

    - by WinnieNicklaus
    I recently moved, and at the same time purchased a new router (Linksys E1200). Prior to the move, I had my old router set up to forward a port for SSH to servers on my LAN, and I was using DynDNS to manage the external IP address. Everything worked great. I moved and set up the new router (unfortunately, the old one is busted so I can't try things out with it), updated the DynDNS address, and attempted to restore my port forwarding settings. No joy. SSH connections time out, and pings go unanswered. But here's the weird part (i.e., key to the whole thing?): I can ping and SSH just fine from within this LAN. I'm not talking about the local 192.168.1.* addresses. I can actually SSH from a computer on my LAN to the DynDNS external address. It's only when the client is outside the LAN that connections are dropped. This surely suggests a particular point of failure, but I don't know enough to figure out what it is. I can't figure out why it would make a difference where the connections originate, unless there's a filter for "trusted" IP addresses, which is perhaps just restricted to my own. No settings have been touched on the servers, and I can't find any settings suggesting this on the router admin interface. I disabled the router's SPI firewall and "Filter anonymous traffic" setting to no avail. Has anyone heard of this behavior, and what can I do to get past it?
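    A hedged pair of checks to localize where the packets die; run the first from a host outside the LAN and the second on the SSH server (the interface name is a placeholder):

        nmap -Pn -p 22 yourhost.dyndns.org
        sudo tcpdump -n -i eth0 'tcp port 22'

    If tcpdump never sees the outside SYNs, the forward rule isn't the problem: something upstream, such as an ISP modem doing its own NAT in front of the E1200, is eating them. That story also fits the inside-only success, since connections from the LAN to the external address can be hairpinned by the router without ever traversing the upstream device.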


  • Mac OS X Client With Static DHCP Assignment Requests Wrong IP via Option 50

    - by Starchy
    I have a number of Mac (and a few Linux) laptops getting DHCP from a Force10 layer 3 switch, the only DHCP server on the subnet. There's a global dynamic pool, and for each full-time employee's laptop I have a single-IP static pool set by MAC address. One and only one of the clients, running OS X 10.7.5, consistently fails to get a static assignment. The MAC address in the static pool definition has been carefully re-checked. Running tcpdump on a mirrored port when the laptop connects, I see that it is specifically requesting 10.100.0.252 (a dynamic address):

        11:32:10.108280 IP (tos 0x0, ttl 255, id 28293, offset 0, flags [none], proto UDP (17), length 328)
            0.0.0.0.bootpc > broadcasthost.bootps: [udp sum ok] BOOTP/DHCP, Request from 3c:07:54:xx:xx:xx (oui Unknown), length 300, xid 0x1399da89, Flags [none] (0x0000)
              Client-Ethernet-Address 3c:07:54:xx:xx:xx (oui Unknown)
              Vendor-rfc1048 Extensions
                Magic Cookie 0x63825363
                DHCP-Message Option 53, length 1: Request
                Parameter-Request Option 55, length 9:
                  Subnet-Mask, Default-Gateway, Domain-Name-Server, Domain-Name,
                  Option 119, LDAP, Option 252, Netbios-Name-Server, Netbios-Node
                MSZ Option 57, length 2: 1500
                Client-ID Option 61, length 7: ether 3c:07:54:xx:xx:xx
                Requested-IP Option 50, length 4: 10.100.0.252
                Lease-Time Option 51, length 4: 7776000
                Hostname Option 12, length 10: "host-name"
                END Option 255, length 0
                PAD Option 0, length 0, occurs 8

    I haven't been able to find any extra system prefs or unusual software on the laptop. Disabling the interface and rebooting, or temporarily setting the IP manually, both fail to make any difference. Any suggestions appreciated.
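    A hedged client-side fix: OS X caches its last lease and re-requests it via option 50, so deleting the stored lease forces a fresh DISCOVER instead of a REQUEST for 10.100.0.252. The lease file name follows the interface/MAC pattern used on 10.7 (list the directory for the exact name; en0 is assumed here):

        ls /var/db/dhcpclient/leases/
        sudo rm /var/db/dhcpclient/leases/en0-1,3c:07:54:xx:xx:xx
        sudo ipconfig set en0 DHCP

    A compliant server should NAK a REQUEST for an address outside the client's static assignment anyway, so if the Force10 instead stays silent on the mismatched option 50, that would also explain why only this one client hangs.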


  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server. I've had that working for a long time with both regular and SSL connections. I installed CF9 yesterday side by side with the existing MX7 install to start testing. The install was smooth and detected MX7, adjusted CF9's port numbers to avoid conflicts, etc. Testing started well: MX7 over regular and SSL still worked, and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL. I installed a new certificate with keytool, FireFox (v3.6) complained about it being unsigned, I added it to the exception list, and now I get this:

        Secure Connection Failed
        An error occurred during a connection to localhost:9101.
        Peer reports it experienced an internal error.
        (Error code: ssl_error_internal_error_alert)

    I've been Googling that in all variations but can't find much help to get past this. I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml:

        <service class="jrun.servlet.http.SSLService" name="SSLService">
          <attribute name="enabled">true</attribute>
          <attribute name="interface">*</attribute>
          <attribute name="port">9101</attribute>
          <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute>
          <attribute name="keyStorePassword">*deleted*</attribute>
          <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute>
          <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute>
          <attribute name="deactivated">false</attribute>
          <attribute name="bindAddress">*</attribute>
          <attribute name="clientAuth">false</attribute>
        </service>

    Anyone here know of any issues setting up SSL with CF9? Anyone had success with it? Dave
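    One hedged thing to rule out: keytool's -genkey has historically defaulted to DSA keys, and some SSL stacks abort the handshake with exactly this kind of internal-error alert when the negotiable cipher suites expect an RSA key. Checking the existing store, and regenerating with RSA if needed, costs little (the alias and validity here are arbitrary):

        keytool -list -v -keystore mykey
        keytool -genkey -alias cf9ssl -keyalg RSA -keysize 2048 -validity 365 -keystore mykey

    The -list output shows the key algorithm for each entry; if it says DSA, that mismatch would line up with the browser-side error.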


  • Bridged network on OS X only gets UDP broadcast traffic

    - by a paid nerd
    I've created a bridged network on Mac OS X 10.8.5 using ifconfig and TUNTAP for OS X, to bridge my wireless connection, en0, with a virtual interface, tap0, which I can use for guest VMs:

        $ sudo sysctl -w net.inet.ip.forwarding=1
        $ sudo sysctl -w net.link.ether.inet.proxyall=1
        $ sudo sysctl -w net.inet.ip.fw.enable=1
        $ sudo ifconfig bridge0 create
        $ sudo ifconfig bridge0 addm en0 addm tap0
        $ sudo ifconfig bridge0 up
        $ ifconfig
        en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
                ether 28:cf:xx:xx:xx:xx
                inet6 xxxx::xxxx:xxxx:xxxx:xxxx%en0 prefixlen 64 scopeid 0x4
                inet 192.168.100.64 netmask 0xffffff00 broadcast 192.168.100.1
                media: autoselect
                status: active
        bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
                ether ac:de:xx:xx:xx:xx
                Configuration:
                        priority 0 hellotime 0 fwddelay 0 maxage 0
                        ipfilter disabled flags 0x2
                member: en0 flags=3<LEARNING,DISCOVER>
                        port 4 priority 0 path cost 0
                member: tap0 flags=3<LEARNING,DISCOVER>
                        port 8 priority 0 path cost 0
        tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
                ether ca:3d:xx:xx:xx:xx
                open (pid 88244)

    However, if I tcpdump -i tap0, I only see broadcast traffic. Shouldn't I see a mirror of everything on en0? (192.168.100.33, the host doing the broadcasting, is another unrelated, noisy server on my LAN.) (I asked a similar question here and will probably close it.)


  • HP ProCurve 2610 inter-VLAN routing

    - by user19039
    Can anyone tell me why inter-VLAN routing is working for all VLANs except my newly created VLAN 4? I have an HP ProCurve 2610; any help would be appreciated. I have basically this one switch as the core, with all unmanaged switches attached to it. We have a second 2610 on port 28. Running configuration:

        ; J9085A Configuration Editor; Created on release #R.11.25
        hostname "Core_HP"
        interface 22
           speed-duplex 100-full
        exit
        ip routing
        snmp-server community "public" Unrestricted
        vlan 1
           name "DEFAULT_VLAN"
           untagged 1-12,17-22,26-27
           ip address 192.168.4.6 255.255.255.0
           tagged 25
           no untagged 13-16,23-24,28
        exit
        vlan 2
           name "WAN"
           untagged 28
           ip address 10.254.254.3 255.255.255.0
        exit
        vlan 3
           name "Wireless"
           untagged 13-16,24
           ip address 192.168.7.6 255.255.255.0
           ip helper-address 192.168.4.2
           tagged 27
        exit
        vlan 35
           name "guest"
           untagged 23
           tagged 24
        exit
        vlan 4
           name "esxi"
           untagged 25
           ip address 10.10.1.1 255.255.248.0
        exit
        ip route 192.168.5.0 255.255.255.0 10.254.254.1
        ip route 192.168.6.0 255.255.255.0 10.254.254.1
        ip route 0.0.0.0 0.0.0.0 192.168.4.10

    show ip route:

        IP Route Entries
        Destination        Gateway         VLAN Type      Sub-Type  Metric  Dist.
        ------------------ --------------- ---- --------- --------- ------- -----
        0.0.0.0/0          192.168.4.10    1    static              1       1
        10.10.0.0/21       esxi            4    connected           0       0
        10.254.254.0/24    WAN             2    connected           0       0
        127.0.0.0/8        reject               static              0       250
        127.0.0.1/32       lo0                  connected           0       0
        192.168.4.0/24     DEFAULT_VLAN    1    connected           0       0
        192.168.5.0/24     10.254.254.1    2    static              1       1
        192.168.6.0/24     10.254.254.1    2    static              1       1
        192.168.7.0/24     Wireless        3    connected           0       0

    show ip:

        Internet (IP) Service
        IP Routing : Enabled
        Default TTL : 64
        Arp Age : 20
        VLAN         | IP Config  IP Address      Subnet Mask     Proxy ARP
        ------------ + ---------- --------------- --------------- ---------
        DEFAULT_VLAN | Manual     192.168.4.6     255.255.255.0   No
        WAN          | Manual     10.254.254.3    255.255.255.0   No
        Wireless     | Manual     192.168.7.6     255.255.255.0   No
        esxi         | Manual     10.10.1.1       255.255.248.0   No
        guest        | Disabled
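    A hedged check, since the route table itself looks sane: confirm whether the switch can even reach hosts on VLAN 4, rather than just hosting the SVI (the host IP is a placeholder):

        ping 10.10.1.50
        show arp

    If the switch can't ping a VLAN 4 host, the usual suspects are the host's own gateway/mask (it must be 10.10.1.1 with the /21 mask) and the port 25 trunking: VLAN 1 is tagged on port 25 while VLAN 4 is untagged there, so the device on port 25 has to send its VLAN 4 traffic untagged or it will never land in the right VLAN.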


  • BGP Multihomed/Multi-location best practice

    - by Tom O'Connor
    We're in the process of designing a new iteration of our network, where we improve resiliency by adding a second datacentre with an identical configuration of servers to our primary location. To achieve network connectivity, we're looking into a couple of possible methods; see earlier questions http://serverfault.com/questions/86736/best-way-to-improve-resilience and http://serverfault.com/questions/101582/dns-round-robin-failover-and-load-balancing. I'm pretty convinced that BGP is the right way to go about this, and this question is not about RRDNS.

    1) If we have 2 locations, do we announce the same IP address block from both locations?

    2) If we did this, but had a management SSH interface on x.x.x.50 in datacentre A and on x.x.x.150 in datacentre B, what is the best-practice mechanism for achieving this? If I were nearest to A, all my traffic would go to x.50; but if I attempted to connect to x.150, I'd not be able to connect, because that address wouldn't be valid at A, only at B. Is the best solution to announce 2 different netblocks, one at each location (bringing back the need for RRDNS), or to announce a single block and run some form of VPN between the two sites for management traffic?

