Search Results

Search found 25400 results on 1016 pages for 'enable manual correct'.

  • Fix bad superblock on logical partition

    - by Chris
    I was following http://www.howtoforge.com/linux_resi...xt3_partitions and when I reboot and run:

        root@Microknoppix:/home/knoppix# fsck -n /dev/sda7
        fsck from util-linux-ng 2.17.2
        e2fsck 1.41.12 (17-May-2010)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sda7

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    So I ran e2fsck with all the backup block numbers (I forget exactly what tool I used to find where the superblocks are hidden); no dice. Then I ran testdisk and had it look for the superblock; no results. Anyone have any ideas? fdisk -l for reference:

        root@Microknoppix:/home/knoppix# fdisk -l
        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x97646c29

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1          64      512000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              64       38912   312046593    f  W95 Ext'd (LBA)
        /dev/sda5              64         326     2104320   82  Linux swap / Solaris
        /dev/sda6   *         327        2938    20972544   83  Linux
        /dev/sda7            2938       38912   288968672+  83  Linux

    To be honest it looks like I lost it... If so, the next step is to dump the partition to an image file and hope I can find or write some software to parse through the data looking for known file headers, I think.
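
    Note: a minimal sketch of the superblock-listing step mentioned above, assuming the filesystem was created with default parameters; mke2fs -n only simulates a mkfs and prints where the backup superblocks would have been placed, without writing anything:

        # list the backup superblock locations for this partition's geometry
        mke2fs -n /dev/sda7
        # then retry e2fsck against each reported backup, e.g.:
        e2fsck -b 32768 /dev/sda7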

  • Why is there wrong name server information at crsnic.net & gtld-servers.net?

    - by danorton
    Did I screw this up? I don't even know how this might have happened, so I'd like to learn. I'm trying out HostGator's reseller service and I bought a domain name through it, but I didn't want the default name servers, so I changed them during registration. After registration the domain name record is correct everywhere except at whois-servers.net and whois.crsnic.net, and it looks like the DNS network is using that same information.

        $ whois -h whois.enom.com. example.com
        ...
        Name Servers:
        dns1.name-services.com
        dns2.name-services.com
        dns3.name-services.com
        dns4.name-services.com
        dns5.name-services.com
        ...

        $ whois -h whois.crsnic.net. example.com
        Domain Name: EXAMPLE.COM
        Registrar: ENOM, INC.
        Whois Server: whois.enom.com
        Referral URL: http://www.enom.com
        Name Server: NS1.HOSTGATOR.COM
        Name Server: NS2.HOSTGATOR.COM
        Status: clientTransferProhibited
        Updated Date: 01-jun-2010
        Creation Date: 31-may-2010
        Expiration Date: 31-may-2011
        >>> Last update of whois database: Tue, 01 Jun 2010 19:20:47 UTC <<<
        ...

        $ dig +norecurse @b.gtld-servers.net. example.com. NS
        ...
        ;; AUTHORITY SECTION:
        example.com.    172763  IN  NS  ns2.hostgator.com.
        example.com.    172763  IN  NS  ns1.hostgator.com.
        ...

    My next step is to let HostGator have a look, but first I want to better understand how this happened. Thanks.
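
    Note: to see whether the stale delegation is registry-wide or confined to one server, a quick sketch that queries several of the .com gTLD servers directly (letters a through m all exist for .com):

        for ns in a b c d; do
          echo "== ${ns}.gtld-servers.net"
          dig +norecurse +short @${ns}.gtld-servers.net. example.com. NS
        done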

  • Setting the default permissions for files uploaded via FTP to a directory

    - by Kerri
    Disclaimer: I'm just a web designer/coder, and server admin stuff is my weakest point of them all. So be easy on me (and very specific).

    I'm using a simple CMS (Unify) on a site, where part of the functionality is that the client can upload files to a specified directory (using FTP). The permissions for the upload directory are set to 755. But when files are uploaded through the interface, they are uploaded with permissions set to 640 (instead of 644), so site visitors cannot access the files. When I emailed the CMS's support about this, they told me that it was a server setting, and I need to make sure that files uploaded through FTP are set to 644. Makes perfect sense, but I have no idea how to do this. Any help would be greatly appreciated.

    This site is a shared site hosted by Network Solutions (Unix), so my access options are limited. I can edit .htaccess files, and php.ini, but that's about all I have access to. It appears I can't even log on via shell.

    ETA 11/11/2010: Thanks all. I was able to work around this problem by setting up the CMS's settings in a different way. I'd be interested in following up on Nick O'Niel's suggestions, because I think he's on the right track, but unfortunately I can't access the necessary files on this particular server. So, anyway, I'm leaving this open, since the original question isn't exactly resolved. Unfortunately, I probably can't put a correct answer to the test, since the shared server in question has nearly all of its config files tightly locked down.
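
    Note: the 640-vs-644 difference is exactly what an FTP daemon's umask controls: a umask of 022 yields 644 for new files, 027 yields 640. A sketch of the relevant directive on servers you do control; which daemon Network Solutions actually runs is an assumption a shared host rarely exposes:

        # vsftpd (/etc/vsftpd.conf) -- the default local_umask is 077
        local_umask=022

        # proftpd (/etc/proftpd.conf)
        Umask 022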

  • Ubuntu dmidecode is not functioning properly

    - by Alaa Alomari
    dmidecode is giving irrelevant and conflicting results. It shows that I have two slots, while the correct number is 8 (the board is a Tyan S5350).

        uname -a
        Linux synd01 3.0.0-16-server #29-Ubuntu SMP Tue Feb 14 13:08:12 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        root@synd01:/home/badmin# dmidecode -t 16
        dmidecode 2.9
        SMBIOS 2.33 present.

        Handle 0x0011, DMI type 16, 15 bytes
        Physical Memory Array
            Location: System Board Or Motherboard
            Use: System Memory
            Error Correction Type: None
            Maximum Capacity: 4 GB
            Error Information Handle: Not Provided
            Number Of Devices: 2

    while

        root@synd01:/home/badmin# dmidecode -t 17 | grep Size
            Size: No Module Installed
            Size: No Module Installed
            Size: 1024 MB
            Size: 1024 MB
            Size: No Module Installed
            Size: No Module Installed
            Size: 1024 MB
            Size: 1024 MB

    also lshw shows:

        *-memory
            description: System Memory
            physical id: 11
            slot: System board or motherboard
            size: 4GiB
          *-bank:0
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 0
            slot: J3B1
            clock: 166MHz (6.0ns)
          *-bank:1
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 1
            slot: J3B3
            clock: 166MHz (6.0ns)
          *-bank:2
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 2
            slot: J2B2
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
          *-bank:3
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 3
            slot: J2B4
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
          *-bank:4
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 4
            slot: J3B2
            clock: 166MHz (6.0ns)
          *-bank:5
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 5
            slot: J2B1
            clock: 166MHz (6.0ns)
          *-bank:6
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 6
            slot: J2B3
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
          *-bank:7
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 7
            slot: J1B1
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)

    What might cause this conflict, and how can I fix it? Thanks
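
    Note: before blaming the DMI tables outright, it is worth checking whether the firmware publishes more than one Physical Memory Array; -t 16 prints every such handle, and the per-array Number Of Devices values then have to be summed. A quick sketch:

        # how many Physical Memory Array records does the BIOS publish?
        sudo dmidecode -t 16 | grep -c 'Physical Memory Array'
        # sum the Number Of Devices fields across all of them
        sudo dmidecode -t 16 | awk -F: '/Number Of Devices/ {sum += $2} END {print sum}'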

  • Code to update HyperV Export file

    - by Andy Schneider
    I am using the HyperV module from CodePlex to do a "config only" export from a 2008 R2 Hyper-V server. In order to import the configuration on another Hyper-V server, I need to edit the value of CopyVMStorage in the EXP file, which is an XML file. I wrote the following PowerShell code to do the update for me; the variable $existing is the existing .exp file.

        $xml = [xml](Get-Content $existing)
        $xpath = '//PROPERTY[@NAME ="CopyVmStorage"]'
        foreach ($node in $xml.SelectNodes($xpath)) { $node.Value = 'TRUE' }
        $xml.Save($existing)

    This code makes the correct changes to the XML. However, when I go to import the file on the Hyper-V server, I get an error that says the file format is incorrect. I am wondering if the encoding of the file is incorrect or if there is something else going on. If I edit the file manually in WordPad, it imports without an issue. The filename is a GUID with a .exp extension, and it appears that the file name is too long for Notepad to open; Notepad throws an error trying to open the file, which is why I went with WordPad. I have noticed that the file updated with PowerShell comes out formatted, whereas the raw file is XML all bunched together with no whitespace. Any ideas on what "file format" means in this Hyper-V error message, and how I might be able to use my code to automate this change in the XML without changing the file format?
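
    Note: a sketch of one way to rule out the two usual suspects, whitespace and encoding. PreserveWhitespace keeps the document bunched together the way Hyper-V wrote it, and the writer's encoding is set explicitly; which of the two the importer actually objects to is an assumption to test:

        $xml = New-Object System.Xml.XmlDocument
        $xml.PreserveWhitespace = $true   # do not re-indent the document on save
        $xml.Load($existing)
        foreach ($node in $xml.SelectNodes('//PROPERTY[@NAME ="CopyVmStorage"]')) {
            $node.Value = 'TRUE'
        }
        $settings = New-Object System.Xml.XmlWriterSettings
        # UTF-16 here is a guess; try [System.Text.UTF8Encoding]::new($false) for BOM-less UTF-8
        $settings.Encoding = [System.Text.Encoding]::Unicode
        $writer = [System.Xml.XmlWriter]::Create($existing, $settings)
        $xml.Save($writer)
        $writer.Close()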

  • Removing resource limits on Solaris 10

    - by mikeydonkey
    How should one remove all potential artificial resource limitations for a process? I just saw a case where a server application consumed resources until some limitation was hit. Other shells into the same server were all extremely slow (waiting for something to free up for them; e.g. prstat took 5 minutes to start). It wasn't a CPU/memory-related problem, so I think it has something to do with ulimits/projects. I already managed to set the maximum open files to 500,000, and it helped a little bit. However, there is something else, and I cannot figure out which resource is maxed out. I can probably get some in-house administrator to check this, but I would like to understand how I can make sure there aren't any limitations! If you think I am going the wrong way (it would be better to figure out which limitation should be specifically tuned, etc.), please feel free to point me to the correct way. I know technical stuff; it's just Solaris 10 that is giving me a headache. :/
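
    Note: on Solaris 10 most of these limits live in resource controls rather than plain ulimits, so the quickest inventory is prctl against a running process plus the project database. A sketch; the <pid> is a placeholder for the application's process:

        # show every resource control, with current usage and limit, for this shell
        prctl $$
        # inspect one specific control on the application's process
        prctl -n process.max-file-descriptor -i process <pid>
        # list project-level settings that may cap the whole workload
        projects -l
        cat /etc/project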

  • Windows 2008 R2 IPsec encryption in tunnel mode, hosts in same subnet

    - by fission
    In Windows there appear to be two ways to set up IPsec:

    1. The IP Security Policy Management MMC snap-in (part of secpol.msc, introduced in Windows 2000).
    2. The Windows Firewall with Advanced Security MMC snap-in (wf.msc, introduced in Windows 2008/Vista).

    My question concerns #2; I already figured out what I need to know for #1. (But I want to use the 'new' snap-in for its improved encryption capabilities.) I have two Windows Server 2008 R2 computers in the same domain (domain members), on the same subnet:

        server2  172.16.11.20
        server3  172.16.11.30

    My goal is to encrypt all communication between these two machines using IPsec in tunnel mode, so that the protocol stack is IP / ESP / IP / ...etc. First, on each computer, I created a Connection Security Rule:

        Endpoint 1: (local IP address), e.g. 172.16.11.20 for server2
        Endpoint 2: (remote IP address), e.g. 172.16.11.30
        Protocol: Any
        Authentication: Require inbound and outbound, Computer (Kerberos V5)
        IPsec tunnel: Exempt IPsec protected connections
        Local tunnel endpoint: Any
        Remote tunnel endpoint: (remote IP address), e.g. 172.16.11.30

    At this point, I can ping each machine, and Wireshark shows me the protocol stack; however, nothing is encrypted (which is expected at this point). I know that it's unencrypted because Wireshark can decode it (using the setting Attempt to detect/decode NULL encrypted ESP payloads), and the Monitoring > Security Associations > Quick Mode display shows ESP Encryption: None. Then on each server, I created Inbound and Outbound Rules:

        Protocol: Any
        Local IP addresses: (local IP address), e.g. 172.16.11.20
        Remote IP addresses: (remote IP address), e.g. 172.16.11.30
        Action: Allow the connection if it is secure, Require the connections to be encrypted

    The problem: though I created the Inbound and Outbound Rules on each server to enable encryption, the data is still going over the wire (wrapped in ESP) with NULL encryption. (You can see this in Wireshark.) When the traffic arrives at the receiving end, it's rejected (presumably because it's unencrypted). [And disabling the Inbound rule on the receiving end causes it to lock up and/or bluescreen; fun!] The Windows Firewall log says, e.g.:

        2014-05-30 22:26:28 DROP ICMP 172.16.11.20 172.16.11.30 - - 60 - - - - 8 0 - RECEIVE

    I've tried varying a few things:

    - In the Rules, setting the local IP address to Any
    - Toggling the Exempt IPsec protected connections setting
    - Disabling rules (e.g. disabling one or both sets of Inbound or Outbound rules)
    - Changing the protocol (e.g. to just TCP)

    But realistically there aren't that many knobs to turn. Does anyone have any ideas? Has anyone tried to set up tunnel mode between two hosts using Windows Firewall? I've successfully got it set up in transport mode (i.e. no tunnel) using exactly the same set of rules, so I'm a bit surprised that it didn't Just Work™ with the tunnel added.
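
    Note: for comparison with the GUI, roughly the same rule can be created from an elevated prompt with netsh; a sketch, where the rule name and the qmsecmethods value are assumptions standing in for whatever encryption was picked in the snap-in. Setting qmsecmethods explicitly forces a real ESP cipher rather than null encryption:

        netsh advfirewall consec add rule name="tunnel-s2-s3" ^
          mode=tunnel endpoint1=172.16.11.20 endpoint2=172.16.11.30 ^
          localtunnelendpoint=172.16.11.20 remotetunnelendpoint=172.16.11.30 ^
          action=requireinrequireout auth1=computerkerb ^
          qmsecmethods=esp:sha1-aes192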

  • What options to use for an accurate Bacula backup?

    - by Kiss Stefan
    It's actually two questions in one. The first is a bit more theoretical. When specifying accurate options, how does Bacula figure out whether a file needs to be backed up? Is it a simple AND? As in, if the options are

        Accurate = sm5

    Bacula will not back up the file if ((size == old size) AND (modtime == old modtime) AND (md5 == old md5)). Is that correct? Do any of the options take precedence? As in, would a file be skipped if its modification time is different but it has the same md5sum? Are there any implied options that you cannot ignore?

    Practical case (Bacula 5.0.1): I have to back up an svn repo. In order to be able to make incremental backups as simple as possible, I am hotcopying it (client run before) to another location that Bacula will back up (and then deleting it with client run after). Now in the FileSet I have

        Accurate = spnd5

    This should tell Bacula to take into consideration size, permission bits, number of links, decreases in size, and md5sum. However, an incremental also includes a full copy of the svn. What am I doing wrong? It seems that it takes creation time into account even though I have not specified it.
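
    Note: for reference, a sketch of where the accurate letters live; the resource names here are made up, and the Job-level Accurate = yes requirement and the Signature line are assumptions worth verifying against the 5.0.1 manual:

        Job {
          Name = "svn-backup"
          Accurate = yes          # without this, the FileSet letters are ignored
          ...
        }
        FileSet {
          Name = "svn-hotcopy"
          Include {
            Options {
              Accurate = spnd5    # size, perms, link count, size decreases, md5
              Signature = MD5     # the 5 flag needs a stored MD5 to compare against
            }
            File = /backup/svn-hotcopy
          }
        }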

  • Easy_install installs the wrong version of Python modules (Mac OS)

    - by user71415
    I installed Python 2.7 on my Mac. When typing "python" in the terminal, it shows:

        Ma-Xiaolongs-MacBook-Pro-2:~ MaXiaolong$ python
        Python 2.7 (r27:82508, Jul  3 2010, 20:17:05)
        [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.

    The Python version is correct here. But when I try to easy_install some modules, the system installs them for Python 2.6, and they cannot be imported into Python 2.7. And of course I cannot do what I need in my code. Here's an example of easy_install graphy:

        Ma-Xiaolongs-MacBook-Pro-2:~ MaXiaolong$ easy_install graphy
        Searching for graphy
        Reading pypi.python.org/simple/graphy/
        Reading http://code.google.com/p/graphy/
        Best match: Graphy 1.0.0
        Downloading http://pypi.python.org/packages/source/G/Graphy/Graphy-1.0.0.tar.gz#md5=390b4f9194d81d0590abac90c8b717e0
        Processing Graphy-1.0.0.tar.gz
        Running Graphy-1.0.0/setup.py -q bdist_egg --dist-dir /var/folders/fH/fHwdy4WtHZOBytkg1nOv9E+++TI/-Tmp-/easy_install-cFL53r/Graphy-1.0.0/egg-dist-tmp-YtDCZU
        warning: no files found matching '.tmpl' under directory 'graphy'
        warning: no files found matching '.txt' under directory 'graphy'
        warning: no files found matching '.h' under directory 'graphy'
        warning: no previously-included files matching '.pyc' found under directory '.'
        warning: no previously-included files matching '~' found under directory '.'
        warning: no previously-included files matching '.aux' found under directory '.'
        zip_safe flag not set; analyzing archive contents...
        graphy.all_tests: module references __file__
        Adding Graphy 1.0.0 to easy-install.pth file

        Installed /Library/Python/2.6/site-packages/Graphy-1.0.0-py2.6.egg
        Processing dependencies for graphy
        Finished processing dependencies for graphy

    So it installs graphy for Python 2.6. Can someone help me with it? I just want to set my default easy_install version to 2.7... Thank you very much!
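
    Note: a sketch of the usual fix: call the version-specific easy_install script that setuptools installs next to each interpreter, or go through the interpreter explicitly, so the egg lands in 2.7's site-packages (assuming setuptools is installed for 2.7):

        # versioned script, if present:
        easy_install-2.7 graphy
        # equivalent, and makes the interpreter choice explicit:
        python2.7 -m easy_install graphy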

  • Is there a good alternative to Videora iPod Converter?

    - by Richard
    I use Videora to convert my videos (in DivX/XviD format) to something I can play on my iPod Classic. I really dislike it. It's clunky, riddled with adverts, sometimes doesn't convert properly (the infamous "invalid public atom" error; see Google for more), and has a UI that truly stinks. On the upside, it's free, accepts a list of video files (via the oddly hard to find "1-click convert" button), and just gets on with the converting, as it already knows the correct settings for my iPod. One final nice touch is that once they are converted, it'll automatically upload them into iTunes.

    Are there any alternatives which have all the upsides but none of the downsides? Bonus points if they can set the metadata in iTunes correctly for TV shows (season, show, episode) and delete the converted file afterwards (as my iTunes settings mean that a copy is made elsewhere).

    I've looked at a bunch of applications (HandBrake, VirtualDub, MediaCoder, Format Factory, Any Video Converter, ConvertXtoDVD) but many of them fail the "just select a list of files and get on with converting" test, let alone all the other features I want. I have no desire to individually set the video size of each file or the codec or the post-processing options.

    I'm currently using the command line version of HandBrake (HandBrakeCLI) and a hand-written DOS batch file to go through every file in a folder and convert it. It does most of what I want, just not in a very slick way. Can anyone recommend anything better? It needs to work on Windows 7 and be free.
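
    Note: since the current setup is already HandBrakeCLI plus a batch file, a minimal sketch of that loop with conversion, iPod-friendly output, and source cleanup in one pass; the preset name is an assumption to check against HandBrakeCLI --preset-list:

        @echo off
        for %%f in (*.avi) do (
          HandBrakeCLI -i "%%f" -o "%%~nf.m4v" --preset="iPod"
          if not errorlevel 1 del "%%f"
        )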

  • Prevent SSL certificate being returned for a specific domain

    - by jezmck
    Apologies for a long question: We've taken on a new client whose web hosting was previously on their in-house server, which still has their Exchange/Outlook email. We now host their domain (and many others) on our server. They're complaining that they're getting errors in Outlook. I don't understand the AutoDiscover stuff at the root of the problem, but believe that I just need to stop the SSL certificate on our server being returned when requested at a particular domain:

        Yes it is, the issue lies with "{newclient}.com" being pointed to your server IP
        and that server has port 443 open with an SSL certificate associated to it. So
        when Outlook/ActiveSync use autodiscover to find the mailbox settings it finds
        your SSL (because 443 is open) and flags it as an error. The solution is to
        close 443 so it's not discovered. Autodiscover will then proceed to
        mail.{newclient}.com via the MX / service records and discover the correct SSL.

    I'm new here and there was no hand-over, so I don't know whether other currently hosted sites need to accept SSL connections, though I suspect some will, or may in future. This is a live server, so I can't risk trying loads of options in case I take the server offline! I feel like I should be adding something like the following to vhosts.conf:

        <VirtualHost *:443>
            ServerName {newclient}.com
            ServerAlias www.{newclient}.com
            SSLEngine Off
            SSLCertificateFile {NONE}
            SSLCertificateKeyFile {NONE}
        </VirtualHost>

    Apologies for the fact that I don't know enough about this subject to be able to ask the question more clearly!
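
    Note: before touching the live config, it's worth confirming exactly what Outlook sees; a sketch that asks the server for the certificate it serves for this name, with the IP and hostname as placeholders:

        openssl s_client -connect YOUR.SERVER.IP:443 \
          -servername autodiscover.newclient.example </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer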

  • SBD killing both cluster nodes even on small SAN network problems

    - by Wieslaw Herr
    I am having problems with STONITH SBD in an openais-based cluster.

    Some background: the active/passive cluster has two nodes, node1 and node2. They are configured to provide an NFS service to users. To avoid problems with split-brain, they are both configured to use SBD. SBD is using two 1MB disks available to the hosts via a multipath fibre-channel network.

    The problems start if something happens with the SAN network. For example, today one of the Brocade switches got rebooted and both nodes lost 2 out of 4 paths to each disk, which resulted in both nodes committing suicide and rebooting. This, of course, was highly undesirable because a) there were paths left, and b) even if the switch were out for 10-20 seconds, a reboot cycle of both nodes would take 5-10 minutes and all NFS locks would be lost.

    I tried increasing the SBD timeout values (to 10sec+ values, dump attached at the end); however, a "WARN: Latency: No liveness for 4 s exceeds threshold of 3 s" hints that something isn't working as I would expect it to.

    Here is what I would like to know:

    a) Is SBD working as it should, killing nodes when 2 paths are still available?
    b) If not, is the attached multipath.conf file correct? The storage controller we use is an IBM SVC (IBM 2145); should there be any specific configuration for it (as in multipath.conf.defaults)?
    c) How should I go about increasing the timeouts in SBD?

    Attachments: multipath.conf and sbd dump (http://hpaste.org/69537)
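
    Note: a sketch of how the on-disk SBD timeouts are usually changed; create rewrites the device header, so this assumes the cluster is stopped first, and the device path and values are placeholders to size against your multipath failover time:

        # inspect the current header timeouts
        sbd -d /dev/mapper/sbd0 dump
        # re-create with a 60s watchdog (-1) and 120s msgwait (-4),
        # long enough to ride out a path failover
        sbd -d /dev/mapper/sbd0 -1 60 -4 120 create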

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition, then it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so just the base URL will no longer match.

    You can also fix the problem by ordering the sites in the config file so that the smallest substring will always be last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it will generate a higher load on the server, and the server is intended to host the majority of the university's external-facing web content.

    Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
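
    Note: one hedged sketch: anchor each prefix so it must be followed by a slash or the end of the path. Unlike a bare $, which only matches the exact base URL, this keeps matching real paths like /drupal/user, and %{REQUEST_URI} never includes the query string in the first place, so logins should survive and ordering stops mattering:

        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]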

  • How to set up MySQL storage for certain rsyslog input matches?

    - by ylluminate
    I'm draining various logs from Heroku to an rsyslog Linux (Ubuntu) server, and am starting to have a little more to bite off than I can chew in terms of working with my log histories. I need to be able to drill back in time based on more flexible details, with more flexible access than the standard syslog file(s) provide. I'm thinking that logging to MySQL may be the correct approach, but how do I set this up such that it pulls only certain log entries into a table, based on an identifier? For example, I see a long hex string identifying each log entry from a certain Heroku app instance. I assume that I can pipe just those into MySQL rather than ALL rsyslog input... Could someone please direct me to a resource that can walk me through the process of setting something like this up, or simply provide some details that can help? I have 15+ years of Unix experience, so I just need some nudging in the right direction, as I've not really done a tremendous amount of work with syslog daemons in terms of pooling various servers into one. Additionally, I'd be interested in any log review tools that could make drilling through log arrangements like this handier for developers.
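
    Note: a sketch of the selective-storage idea using rsyslog's ommysql output module; the table layout comes from the createDB.sql script shipped with the rsyslog-mysql package, and the hex app identifier below is a made-up placeholder:

        # /etc/rsyslog.d/60-heroku-mysql.conf
        $ModLoad ommysql
        # only messages carrying this app instance's token go to MySQL
        :msg, contains, "d.1a2b3c4d" :ommysql:127.0.0.1,Syslog,rsysloguser,password
        # everything else keeps flowing to the normal files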

  • Proxychains, Tortunnel, Privoxy: cannot connect() to port

    - by Benjamin
    Hi all, I'm trying to do an nmap scan through Tor using tortunnel, privoxy and proxychains, as explained in the following video: http://vimeo.com/6238958

    I'm getting rather weird results. I can successfully perform any SYN scan on any port. However, as soon as I try to do connect() scans, proxychains cannot connect to all ports. In other words, I can perform connect() scans to port 80:

        proxychains nmap -P0 -A -sV www.zzz.com -p80

    but not port 21:

        proxychains nmap -P0 -A -sV www.zzz.net -p21

    I get the following error:

        Starting Nmap 4.62 ( http://nmap.org ) at 2010-06-02 08:34 UTC
        ProxyChains-2.1 (http://proxychains.sf.net)
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
        random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21

    My only guess would be that the exit node I'm using does not allow connections to port 21. Would that be correct? How could I fix it? Thanks for your time.
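
    Note: the exit-policy guess is plausible; many Tor exits block port 21. One caveat worth knowing: proxychains only intercepts connect() calls, so the "successful" SYN scans almost certainly went out directly rather than through Tor. A quick sketch to test a port through the same chain without nmap in the way:

        # verify the chain itself can reach the port; -z just checks connectivity
        proxychains nc -vz www.zzz.net 21
        # compare against a port most exits allow
        proxychains nc -vz www.zzz.net 80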

  • Sendmail not working in CentOS 6.4

    - by Kane
    I am trying to send e-mails from my CentOS 6.4 box, but it does not work. My knowledge about servers is quite limited, so I hope someone can help me. Here is what I did.

    First I tried to send an email using the "mail" command, but it was not in the OS, so I installed it:

        # yum install mailx

    After that, I tried sending an email using the "mail" command, but it did not send anything. I checked on the internet and realized I needed an e-mail server like sendmail, so I installed it:

        # yum install sendmail sendmail-cf sendmail-doc sendmail-devel

    After that, I configured it following some tutorials. First, the sendmail.mc file:

        # vi /etc/mail/sendmail.mc

    Commented out the next line:

        BEFORE: DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl
        AFTER:  dnl DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl

    Checked that the next lines are correct:

        FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
        ...
        FEATURE(use_cw_file)dnl
        ...
        FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl

    Updated sendmail.cf:

        # m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf

    Opened port 25 by adding the proper line to the iptables file:

        # vi /etc/sysconfig/iptables
        -A INPUT -m state --state NEW -m tcp --dport 25 -j ACCEPT

    Restarted iptables and sendmail:

        # service iptables restart
        # service sendmail restart

    So I thought that would be OK, but when I tried:

        # mail '[email protected]'
        # Subject: test subject
        # test content
        # .

    I checked the mail log:

        # vi /var/log/maillog

    And this is what I found:

        Aug 14 17:36:24 dev-admin-test sendmail[20682]: r7D8RItS019578: to=<[email protected]>, ctladdr=<[email protected]> (0/0), delay=1+00:09:06, xdelay=00:00:00, mailer=esmtp, pri=2460500, relay=alt4.gmail-smtp-in.l.google.com., dsn=4.0.0, stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com.

    I do not understand why there is a connection timeout. Am I missing something? Can anyone help me, please? Thank you.
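
    Note: a deferred "Connection timed out" to gmail-smtp-in is the classic signature of outbound port 25 being blocked upstream (the iptables line above only affects inbound connections). A sketch to test from the box itself; if this hangs, ask the provider about port 25 or relay through their smarthost:

        # does an outbound SMTP connection get through at all?
        telnet alt4.gmail-smtp-in.l.google.com 25
        # or, without telnet installed:
        nc -vz gmail-smtp-in.l.google.com 25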

  • How do you change a Cisco ASA 5510 management interface?

    - by Sam Sanders
    I want to add a redundant interface to my Cisco ASA 5510. The management interface is currently using Ethernet0/1 (10.1.25.254/24), one of the interfaces I want to use for the redundant interface, so I wanted to set up Management0/0 as the new management interface. The other interface I want to use for the redundant interface is Ethernet0/2 (10.1.0.254/24). The Ethernet0/3 (10.1.251.5/24) interface is not going to be part of the redundant interface.

    I gave Management0/0 an IP address of 10.1.254.5 and was able to connect a Win7 box to Management0/0, use 10.1.254.5 as a gateway, and ping another address on the 10.1.251.0/24 network, but I can't ping the interface (10.1.254.5) itself. I also can't use ASDM/SSH to log onto the ASA at 10.1.254.5. I set up rules in Configuration > Device Management > Management Access > ASDM/HTTPS/Telnet/SSH that look like the original rules for the Ethernet0/1 interface.

    The last thing I can think to try would be to change Configuration > Device Management > Management Access > Management Interface. I'm a bit nervous about changing it; the description of it is a bit vague. What is it going to do if I change it? What is the correct way to change a management interface?
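
    Note: for comparison, the equivalent CLI is sketched below; the nameif and permitted network are assumptions based on the addressing above. The http/ssh lines are what actually allow ASDM/SSH on a given interface, and an icmp permit line controls whether the ASA answers pings to that interface:

        interface Management0/0
         nameif management
         security-level 100
         ip address 10.1.254.5 255.255.255.0
         management-only
        !
        http server enable
        http 10.1.254.0 255.255.255.0 management
        ssh 10.1.254.0 255.255.255.0 management
        icmp permit 10.1.254.0 255.255.255.0 management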

  • E-mail duplication problem

    - by Gavin Osborn
    I have taken out a hosting agreement with a well-respected hosting provider for a couple of internet-facing servers. We have deployed several applications to these servers which send various e-mails back to us for reporting purposes.

    Context:

    - Each server runs Windows Server 2003 R2 with the IIS 6.0 SMTP service installed.
    - Each application is configured to use the local instance of IIS to send e-mails.
    - The external IP address of each server is mapped to a particular domain, e.g.: server1.mydomain.com, server2.mydomain.com
    - These e-mails are sent from a company domain name and not the domain name of the hosted servers (e.g.: [email protected])

    Symptoms: A small number (<1%) of e-mails sent from these applications appear to be duplicated. These are exact duplicates in terms of both content and message headers.

    The Fix: I contacted my hosting provider and they told me this was a common problem and instructed me to:

    1. Change the HELO response of your mail server service to an FQDN (server1.mydomain.com and server2.mydomain.com).
    2. Create a DNS A record that resolves the FQDN of your mail server to the primary IP address of your sending mail server.
    3. Create a PTR record that resolves your primary IP address back to your mail server's FQDN.
    4. In the sending domain's (mycompanydomain.com) DNS zone file, add the appropriate SPF record for your hosted servers, e.g.: v=spf1 a mx include:mydomain -all

    The Problem Continues: I made all of the changes as prescribed above. I was a little hesitant, because these steps seemed to be more for stopping messages getting blocked than for stopping them being duplicated, but I am certainly no expert in these matters. It has been 5 days since I applied this fix and the problem still persists. I am certain that these problems are not a bug in the software, because they are 4 different applications installed on 2 different servers, all of which are exhibiting this strange behaviour. This behaviour has also not been seen in our UAT environment. Were my hosts correct to suggest this fix? If not, does anyone know what could be the cause of this problem?

    Many Thanks
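
    Note: a sketch for verifying from the outside that steps 2-4 actually took effect; the record names are the ones quoted above and the IP is a placeholder:

        # the HELO name should resolve forward...
        dig +short A server1.mydomain.com
        # ...and the IP should resolve back to it
        dig +short -x YOUR.SERVER.IP
        # and the sending domain's TXT should now carry the SPF record
        dig +short TXT mycompanydomain.com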

  • Unable to log in through Varnish cache

    - by ArunS
    I am setting up an ActiveCollab site on my new server. The setup is like below:

        Internet --- varnish ---- apache

    But I am not able to log in to the site through the Varnish cache, although I can log in directly through Apache. Here is my VCL file:

        backend default {
            .host = "localhost";
            .port = "8080";
        }

        acl purge {
            "localhost";
        }

        sub vcl_recv {
            if (req.request == "PURGE") {
                if (!client.ip ~ purge) {
                    error 405 "Not allowed.";
                }
                return(lookup);
            }
            if (req.url ~ "^/$") {
                unset req.http.cookie;
            }
        }

        sub vcl_hit {
            if (req.request == "PURGE") {
                set obj.ttl = 0s;
                error 200 "Purged.";
            }
        }

        sub vcl_miss {
            if (req.request == "PURGE") {
                error 404 "Not in cache.";
            }
            if (!(req.url ~ "wp-(login|admin)")) {
                unset req.http.cookie;
            }
            if (req.url ~ "^/[^?]+.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.|)$") {
                unset req.http.cookie;
                set req.url = regsub(req.url, "\?.$", "");
            }
            if (req.url ~ "^/$") {
                unset req.http.cookie;
            }
        }

        sub vcl_fetch {
            if (req.url ~ "^/$") {
                unset beresp.http.set-cookie;
            }
            if (!(req.url ~ "wp-(login|admin)")) {
                unset beresp.http.set-cookie;
            }
        }

    When I try to log in through Varnish, I am redirected back to the login page. If I enter a wrong password, it does ask me to enter the correct password.
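
    Note: the vcl_fetch above strips Set-Cookie from every URL that doesn't match wp-(login|admin), a WordPress pattern, so ActiveCollab's session cookie would never reach the browser, which produces exactly this silent bounce back to the login page. A hedged sketch of the usual shape of the fix; the cookie name is an assumption to check in the browser's dev tools:

        sub vcl_recv {
            # pass anything that carries a session cookie instead of stripping it
            if (req.http.Cookie ~ "acsid") { return(pass); }
        }
        sub vcl_fetch {
            # never cache a response that is trying to set a cookie (login included)
            if (beresp.http.Set-Cookie) { return(pass); }  # hit_for_pass on Varnish 3
        }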

  • Windows 7 computer name changing on its own?

    - by DC
    Very odd problem... I have a Dell Latitude D830 with XP Pro that has been running on my local domain for many years. I recently installed Windows 7 Enterprise on the D830 using a brand new HDD, so that I could still use XP if I needed to by just swapping out the HDDs. I added the W7 system to my domain using a completely different machine name than that used for the XP system, and everything seemed to be functioning as it should.

    On boot-up over the last 2 weeks or so, I occasionally (3 times now) get to the login screen and try to log in to the domain, only to get an error saying that the computer name is not a trusted machine in the domain I'm trying to log in to. Come to find out that the machine name on the W7 system has somehow been changed to that of my old XP system. If I then change the name back to the correct one on the W7 system, disjoin the domain, reboot, and add the machine back into the domain, all is well for an unknown period of time until this happens again.

    This last time, I know for a fact that everything was fine the day before when I shut down the system. I came in today, powered up the system, and the machine name had been changed to that of my old XP system again. Has anybody else seen this behavior, or have any ideas on what could be causing it? Thanks!

  • Keytool and SSL Apache config

    - by Safari
    I have a question about SSL certificates... I generated a certificate using this keytool command:

        keytool -genkey -alias myalias -keyalg RSA -keysize 2048

    and I used this command to export the certificate:

        keytool -export -alias myalias -file certificate.crt

    So I have a .crt file. Now I would like to configure my Apache SSL module. I need to use keytool; at the moment I can't use OpenSSL. How can I configure the module if I have only this certificate.crt file? I see these sections in my ssl.conf:

        # Server Certificate:
        # Point SSLCertificateFile at a PEM encoded certificate. If
        # the certificate is encrypted, then you will be prompted for a
        # pass phrase. Note that a kill -HUP will prompt again. A new
        # certificate can be generated using the genkey(1) command.
        #SSLCertificateFile /etc/pki/tls/certs/localhost.crt

        # Server Private Key:
        # If the key is not combined with the certificate, use this
        # directive to point at the key file. Keep in mind that if
        # you've both a RSA and a DSA private key you can configure
        # both in parallel (to also allow the use of DSA ciphers, etc.)
        #SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

        # Server Certificate Chain:
        # Point SSLCertificateChainFile at a file containing the
        # concatenation of PEM encoded CA certificates which form the
        # certificate chain for the server certificate. Alternatively
        # the referenced file can be the same as SSLCertificateFile
        # when the CA certificates are directly appended to the server
        # certificate for convinience.
        #SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt

    How can I configure the correct section?
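
    Note: the sticking point is that mod_ssl needs the private key as a PEM file, and keytool alone will not print a private key; the closest it gets is converting the keystore to PKCS12, after which something that reads PKCS12 (OpenSSL, typically) must extract the PEM pair. A sketch, with the keystore path assumed to be the default:

        # keytool side: convert the JKS keystore to PKCS12
        keytool -importkeystore -srckeystore ~/.keystore -srcalias myalias \
          -destkeystore server.p12 -deststoretype PKCS12
        # OpenSSL side (whenever it becomes available): split out key and cert
        openssl pkcs12 -in server.p12 -nocerts -nodes -out localhost.key
        openssl pkcs12 -in server.p12 -nokeys -clcerts -out localhost.crt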

  • How to test an HTTPS URL with a given IP address

    - by GreatFire
    Let's say a website is load-balanced between several servers. I want to run a command to test whether it's working, such as curl DOMAIN.TLD. So, to isolate each IP address, I specify the IP manually. But many websites may be hosted on the server, so I still provide a host header, like this:

        curl IP_ADDRESS -H 'Host: DOMAIN.TLD'

    In my understanding, these two commands create the exact same HTTP request. The only difference is that in the latter one I take the DNS lookup part out of cURL and do it manually (please correct me if I'm wrong). All well so far.

    But now I want to do the same for an HTTPS URL. Again, I could test it like this:

        curl https://DOMAIN.TLD

    But I want to specify the IP manually, so I run:

        curl https://IP_ADDRESS -H 'Host: DOMAIN.TLD'

    Now I get a cURL error:

        curl: (51) SSL: certificate subject name 'DOMAIN.TLD' does not match target host name 'IP_ADDRESS'.

    I can of course get around this by telling cURL not to care about the certificate (the "-k" option), but it's not ideal. Is there a way to isolate the IP address being connected to from the host being certified by SSL?
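
    Note: this is exactly what curl's --resolve option is for (available since curl 7.21.3): it pre-loads curl's DNS cache with a chosen IP, so the TLS handshake still uses the real hostname and the certificate check passes:

        # pin DOMAIN.TLD:443 to one backend without touching /etc/hosts
        curl --resolve DOMAIN.TLD:443:IP_ADDRESS https://DOMAIN.TLD/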

  • Using URL rewrite module for http to https redirect

    - by johnnyb10
    Following ruslany's suggestion on the URL Rewrite Tips page here, I'm trying to use URL Rewrite to redirect http:// requests for my site to https://. I've written and tested the rule using a test site I set up, and so now the final piece is to create a second site (http) to redirect to my https site. (I need to use a second site because I don't want to uncheck the "Require SSL encryption" checkbox on my existing site.)

    I'm an IIS newbie, so my question is: how do I do this? Should I create a site with the same name and host header, only it will be bound to http? Will IIS let me create a site with the same name? I don't want to screw anything up with my existing site (which is a SharePoint site, currently used by external users). That site currently has http and https bound to it.

    So my assumption is that, using IIS (not SharePoint), I will create a new site (http only) with the same name and host header as my existing site, and add the URL Rewrite rule to the http site. And then I guess I should remove the http binding from my existing site? Does that seem correct? Any advice, gotchas, etc. would be appreciated. Thanks.
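
    Note: for reference, the redirect rule described in ruslany's article boils down to a web.config fragment like this sketch on the http-only site; the rule name is arbitrary and the bindings are yours to fill in:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="HTTP to HTTPS" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTPS}" pattern="off" />
                </conditions>
                <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>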

  • ISA 2004 - banned site rule causes slow internet

    - by Holian
    Hi Gods,

    We have Windows Server 2003 with ISA 2004. Our clients use the internet through a proxy. We have two ISA rules:

        Order  Name     Action  Protocols     From/Listener                  To            Condition
        1.     traffic  ALLOW   all outbound  all networks                   all networks  all users
        2.     FTP      ALLOW   FTP Server    EXTERNAL/INTERNAL/Local host   10.1.1.1

    We have to ban a few webpages (like facebook, youtube, etc.), so we made a new rule:

        0.     banned   DENY    HTTP          internal                       denied pages  all users

    In the denied pages we have the *.facebook.com domain set. After we enable this rule, the entire internet slows down. The banning rule works well and redirects to an internal site, but the other sites... If I open a page, it normally takes 3-10 seconds to load, but after this rule the time is 2-4 minutes. In the monitoring/logging menu we get a few FAILED CONNECTION ATTEMPT entries like:

        Log type: Web Proxy (Forward)
        Status: 304 Not Modified
        Rule: All local traffic
        Source: Internal ( 10.1.1.1:0 )
        Destination: External ( 172.24.28.22:3128 )
        Request: GET http://www.konyvelozona.hu/wp-content/uploads/nyugdijas-holgy-2.jpg
        Filter information: Req ID: 17270b72
        Protocol: http
        User: anonymous
        Additional information
        Client agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.3072...
        Object source: Verified Cache
        Processing time: 9047
        Cache info: 0x18801002
        MIME type: -

    In the event log we get a few entries like:

        Description: The Web Proxy filter failed to bind its socket to 10.1.1.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d

        The Web Proxy filter failed to bind its socket to 127.0.0.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d

    If I type netstat -o -n -a | findstr 0.0:80, then I get:

        tcp  0.0.0.0:80    0.0.0.0:0  LISTEN  4
        udp  0.0.0.0:8031  *.*        2780
        udp  0.0.0.0:8082  *.*        2780

    Some months ago we installed XAMPP, but now we only use MySQL; the Apache service is stopped. In the XAMPP port check menu I see:

        Service        Port  Status
        Apache (http)  80    Process: System

    Maybe this is the problem? I don't know what I should do now... Thank you folks.

  • Is there a limit when setting a php_admin_value in php-fpm?

    - by PeeHaa
    I am trying to set a large value in the configuration of a pool in php-fpm, but at some point it just doesn't start anymore.

        php_admin_value[disable_functions] = dl,exec,passthru,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source,pcntl_exec,include,include_once,require,require_once,posix_mkfifo,posix_getlogin,posix_ttyname,getenv,get_current_use,proc_get_status,get_cfg_va,disk_free_space,disk_total_space,diskfreespace,getcwd,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_set,mail,proc_nice,proc_terminate,proc_close,pfsockopen,fsockopen,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,fopen,tmpfile,bzopen,gzopen,chgrp,chmod,chown,copy,file_put_contents,lchgrp,lchown,link,mkdi,move_uploaded_file,rename,rmdi,symlink,tempnam,touch,unlink,iptcembed,ftp_get,ftp_nb_get,file_exists,file_get_contents,file,fileatime,filectime,filegroup,fileinode,filemtime,fileowne,fileperms,filesize,filetype,glob,is_di,is_executable,is_file,is_link,is_readable,is_uploaded_file,is_writable,is_writeable,linkinfo,lstat,parse_ini_file,pathinfo,readfile,readlink,realpath,stat,gzfile,create_function

    When trying to restart php-fpm, it fails with the following message:

        Stopping php-fpm: [ OK ]
        Starting php-fpm: [20-Oct-2013 22:31:52] ERROR: [/etc/php-fpm.d/codepad.conf:235] value is NULL for a ZEND_INI_PARSER_ENTRY
        [20-Oct-2013 22:31:52] ERROR: Unable to include /etc/php-fpm.d/codepad.conf from /etc/php-fpm.conf at line 235
        [20-Oct-2013 22:31:52] ERROR: failed to load configuration file '/etc/php-fpm.conf'
        [20-Oct-2013 22:31:52] ERROR: FPM initialization failed
        [FAILED]

    When I remove the last disabled function (create_function), it starts again. I also tried with other functions, but I get the same error, so it's not related to the create_function function itself. The string is currently just over 1 KB in size, so it looks like I have hit a limit here. Is my assumption correct? Is there a way to overcome this limit? I also tried to add another php_admin_value[disable_functions] underneath it (hoping it would be appended), but that didn't work (it just used the first one).
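
    Note: the symptom matches php-fpm reading pool files with a fixed 1024-byte line buffer, so everything past that point is truncated and the parser sees an empty value; treating that buffer as the limit is an assumption. A sketch of the usual escape hatch: keep the long list out of the pool file and put it in an ini file picked up by PHP's normal ini scanner, which does not appear to share that line cap (also an assumption to verify). The trade-off is that a php.d ini applies to every pool, not just this one:

        ; /etc/php.d/zz-hardening.ini
        ; full list copied from the pool file above; shortened here for the sketch
        disable_functions = dl,exec,passthru,shell_exec,system,proc_open,popen,create_function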
