Search Results

Search found 21955 results on 879 pages for 'self host'.


  • Time drift in Cloud Server - need to manipulate GRUB config

    - by Aditya Advani
    We are hosting a VPS on a popular host and are experiencing a regular forward time drift of several minutes a day (approx. 7). Linux kernel: 2.6.18-164.11.1.el5. Distro: CentOS release 5.4 (Final). We reached out to our hosting provider and their support advised us: "This is a known issue with Cloud Servers. To fix this you will need to add one line to your grub config located at /boot/grub/menu.lst. The line you need to add is: noapic nolapic divider=10 nolapic_timer. This should correct this issue. You will need to restart after this is added in." Because I am wary of manipulating GRUB (mostly I'm terrified that our server may fail to restart), I ask you, the pro *nix admins: where exactly in this file does the recommended insertion below go?

        # line from 1&1 for time syncing issue (Case 5163)
        noapic nolapic divider=10 nolapic_timer

    Please specify where exactly, and whether the order of the options matters. Why is the block below "title CentOS ..." indented? If someone could give me an overview of how this works, or point me to a resource that's easy to follow, that's what I'm looking for immediately: a light overview or basic understanding of what I'm doing. If GRUB and bootloaders are a deep dark treasure trove of kernel hacking or something, that's great; well-recommended in-depth resources are also very welcome. This is my current /boot/grub/menu.lst:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        #boot=/dev/sda
        # serial --unit=0 --speed=57600
        terminal --timeout=5 serial console
        timeout=5
        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    MOST IMPORTANT: I need to know where in the file above it is appropriate to paste the suggested line, so I can confidently restart my VPS after changing the GRUB config.
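    A minimal sketch of what the change would look like, assuming the provider's options are meant as kernel boot parameters (which is how noapic, nolapic, divider= and nolapic_timer are normally used): they are appended to the end of the existing kernel line of the entry you boot, not added as a separate line, and their relative order does not matter.

        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty noapic nolapic divider=10 nolapic_timer
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    The indentation under the "title" line is purely cosmetic; GRUB treats everything between one title and the next as commands belonging to that entry.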

    Read the article

  • Which apache/mysql/php package is best for windows?

    - by crosenblum
    I have tried AppServNetwork, which was the best so far, but I haven't seen them do an update in ages, and EasyPHP is just slow to load. WAMP and XAMPP both state in their descriptions that they are not for production. I do not plan to host this site (or the sites I am working on) publicly, but I do want a fast-loading Apache/MySQL/PHP server for development purposes. I used to really like WLMP, which is Lighttpd for Windows, but that project seems unmaintained or abandoned. I refuse to use IIS, but I have no desire to get into any wars over it. I run Windows XP SP3 on my home PC. I will need a web server set up for professional work, as well as some fun websites I am working on. I just want it fast enough that I can run it via localhost and not take forever to load in the browser. Thank you... I plan to do mostly PHP programming, and perhaps ColdFusion, with this.

    Read the article

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using NexentaStor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 midline (7,200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMware ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool.

    Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the "zfs destroy -r vol1/filesystem" command. Ugly. I found the following two OpenSolaris bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server.

        http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390
        http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824

    In the future, I'll probably take the advice given in one of the bugzilla workarounds: "Workaround: Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled."

    Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.
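    For anyone in a similar position, a hedged sketch of how to gauge the risk before destroying a deduplicated dataset (commands from memory; verify against your ZFS release): the destroy has to walk the entire dedup table, which on 8GB of RAM with no L2ARC will mostly be read from disk.

        # Summary of dedup table (DDT) usage for the pool
        zpool status -D vol1
        # Detailed DDT histogram; total entries times roughly 300-400 bytes each
        # gives a rough estimate of the RAM needed to hold the table in core
        zdb -DD vol1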

    Read the article

  • Task Scheduler not able to execute .vbs successfully

    - by Django Reinhardt
    Hi there, I've got a weird problem which will hopefully have an obvious solution for some enlightened soul. We have several daily tasks we run via a .vbs script on our server (through Task Scheduler), and for months it has been fine, but recently we've hit a problem. The .vbs script stopped executing successfully... but oddly it worked fine when run manually! The error given in these circumstances was always "Timeout". We thought we'd try a little creative thinking and run the .vbs another way: via a .bat file. Again we hit weird issues, but with a little more debugging information this time around. The .bat file is nothing more than:

        CScript "C:\location\script.vbs" > Log.txt

    But Task Scheduler fails with the following error:

        0x1: An incorrect function was called or an unknown function was called.

    The log.txt file says:

        CScript Error: Initialization of the Windows Script Host failed.
        (Not enough storage is available to process this command.)

    But get this: the .bat file executes perfectly (vbs script and all) if it's run with a double click! There's only a problem when it's run by Task Scheduler. What the hell? We're running Windows Server 2008 R2 (x64), and yes, Task Scheduler's results are the same whether the user is logged in or not. Also, the user that can run the scripts successfully manually is the same user that runs the scripts in Task Scheduler. Thanks for any help with this weird problem!
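    One variation that is sometimes worth testing in cases like this (a sketch, not a confirmed fix; paths are placeholders): give the scheduled task fully qualified paths and an explicit working directory, since tasks started without an interactive session get a minimal environment.

        @echo off
        rem Run the script with an explicit engine path and working directory
        cd /d C:\location
        C:\Windows\System32\cscript.exe //NoLogo //B "C:\location\script.vbs" > "C:\location\Log.txt" 2>&1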

    Read the article

  • Sendmail delivering locally instead of to MTA in MX record

    - by CreativeNotice
    Ok, so I've got a box named websrv1.mydomain.com. It's a web server running Ubuntu, Apache2, sendmail, etc. My email is outsourced to a third party, so in my DNS I've got MX set to mx.thirdparty.net. I've no reason to accept incoming mail on my web server; every email should be sent to the third party. This works correctly except when sending mail from the web server itself (via cron or the console). From my web server, if I send an email to [email protected], it just disappears. No errors, nothing in dead.letter, nothing. I can send to any other address with no issues. If I send to [email protected] it's delivered locally, which is fine.

    1) Doing an nslookup shows the MX record is correct.
    2) Running /mx mydomain.com from sendmail -bt returns the correct result.
    3) Running sendmail -bv [email protected] returns:

        sudo sendmail -bv [email protected]
        [email protected]... deliverable: mailer esmtp, host mydomain.com., user [email protected]

    4) Running 3,0 [email protected] returns:

        3,0 [email protected]
        canonify input: me @ mydomain . com
        Canonify2 input: me
        Canonify2 returns: me
        canonify returns: me
        parse input: me
        Parse0 input: me
        Parse0 returns: me
        Parse1 input: me
        MailerToTriple input: me
        MailerToTriple returns: me
        Parse1 returns: $# esmtp $@ mydomain . com . $: me
        parse returns: $# esmtp $@ mydomain . com . $: me

    So I'm at a loss. Sendmail seems to see the MX record, but it's not using it.
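    Two quick server-side checks that are often relevant here (a sketch; file locations vary slightly by distro), since a domain that sendmail considers local will short-circuit the MX lookup entirely:

        # If mydomain.com appears here (or in class w), sendmail treats it as local
        grep -i mydomain.com /etc/mail/local-host-names /etc/hosts
        # Watch what actually happens to the message while sending a test mail
        tail -f /var/log/mail.log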

    Read the article

  • Cygwin, ssh, and git on Windows Server 2008

    - by Paul
    Hi everyone. I'm trying to set up a git repository on an existing Windows 2008 (R2) server. I have successfully installed Cygwin and added git and ssh to the packages, and everything works perfectly (thanks to Mark for his article on it). I can ssh to localhost on the server, and I can do git operations locally on the server. When I try to do either from the client, however, I get the "port 22, Bad file number" error. Detailed SSH output is limited to this:

        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug1: Connecting to {myserver} [{myserver}] port 22.
        debug1: connect to address {myserver} port 22: Attempt to connect timed out without establishing a connection
        ssh: connect to host {myserver} port 22: Bad file number

    Google tells me that this usually means I'm being blocked by a firewall. So I double-checked the firewall settings on the server; the rule is there allowing port 22 traffic. I even tried turning off the firewall briefly, with no change in behavior. I can ssh just fine from that client to other servers. The hosting company swears there's no other firewall blocking that server on port 22 (or any other port, they claim, but I find that hard to believe). I have another trouble ticket in with them, just in case the first support person was full of it, but meanwhile I wanted to see if anyone could think of anything else it could be. Thanks, Paul
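    Since the timeout suggests the packets never reach sshd at all, a short sketch of server-side checks worth running (assuming the standard Cygwin sshd service name):

        # From a Cygwin shell on the server: is sshd listening, and on which address?
        netstat -ano | grep ':22'
        # Is the Cygwin sshd service installed and running?
        cygrunsrv -Q sshd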

    Read the article

  • Internet Explorer / Windows 7 does not want to show HTML file from local network drive

    - by Jaanus
    Setup: I have Windows 7 running inside VirtualBox on a Mac OS X host. I have a shared drive with some HTML files that I am mounting as a local drive W: in Windows, from the VirtualBox server \\VBOXSVR. I want to look at them with a browser in Windows. Chrome in Windows 7 opens and shows those HTML files just fine (file:///W:/welcome.html). But Internet Explorer does not, and shows this error instead of the files:

        Internet Explorer cannot display the web page

        What you can try:
        [button Diagnose Connection Problems]
        More information

        This problem can be caused by a variety of issues, including:
        - Internet connectivity has been lost.
        - The website is temporarily unavailable.
        - The Domain Name Server (DNS) is not reachable.
        - The Domain Name Server (DNS) does not have a listing for the website's domain.
        - If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section.

    For the internet zone, the status bar shows: Internet | Protected Mode: On. IE settings are a mystery to me, and I could possibly get it to work by tweaking them, but I don't know which ones. How do I make IE show the same files that Chrome is happy to show? (Chrome showing them means that the files themselves are fine; there is something about the setup that just makes IE be a diva.)

    Read the article

  • Selenium server causes crazy load on server - how to prevent?

    - by Eric
    I'm running this Linux:

        Linux host.themepark.com 2.6.32-220.4.1.el6.x86_64 #1 SMP Tue Jan 24 02:13:44 GMT 2012 x86_64 x86_64 x86_64 GNU/Linux

    And I run the Selenium stand-alone server on my box with this command:

        java -jar /home/l/cron/selenium-server-standalone-2.24.1.jar > /logs/selenium.log 2>&1 &

    Here's the problem: as soon as I do that, the server load starts skyrocketing. I even went back and downloaded older versions of the Selenium server, but got the same results with 2.23.1, 2.23.0, and 2.19.0. Note that the server load starts going nuts before I issue ANY commands to Selenium or do anything else. All I'm doing is firing up the server, per the command above. This used to work perfectly on my server without causing massive load, so something has changed, but I'm not sure what. My server is a managed VPS so I don't know if there is some kind of auto-update script that kicked in or what... but it's a problem.

    (Incidentally, even though the server load climbs like crazy, everything still works: after firing up Selenium, my server creates a screen with Xvfb so Firefox will be happy, then a PHP script talks to Selenium to do what it needs to do before shutting everything down. It takes a LONG time, and the load gets all the way up to 8 [!!!] before it is finished, which kills my web server and makes the main site horribly unresponsive... but it does get everything done.)

    Any suggestions for what is going on, why it's started doing this, and/or, most importantly, how I can make Selenium not kill the server when it starts up, would be GREATLY appreciated!
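    To narrow down whether the load comes from the Selenium JVM itself or from the Xvfb/Firefox processes it spawns, a quick diagnostic sketch (standard tools, nothing Selenium-specific):

        # Per-thread CPU usage of the Selenium JVM right after startup
        top -H -p $(pgrep -f selenium-server-standalone)
        # Anything else spinning on the box at the same time?
        ps aux --sort=-%cpu | head -15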

    Read the article

  • Error while detecting start profile for instance with ID _u

    - by Techboy
    I am upgrading a SAP Java instance and am getting the error shown in the log below. There are no backup profile files in the profiles directory, and the string "_u" does not exist in the default, start or instance profile for instance SCS01. Please can you suggest what the issue might be?

        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.initialize(AbstractPhaseType.java:758) [Thread[main,5,main]]: Phase PREPARE/INIT/DETERMINE_PROFILES has been started.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.initialize(AbstractPhaseType.java:759) [Thread[main,5,main]]: Phase type is com.sap.sdt.j2ee.phases.PhaseTypeDetermineProfiles.
        Mar 13, 2010 12:09:01 PM [Info]: ...ap.sdt.j2ee.phases.PhaseTypeDetermineProfiles.checkVariables(PhaseTypeDetermineProfiles.java:284) [Thread[main,5,main]]: All 4 required variables exist in variable handler.
        Mar 13, 2010 12:09:01 PM [Info]: ....j2ee.tools.sysinfo.AbstractInfoController.updateProfileVariable(AbstractInfoController.java:302) [Thread[main,5,main]]: Parameter /J2EE/StandardSystem/DefaultProfilePath has been detected. Parameter value is \\server1\SAPMNT\PI7\SYS\profile\DEFAULT.PFL.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.j2ee.tools.sysinfo.ProfileDetector.detectInstance(ProfileDetector.java:340) [Thread[main,5,main]]: Instance SCS01 with profile \\server1\SAPMNT\PI7\SYS\profile\PI7_SCS01_server1, start profile \\server1\SAPMNT\PI7\SYS\profile\START_SCS01_server1, and host server1 has been detected.
        Mar 13, 2010 12:09:01 PM [Error]: com.sap.sdt.ucp.phases.AbstractPhaseType.doExecute(AbstractPhaseType.java:862) [Thread[main,5,main]]: Exception has occurred during the execution of the phase.
        Mar 13, 2010 12:09:01 PM [Error]: com.sap.sdt.j2ee.tools.sysinfo.ProfileDetector.detectInstance(ProfileDetector.java:297) [Thread[main,5,main]]: Error while detecting start profile for instance with ID _u.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanup(AbstractPhaseType.java:905) [Thread[main,5,main]]: Phase PREPARE/INIT/DETERMINE_PROFILES has been completed.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanup(AbstractPhaseType.java:906) [Thread[main,5,main]]: Start time: 2010/03/13 12:09:01.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanup(AbstractPhaseType.java:908) [Thread[main,5,main]]: End time: 2010/03/13 12:09:01.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanup(AbstractPhaseType.java:909) [Thread[main,5,main]]: Duration: 0:00:00.078.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanup(AbstractPhaseType.java:910) [Thread[main,5,main]]: Phase status is error.

    Read the article

  • Puppet: array in parameterized classes VS using resources

    - by Luke404
    I have some use cases where I want to define multiple similar resources that should end up in a single file (via a template). As an example I'm trying to write a puppet module that will let me manage the mapping between MAC addresses and network interface names (writing udev's persistent-net-rules file from puppet), but there are also many other similar use cases. I searched around and found that it could be done with the new parameterised classes syntax; if implemented that way it should end up being used like this:

        node { "myserver.example.com":
          class { "network::iftab":
            interfaces => {
              "eth0" => { "mac" => "ab:cd:ef:98:76:54" }
              "eth1" => { "mac" => "98:76:de:ad:be:ef" }
            }
          }
        }

    Not too bad, I agree, but it would rapidly explode when you manage more complex stuff (think network configurations like in this module, or any other multiple-complex-resources-in-a-single-config-file stuff). In a similar question on SF someone suggested using Pienaar's puppet-concat module, but I doubt it could get any better than parameterised classes. What would be really cool and clean in the configuration definition would be something like the built-in host type: its usage is simple, pretty and clean, and it naturally maps to multiple resources that end up being configured in a single place. Transposed to my example it would be like:

        node { "myserver.example.com":
          interface {
            "eth0":
              "mac" => "ab:cd:ef:98:76:54",
              "foo" => "bar",
              "asd" => "lol",
            "eth1":
              "mac" => "98:76:de:ad:be:ef",
              "foo" => "rab",
              "asd" => "olo",
          }
        }

    ...that looks much better to my eyes, even with 3x options to each resource. Should I really be passing arrays to parameterised classes, or is there a better way to do this kind of stuff? Is there some accepted consensus in the puppet [users|developers] community? By the way, I'm referring to the latest stable release of the 2.7 branch and I am not interested in compatibility with older versions.
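    For what it's worth, the interface-like syntax sketched above is essentially what a defined resource type gives you. A hedged sketch (type name, parameters and template path are hypothetical, and it leans on the puppet-concat module mentioned in the question):

        define network::interface($mac, $foo = undef, $asd = undef) {
          # each declared interface contributes one fragment to the single target file
          concat::fragment { "persistent-net-${name}":
            target  => "/etc/udev/rules.d/70-persistent-net.rules",
            content => template("network/persistent-net-rule.erb"),
          }
        }

        # usage, per node:
        network::interface { "eth0": mac => "ab:cd:ef:98:76:54" }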

    Read the article

  • Poor NFS Performance: OpenFiler

    - by Safin09
    Good day everyone. I have an issue with OpenFiler, a Linux-based operating system that converts a computer into a SAN/NAS appliance. Here is the problem. In my environment we have two NetApp StoreVault 500 appliances that I normally perform backups to over an NFS share. There are two backup cron jobs that use ghettoVCB to back up two groups of VMs. One group is a pool of 3 VMs; this takes 13 minutes to complete. A second job backs up a pool of 5 VMs to a second StoreVault appliance and takes 2 hours.

    We then installed OpenFiler on an old server that has 2 core Xeon processors, with software RAID 5 in place. When performing the same backups to an NFS OpenFiler share, the first backup job, which used to take 13 minutes, takes around 4 hours. The second backup job, which used to take 2 hours, takes almost 10 hours to complete. This is unacceptable, especially considering the strain placed on the host ESX server. I assumed that the CPU overhead of software RAID 5 explained the long backup times.

    I then installed OpenFiler on a second server, an IBM x306 machine with a P4 Intel processor. This time there is no software RAID, or any RAID at all: a single 750GB hard drive contains the OS, and the rest of the disk is used to back up VMs to an NFS share. I performed the first backup job of the pool of 3 VMs. This time the backup took 1.5 hours to complete instead of 13 minutes! Is OpenFiler simply poor as an NFS server? Has anyone else had these issues with OpenFiler?
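    Two things that are usually worth measuring before blaming OpenFiler itself (a sketch; adjust the path to wherever your share lives):

        # Raw sequential write speed on the OpenFiler box, bypassing NFS entirely
        dd if=/dev/zero of=/mnt/backupvol/testfile bs=1M count=1024 oflag=direct
        # How the share is exported: a "sync" export forces every NFS write to hit
        # disk before it is acknowledged, which is dramatically slower than "async"
        exportfs -v

    If the raw dd numbers are fine but NFS is slow, the export options and the rsize/wsize mount options on the ESX side are the usual suspects.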

    Read the article

  • Public DNS redirect subdomain to Windows Server 2003 DNS

    - by user125248
    I'm a programmer by trade but often dabble in sysadmin tasks and responsibilities. I have recently been tasked with setting up a Windows Server 2003 networking environment for a small business with multiple branches. The business already has a domain name they use to host a website at www.example.com. Currently the DNS nameservers are at Zerigo, and I would very much like it to remain that way (as they specialize in providing DNS services and do it very well). We also have a bunch of other subdomains we use to conveniently connect to the various branches that have static IPs assigned by their ISPs, so we're able to connect easily to branch1.example.com.

    Is it possible to 'redirect' all intranet.example.com DNS requests to a Windows box? I've been doing a little reading and I see there are NS records that might be able to do this; the Windows DNS server could then perform all of the lookups for that subdomain, say, server1.intranet.example.com or client5.intranet.example.com. This would seem better to me than registering a new domain name for the organisation, as keeping a single domain name makes more organisational sense.
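    Yes, that pattern is subdomain delegation. A sketch of what the records at Zerigo would look like (host name and address are placeholders; note the delegated server must be reachable from wherever queries originate, so for purely internal names it is often simpler to make the Windows DNS server authoritative for intranet.example.com on the LAN only):

        ; delegate the subdomain to the Windows DNS server
        intranet.example.com.     IN NS   ns-intranet.example.com.
        ; address record for the delegated nameserver
        ns-intranet.example.com.  IN A    203.0.113.10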

    Read the article

  • Has anyone used the sharedband connection bonding product?

    - by John Rennie
    See http://www.sharedband.com/ for details on the product. Obviously Sharedband aren't too keen on giving away their technical secrets, but I would guess that it bonds the connections at the IP layer, i.e. their routers send the IP packets to the Sharedband routers over all available lines, and the Sharedband routers handle all the virtual circuitry and provide the NATing to whatever IP address(es) they've assigned you. It looks like a clever idea, and a good way to provide some resilience over ADSL links. You can even use ADSL links from different ISPs and Sharedband will still bond them for you.

    But I find myself wondering how well it really works, and whether it's worth it. The Draytek routers can already load balance (though not bond) up to four ADSL lines, so the Sharedband product really only offers an advantage if you're hosting servers, i.e. you can have one IP address that accepts incoming connections through all your (working) ADSL lines. But should you really try to host servers on ADSL lines, especially since ADSL upload performance isn't stellar? Wouldn't it be better to use a hosted server, or maybe pay up for a leased line with an SLA? So I'm asking: if anyone is using Sharedband, what do you think of it? JR

    Read the article

  • Getting an error when mounting LVM snapshot

    - by Sandra
    I have migrated a file-based Xen guest to LVM using:

        dd bs=1M if=/dev/zero of=/dev/vg00/vm10
        qemu-img convert ~/vm10.qcow2 -O raw /dev/vg00/vm10

    and changed the Xen domain file for the VM to use the LV instead of the old file. The VM boots up, and now, on the Xen host, I would like to make a snapshot of the running VM:

        # lvcreate --size 10G --snapshot --name vm10-snapshot /dev/vg00/vm10
          Logical volume "vm10-snapshot" created
        # mount /dev/vg00/vm10-snapshot /mnt/snapshot/
        mount: you must specify the filesystem type
        # dmesg |tail
        EXT3 FS on dm-3, internal journal
        EXT3-fs: mounted filesystem with ordered data mode.
        hfs: unable to find HFS+ superblock
        VFS: Can't find ext3 filesystem on dev dm-4.
        hfs: unable to find HFS+ superblock
        hfs: unable to find HFS+ superblock
        VFS: Can't find ext3 filesystem on dev dm-2.
        hfs: unable to find HFS+ superblock
        hfs: unable to find HFS+ superblock
        hfs: unable to find HFS+ superblock

    For some reason it can't see that it is an EXT3 filesystem. I have also tried to mount with -t ext3, but it still didn't mount.

        # lvdisplay
          --- Logical volume ---
          LV Name                /dev/vg00/vm10
          VG Name                vg00
          LV UUID                I1y1vQ-Bac5-5jwW-melh-TY5h-l9NO-qaelKk
          LV Write Access        read/write
          LV snapshot status     source of /dev/vg00/vm10-snapshot [active]
          LV Status              available
          # open                 2
          LV Size                8.00 GB
          Current LE             2048
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:2

          --- Logical volume ---
          LV Name                /dev/vg00/vm10-snapshot
          VG Name                vg00
          LV UUID                GWsOx3-TPpr-GW64-uiMz-u1YN-QU4h-l0Kala
          LV Write Access        read/write
          LV snapshot status     active destination for /dev/vg00/vm10
          LV Status              available
          # open                 0
          LV Size                8.00 GB
          Current LE             2048
          COW-table size         10.00 GB
          COW-table LE           2560
          Allocated to snapshot  0.00%
          Snapshot chunk size    4.00 KB
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:4

    What could the problem be?
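    One likely explanation worth checking (a sketch; device names may differ on your system): the LV holds a whole-disk image, so the ext3 filesystem lives inside a partition that starts part-way into the volume rather than at offset 0, and mounting the bare snapshot device finds no superblock. Mapping the partitions inside the snapshot first usually gets around this:

        # create device-mapper entries for the partitions inside the snapshot
        kpartx -av /dev/vg00/vm10-snapshot
        # then mount the first mapped partition (the exact name may vary)
        mount /dev/mapper/vg00-vm10--snapshot1 /mnt/snapshot/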

    Read the article

  • How to give a user NTFS rights to a folder, via Powershell

    - by Don
    I'm trying to build a script that will create a folder for a new user on our file server, then take the inherited rights away from that folder and add specific rights back in. I have it successfully adding the folder (if I give it a static entry in the script), giving domain admin rights, removing inheritance, etc., but I'm having trouble getting it to use a variable I set as the user. I don't want a static user each time: I want to be able to run this script, have it ask me for a username, and have it then create the folder and give that same user full rights to it, based on the username I've supplied. I can use Smithd as a user, like this:

        New-Item \\fileserver\home$\Smithd -Type Directory

    But I can't get it to reference the user like this:

        New-Item \\fileserver\home$\$username -Type Directory

    Here's what I have (creating a new folder and setting NTFS permissions):

        $username = read-host -prompt "Enter User Name"
        New-Item \\fileserver\home$\$username -Type Directory
        Get-Acl \\fileserver\home$\$username
        $acl = Get-Acl \\fileserver\home$\$username
        $acl.SetAccessRuleProtection($True, $False)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\Domain Admins","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\"+$username,"FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl \\fileserver\home$\$username $acl

    I've tried several ways to get it to work, but no luck. Any ideas or suggestions would be welcome, thanks.
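    A sketch of one way to make the variable version more robust (untested; the domain name is a placeholder). Building the path once in a quoted string avoids any parsing oddities around the $ in the hidden share name, and composing the identity string before the New-Object call keeps the argument list simple:

        $username = Read-Host -Prompt "Enter User Name"
        $path = "\\fileserver\home`$\$username"      # backtick keeps the share's $ literal
        New-Item -Path $path -ItemType Directory | Out-Null

        $acl = Get-Acl -Path $path
        $acl.SetAccessRuleProtection($true, $false)
        $identity = "DOMAIN\$username"               # placeholder domain
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule($identity, "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl -Path $path -AclObject $acl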

    Read the article

  • Ubuntu Postfix Gmail SMTP Relay Not Working

    - by Nick DeMayo
    I currently have postfix set up to relay messages from my websites through Gmail, and up until recently it was working perfectly. However, within the last week or so (not really sure when) I started getting the below error whenever attempting to send an email:

        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[2001:4860:800a::6c]:587: Network is unreachable
        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[173.194.76.109]:587: Connection refused
        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[173.194.76.108]:587: Connection refused

    Here is my configuration file:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        #readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = [my domain name]
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        #myorigin = /etc/mailname
        mydestination = [my host name], localhost.localdomain, localhost
        relayhost = [smtp.gmail.com]:587
        mynetworks = 127.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        inet_protocols = all
        ##########################################
        ##### non debconf entries start here #####
        ##### client TLS parameters #####
        smtp_tls_loglevel=1
        smtp_tls_security_level=encrypt
        smtp_sasl_auth_enable=yes
        smtp_sasl_password_maps=hash:/etc/postfix/sasl/passwd
        smtp_sasl_security_options = noanonymous
        ##### map username@localhost to [email protected] #####
        smtp_generic_maps=hash:/etc/postfix/generic

    Nothing changed on my server, as far as I know... any ideas what could have caused it to stop working?
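    The log shows Postfix trying Gmail's IPv6 address first (unreachable) and then being refused on the IPv4 addresses, so the outbound path to port 587 is worth testing directly. A hedged sketch:

        # force IPv4 so Postfix stops preferring the unreachable IPv6 address
        postconf -e 'inet_protocols = ipv4'
        postfix reload
        # check whether port 587 on Gmail is reachable at all from this host
        telnet smtp.gmail.com 587

    If the telnet connection is refused too, the block sits outside Postfix (host firewall, the provider filtering outbound SMTP, or Gmail rejecting the source IP) rather than in main.cf.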

    Read the article

  • Unable to connect to EC2 instance after "reboot"

    - by KPL
    I am not able to connect to my m1.small instance after rebooting it. I have already associated the public IP with this instance. Upon checking the system log, this seems to be the issue:

        cloud-init-nonet[11.84]: waiting 10 seconds for network device
        cloud-init-nonet[21.85]: waiting 120 seconds for network device
        cloud-init-nonet[141.85]: gave up waiting for a network device.
        Cloud-init v. 0.7.3 running 'init' at Sun, 18 May 2014 07:02:55 +0000. Up 142.54 seconds.
        ci-info: +++++++++++++++++++++++Net device info++++++++++++++++++++++++
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: | Device |   Up  |  Address  |    Mask   |     Hw-Address    |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: |   lo   |  True | 127.0.0.1 | 255.0.0.0 |         .         |
        ci-info: |  eth0  | False |     .     |     .     | 02:43:xx:xx:xx:xx |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    A bunch of these follow the above message:

        2014-05-18 07:02:56,178 - url_helper.py[WARNING]: Calling http://169.254.169.254/2009-04-04/meta-data/instance-id failed 0/120s: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by: [Errno 101] Network is unreachable)]

    This is obviously related to the network interface not coming up correctly. I have tried the following so far:

    - Relaunched a new instance from the custom AMI (created from EBS) of the failing instance. The same error shows up in the logs.
    - Attached a new network interface to the EC2 instance. The error still persists; eth1 shows up in the list, but its "Up" column is False.

    Read the article

  • missing network usage through iptables

    - by Purres
    I inserted a rule into iptables to track traffic to a certain IP address. The VPS server's IP is 192.168.1.5 and the guest OS's IP is 192.168.1.115. I ran 'yum update' inside the guest OS to generate some network traffic, then ran "iptables -vnL" from the hypervisor. However, it only showed network usage to the host, not to the guest:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     source       destination
            0     0            0.0.0.0/0    0.0.0.0/0    destination IP range 192.168.1.115-192.168.1.115
         1853  114K            0.0.0.0/0    0.0.0.0/0    destination IP range 192.168.1.5-192.168.1.5

    I ran tcpdump and the log showed that there are data packets going to the guest OS:

        16:17:43.932514 IP mirrordenver.fdcservers.net.http > 192.168.1.115.34471: Flags [.], seq 17694667:17696115, ack 1345, win 113, options [nop,nop,TS val 1060308643 ecr 1958781], length 1448
        16:17:43.932559 IP 192.168.1.115.34471 > mirrordenver.fdcservers.net.http: Flags [.], ack 17696115, win 15287, options [nop,nop,TS val 1958869 ecr 1060308643], length 0

    Why can't the guest OS network usage be tracked? "iptables -L" returns the INPUT chain as follows:

        Chain INPUT (policy ACCEPT)
        target     prot opt source     destination
                   all  --  anywhere   anywhere     destination IP range 192.168.1.115-192.168.1.115
                   all  --  anywhere   anywhere     destination IP range 192.168.1.5-192.168.1.5
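    A likely reason, with a sketch of how to check it: packets addressed to the guest are forwarded (or bridged) through the hypervisor rather than delivered to it, so they traverse the FORWARD chain, not INPUT. Counting them there usually works:

        # accounting-only rule (no -j target), placed in FORWARD
        iptables -I FORWARD -d 192.168.1.115/32
        iptables -vnL FORWARD

    If the guest is attached via a Linux bridge, also note that bridged frames only pass through iptables when net.bridge.bridge-nf-call-iptables is set to 1.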

    Read the article

  • How to get Multipath IO with a Dell MD3600i into an active/active setup?

    - by Disco
    I'm desperately trying to improve the performance of my SAN connection. Here's what I have:

        [root@xnode1 dell]# multipath -ll
        mpath1 (36d4ae520009bd7cc0000030e4fe8230b) dm-2 DELL,MD36xxi
        [size=5.5T][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac][rw]
        \_ round-robin 0 [prio=200][active]
         \_ 18:0:0:0 sdb 8:16  [active][ready]
         \_ 19:0:0:0 sdd 8:48  [active][ghost]
         \_ 20:0:0:0 sdf 8:80  [active][ghost]
         \_ 21:0:0:0 sdh 8:112 [active][ready]

    And multipath.conf:

        defaults {
            udev_dir              /dev
            polling_interval      5
            prio_callout          none
            rr_min_io             100
            max_fds               8192
            user_friendly_names   yes
            path_grouping_policy  multibus
            default_features      "1 fail_if_no_path"
        }
        blacklist {
            device {
                vendor  "*"
                product "Universal Xport"
            }
        }
        devices {
            device {
                vendor            "DELL"
                product           "MD36xxi"
                path_checker      rdac
                path_selector     "round-robin 0"
                hardware_handler  "1 rdac"
                failback          immediate
                features          "2 pg_init_retries 50"
                no_path_retry     30
                rr_min_io         100
                prio_callout      "/sbin/mpath_prio_rdac /dev/%n"
            }
        }

    And the iSCSI sessions:

        [root@xnode1 dell]# iscsiadm -m session
        tcp: [13] 10.0.51.220:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [14] 10.0.50.221:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [15] 10.0.51.221:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [16] 10.0.50.220:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c

    I'm getting very poor read performance with:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=1000

    The SAN is configured as follows:

        CTRL0,PORT0 : 10.0.50.220
        CTRL0,PORT1 : 10.0.50.221
        CTRL1,PORT0 : 10.0.51.220
        CTRL1,PORT1 : 10.0.51.221

    And on the host (a dual 10GbE Intel DA2 Ethernet card, connected to a 10GbE switch dedicated to SAN traffic):

        IF0 : 10.0.50.1
        IF1 : 10.0.51.1

    My question being: why is the connection set up as 'ghost' and not 'ready', like an active/active configuration?

    Read the article

  • Apache pointing to the wrong version of Python on Ubuntu: how do I change it?

    - by one
    I am setting up a Flask application on an Ubuntu 12.04.3 LTS EC2 instance, and everything seemed to be working well (i.e. I could get to the web page via the publicly available URL) until I tried to import a module (e.g. numpy) and realised that Apache's Python differs from the one I used to compile mod_wsgi, and also from the one I am using. I am running apache2. The apache2 logs show the warnings (specifically, the last line shows the path hasn't changed):

        [warn] mod_wsgi: Compiled for Python/2.7.5.
        [warn] mod_wsgi: Runtime using Python/2.7.3.
        [warn] mod_wsgi: Python module path '/usr/lib/python2.7/:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/lib-tk:/usr/lib$

    I have tried to set the path in my virtual host conf (my Python is located in /home/ubuntu/anaconda/bin along with all of the other libraries):

        WSGIPythonHome /home/ubuntu/anaconda
        WSGIPythonPath /home/ubuntu/anaconda

        <VirtualHost *:80>
            ServerName xx-xx-xxx-xxx-xxx.compute-1.amazonaws.com
            ServerAdmin [email protected]
            WSGIScriptAlias / /var/www/microblog/microblog.wsgi
            <Directory /var/www/microblog/app/>
                Order allow,deny
                Allow from all
            </Directory>
            Alias /static /var/www/microblog/app/static
            <Directory /var/www/FlaskApp/FlaskApp/static/>
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    But I still get the warnings and the Apache Python path hasn't changed. Where do I need to put the relevant directives to point Apache at my Python version and modules (e.g. scipy, numpy etc.)? Separately, could I have avoided this by using virtual environments? Thanks in advance.
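    A heavily hedged sketch of one alternative (directive availability depends on the mod_wsgi version shipped with 12.04, so verify against its documentation): since the "Compiled for / Runtime using" mismatch means the mod_wsgi module Apache loads is linked against a different Python than the Anaconda one, the more reliable fixes are either to rebuild mod_wsgi against the interpreter you actually want, or to run the app in a daemon process pointed at it:

        # inside the VirtualHost; the process-group name is hypothetical
        WSGIDaemonProcess microblog python-home=/home/ubuntu/anaconda python-path=/var/www/microblog
        WSGIProcessGroup  microblog

    A virtualenv built from the same interpreter mod_wsgi was compiled against would likely have avoided the mismatch, since the directives above can only select installs that are binary-compatible with that interpreter.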

    Read the article

  • Unable to install PEM/pkcs12 created by gnutls to Cisco ASA

    - by ACiD GRiM
    I've been pulling my hair out trying to figure out why Cisco devices don't like my certificates. My primary need is to get a trustpoint set up with the CA, cert, and key on the ASA for VPN systems; however, I'm having the same issues on my IOS devices. I created a pkcs12 with OpenSSL a few months ago that imported with no issues, but now that I'm getting ready to move this lab to production I'm using GnuTLS certtool, as I found it adds alt_dns and ip_address fields properly to the certificate (which cost me a few more hairs trying to get working with OpenSSL's ca tool). I'm including the current test certs below; don't worry, I'm not using these in production ;)

    The maddening thing is that after I thought GnuTLS was generating certs incorrectly, I tried making a pkcs12 for a print server and it imported with no issues. Here's my command flow for creating these certs:

        certtool --generate-privkey --disable-quick-random --outfile nn-ca.key
        certtool --generate-self-signed --load-privkey nn-ca.key --outfile nn-ca.crt
        certtool --generate-privkey --disable-quick-random --outfile nn-g0.key
        certtool --generate-certificate --load-privkey nn-g0.key --outfile nn-g0.crt --load-ca-privkey nn-ca.key --load-ca-certificate nn-ca.crt
        openssl pkcs12 -export -certfile nn-ca.crt -in nn-g0.crt -inkey nn-g0.key -out nn-g0.p12
        openssl enc -base64 -in nn-g0.p12 -out nn-g0.base64.p12

    The password for the attached pkcs12 is "ciscohelp" without quotes. Thanks for any help. TestCerts
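    One thing that often trips up ASA/IOS imports, offered as a hedged sketch rather than a confirmed diagnosis: older Cisco code tends to accept only 3DES/SHA1-wrapped PKCS#12 bundles, and the default certificate-encryption algorithm used by the openssl pkcs12 export above can be the part it chokes on. Re-exporting with legacy algorithms looks roughly like this (flag names per the openssl pkcs12 man page; verify on your build):

        openssl pkcs12 -export -certfile nn-ca.crt -in nn-g0.crt -inkey nn-g0.key \
            -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES \
            -out nn-g0.p12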

    Read the article

  • IIS 7 rewriting subdomain to point at a specific port

    - by Tommy Jakobsen
    Having installed Team Foundation Server 2010 on Windows Server 2008, I need an easy URL for our developers to access their repositories. The default URL for the TFS repositories is http://localhost:8080/tfs. Now I want the subdomain tfs.server.domain.com to point at http://localhost:8080/tfs, and when you access tfs.server.domain.com/repos_name it should redirect to http://localhost:8080/tfs/repos_name. How can I do this in IIS 7? I already tried the following rule, but it does not work; I get a 404.

        <rewrite>
            <globalRules>
                <rule name="TFS" stopProcessing="true">
                    <match url="^(?:tfs/)?(.*)" />
                    <conditions>
                        <add input="{HTTP_HOST}" pattern="^tfs.server.domain.com$" />
                    </conditions>
                    <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" />
                </rule>
            </globalRules>
        </rewrite>

    EDIT: I actually got this working by adding a binding for the site on port 80 with the host name tfs.server.domain.com. But using tfs.server.domain.com, I can't authenticate using Windows Authentication. Is there something I need to configure for Windows Authentication? You can see a trace here: http://pastebin.com/k0QrnL0m
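    Since the remaining problem is Windows Authentication failing only under the new host name, the usual suspect is Kerberos: the new name has no service principal registered, so negotiation falls back and fails. A hedged sketch of the standard check (the service account is a placeholder; use whatever identity the TFS application pool runs as):

        rem register an SPN for the friendly host name against the app-pool account
        setspn -S HTTP/tfs.server.domain.com DOMAIN\tfs-service-account

    If the client and the server are the same machine, the loopback check can also block the new name; that is normally handled with the BackConnectionHostNames registry value rather than any IIS setting.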

    Read the article

  • Bash Script - Traffic Shaping

    - by Craig-Aaron
    Hey all, I was wondering if you could have a look at my script and help me add a few things to it. How do I get it to find how many active ethernet ports I have? How do I filter on more than one ethernet port? How do I get this to handle a range of IP addresses? Once I have a few ethernet ports I need to add traffic control to each one.

        #!/bin/bash

        # Name of the traffic control command.
        TC=/sbin/tc

        # The network interface we're planning on limiting bandwidth.
        IF=eth0            # Network card interface

        # Download limit (in mega bits)
        DNLD=10mbit        # DOWNLOAD Limit

        # Upload limit (in mega bits)
        UPLD=1mbit         # UPLOAD Limit

        # IP address range of the machine we are controlling
        IP=192.168.0.1     # Host IP

        # Filter options for limiting the intended interface.
        U32="$TC filter add dev $IF protocol ip parent 1:0 prio 1 u32"

        start() {
            # Hierarchical Token Bucket (HTB) to shape bandwidth
            $TC qdisc add dev $IF root handle 1: htb default 30   # Creates the root scheduler
            $TC class add dev $IF parent 1: classid 1:1 htb rate $DNLD   # Creates a child scheduler to shape download
            $TC class add dev $IF parent 1: classid 1:2 htb rate $UPLD   # Creates a child scheduler to shape upload
            $U32 match ip dst $IP/24 flowid 1:1   # Filter to match the interface, limit download speed
            $U32 match ip src $IP/24 flowid 1:2   # Filter to match the interface, limit upload speed
        }

        stop() {
            # Stop the bandwidth shaping.
            $TC qdisc del dev $IF root
        }

        restart() {
            # Self-explanatory.
            stop
            sleep 1
            start
        }

        show() {
            # Display the status of traffic control.
            $TC -s qdisc ls dev $IF
        }

        case "$1" in
            start)
                echo -n "Starting bandwidth shaping: "
                start
                echo "done"
                ;;
            stop)
                echo -n "Stopping bandwidth shaping: "
                stop
                echo "done"
                ;;
            restart)
                echo -n "Restarting bandwidth shaping: "
                restart
                echo "done"
                ;;
            show)
                echo "Bandwidth shaping status for $IF:"
                show
                echo ""
                ;;
            *)
                pwd=$(pwd)
                echo "Usage: tc.bash {start|stop|restart|show}"
                ;;
        esac

        exit 0

    Thanks!
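    For the "how many active ethernet ports" part, a small sketch of one way to loop the shaping logic over every interface that is up (the interface-name pattern is an assumption; adjust it to your naming scheme):

        # enumerate interfaces that are administratively up and shape each one
        for IF in $(ip -o link show up | awk -F': ' '{print $2}' | grep '^eth'); do
            echo "Shaping $IF"
            # re-evaluate anything that bakes in $IF (such as the U32 variable) here,
            # then run the same qdisc/class/filter commands as in start()
        done

    Matching a range of addresses rather than a single host is already close to what the u32 filters do: "match ip dst 192.168.0.0/24" covers the whole /24, and narrower ranges can be expressed with the appropriate prefix length.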

    Read the article

  • Apache mod_rewrite weird behavior in Internet Explorer

    - by morrty
    I'm attempting to set up redirection for a couple of root domains. Firstly, here is the code in my httpd-vhosts.conf file:

        <VirtualHost *:80>
            ServerAdmin ****@example.com
            ServerName example.com
            ServerAlias example2.com
            RewriteEngine On
            # This is our WAN IP
            RewriteCond %{HTTP_HOST} !^192\.168\.0\.1$
            RewriteCond %{HTTP_HOST} !^www\. [NC]
            RewriteCond %{HTTP_HOST} !^$
            RewriteRule ^/?(.*) http://www.%{HTTP_HOST}/$1 [L,R,NE]
        </VirtualHost>

    What this does is redirect the root domain of example.com or example2.com (or any host other than www) to www.example(2).com. The part I'm having a problem with is the RewriteRule itself: the $1 is supposed to capture the pattern of the RewriteRule and append it to the substitution. For example, http://example.com/test.html should rewrite to http://www.example.com/test.html.

    It works in all modern browsers as it's supposed to, except for IE8 or IE9 (I didn't test other IE versions). In IE, this works: http://example.com to http://www.example.com. In IE, this does not work: http://example.com/test.html to http://www.example.com/test.html. Does anyone have an explanation for this behavior? I hope I've explained it well enough. Thank you.

    Read the article

  • Inexpensive, simple screen recording application for mac

    - by donut
    I am more and more consistently running into the need to create screencasts (record my screen) for clients, to show them how to use programs or websites. Up until now I've been using Jing and it's been wonderful, but I would like something that produces a format less annoying than .swf: a .mov or, best of all, something that plays without fuss on Mac and Windows. Also, the 5-minute limit is annoying, but not a showstopper. Basically, I'd like to be able to actually give them the file on a CD or something, instead of relying on whatever host I use staying up for eternity.

    To sum up, here's what I require:
    - Record a portion or all of the screen.
    - Record audio from the mic while recording the screen.
    - Export files easily playable on Mac and Windows (requiring QuickTime is okay, but not ideal).
    - Works on Mac OS 10.5+.
    - Allows recording videos of at least 5 minutes.
    - Text in recorded videos is easily readable when exported.

    Bonus points for:
    - Recording videos longer than 5 minutes.
    - Exported videos that work in Windows Media Player without any fuss.

    I haven't upgraded to Snow Leopard yet, but I know it has some screen recording built in; I don't know if it would be sufficient or not. The reason I say "simple" is because most of the applications I've seen do much more than I need (I mean, Jing is nearly perfect for my needs) and cost more than I would like to spend.

    Read the article
