Search Results

Search found 21802 results on 873 pages for 'erx vb next coder'.


  • How can I install Satchmo?

    - by Jonathan Hayward
    I am trying to install Satchmo 0.9 on an Ubuntu 9.10 32-bit guest, following the instructions at http://bitbucket.org/chris1610/satchmo/downloads/Satchmo.pdf. I run into difficulties at step 2.1.2:

        pip install -r http://bitbucket.org/chris1610/satchmo/raw/tip/scripts/requirements.txt
        pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo

    The first command fails because of a compile error while it tries to build PIL. So I ran "aptitude install python-imaging", made a local copy of the first line's requirements.txt, and removed the line that was unsuccessfully trying to build PIL. The first command then completes without reported error, as does the second. The next step tells me to change directory to the /path/to/new/store and run:

        python clonesatchmo.py

    A little bit of trouble here: I am told that clonesatchmo.py will be in /bin by now, and it isn't there, but I put some Satchmo stuff under /usr/local, created a symlink in /bin, and ran:

        python /bin/clonesatchmo.py

    This gives:

        jonathan@ubuntu:~/store$ python /bin/clonesatchmo.py
        Creating the Satchmo Application
        Traceback (most recent call last):
          File "/bin/clonesatchmo.py", line 108, in <module>
            create_satchmo_site(opts.site_name)
          File "/bin/clonesatchmo.py", line 47, in create_satchmo_site
            import satchmo_skeleton
        ImportError: No module named satchmo_skeleton

    A find run after apparently checking out the repository reveals that there is no file with a name like satchmo*skeleton* on my system. I thought that bash might be prone to take part of the second pip invocation's URL as the beginning of a comment, so I tried both:

        pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9\#egg=satchmo
        pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo

    Neither way of doing it takes care of the import error mentioned above. How can I get a Satchmo installation under Ubuntu, or at least enough of one that I can start with a skeleton of a store and then flesh it out the way I want? Thanks, Jonathan
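
    For reference, a rough sketch of one direction, assuming the editable install skipped part of the package and that a hand checkout would contain satchmo_skeleton somewhere on disk (the checkout location and inner path below are placeholders, not taken from the question):

        # grab the v0.9 source directly and point PYTHONPATH at it so
        # clonesatchmo.py can find satchmo_skeleton
        hg clone -r v0.9 http://bitbucket.org/chris1610/satchmo/ /usr/local/src/satchmo   # path is an example
        find /usr/local/src/satchmo -name "satchmo_skeleton*"
        export PYTHONPATH=/usr/local/src/satchmo/satchmo:$PYTHONPATH   # adjust to wherever satchmo_skeleton lives
        python /bin/clonesatchmo.py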


  • How to use connectify to share files between 2 laptops, one running Windows 7?

    - by p2pnode
    I have 2 laptops, one running XP and another Windows 7. Both have a WiFi card installed. I want to be able to share files between these 2 machines, and then in a next step also be able to share an internet connection. I have heard that Connectify on Windows 7 is a very handy and easy tool to set up a hotspot, and that other machines, even XP and Vista, can connect to this host hotspot and share files and the internet connection. I am able to see the network (set up on the host) from my client machine. But how do I share files? I don't see any such menu or anything. Also, after I installed Connectify on Windows 7, I am no longer able to connect to the internet using my data card; it throws the error "Error 31: A device attached to the system is not functioning". And on the client machine, if I connect to the data card as well as to the wireless network set up by the host machine, I am no longer able to access the internet even though the data card is connected; the browser throws an error. Please help. Is there any other utility similar to Connectify?


  • Remote NX login to Ubuntu, Gnome can't mount CD/DVD drive

    - by T.J. Crowder
    Even though I'm sitting next to it, I log into my Ubuntu 10.04 LTS system via NX Free Edition from another system at the moment (this is temporary, not worth buying a KVM for). Curiously, though, when I do that, Gnome's auto-mounting fails for CD/DVD media (I haven't tried other kinds) with a "Not Authorized" error; that is what happens, for instance, when I put the Ubuntu 10.04 LTS installation CD in. This does not happen if I log into it locally (not via NX) with the same user account. When using NX, I can mount the media if I use mount directly:

        tjc@midnight:~$ sudo mkdir /media/dvd
        tjc@midnight:~$ sudo mount -r -t iso9660 /dev/sr0 /media/dvd
        tjc@midnight:~$ ls /media/dvd
        autorun.inf  casper  dists  install  isolinux  md5sum.txt  pics  pool  preseed  README.diskdefines  ubuntu  wubi.exe

    ...which, along with the "Not Authorized" error, suggests some kind of permissions problem to me (doh). What I find odd is that the same user is involved in both cases (local and via NX). I'm new to Ubuntu on the desktop (I've used it and other distros on servers for years), so I'm afraid I don't know how this auto-mounting is happening. I think it's handled by the gvfs package and its daemon, but that's about as far as I got (and perhaps I've taken a left turn even getting that far). Although I can work around it with mount, does anyone know how I might get auto-mounting to work?
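
    For reference, a sketch of one possible direction, assuming the refusal comes from PolicyKit because an NX session does not count as a local, active console session (the action name below is the one udisks used around 10.04; verify it on your system with pkaction, and the file name is just an example):

        # /etc/polkit-1/localauthority/50-local.d/10-nx-mount.pkla
        [Allow a remote NX session to mount removable media]
        Identity=unix-user:tjc
        Action=org.freedesktop.udisks.filesystem-mount
        ResultAny=yes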


  • upstart config to start sync daemon as non-root user

    - by Rudiger Wolf
    I am planning to use inosync to sync data from a master server to several client servers. I have created a user called rsyncuser on both the master and the slaves, with access permissions and passwordless ssh access from the master to the slave servers. Inosync works when I use it from the command line as rsyncuser. Next I want it to start automatically when the server is turned on, and I figured upstart is the way to get this working. I am unable to find the right upstart configuration to make it happen. Here is my upstart conf file; the problem seems to be around running "inosync -d -c /etc/inosync/inosync_rsyncuser.py" as a given user. As you can see I have tried a number of various options!

        description "start inosync to sync data to other CDN Servers as rsyncuser"
        console output
        #start on startup
        #stop on shutdown
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        #start on runlevel [2345]
        #stop on runlevel [!2345]
        #kill timeout 30
        env RUN_AS_USER=rsyncuser
        expect fork
        script
            echo "Inosync upstart job seems to have started" >> /tmp/upstart.log
            # exec sudo -u rsyncuser -c "ls -la" >> /tmp/upstart.log 2>&1
            # LOGFILE=/var/log/logfile.`date +%Y-%m-%d`.log
            # exec su - $RUN_AS_USER -c "inosync -d -c /etc/inosync/inosync_rsyncuser.py" > $LOGFILE 2>&1
            # exec su -c "ls -la" >> /tmp/upstart.log 2>&1
            # emit inosync_running
        end script
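
    For reference, a sketch of a stanza that runs the daemon as a non-root user. This is an assumption-laden example, not a tested job: the expect stanza has to match how inosync actually detaches, so with -d plus a wrapping shell you may need "expect daemon" instead of "expect fork", or to drop -d entirely and let upstart supervise the foreground process:

        description "inosync data sync as rsyncuser"
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        respawn
        expect fork
        exec su -s /bin/sh -c 'exec inosync -d -c /etc/inosync/inosync_rsyncuser.py >> /var/log/inosync.log 2>&1' rsyncuser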


  • Google Chrome mouse wheel takes keyboard focus

    - by Steve Crane
    I recently switched the default browser on all my machines from Firefox to Google Chrome. In general I’m loving Chrome but there is one behaviour that is driving me nuts. Scrolling with the mouse wheel takes keyboard focus away from the document. Here’s what happens. I’ve just opened a page or switched to a new tab and the page has keyboard focus as I can use the keyboard up/down and PgUp/PgDn keys to scroll it; no problem there. But if I then use the wheel on my mouse to scroll the page, it loses the keyboard focus and no longer responds to up/down, PgUp/PgDn, or in fact any other keyboard keys. I have to click on the page background to restore the keyboard focus. This is a minor inconvenience for scrolling but where it really drives me nuts is in Google Reader and Gmail where I use keyboard shortcuts a lot. Here I find that I scroll the article or e-mail I’m reading with the mouse wheel then get no response when I press j/k to move to the next or previous article or e-mail. I am using Windows 7 and the Chrome dev channel (version 4.0.249.43).


  • Solaris 10 zlogin logs in, logs out immediately

    - by Spelevink
    On a SPARC V445 running Solaris 10 9/10, I had to rebuild rpool and reattach the three existing mirrored zpools on the other existing disks, with their ZFS filesystems and non-global zones intact. The zones have been configured with zonecfg -z ZONENAME create etc., and are now online after zoneadm -z ZONENAME attach -U and then simply booting from the installed state, but I cannot zlogin to any of the zones except one. It shows that I am logged in, then a blank line, then I am immediately logged out again. When I try to log in using zlogin -C ZONENAME I cannot; the error messages are:

        May 15 15:43:46 <hostname> login: open_module: stat(/usr/lib/security/pam_mkhomedir.so.1) failed: no such file or directory.
        May 15 15:43:46 <hostname> login: load_modules: cannot open module /usr/lib/security/pam_mkhomedir.so.1

    But /usr/lib/security/pam_mkhomedir.so.1 does not exist, and it does not exist on my other servers either, yet those zones are accessible using zlogin. I can only zlogin to the zones with zlogin -S ZONENAME. What to do next? Thank you.
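
    For reference, a sketch of one way to look for (and drop) the stale PAM reference, assuming the failing zones still carry a pam.conf line that points at the missing module; as far as I know pam_mkhomedir does not ship with stock Solaris 10, so removing the reference is usually safer than hunting for the library (the zone root path below is an example):

        # from the global zone, or via zlogin -S ZONENAME
        grep pam_mkhomedir /zones/ZONENAME/root/etc/pam.conf
        # comment out or delete the matching line(s), then try zlogin ZONENAME again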


  • Routing multiple static IPs from ISP at the cable modem?

    - by Jakobud
    I'm taking over IT responsibilities from the previous IT guy. We have a 50mb cable modem connection from Comcast along with 5 static IP addresses:

        XXX.XXX.XXX.180
        XXX.XXX.XXX.181
        XXX.XXX.XXX.182
        XXX.XXX.XXX.183
        XXX.XXX.XXX.184

    We are in the process of replacing our firewall machine. Currently the firewall box is the only thing connected to the cable modem; however, the cable modem has multiple ethernet ports on it, similar to a router. I have assembled a new firewall machine and it's time to start testing and configuring it, so it also needs to be plugged into the cable modem (remember, it has multiple ethernet ports on it). So now, with multiple computers plugged into the cable modem, how does the cable modem know where to route the traffic? If some request on the internet is made to XXX.XXX.XXX.181 and reaches our cable modem, how does the cable modem know to which connected computer that traffic is supposed to be sent? Looking at the web interface for the cable modem, there doesn't seem to be anything special set up on it with regard to routing or NATing IP addresses. Is that because, when there is only one computer connected to the modem, all traffic is sent to it by default? Now that I am going to (temporarily) have multiple computers plugged into the cable modem, do I need to specify routing or NAT rules on the modem itself? I am going to speak to Comcast about this next, but I figured I'd ask here first just so I can get a better grasp on how this type of thing generally plays out.
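
    For what it's worth, a sketch of how the statics are usually claimed in this kind of setup, assuming the Comcast gateway simply ARPs for each static address on its LAN side and delivers traffic to whichever machine answers for it (the interface name, prefix length and gateway below are placeholders, not taken from the question):

        # on the new firewall (Linux assumed), claim one of the unused statics while testing
        ip addr add XXX.XXX.XXX.182/29 dev eth0            # prefix length is an assumption
        ip route add default via <comcast-gateway-ip>      # gateway address supplied by Comcast
        ping -c 3 8.8.8.8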


  • Starfield Wildcard SSL Certificate Not Trusted in All Browsers

    - by Austen Cameron
    I am at a loss as to what else I might try in order to debug this issue with a Starfield wildcard SSL certificate. The problem is that in certain browsers (Safari, or the most-updated Chrome you can get for OS X 10.5.8, for example) the certificate comes up as untrusted, even on the root domain. My server setup / background info:

        - General LAMP setup - CentOS 6.3 - on a GoDaddy VPS
        - Starfield Technologies wildcard SSL certificate
        - Installed using the instructions from GoDaddy's support pages
        - ssl.conf lines are basically as follows:

            SSLCertificateFile /path/to/cert/mysite.com.cert
            SSLCertificateKeyFile /path/to/cert/mysite.key
            SSLCertificateChainFile /path/to/cert/sf_bundle.crt

    Everything seemingly worked fine until the other night, when I noticed the problem in OS X. I assume it's more browser-version related, but I have only been able to replicate it on that particular machine. What I have tried:

        - Updating sf_bundle.crt from GoDaddy's cert repository and Starfield's repository versions
        - Following this ServerFault answer from Jim Phares: changing the ChainFile line to sf_intermediate.crt from Starfield's repository
        - Using http://www.sslshopper.com/ssl-checker.html on my URL; it says the domain is correctly listed on the certificate but comes up with an error that reads: "The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate."

    What might I try next to remedy the untrusted certificate issue? Let me know if there is any other information needed that might help debug this issue. Thanks in advance!
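
    For reference, a quick way to confirm which chain the server is actually sending, since what the browser sees matters more than what sits in the file on disk (the hostname is a placeholder); one common cause on older OS X releases is a missing or outdated intermediate in that served chain:

        openssl s_client -connect mysite.com:443 -showcerts </dev/null
        # compare the issuer/subject lines of the extra certificates shown against
        # the contents of sf_bundle.crt, and restart Apache after any change to that file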


  • Problems with freetype on OSX 10.7.4

    - by eythor
    I'm trying to install mplayer with OSD using Homebrew. I've added both --enable-menu and --with-freetype-config=/usr/local/Cellar/freetype/2.4.10/freetype-config to the brew recipe.

        ==> Downloading http://www.mplayerhq.hu/MPlayer/releases/MPlayer-1.1.tar.xz
        Already downloaded: /Library/Caches/Homebrew/mplayer-1.1.tar.xz
        xz -dc "/Library/Caches/Homebrew/mplayer-1.1.tar.xz" | /usr/bin/tar xf -
        ==> ./configure --prefix=/usr/local/Cellar/mplayer/1.1 --cc=cc --host-cc=cc --disable-cdparanoia --disable-libopenjpeg --enable-menu --disable-x11 --with-freetype-config=/usr/local/Cellar/freetype/2.4.10/freetype-config
        Checking for cc version ... clang 4.2.1 (experimental support only)
        Checking for working compiler ... yes
        Detected operating system: Darwin
        Detected host architecture: x86_64
        Checking for cross compilation ... no
        Checking for host cc ... cc
        Checking for CPU vendor ... GenuineIntel (6:15:10)
        Checking for CPU type ... Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz

    For freetype-config I've tried three separate paths: /usr/X11R6/bin/freetype-config, /usr/X11/bin/freetype-config and the one in Cellar. Checking for freetype always fails:

        Checking for freetype >= 2.0.9 ... no
        Checking for fontconfig ... no (FreeType support needed)

    Although freetype itself seems to be installed:

        mufasa:bin eythor$ freetype-config --version
        15.0.9
        mufasa:bin eythor$ freetype-config --ftversion
        2.4.10
        mufasa:bin eythor$ freetype-config --libs
        -L/usr/local/Cellar/freetype/2.4.10/lib -lfreetype -lz -lbz2
        mufasa:bin eythor$ freetype-config --cflags
        -I/usr/local/Cellar/freetype/2.4.10/include/freetype2 -I/usr/local/Cellar/freetype/2.4.10/include

    I'm not sure what to try next or how to figure out why freetype isn't recognized. Can anyone point me in a sensible direction?
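
    For reference, a sketch of two things worth checking; these are assumptions about MPlayer's configure rather than anything stated in the question - it usually records the exact failing freetype test in its log, and it looks for freetype via PATH/pkg-config as well as the --with flag:

        # see why the freetype test failed (config.log location in the source dir is assumed)
        grep -B2 -A5 -i freetype config.log
        # put the Cellar freetype first on PATH and in PKG_CONFIG_PATH before configuring
        export PATH=/usr/local/Cellar/freetype/2.4.10/bin:$PATH
        export PKG_CONFIG_PATH=/usr/local/Cellar/freetype/2.4.10/lib/pkgconfig:$PKG_CONFIG_PATH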


  • Laptop choice for development: MacBook Pro 17 vs Dell Studio XPS 16 vs HP Envy 15

    - by Shalan
    Hey! First things first - let me state that I am not intending to play games on this - I have narrowed it down to these 3 purely based on specs and each brand's reliability in the market. I intend to primarily use:

        - Visual Studio 2008 Pro a lot (develop and deploy on Windows platforms)
        - SQL Server 2005
        - Oracle 10g
        - Adobe Photoshop CS4
        - Microsoft Expression Studio
        - Google Sketchup

    I currently use a desktop PC (Core2Duo 2.66GHz with 3GB DDRII memory) running Vista Business 32-bit - and I have to admit that, especially for Visual Studio, it's quite sluggish to the point where it affects productivity. Furthermore, I intend to only use the notebook on a table - with a cooled surface, like granite :) - so I would appreciate people's input with regard to heat issues. I'm aware that the Dell's primary exhaust gets blocked by the lid when open, but some reviews don't seem to place extraordinary emphasis on heat issues resulting from this. My options for the Dell/Alienware:

        - Core i7 720QM
        - 4GB DDRIII memory
        - ATI Mobility 3670 (512MB)
        - 128GB solid state drive
        - 16-inch Full HD RGB-LED LCD display (1080p)
        - 3-year next-business-day support

    My configuration for the Apple MBP:

        - Core2Duo 2.8GHz (I'm assuming the T9600)
        - 4GB DDRIII memory
        - 128GB solid state drive
        - standard 1-year support

    The one advantage I can think of with the MBP is the addition of OS X (though I'm unsure what I would use it for, beyond playing around with a much-boasted-about OS). What are your thoughts on this, especially regarding build quality, heat, performance and battery life? Much thanks! ~shalan


  • itunes iphone sync stuck on "Waiting for items to copy"

    - by lindes
    superuser crowd, I've found a number of threads about this, but have yet to find a satisfactory answer. Perhaps I'll have better luck on this site? I hope so. I'm having a problem with iTunes where syncing my iPhone seems to basically finish, but then it says "Waiting for items to copy" after everything else and just stays there for... well, a very long time, if I let it. Oh, and I'm not sure this is always the case, but at the moment it's doing this while "Syncing Genius Data to [my iPhone] (Step 8 of 8)". If I click the little x in iTunes, it then gets stuck on "Canceling sync" for an equally indefinite period. If I simply unplug the phone, everything seems to be synced and happy, but the next time I sync, it happens again. I presume this is a bug of some sort on Apple's part, but it seems like people have found workarounds... I'm just having trouble tracking one down that (a) is well described enough that I can actually follow it, (b) has enough detail that I can do it without losing data (i.e. tells me which pathnames I might want to copy first, or the like, before telling me to remove something) (note: see also point (a) - I can't remove something if I don't know where it is!), and (c) otherwise seems sane. I'm hoping that here perhaps I'll have better luck - with either a workaround, and/or debugging tips for figuring out a workaround myself. Note: I'm busy with some other things at the moment, but can try to add some additional information later, if necessary.


  • Freeradius authentication failed for unknown reason

    - by Moein7tl
    I followed this instruction to force FreeRADIUS to use a MySQL database, and ran freeradius in debug mode, but it rejects all authentication.

    MySQL database:

        mysql> select * from radcheck;
        +----+----------+-----------+----+---------+
        | id | username | attribute | op | value   |
        +----+----------+-----------+----+---------+
        |  1 | test     | Password  | == | test123 |
        |  2 | test     | Auth-Type | == | Local   |
        +----+----------+-----------+----+---------+
        2 rows in set (0.02 sec)

    radtest command:

        # radtest test test123 localhost 0 testing123
        Sending Access-Request of id 235 to 127.0.0.1 port 1812
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0x00000000000000000000000000000000
        rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=235, length=20

    radiusd debug mode log:

        rad_recv: Access-Request packet from host 127.0.0.1 port 51034, id=235, length=74
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0xbf111cbbae24fb0f0a558bfa26f53476
        # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group authorize {...}
        ++[preprocess] returns ok
        ++[chap] returns noop
        ++[mschap] returns noop
        ++[digest] returns noop
        [suffix] No '@' in User-Name = "test", looking up realm NULL
        [suffix] No such realm "NULL"
        ++[suffix] returns noop
        [eap] No EAP-Message, not doing EAP
        ++[eap] returns noop
        ++[files] returns noop
        ++[expiration] returns noop
        ++[logintime] returns noop
        [pap] WARNING! No "known good" password found for the user. Authentication may fail because of this.
        ++[pap] returns noop
        ERROR: No authenticate method (Auth-Type) found for the request: Rejecting the user
        Failed to authenticate the user.
        Using Post-Auth-Type Reject
        # Executing group from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group REJECT {...}
        [attr_filter.access_reject] expand: %{User-Name} -> test
        attr_filter: Matched entry DEFAULT at line 11
        ++[attr_filter.access_reject] returns updated
        Delaying reject of request 20 for 1 seconds
        Going to the next request
        Waking up in 0.9 seconds.
        Sending delayed reject for request 20
        Sending Access-Reject of id 235 to 127.0.0.1 port 51034
        Waking up in 4.9 seconds.
        Cleaning up request 20 ID 235 with timestamp +4325
        Ready to process requests.

    Where is the problem and how should I solve it?
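
    For reference, a sketch of what this debug output usually points to; this is an assumption based on the rlm_sql documentation, not something stated in the question. The [pap] "no known good password" warning suggests the radcheck rows should use Cleartext-Password with the := operator, and the explicit Auth-Type row is normally removed:

        UPDATE radcheck SET attribute = 'Cleartext-Password', op = ':='
         WHERE username = 'test' AND attribute = 'Password';
        DELETE FROM radcheck WHERE username = 'test' AND attribute = 'Auth-Type';
        -- then re-run: radtest test test123 localhost 0 testing123

    Note also that the debug output never shows an [sql] step in the authorize section, so it is worth confirming that "sql" is uncommented in the authorize section of sites-enabled/default and that radiusd.conf actually includes sql.conf.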


  • NVidia ION and /dev/mapper/nvidia_... issues.

    - by Ritsaert Hornstra
    I have an NVidia ION board with 4 SATA ports and want to use it to run a Linux server (CentOS 5.4). I first hooked up 3 HDs (that will be a RAID5 array) and a fourth small boot HD. I started out using the onboard RAID capability, but that does not work correctly under Linux: it is not a real RAID but uses lvm to define some arrays. After setting the BIOS back to normal SATA mode and wiping the HDs, the first boot hard disk (/dev/sda) is seen as /dev/sda BEFORE mounting and, after mounting, as /dev/mapper/nvidia_. CentOS is unable to install on it (and grub is not installable on it either). So somehow the hard disk is still seen as if it belongs to some lvm volume. I tried to clean out the HD by issuing a few dd if=/dev/zero of=/dev/sda commands to wipe the starting cylinders and final cylinders, but to no avail. Did anyone see this problem and did anyone find a solution? UPDATE: When I create only a single ext3 partition on the first HD (/dev/mapper/nvidia_...) no LVM partitions are seen and I can boot from /dev/mapper/nvidia_.... Now the next step is to see how I can get rid of this folly.
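
    For reference, a sketch of one likely direction (an assumption, not confirmed by the question): the /dev/mapper/nvidia_* nodes normally come from leftover fakeraid metadata that dmraid finds near the end of the disk, which a dd over the first cylinders will not remove:

        dmraid -r                # list disks carrying RAID metadata
        dmraid -r -E /dev/sda    # erase that metadata from /dev/sda (destructive; double-check the device first)
        # the CentOS installer can also be booted with the "nodmraid" option to ignore such metadata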


  • PXE booting LACP hosts on Force10 S50N with FTOS

    - by lolwutreddit
    Hardware: S50N. Firmware: FTOS 8.4.2.6. Problem: we're trying to PXE boot some servers that are connected via port-channel interfaces with LACP. Current work-around: we PXE boot a server with a single interface (eth0), and then use a Perl script to bring up the port-channel interfaces after the server is built. Details: is anyone doing anything similar on Force10 S50 switches with FTOS? If not, is anyone doing this on another S series, or a larger chassis-based Force10? I'm wondering if a native VLAN will solve this, since ports in a port-channel cannot explicitly have a VLAN set, and they don't seem to use the tagged or untagged VLAN that the port-channel is in. I will confirm this next (I think it's the only thing I haven't tried). Juniper example: http://broken.net/openindiana/how-to-pxe-boot-systems-on-lacp-using-juniper-switches/ Cisco: there are plenty of documented ways to solve this issue on IOS and Nexus. Update/Edit: since there seems to be no way to use interface or port-channel mode commands to get the individual interfaces to show up in spanning tree (RSTP in this case), the ports should never go into a forwarding state. I'm not going to mess with it anymore unless a) someone who has experience passes it on, or b) Force10 comes up with a solution for this (I'm guessing it would only be introduced on other S platforms (S55, S60), since the S50 seems to be near EOL). I'm basing that on the fact that the Open Automation type features are only being supported on the newer switches.


  • How to deal with DELL support system?

    - by Nishant Kumar
    We have purchased a Dell Optiplex 9010 SSFV for our organization's work. Since the first installation, two of the USB keyboard's keys have not worked properly: I have to press those keys twice; the first press does nothing and the second prints two characters (as if it were buffering the first character). The two keys that were not working properly:

        - Grave accent / backtick (below the Esc key: `)
        - Double quote (left of the Enter key: ")

    We registered our complaint with Dell and they suggested (in a hard-to-understand accent) some tests and tricks, such as switching to different ports and checking the keyboard on a different PC, and it worked well with a different PC (with Windows 7 Home Premium installed). It was clear that it is an OS fault, hence they suggested re-installing the OS. The problem began here: we have a project on the run and currently a video editing project set up on our system, so we can't re-install the system in a hurry, and Dell's people were not offering any other solution, such as updating the keyboard driver, etc. Arguments: I am a software engineer and don't think it is feasible to re-install an entire system for simple problems like this. The problem has been present since the fresh system installation, so I don't think re-installing will solve it. Finally, I had to find the solution myself, and I got it here; now I want to show my disappointment to the Dell support staff, or at least tell them that they should improve their support process rather than advising a full OS re-install for problems this simple. Notes: we have purchased 5 years of next-business-day support from Dell for around 8000 INR (not for that kind of solution from Dell). It is the Dell India support system. So can anyone tell me how to take this up with the Dell support system officially, so that they will pay more attention in the near future? Thanks


  • linked-server sql - access

    - by user22121
    Hi, I have a SQL Server 2000 instance and an Access database (.mdb) connected via a linked server. On the other hand, I have a program in C# that updates data in a SQL table (Users) based on the Access database. When running, my program returns the following error message:

        OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error. Authentication failed.
        [OLE/DB provider returned message: Cannot start the application. The workgroup information file is missing or opened exclusively by another user.]
        OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IDBInitialize::Initialize returned 0x80040E4D: Authentication failed.]

    Both the program, the SQL Server and the Access database are on a remote server. On the local server the problem was solved by running the following:

        sp_addlinkedsrvlogin 'ActSC', 'false', NULL, 'admin', NULL

    On the remote server I tried the following, without result:

        sp_addlinkedsrvlogin 'ActSC', true, null, 'user', 'pass'

    On the remote server, and from Query Analyzer, the SQL update statements work correctly. Can you think of what the problem may be? Thanks!
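
    For reference, a sketch of the usual login mapping for a Jet linked server, with the named parameters spelled out (the login names here are only examples). A related point worth checking: the Jet provider runs inside the SQL Server process, so the SQL Server service account needs filesystem access to the .mdb and to the workgroup .mdw file named in the error:

        EXEC sp_addlinkedsrvlogin
             @rmtsrvname  = 'ActSC',
             @useself     = 'false',
             @locallogin  = NULL,      -- applies to all local logins
             @rmtuser     = 'Admin',   -- default Access user unless the .mdw defines others
             @rmtpassword = NULL;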


  • Ubuntu Postfix Gmail SMTP Relay Not Working

    - by Nick DeMayo
    I currently have postfix set up to relay messages from my websites through Gmail, and up until recently it was working perfectly. However, within the last week or so (not really sure when) I started getting the below error whenever attempting to send an email:

        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[2001:4860:800a::6c]:587: Network is unreachable
        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[173.194.76.109]:587: Connection refused
        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[173.194.76.108]:587: Connection refused

    Here is my configuration file:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        #readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = [my domain name]
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        #myorigin = /etc/mailname
        mydestination = [my host name], localhost.localdomain, localhost
        relayhost = [smtp.gmail.com]:587
        mynetworks = 127.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        inet_protocols = all
        ##########################################
        ##### non debconf entries start here #####
        ##### client TLS parameters #####
        smtp_tls_loglevel=1
        smtp_tls_security_level=encrypt
        smtp_sasl_auth_enable=yes
        smtp_sasl_password_maps=hash:/etc/postfix/sasl/passwd
        smtp_sasl_security_options = noanonymous
        ##### map username@localhost to [email protected] #####
        smtp_generic_maps=hash:/etc/postfix/generic

    Nothing changed on my server, as far as I know... any ideas what could have caused it to stop working?
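
    For reference, a quick sketch of how one might narrow this down; these are assumptions, not facts from the question. The log shows an unreachable IPv6 address and refused IPv4 connections, which usually points at the network path rather than at postfix itself, and forcing IPv4 is a common workaround when only the IPv6 attempt is at fault:

        # is port 587 reachable at all from this host?
        nc -vz smtp.gmail.com 587
        # if only the IPv6 attempt is failing, pin postfix to IPv4:
        sudo postconf -e 'inet_protocols = ipv4'
        sudo postfix reload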


  • Nginx with PAM authentication through pam_script

    - by Envek
    Has anyone set up such a configuration? It doesn't work for me. I've installed nginx-extras on Ubuntu 12.04 (it's built with the PAM module), and wrote in the site config:

        location ^~ /restricted_place/ {
            auth_pam "Please specify login and password from main_site";
            auth_pam_service_name "nginx";
        }

    Afterwards, in /etc/pam.d/nginx:

        auth required pam_script.so dir=/path/to/my/auth_scripts

    And wrote the simplest possible /path/to/my/auth_scripts/pam_script_auth (I've also tried more complicated scripts):

        #!/bin/sh
        exit 0 # should allow anyone

    It doesn't work. The script is launched (I wrote a fully functional script that executes successfully, checks credentials, writes to its own log, returns the correct exit code, and takes noticeably long to run), but no access is granted - only rejected. In /var/log/nginx/error.log the following record appears:

        2012/09/13 10:44:42 [alert] 1666#0: waitpid() failed (10: No child processes)

    If I specify in /etc/pam.d/nginx:

        auth required pam_unix.so

    and grant the www-data user the right to read /etc/shadow, unix authorization works fine. But script auth doesn't work. I can't work out where the trouble is: in the nginx module, or in the pam_script module.
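
    For reference, a sketch of how to exercise the PAM stack separately from nginx, which helps show whether pam_script or the nginx auth_pam module is at fault (pamtester and the user name are examples, not from the question); it is also worth confirming the script is readable and executable by the nginx worker user:

        sudo apt-get install pamtester
        pamtester nginx someuser authenticate
        ls -l /path/to/my/auth_scripts/pam_script_auth    # should be executable, e.g. mode 0755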


  • HA for Resque & Redis

    - by Chris Go
    Trying to avoid SPOFs for Resque and Redis. Ultimately the client is going to be PHP via php-resque (https://github.com/chrisboulton/php-resque). After finding some workable HA for nginx+php-fpm and MySQL (a MySQL master-master setup as a way to simplify master-slave promotion), next up is Resque+Redis. A standard install of Resque uses a localhost Redis (at DigitalOcean). I am heavily depending on Amazon Route 53 DNS failover to try to solve this. The first option:

        resque1.domain.com points to localhost Redis (redis1.domain.com) = same server
        resque2.domain.com points to localhost Redis (redis2.domain.com) = same server
        resque.domain.com is a FAILOVER record with resque1 as primary and resque2 as secondary

    What this means is that most of the time (99%) resque1 should be getting hit, with resque2 as just a hot backup. This lets me get by with only 2 servers and makes sure that any hit to resque.domain.com goes somewhere. The other way to do this is to break Resque and Redis out onto 4 servers, as follows:

        resque1.domain.com - redis.domain.com
        resque2.domain.com - redis.domain.com
        redis1.domain.com
        redis2.domain.com

    Then set up DNS failover:

        resque.domain.com - primary: resque1 and secondary: resque2
        redis.domain.com - primary: redis1 and secondary: redis2

    I'd like to get away with 2 servers if I can, but is this 2nd setup much better, or is the difference negligible? Thanks, Chris
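
    For reference, a sketch of pairing the DNS failover with Redis replication so the standby's Redis actually holds current queue data when it takes over; the hostnames follow the question, but the promotion step is manual here and would need scripting or monitoring in practice:

        # in redis2's redis.conf, make it a replica of the primary:
        #   slaveof redis1.domain.com 6379
        # when failing over to the second box, promote its Redis:
        redis-cli -h redis2.domain.com slaveof no one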


  • How can I manually install pecl_http on Ubuntu 9.10?

    - by Richard
    This is essentially a repost of http://stackoverflow.com/questions/4159369/ubuntu-9-04-pecl-extension-downloads-but-does-not-install. Hoping maybe someone can help me here. I've done this:

        sudo apt-get install php-pear
        sudo apt-get install php5-dev
        sudo apt-get install libcurl3-openssl-dev

    which installs fine. However, the next step:

        sudo pecl install pecl_http

    doesn't install the extension, but merely downloads it. There are no error messages. So I have unpacked it and built it myself per http://php.net/manual/en/install.pecl.phpize.php, essentially:

        cd pecl_http
        phpize
        ./configure
        make
        make install

    I also ran make test to check everything was OK - and it failed one test: HttpRequest, which is kind of fundamental to this package. And indeed this doesn't work:

        $r = new HttpRequest('http://www.google.com');
        $r->send;
        echo $r->getResponseCode();

    No request is sent, the response code is zero, but there are no errors either. How can I get this damn thing installed? Is this a bug? Am I doing something wrong? Any alternatives, workarounds? Help appreciated. Thanks
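
    For reference, a sketch of the step that is easy to miss after a manual build (paths are typical Ubuntu 9.10 defaults, adjust as needed): make install only copies http.so into place, and the extension still has to be enabled before PHP will load it:

        echo "extension=http.so" | sudo tee /etc/php5/conf.d/http.ini
        sudo /etc/init.d/apache2 restart     # or restart whichever SAPI you use
        php -m | grep -i http                # "http" should appear if the module loads

    Note also that in the test snippet send is a method, so the request is only issued by calling it as $r->send(); as written, the line reads a non-existent property and no request goes out.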


  • Windows 7 - Wireless Signal Good - Internet Often Says Limited or No Activity

    - by Anthony Trilussa Pizzo
    Hi - I have a new Dell Studio XPS laptop that I hooked up in early January of 2010. The Dell Studio XPS laptop is connected to my wireless router (Belkin Wireless N). The router was purchased in February of 2010 because I thought my old router (a D-Link) was bad. Comcast is my ISP. Every day, at random times, the wireless will show a good signal (according to the little Win 7 icon on the taskbar) but I will receive a message saying "limited or no internet access" and I cannot get online. I have the connection set up to automatically get an IP, and I did the ipconfig release/renew, but with no success. I even had a WiFi-enabled Blackberry right next to the computer, and whenever the computer loses its WiFi connection, so does the Blackberry. The router isn't bad, because the old router did the same thing - giving me limited or no internet access while only sometimes working. I also tried using two other laptops in the same physical position as my Dell Studio XPS laptop, and they have the same problem. I am going crazy trying to figure this out, as I have called the router company (Belkin), my ISP (Comcast), and my computer maker (Dell). Can anyone offer some suggestions? I can try them as we work through this, and maybe one of the suggestions will be the right one. Thanks in advance for even reading my problem. I searched for a problem similar to mine on the internet, but I have still not been able to keep a steady internet connection. (Side note - I used to have an IBM ThinkPad laptop and the D-Link router, and for years that wireless connection worked FLAWLESSLY, with the same physical setup - the laptop in my bedroom, and the router in the basement.)


  • Cannot push to GitHub from Amazon EC2 Linux instance

    - by Eli
    Having the worst luck pushing files to a repo from EC2 to GitHub. I have my ssh key set up and added to GitHub. Here are the results of ssh -v git@github.com:

        OpenSSH_5.3p1, OpenSSL 1.0.0g-fips 18 Jan 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to github.com [207.97.227.239] port 22.
        debug1: Connection established.
        debug1: permanently_set_uid: 0/0
        debug1: identity file /root/.ssh/identity type -1
        debug1: identity file /root/.ssh/id_rsa type 1
        debug1: identity file /root/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5github2
        debug1: match: OpenSSH_5.1p1 Debian-5github2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.3
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'github.com' is known and matches the RSA host key.
        debug1: Found key in /root/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /root/.ssh/identity
        debug1: Offering public key: /root/.ssh/id_rsa
        debug1: Remote: Forced command: gerve eliperelman 81:5f:8a:b2:42:6d:4e:8c:2d:ba:9a:8a:2b:9e:1a:90
        debug1: Remote: Port forwarding disabled.
        debug1: Remote: X11 forwarding disabled.
        debug1: Remote: Agent forwarding disabled.
        debug1: Remote: Pty allocation disabled.
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug1: Trying private key: /root/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).
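
    For reference, a couple of quick checks (assumptions about the usual failure modes, not conclusions from the log): confirm which account the offered key maps to and that the push URL really uses SSH, since an HTTPS remote or a repo the key has no access to produces the same symptom. One other common cause is that the public key pasted into GitHub does not actually match the private key sitting on the instance:

        ssh -T git@github.com     # should reply: "Hi <username>! You've successfully authenticated..."
        git remote -v             # the push URL should look like git@github.com:<user>/<repo>.git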


  • Deploying a Git server in a AWS linux instance

    - by Leroux
    I'm making a git server on my Linux instance in AWS. I tried doing it using these instructions, but in the end I always get stuck with a "Permission denied (publickey)" message. So here are my detailed steps; the client is my Windows machine running msysgit and the server is the AWS Ubuntu instance:

        1) I created user Git with a simple password.
        2) Created the ssh directory at ~/.ssh.
        3) On the client I created ssh keys using ssh-keygen -t rsa -b 1024; they got dropped into my /Users/[Name]/.ssh directory, and the id_rsa / id_rsa.pub key pair was created.
        4) Using notepad I copy-pasted the text into newly created files on the server in the ~/.ssh directory of my Git user. ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub were copied.
        5) On the server I made the authorized_hosts file using "cat id_rsa.pub > authorized_hosts" (while inside the .ssh directory).
        6) Now to test it, on my client machine I did ssh -v git@[ip.address].
        7) Result:

        debug1: Host 'ip.address' is known and matches the RSA host key.
        debug1: Found key in /c/Users/[Name]/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /c/Users/[Name]/.ssh/identity
        debug1: Trying private key: /c/Users/[Name]/.ssh/id_rsa
        debug1: Offering public key: /c/Users/[Name]/.ssh/id_dsa
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I would appreciate any insight anyone can give me.
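
    For reference, a minimal sketch of the usual server-side layout (not taken from the question): note in particular that sshd reads authorized_keys, not authorized_hosts, and that only the public key belongs on the server:

        # as the git user on the server
        mkdir -p ~/.ssh && chmod 700 ~/.ssh
        cat /path/to/uploaded/id_rsa.pub >> ~/.ssh/authorized_keys   # path is an example
        chmod 600 ~/.ssh/authorized_keys
        rm -f ~/.ssh/id_rsa        # the client's private key should not be copied here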


  • Dual Xeon Server voltages are low

    - by Mindflux
    I've got a whitebox server running CentOS 5.7: a dual Xeon 5620 with 24GB of RAM. The mainboard is a Supermicro X8DT6-F and the chassis is an SC825TQ-R720LPB with dual 720W power supplies. We had a big power outage a couple of weeks back that took down everything. I don't have any pre-outage figures for this server, and the only reason I noticed these readings is that when I was bringing the servers back up I was checking them out with more scrutiny than usual. http://i.imgur.com/rSjiw.png (image of voltage readings) As you can see, CPU1 DIMM is low, +3.3V is high, 3.3VSB is high, +5V is high, +12V is REALLY low (outside the normal plus/minus 5%)... and VBAT is off the charts. With my whitebox VAR we've tried the following:

        - Swapped out the PSU with one from another server I have with the same PSUs
        - Tried a different power cord
        - Updated the BMC/IPMI firmware in case the readings were wrong (they aren't)
        - Updated the BIOS
        - Tried a different PDU
        - Tried a different outlet and/or circuit
        - Replaced the voltage regulator unit

    At this point the only thing we haven't done, seemingly, is replace the mainboard, which is what the next step will be unless something else sheds some light on the situation. I should mention the system is rock solid otherwise, which is a surprise given that the 12V rail is that far off.


  • BSOD trying to migrate Windows XP from a physical to a virtual machine

    - by pauldoo
    I am attempting to migrate a Windows XP Home installation from a physical machine to a virtual machine. The physical machine has two hard disks; the first is 250GB containing the "C:", the second is 1TB containing "D:". I'd like to create a new virtual machine stored on the D:, which is a copy of the Windows XP Home installation that is currently on the C:. (This will leave the 250GB drive clear for me to install a fresh copy of Windows 7, and still be able to access the old XP installation if necessary.) The first method I tried was to follow the instructions here: http://www.virtualbox.org/wiki/Migrate_Windows I booted up from an Ubuntu Live CD in order to execute the Linux commands whilst the Windows system wasn't running. With this method the virtual machine would always blue screen on startup with a "STOP 0x0000007B" message. The instructions above say to try a "repair install" using the Windows XP disc. Unfortunately for me my XP disc is scratched and will not boot so I was unable to try a repair install. The second method I tried was to use "VMWare Converter Standalone Client". This tool executed without any errors, but again produced a virtual machine that blue screens on startup with the same "STOP" message. Are there any other methods to move the Windows XP installation into a virtual machine? I think next I will try some more manual process to create the cloned virtual machine. I think I will try installing a fresh copy of Windows XP to a virtual machine, then once that is booting OK I will ntfsclone the source C: partition over the top. Perhaps this will fix the booting problems if the issue is related to the MBR or partition table in some way.
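
    For reference, a trimmed sketch of the "MergeIDE" registry tweak often recommended for exactly this STOP 0x0000007B symptom (and referenced from guides like the VirtualBox wiki page linked above): the idea is to make sure the generic IDE/ATAPI boot drivers are enabled in the source XP installation before converting it. This is only an excerpt applied inside the physical Windows XP system; the full MergeIDE.reg also populates the CriticalDeviceDatabase entries and covers more controllers:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\atapi]
        "Start"=dword:00000000

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\intelide]
        "Start"=dword:00000000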

