Search Results

Search found 15651 results on 627 pages for 'setup'.

  • Postfix Vacation.pl with local users

    - by Simiyu
    Hi, I am trying to set up the vacation.pl script on a mail server which has local users only (there are only 10 users). I have installed the SquirrelMail plugin, and the Auto respond option is available to the users, but when an email is sent to their addresses, no auto-reply is sent back to the sender. There are also no logs in the /var/log/vacation folder, which I created, nor in the normal log files. Most of the examples online refer to virtual users. Can it work with local users, and if so, how? Regards, Arthur
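
    A mechanical note: most online examples assume virtual users because the postfixadmin flavour of vacation.pl is usually wired in through virtual alias and transport maps. For purely local users, the classic route is to pipe each user's mail through the responder from a ~/.forward file. A rough sketch, assuming the script accepts the username as its argument (check your copy's usage; this is a guess, not the script's documented interface):

        # /home/arthur/.forward -- keep a copy in the local mailbox, pipe one to the script
        \arthur, "|/usr/bin/vacation.pl arthur"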

  • ubuntu pptp connections from command line

    - by Ian R
    I use a lot of VPN connections daily for my work, and I want to write a PPTP dialer in Python to ease my job a little by automating things. I usually use network-manager-pptp to set up my connections, but I would like to skip the GUI tool and do it from a script, something like a dialer. My question is: can PPTP connections be established using command-line tools only? Also, where does network-manager-pptp save its files, so I can take a look and see what configs it generates? Any help is much appreciated.
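
    On the two questions: yes, PPTP tunnels can be driven entirely from the command line (the pptp client is a pppd plugin), and NetworkManager keeps system-wide connections in /etc/NetworkManager/system-connections/, with per-user VPN settings possibly stored in GConf on older releases. A sketch with the pptp-linux tools, which a Python script could drive via subprocess; the tunnel name, server and credentials below are placeholders:

        # writes /etc/ppp/peers/workvpn and a chap-secrets entry
        sudo pptpsetup --create workvpn --server vpn.example.com \
             --username ian --password secret --encrypt

        sudo pon workvpn     # dial (equivalent to: pppd call workvpn)
        sudo poff workvpn    # hang up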

  • sftp chroot access via SSH

    - by Cudos
    Hello. I have this setup in sshd_config:

        AllowUsers test1 test2

        Match Group sftpgroup
            ChrootDirectory /var/www
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

        Match User test2
            ChrootDirectory /var/www/somedomain.dk
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

    I am trying to restrict test2 to /var/www/somedomain.dk only. For some reason, when I try to log in as test2, e.g. with Filezilla, I get this error: "Server unexpectedly closed network connection". The users are created and working, and the SSH service has been stopped and started. test1 works, e.g. in Filezilla, and the root of that connection is /var/www. What am I doing wrong?
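
    Two things are worth checking here (general sshd behavior, not specific to this setup). First, sshd uses the first value obtained for each keyword, so if test2 is also a member of sftpgroup, the Match Group block above wins and test2 gets chrooted to /var/www like everyone else; putting the more specific Match User block first avoids that:

        Match User test2
            ChrootDirectory /var/www/somedomain.dk
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

        Match Group sftpgroup
            ChrootDirectory /var/www
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

    Second, every component of a ChrootDirectory path must be owned by root and not be writable by group or others; if /var/www/somedomain.dk fails that test, sshd drops the session exactly the way Filezilla reports.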

  • Bad request - Invalid Hostname Error when using ARR IP address

    - by syloc
    I'm trying to set up a simple ARR system: one ARR machine load balancing between 2 app servers. I can reach the app sites if I use the server name of the ARR machine (http://arrserver/app), but I can't do it with its IP address (http://10.7.10.25/app); that gives "Bad Request - Invalid Hostname". On the ARR machine I configured the default site's bindings to "All Unassigned", port 80 (the default values). Do I need to change the binding rule or add additional URL rewrite rules? Also, on the ARR server itself http://127.0.0.1/app doesn't work, but http://localhost/app works fine. Thanks in advance
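
    "Bad Request - Invalid Hostname" is typically returned by http.sys when no binding on port 80 has an empty host header, so a request addressed by raw IP matches nothing. It is worth double-checking that a hostname hasn't crept into the binding; a hedged sketch that adds an explicit host-header-free binding with appcmd (the site name is the default one, adjust as needed):

        %windir%\system32\inetsrv\appcmd.exe set site /site.name:"Default Web Site" ^
            /+bindings.[protocol='http',bindingInformation='*:80:']

    The fact that http://127.0.0.1/app fails while http://localhost/app works points the same way, since those two requests differ only in what ends up in the Host header.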

  • Configure Postfix to use external MX servers for delivery of local mail if user is unknown

    - by mr.b
    I have the following setup: a Linux box with Postfix configured to be responsible for the example.com domain; the domain's MX records configured so that mail sent to example.com goes to Google's mail servers; and several user accounts on the Linux machine (which also hosts the example.com site). When someone from the outside attempts to send mail to an address ending in @example.com, it gets routed to Google mail (and handled appropriately there). When the Linux machine sends mail to the outside world, the mail is delivered correctly, as reverse DNS and SPF records are configured correctly, so the machine is a valid mail sender for the example.com domain (along with the Google mail servers). However, here's the problem: when a PHP application hosted on the Linux box tries to send mail to someuser@example.com, and someuser doesn't exist on the box, delivery fails. Postfix doesn't even consult the Google mail servers; the local SMTP server simply concludes that "someuser" is unknown. So, the question is: how do I tell Postfix to relay mail sent to the @example.com domain to the Google mail servers (the ones specified in the MX records), if and only if the mailbox is not found locally?
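
    Postfix's local(8) delivery agent has a parameter aimed at exactly this case: fallback_transport, an optional transport used for recipients that are not found in aliases(5) or the UNIX password database. A minimal sketch; the Google MX host below is an assumption, so verify your domain's real MX hosts with dig MX example.com first:

        # /etc/postfix/main.cf
        mydestination = example.com, localhost
        fallback_transport = smtp:[aspmx.l.google.com]

    The square brackets tell Postfix to use that host directly instead of doing another MX lookup on it.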

  • Zimbra Relaying from Postfix connection timed out sending multiple emails?

    - by liamTc
    I have a web server set up with Postfix which relays email to a Zimbra server. This was working fine; however, I attempted to send a few thousand emails, and now the connection from Postfix to Zimbra is timing out, and all of the emails have been deferred in the Postfix queue. If I send individual emails from Postfix to Zimbra, it works fine, but if I try to flush the Postfix queue, all of the emails time out. In mail.log the emails look like this:

        postfix/error[2494]: 32B0950C04: to=, relay=none, delay=19431, delays=19402/29/0/0.01, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to mail.server.com[123.45.678.91]:25: Connection timed out)

    I have also noticed that the message above says "relay=none" for the emails that are failing, while the emails that do send say "relay=domainname.com". How can I resolve this, sending the emails currently in the queue and preventing it from happening again?
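
    On the log line itself: "delivery temporarily suspended" with relay=none means the queue manager has stopped even trying that destination for a while because earlier attempts failed; the bookkeeping is done by Postfix's internal error transport, hence no relay is shown. Bulk sends quite often trip connection or rate limits on the receiving side, and Zimbra's MTA is itself Postfix with anti-flooding defaults, so that is one place to look. Once connections to port 25 succeed again, the backlog can be requeued and the sender throttled; a sketch:

        postqueue -p                 # inspect the deferred queue
        postsuper -r ALL deferred    # requeue everything in the deferred queue
        postqueue -f                 # attempt delivery of all queued mail

        # optionally, in main.cf, go easier on the relay next time
        default_destination_concurrency_limit = 2
        default_destination_rate_delay = 1s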

  • How to configure Amazon Security Groups to achieve multi-tier architecture?

    - by ks78
    What is the preferred way to configure Amazon Security Groups to achieve a multi-tier architecture? Each of my instances has its own security group, which I only want to use for rules specific to that instance. I'd like to keep any rules which apply to multiple instances in a separate security group, which can then be assigned to the instance security groups as necessary. As an example, I've set up a group called "admin", which allows administrative access from my IP, and I added the "admin" group as the source to each of my instance security groups. However, I still can't access the instances from my IP without adding the rules directly to the instance's own group. Am I missing something? It seems a multi-tier security architecture should be possible, but it doesn't seem to be working.
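
    One likely explanation: naming a security group as a rule's source does not import that group's rules. It permits traffic originating from instances that are members of the source group, so a CIDR rule like "allow my IP" never propagates that way. The usual pattern is to make each instance a member of both its own group and the shared "admin" group. A sketch with the classic ec2-api-tools (the AMI, group names and IP are examples); note that classic EC2 security groups can only be assigned at launch time:

        # the shared rule lives in "admin" itself
        ec2-authorize admin -P tcp -p 22 -s 203.0.113.7/32

        # instances join BOTH their own group and the shared one
        ec2-run-instances ami-12345678 -g admin -g web-01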

  • iTunes home sharing problem

    - by Trev
    I have two iMacs running Snow Leopard, with iTunes Home Sharing set up between them using the same iTunes account credentials. Up until last night all worked fine: I could download an app on one iMac, then go to the other iMac and drag that app from Home Sharing into that iMac's applications. Then all of a sudden it stopped working. I can still see and browse the home share; however, when I drag an app or song from the share to the local library, I get an error stating that I am unable to do so, and when I click OK on the message I lose visibility of the home share. I have checked that this iMac is authorized with my iTunes account, and I have even deauthorized and reauthorized it, but the result is the same. Does anyone have any suggestions?

  • Google Chrome 4.0.249.89 not working

    - by tommieb75
    Good day to you. I am running Google Chrome 4.0.249.89 and have noticed some weird behavior with it. It loads, but I get an error: the commonly used pages are absent from the display, and a message is shown instead. Upon closer inspection within the profile directory, which I've captured here on pastebin.com... Has Google Chrome ceased to function after hitting a certain limit? I have tried the setup --rename-chrome-exe trick, which did not work. I just don't want to lose my bookmarks... Thanks for your help, Best regards, Tom.
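
    Whatever the underlying fault, the bookmarks can be rescued first: Chrome stores them as a plain file inside the profile directory, so copying that file out before any reinstall preserves them. A sketch for the default profile on Vista/7 (on XP the profile lives under Local Settings\Application Data instead; treat the paths as assumptions to verify):

        rem back up the bookmarks file to the desktop
        copy "%LOCALAPPDATA%\Google\Chrome\User Data\Default\Bookmarks" "%USERPROFILE%\Desktop"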

  • Using VirtualBox/VMWare to deploy software to multi sites ?

    - by Sim
    Hi all, I'm currently evaluating the feasibility of using VirtualBox (or VMware) to deploy the following stack to 10 sites:

    - Windows XP
    - MSSQL 2005 Express Edition with Advanced Services
    - JBoss, running one in-house application that mostly queries master data (customers/products) and feeds it to other software

    Why do I want to do this? Because the IT staff at my 10 sites are not capable enough, and the steps required to set up these in-house projects are complicated. The cons I can foresee:

    - extra horsepower is needed to run the VirtualBox instance
    - the IT staff won't become more knowledgeable about how the stuff is installed
    - cost (a license for VirtualBox in a commercial environment, as well as an extra OS license)

    I'd really appreciate your input on the pros and cons of this approach, or any links I can read further. Thanks a lot
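
    If the VM route is chosen, the per-site installation steps can shrink to importing a prebuilt appliance, which sidesteps the local-skills problem. A sketch with VirtualBox's command-line tools (the VM name is an example):

        # on the build machine: package the fully configured VM
        VBoxManage export "inhouse-app" -o inhouse-app.ova

        # at each site: import it and start it headless
        VBoxManage import inhouse-app.ova
        VBoxManage startvm "inhouse-app" --type headless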

  • Using local repository with vmbuilder and https

    - by Onitlikesonic
    I seem to be having problems using vmbuilder with a local https mirror (--mirror=https://<my_internal_server>/ubuntu/), as shown below:

        2012-10-18 10:36:36,429 INFO : Unmounting tmpfs from /tmp/tmpYc0cOktmpfs
        Traceback (most recent call last):
          File "/usr/bin/vmbuilder", line 24, in <module>
            cli.main()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/contrib/cli.py", line 216, in main
            distro.build_chroot()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/distro.py", line 83, in build_chroot
            self.call_hooks('bootstrap')
          File "/usr/lib/python2.7/dist-packages/VMBuilder/distro.py", line 67, in call_hooks
            call_hooks(self, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/util.py", line 165, in call_hooks
            getattr(context, func, log_no_such_method)(*args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/plugins/ubuntu/distro.py", line 136, in bootstrap
            self.suite.debootstrap()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/plugins/ubuntu/dapper.py", line 269, in debootstrap
            run_cmd(*cmd, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/util.py", line 120, in run_cmd
            raise VMBuilderException, "Process (%s) returned %d. stdout: %s, stderr: %s" % (args.__repr__(), status, mystdout.buf, mystderr.buf)
        VMBuilder.exception.VMBuilderException: Process (['/usr/sbin/debootstrap', '--arch=amd64', 'precise', '/tmp/tmpYc0cOktmpfs', '<my_internal_server>/ubuntu/']) returned 1. stdout: I: Retrieving Release
        E: Failed getting release file <my_internal_server>/ubuntu/dists/precise/Release
        , stderr:

    I've checked that the files are in the correct place, and I'm able to set this up using http instead of https. However, this server will provide https-only access to the repos; the http access is only open temporarily. This might be because the certificate is not valid on the https side (since it's self-signed), or because vmbuilder doesn't support https. In either case, how can I get this to work? (If it's the invalid certificate, I don't mind ignoring any checks.)
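
    If the self-signed certificate is the cause (debootstrap fetches the Release file with wget, which validates against the system CA store), a low-risk experiment is to make the host trust that certificate and retry. A sketch, assuming the mirror's certificate has been exported to mirror.crt:

        sudo cp mirror.crt /usr/local/share/ca-certificates/mirror.crt
        sudo update-ca-certificates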

  • Windows Mobile deletes my Contacts when deleting a partnership

    - by bitbonk
    Sometimes, when I reinstall my PC, get a new work PC, or buy a new home PC, ActiveSync (now the Windows Mobile Device Center) asks me to delete an existing partnership before setting up a partnership with the new PC, since only two partnerships are allowed. When I delete that partnership, all contacts that originally came from it get deleted too. How can I prevent that from happening? Do I really have to remember to frequently back up all my contacts somewhere safe? How much redundancy is needed? I had them on one of my PCs and on my phone.

  • PHP Mail Relay via Remote smtp Server

    - by Toqeer
    We have a PHP application running on Linux which sends emails to its users. Currently php.ini is configured to send via the local sendmail, but our organization has a separate mail server for this domain. I want to send the PHP application's emails via that remote SMTP server, so the emails can pass SPF checks and be signed via DKIM. But I could not see an option in php.ini for specifying a remote host to forward emails to; the SMTP setting there is for Windows only. I saw some posts suggesting PHPMailer, but I could not find out how to configure it so that all our PHP applications send via our remote SMTP server.
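
    For the PHPMailer route, the relay is configured per message rather than in php.ini. A minimal sketch in the old 5.x style; the host, port and credentials are placeholders to adapt:

        <?php
        require 'class.phpmailer.php';

        $mail = new PHPMailer();
        $mail->IsSMTP();                      // use the remote SMTP server, not local sendmail
        $mail->Host       = 'mail.example.org';
        $mail->Port       = 587;
        $mail->SMTPAuth   = true;
        $mail->SMTPSecure = 'tls';
        $mail->Username   = 'app@example.org';
        $mail->Password   = 'secret';

        $mail->SetFrom('app@example.org', 'My App');
        $mail->AddAddress('user@example.com');
        $mail->Subject = 'Test through the organisation relay';
        $mail->Body    = 'Hello from PHPMailer.';

        if (!$mail->Send()) {
            echo 'Mailer error: ' . $mail->ErrorInfo;
        }

    If changing application code is not an option, another approach is to point sendmail_path in php.ini at a small relay-aware forwarder such as msmtp configured for the remote server, so every PHP application inherits it without modification.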

  • HP Pavilion 15 with AMD dual graphics - Ubuntu live environment not starting

    - by creepus
    I've had this laptop for about a day now and have decided to try Ubuntu on it to determine whether I want to install it. I created a USB stick and it booted (Secure Boot was on; I also tried with Secure Boot off, to no effect), and then the problem occurred. The screen turned off for a second, turned back on to a black screen, shut off again, and turned back on with a dialogue box telling me that the system had to use low-graphics mode. I clicked OK, selected low-graphics mode from the menu and clicked OK. The screen switched to the boot messages and went no further, though Ctrl+Alt+Del still rebooted the laptop. I tried booting again, this time editing the boot options in GRUB to add nomodeset, but the laptop only booted to a black screen. Ctrl+Alt+F2 took me to a prompt; I tried startx from there, but X didn't start, complaining that it wanted kernel mode setting back. I cannot find any option in the UEFI setup menus to disable one graphics chip or the other. Laptop: HP Pavilion 15-E004AU. CPU: AMD A6-4400M APU with Radeon(tm) HD Graphics. Graphics chip: AMD Radeon HD 7520G + 8670M Dual Graphics. Ubuntu version: 13.10, 64-bit. Thanks. EDIT: I tried 12.04.3 LTS, and it managed to bring the desktop up, though there are severe graphics glitches after about two minutes.

  • Best peer-to-peer game architecture

    - by Dejw
    Consider a setup where game clients: (1) have quite small computing resources (mobile devices, smartphones), and (2) are all connected to a common router (LAN, hotspot, etc.). The users want to play a multiplayer game without an external server. One solution is to host an authoritative server on one phone, which in this case would also be a client; considering point 1, this solution is not acceptable, since a phone's computing resources are not sufficient. So, I want to design a peer-to-peer architecture that distributes the game's simulation load among the clients. Because of point 2, the system needn't be complex with regard to optimization; the latency will be very low. Each client can be an authoritative source of data about itself and its immediate environment (for example, bullets). What would be the best approach to designing such an architecture? Are there any known examples of such a LAN-level peer-to-peer protocol? Notes: some of the problems are addressed here, but the concepts listed there are too high-level for me. Security: I know that not having one authoritative server is a security issue, but it is not relevant in this case, as I'm willing to trust the clients. Edit: I forgot to mention that it will be a rather fast-paced game (a shooter). Also, I have already read about networking architectures at Gaffer on Games.
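
    For the discovery step on a shared router, a toy sketch of UDP broadcast peer discovery; the port and message format are arbitrary choices, and a real implementation would tag packets with a random node id so a peer can ignore its own announcements:

        # toy LAN peer discovery via UDP broadcast
        import socket, time

        PORT = 47777
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(('', PORT))
        sock.settimeout(0.1)

        peers = set()
        for _ in range(50):                    # announce and listen for ~5 seconds
            sock.sendto(b'HELLO', ('<broadcast>', PORT))
            try:
                data, (addr, _) = sock.recvfrom(64)
                if data == b'HELLO':
                    peers.add(addr)            # will include ourselves, see note above
            except socket.timeout:
                pass
            time.sleep(0.1)
        print(peers)

    Once peers know each other, each client can broadcast its own authoritative state (its position, the bullets it fired) and apply everyone else's, which is essentially the distributed-simulation split described above.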

  • "Bad response to Storage command" when scheduling job with Bacula

    - by Joril
    I have a Bacula setup with 9 clients, and it's working happily. Today I had to add another client, so I went and copied+adapted the existing configuration files from another client, but when I schedule a job for the new client, I get these errors:

        20-Mar 17:50 tools-dir JobId 39: Start Backup JobId 39, Job=BackupPresenze2.2012-03-20_17.50.49_04
        20-Mar 17:50 tools-dir JobId 39: Using Device "FileStorage"
        20-Mar 17:50 presenze2-fd JobId 39: Fatal error: Failed to connect to Storage daemon: bacula.mylan.local:9103
        20-Mar 17:50 tools-dir JobId 39: Fatal error: Bad response to Storage command: wanted 2000 OK storage , got 2902 Bad storage

    From the client I can telnet to bacula.mylan.local:9103 just fine, and jobs for other clients work successfully... What could I check? (Server and client run Ubuntu 10.04, if it's relevant)
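
    When a job starts, the Director hands the FD the contact details from its own Storage resource, so the FD dials whatever Address is configured there, and the SD must accept the password paired with it. After copying client configs around, a mismatch in that pairing is a common way to end up with "2902 Bad storage" even though a manual telnet works. A sketch of the resource to double-check (the values are examples):

        # bacula-dir.conf -- Address is what every FD is told to dial
        Storage {
          Name = File
          Address = bacula.mylan.local
          SDPort = 9103
          Password = "must-match-the-Director-password-in-bacula-sd.conf"
          Device = FileStorage
          Media Type = File
        }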

  • Erlang node acts like it connects, but doesn't [migrated]

    - by Malfist
    I'm trying to set up a distributed network of nodes across a few firewalls, and it's not going so well. My application is structured like this: there is a central server always running a node ([email protected]), and my co-workers' laptops connect to it on startup. This works if we're all in the office, but if someone is at home, they can connect to the masternode yet fail to connect to the other nodes in the swarm; i.e., Erlang fails to gossip correctly. To correct this, I've changed epmd's port number and changed the inet_dist_listen ports to known open ports (1755 and 7070 respectively). However, something fishy is going on. I can run net_adm:world() and it reports that it connects to the master node, but when I run nodes() I get an empty list. Same with net_adm:ping('[email protected]'). See:

        Eshell V5.9  (abort with ^G)
        ([email protected])1> net_adm:world().
        ['[email protected]']
        ([email protected])2> nodes().
        []
        ([email protected])3> net_adm:ping('[email protected]').
        pong
        ([email protected])4> nodes().
        []
        ([email protected])5>

    What's going on, and how can I fix it?
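
    A pong followed by an empty nodes() usually means the connection came up and was dropped almost immediately, which points at the settings not being applied identically on every node. Each node has to agree on the cookie, use the same epmd port, and have its distribution port reachable in both directions. A sketch of how those options are passed (the node name and cookie are placeholders; the ports are the ones from the post):

        export ERL_EPMD_PORT=1755
        erl -name node@example.com -setcookie secret \
            -kernel inet_dist_listen_min 7070 inet_dist_listen_max 7070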

  • What causes Windows Boot to stall?

    - by Nick Berardi
    For about 6 months now I have been having this weird problem where Windows 7 fails to fully boot correctly. What happens is this:

    1. "Starting Windows" shows up on the screen.
    2. Then, 3 out of 4 times, nothing else happens: no Windows Flag animation, just nothing occurs.
    3. After 3 or 4 restarts repeating steps 1-2 above, the Windows Flag animation finally shows up and everything works as expected.

    My question is: what is causing this problem in steps 1 and 2? Because I have tried the following with no luck:

    - Error checking and correcting of any disk errors
    - Updating drivers
    - Doing a clean install of Windows 7

    My setup is as follows:

    - Windows 7 64-bit Ultimate
    - 8 GB RAM
    - 128 GB Crucial SSD (firmware 0005)
    - Dell Latitude E6410
    - Intel Wireless and Graphics

    Other than what I have tried above, I am totally out of ideas and looking for some new ones to try.
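
    One way to narrow down a stall this early is boot logging, which records each driver load attempt to C:\Windows\ntbtlog.txt; comparing a log from a stalled boot against one from a good boot can show the last driver involved. A sketch, run from an elevated command prompt:

        bcdedit /set {current} bootlog Yes
        rem reboot, then inspect C:\Windows\ntbtlog.txt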

  • How can I allow individual developers to have their own space to create git repositories?

    - by Jason Baker
    I have a server that is essentially a gitosis setup: a git user has access to all the shared repositories. What I would like is for each developer to have his own "area" on this server in which to create his own repositories, and I'd like these areas to be viewable via gitweb. How can this be done in a way that requires the least maintenance as users and repositories are added? One obvious solution would be to allow each developer to create repositories under the git login, named something like <devname>-<reponame>, but I could see this getting unmanageable as the number of developers grows.
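
    A low-maintenance sketch using plain system accounts plus gitweb (all paths are examples): give each developer a directory under a common root, let them create bare repositories there over SSH, and point gitweb at the root so new repositories appear without further configuration:

        # once per developer
        sudo mkdir -p /srv/git/users/jason
        sudo chown jason: /srv/git/users/jason

        # each developer creates repositories in his own area
        ssh server "git init --bare /srv/git/users/jason/myproject.git"

        # /etc/gitweb.conf -- gitweb discovers repositories under this root
        $projectroot = "/srv/git";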

  • infiniband network between 3 servers

    - by grumpf
    Let's say I have 3 different servers, each with an InfiniBand card, and each card has 2 ports (I don't know the model yet). Is it possible to create 3 different networks so the 3 servers can communicate with each other without any problems (and without any single point of failure)? I guess I just have to set up /etc/hosts correctly. I really don't know InfiniBand, so please help me :) Thanks in advance. EDIT: The point is to NOT USE a switch!
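
    With dual-port cards a switchless triangle is possible: three point-to-point links, each treated as its own small subnet. One InfiniBand-specific caveat: each link is a separate fabric that needs its own subnet manager, so without a switch, opensm has to run on one end of every link. A sketch of the addressing (all values are examples):

        # srv1 ib0 <-> srv2 ib0   on 192.168.12.0/24
        # srv2 ib1 <-> srv3 ib0   on 192.168.23.0/24
        # srv3 ib1 <-> srv1 ib1   on 192.168.31.0/24

        # /etc/hosts, identical on all three servers
        192.168.12.1  srv1-12
        192.168.12.2  srv2-12
        192.168.23.2  srv2-23
        192.168.23.3  srv3-23
        192.168.31.3  srv3-31
        192.168.31.1  srv1-31

    Losing any one server still leaves the remaining pair directly connected, so the network itself has no single point of failure.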

  • nginx proxy pass redirect through load balancer

    - by Brian
    I have several backend webservers that are load-balanced using LVS. These machines have only internal non-routable IPs on them. The load-balancer is the only machine with an external IP. This setup works great. I would like to add another webserver for image serving, but it will not be part of the load-balanced pool. Is it possible to proxy pass from the load-balanced web servers to the image server and have the response redirected to the client? Client--external LB--internal web server--internal image server I've gotten proxy pass working when I remove the LB from the equation, but no luck when trying to use it.
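
    In principle this works with an ordinary proxy_pass on the load-balanced web servers: nginx opens its own connection to the image server, and the response travels back to the client over the connection the client already has, so the image server never needs a routable address. A sketch (the internal address is an example):

        # on each load-balanced web server
        location /images/ {
            proxy_pass http://10.0.0.50;       # internal image server, path passed through unchanged
            proxy_set_header Host $host;
        }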

  • Is it Secure to Grant Apache User Ownership of Directories & Files for Wordpress

    - by Oudin
    I'm currently setting up WordPress on an Ubuntu 12 server. Everything runs fine, but there is an issue with automatic updates and uploading media via WP, as the Apache user ("www-data") does not have permission to write to the directories; "user1" has full permissions. All my directories have permissions of 0755 and files 0644. My directory setup is as follows: /home/user1/public_html, with all WP files and directories in "public_html". In order to work around the auto-updating and media uploads, I've granted the Apache user ownership of the following directories:

        sudo chown www-data:www-data wp-content -R
        sudo chown www-data:www-data wp-includes -R
        sudo chown www-data:www-data wp-admin -R

    I would like to know how secure this is, and if it is not secure, what would be the best solution? One that allows me to keep all files and directories owned by user1 while still letting WP automatically update and upload media.
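
    Security-wise, the main cost of the chown approach is that a compromised Apache process can now rewrite WordPress's own code in wp-admin and wp-includes. A common compromise is to keep user1 as the owner, give the www-data group write access, and confine that access to wp-content, the only tree WordPress must write to for media uploads (core auto-updates may still prompt for FTP credentials under this scheme). A sketch:

        sudo chown -R user1:www-data /home/user1/public_html/wp-content
        sudo find /home/user1/public_html/wp-content -type d -exec chmod 2775 {} \;
        sudo find /home/user1/public_html/wp-content -type f -exec chmod 664 {} \;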

  • How to forward blocked ports by ISP

    - by KiDo
    So I've been trying to set up a TeamSpeak 3 server on my PC, but the ports it needs (9987, 10011, 30033) are blocked by my ISP. I've contacted them about unblocking the ports, but they refused, and since it's the fastest ISP in my city (I live in a third-world country), connecting through another ISP is not a good option. I've tried Your-Freedom together with SocksCap to tunnel my connection. The problem is, when TS runs under SocksCap it doesn't show a WAN IP that friends could use to connect to my server; it says "Needs to be Requested", and when I press the Request button, I get nothing. So, any idea what's wrong, if someone has done this before? I'd also be glad to hear, and would really appreciate, any other suggestion for running a TS server. P.S. As mentioned above, living in a third-world country means I'm unable to buy a VPS, even the cheapest one, because there's no Visa, credit card, or PayPal here. Thanks in advance.
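
    Since the block is on specific ports rather than on hosting in general, one avenue is to run the server on ports the ISP leaves open; the stock TeamSpeak 3 server accepts them as startup parameters (Linux build shown; the port values are arbitrary examples, and ports below 1024 would require root):

        ./ts3server_minimal_runscript.sh default_voice_port=8767 \
            filetransfer_port=8080 query_port=8081

    Friends would then connect with your-wan-ip:8767 instead of the default port.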

  • VMWare Server :: VM set to 2gb RAM but vmware process shows 100mb physical, 1900mb virtual

    - by brad
    I've set up a VMware Server instance to run the CastIron Integration Appliance. I allocated 2 GB of memory to the instance, assuming it would take this as physical memory (my server has 8 GB total). However, when I view top on the server, the vmware-vmx process has about 100 MB resident memory and 1900 MB virtual. Running CastIron, it reports that the appliance often hits 50% memory usage. Does this mean I'm using roughly 900 MB of hard drive space as memory? I wanted VMware to use 2 GB of physical memory, no swap. Can anyone tell me how to achieve this? Setup:

    - Debian Lenny 5.0.3
    - VMware Server 2.0.2
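
    A small resident size in top does not by itself prove the guest's RAM lives on disk, but VMware Server does back guest memory with a file by default and lets the host page it out. Two host-side settings are commonly used to keep guest memory pinned in RAM; a sketch to verify against your version's documentation:

        # /etc/vmware/config (host-wide)
        prefvmx.minVmMemPct = "100"       # reserve the VMs' full memory in host RAM
        mainMem.useNamedFile = "FALSE"    # stop backing guest RAM with a disk file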

  • Does a system exist to facilitate virtual meetings and file sharing?

    - by CSharp Mania
    I'm looking for a system similar to an online classroom setup, but one that allows for virtual meeting rooms with video/audio conferencing and, of course, file sharing. I'd prefer an open-source solution that I can edit and tweak myself as needed, and that is of course free. Ultimately, I guess what I'm looking for is something that we could tweak to give our own "branded" look and feel, if possible, along with full integration within our own servers; thus the reason I brought up open-source solutions. Do you masters of the web know of such a system? If so, is there a particular one you would suggest? Or can such a system be developed by slapping together a couple of open-source projects to arrive at what is desired? Thanks for sharing your expertise. (FYI, I am a developer comfortable with PHP and C#. I'm not experienced with Ruby or Python, but a system using them, or something else, is acceptable. We can figure it out, I'm sure.)
