Search Results

Search found 109878 results on 4396 pages for 'server side objects to client side'.


  • ESXi - Should failover node be in the same geographic location?

    - by Ryan
    My intuition says at least one failover node should be in the same building, but honestly I have no idea. Could routing delays become an issue for users during a failover? I'm just imagining reasons at this point. Should at least one failover node be at the same geographic location as the primary? I'm trying to head off what looks like a poor decision, so any feedback or real-world experience you can share would be grand. The guests will mostly be running Windows Server 2008 with SQL Server 2008.

    Read the article

  • I added some options to stop spam with Postfix, but now it won't send email to remote domains

    - by willdanceforfun
    I had a working Postfix server, but added a few lines to my main.cf in the hope of blocking some common spam. The lines I added were:

        smtpd_helo_required = yes
        smtpd_recipient_restrictions = reject_invalid_hostname,
            reject_unknown_recipient_domain, reject_unauth_pipelining,
            permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination,
            reject_rbl_client multi.uribl.com, reject_rbl_client dsn.rfc-ignorant.org,
            reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client list.dsbl.org,
            reject_rbl_client sbl-xbl.spamhaus.org, reject_rbl_client bl.spamcop.net,
            reject_rbl_client dnsbl.sorbs.net, reject_rbl_client cbl.abuseat.org,
            reject_rbl_client ix.dnsbl.manitu.net, reject_rbl_client combined.rbl.msrbl.net,
            reject_rbl_client rabl.nuclearelephant.com, permit

    Postfix now appears to receive normal emails fine and block spam. But when I try to use this server myself to send to a remote domain (an address not on my server), I get bounced, with maillog saying something like this:

        Nov 12 06:19:36 srv postfix/smtpd[11756]: NOQUEUE: reject: RCPT from unknown[xx.xx.x.xxx]: 450 4.1.2 <[email protected]>: Recipient address rejected: Domain not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<[192.168.1.100]>

    Is that saying 'domain not found' for gmail.com? Why is that recipient address rejected? The output of postconf -n is:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = primarydomain.net
        myhostname = mail.primarydomain.net
        myorigin = $myhostname
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relay_domains = $mydestination, primarydomain.net, secondarydomain.org
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_client_restrictions = permit_sasl_authenticated
        smtpd_helo_required = yes
        smtpd_recipient_restrictions = reject_invalid_hostname, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, reject_rbl_client multi.uribl.com, reject_rbl_client dsn.rfc-ignorant.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client list.dsbl.org, reject_rbl_client sbl-xbl.spamhaus.org, reject_rbl_client bl.spamcop.net, reject_rbl_client dnsbl.sorbs.net, reject_rbl_client cbl.abuseat.org, reject_rbl_client ix.dnsbl.manitu.net, reject_rbl_client combined.rbl.msrbl.net, reject_rbl_client rabl.nuclearelephant.com, permit
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = reject_unknown_sender_domain
        soft_bounce = no
        unknown_local_recipient_reject_code = 550
        virtual_alias_domains = mail.secondarydomain.org
        virtual_alias_maps = hash:/etc/postfix/virtual

    Any insight greatly appreciated.
    Edit: here is the output of dig mx gmail.com from the server:

        ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.4 <<>> mx gmail.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31766
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 4, ADDITIONAL: 14

        ;; QUESTION SECTION:
        ;gmail.com.                     IN      MX

        ;; ANSWER SECTION:
        gmail.com.              1207    IN      MX      5 gmail-smtp-in.l.google.com.
        gmail.com.              1207    IN      MX      30 alt3.gmail-smtp-in.l.google.com.
        gmail.com.              1207    IN      MX      20 alt2.gmail-smtp-in.l.google.com.
        gmail.com.              1207    IN      MX      40 alt4.gmail-smtp-in.l.google.com.
        gmail.com.              1207    IN      MX      10 alt1.gmail-smtp-in.l.google.com.

        ;; AUTHORITY SECTION:
        gmail.com.              109168  IN      NS      ns1.google.com.
        gmail.com.              109168  IN      NS      ns4.google.com.
        gmail.com.              109168  IN      NS      ns3.google.com.
        gmail.com.              109168  IN      NS      ns2.google.com.

        ;; ADDITIONAL SECTION:
        alt1.gmail-smtp-in.l.google.com. 207 IN A    173.194.70.27
        alt1.gmail-smtp-in.l.google.com. 248 IN AAAA 2a00:1450:4001:c02::1b
        gmail-smtp-in.l.google.com.      200 IN A    173.194.67.26
        gmail-smtp-in.l.google.com.      248 IN AAAA 2a00:1450:400c:c05::1b
        alt3.gmail-smtp-in.l.google.com. 207 IN A    74.125.143.27
        alt3.gmail-smtp-in.l.google.com. 249 IN AAAA 2a00:1450:400c:c05::1b
        alt2.gmail-smtp-in.l.google.com. 207 IN A    173.194.69.27
        alt2.gmail-smtp-in.l.google.com. 248 IN AAAA 2a00:1450:4008:c01::1b
        alt4.gmail-smtp-in.l.google.com. 207 IN A    173.194.79.27
        alt4.gmail-smtp-in.l.google.com. 249 IN AAAA 2607:f8b0:400e:c01::1a
        ns2.google.com.                  281970 IN A 216.239.34.10
        ns3.google.com.                  281970 IN A 216.239.36.10
        ns4.google.com.                  281970 IN A 216.239.38.10
        ns1.google.com.                  281970 IN A 216.239.32.10
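
    A note purely for reference (my sketch, not something stated in the post): smtpd_recipient_restrictions is evaluated in order, and a 450 "Domain not found" from reject_unknown_recipient_domain means the recipient domain could not be resolved at that point in the list. An ordering that lets trusted networks and authenticated clients through before the DNS-based checks would look roughly like this:

        smtpd_helo_required = yes
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination,
            reject_invalid_hostname,
            reject_unknown_recipient_domain,
            reject_unauth_pipelining,
            reject_rbl_client sbl-xbl.spamhaus.org,
            reject_rbl_client bl.spamcop.net,
            permit

    Whether that applies here also depends on the submitting client actually authenticating (the log shows it as unknown[xx.xx.x.xxx]), and on smtpd being able to resolve external domains at all; since dig works from the shell, a resolver file that a chrooted smtpd cannot see is another possibility worth ruling out.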

    Read the article

  • Dell PowerEdge 6850 Degraded HDD

    - by Matt
    Good morning. We have a Dell PowerEdge 6850 with a degraded drive in the RAID array. I have never had to recover from this kind of failure, so any help or advice would be welcome. It hasn't affected the server at the operating-system level, but performance has dropped. I have a replacement drive in hand, but as this is our main database server I want to proceed with extreme caution. My options as I see them are: 1) hot-swap the degraded drive with the new one and let the data re-sync automatically so we are back to normal (presumably this depends on the current RAID configuration?); 2) reading various comments online, I may need to reconfigure the RAID array and rebuild onto the replacement drive, which screams disaster to me, with the main worry being that I wipe other data. Option 1 would of course make my day. Thanks in advance.

    Read the article

  • Possible to use DRBD on two ESXi virtualized servers?

    - by chen
    I have two servers (the attached disks are set up as hardware RAID1 for resilience against disk-level failure). Here is the setup I have in mind: 1) install ESXi on each physical server, M1 and M2; 2) run one VM on each ESXi host, V1 and V2; 3) install the DRBD drivers inside V1 and V2. Essentially, this is the idea of running DRBD inside VMs instead of on bare-metal hardware. My question is whether this setup can achieve the same "networked RAID1" goal that DRBD achieves on bare-metal machines (http://www.drbd.org/). Thanks. [EDIT] I found a similar question (http://serverfault.com/questions/49305/drbd-experimentation-and-virtualization), but the answer does not seem definitive enough for me to follow.
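
    For reference, a minimal DRBD resource definition (in the style of DRBD 8.3) of the kind that would live inside V1 and V2; the hostnames, backing devices and addresses below are hypothetical, not taken from the post:

        resource r0 {
            protocol C;                      # synchronous replication, the usual "networked RAID1" mode
            on v1 {
                device    /dev/drbd0;
                disk      /dev/sdb1;         # backing block device presented to the VM
                address   192.168.10.1:7788;
                meta-disk internal;
            }
            on v2 {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   192.168.10.2:7788;
                meta-disk internal;
            }
        }

    DRBD only sees the block devices the guests expose to it, so a setup like this replicates between the virtual disks rather than the physical controllers underneath.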

    Read the article

  • Can't access newly created Subversion repos

    - by Jean-François G. B.
    Sorry in advance, I'm pretty new to server configuration. I followed this tutorial to install Subversion on my CentOS server. I'm at the step where I should test the URL to make sure I can access the repository and that it's password protected, but it's not working: I can't access it. What is wrong? Is some configuration missing? I'm not sure which details to provide, but if you need any, please ask! :) Thanks in advance.
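
    The tutorial itself isn't reproduced in the question, so purely as an assumption: an Apache mod_dav_svn setup of the kind such tutorials describe typically ends with a block like this in httpd.conf (or /etc/httpd/conf.d/subversion.conf on CentOS):

        <Location /svn>
            DAV svn
            SVNParentPath /var/www/svn
            AuthType Basic
            AuthName "Subversion repositories"
            AuthUserFile /etc/svn-auth-users
            Require valid-user
        </Location>

    Things worth double-checking against whatever the tutorial prescribed: that mod_dav_svn is loaded, that the AuthUserFile exists and contains the user, that the repository path is readable by the apache user, and that httpd was restarted after the change.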

    Read the article

  • lighttpd silently stops logging

    - by Max Cantor
    I'm on a Slicehost 256MB VPS with Ubuntu 9.04 (Jaunty). lighttpd is the only web server process running; it listens on port 80. My lighttpd.conf can be found here. I'm using Ubuntu's default logrotate setup for lighty. At seemingly random times, lighttpd will stop logging. It is not correlated with log rotation--that is, the errors do not occur when logrotate kicks in. What happens is, I will verify that the server is serving files by hitting a URL with my browser, and I will verify that it is not logging by checking access.log and seeing that the GET request I just made is not there. Using init.d to restart the process starts logging again, without truncating or rotating the log file. That is, new requests will be logged at the end of the existing access.log file. There are no cron jobs running on this box. Any ideas?
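
    A diagnostic sketch, which is my own suggestion rather than anything from the post: check whether the running lighttpd process is still holding the access log open, or is writing to a file that has since been renamed or deleted (the log path below assumes the Ubuntu default):

        pid=$(pgrep -o lighttpd)                  # oldest lighttpd process
        ls -l /proc/$pid/fd | grep -i access      # a "(deleted)" entry here would implicate rotation
        ls -l /var/log/lighttpd/access.log        # compare against the file on disk

    If the descriptor does point at a deleted file, the problem is in how the log is reopened after rotation rather than in lighttpd's logging itself.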

    Read the article

  • Is SmoothWall a good firewall alternative?

    - by Oden
    I found this Linux distribution called SmoothWall. I read its documentation and it looks pretty good to me. The only problem is that I'm not a big Linux professional and I don't have a lot of experience, but I'd like to hear your thoughts about this "firewall OS". Can it be used in a small-business environment with 15-17 PCs? I would also use the server as a caching proxy. Is this a good idea? (I mean, using one server for two things.)

    Read the article

  • Run command before and after printing with CUPS?

    - by leto
    Hello, this is a home setup. A central print server (Linux) manages the queue; an HP 2430DTN is attached to it via 100 Mbit/s Ethernet. The printer is hooked up to a manageable power source. A shell script watches the queue on the server (lpstat -o) and turns the printer on when there is a job; if the queue is empty for 10 minutes it turns the printer off. This setup messes up after a couple of weeks, stops the printer, and is in general "not so reliable". I now know how to change the stop-printer thing, but: is there a way to run my turn-printer-on and turn-printer-off scripts directly from CUPS, without watching the queue? That would be so cool!
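
    For context, a minimal sketch of the polling approach described above; power_on and power_off stand in for whatever commands the managed power source actually takes and are not from the post:

        #!/bin/sh
        # poll the CUPS queue once a minute: power the printer on while jobs exist,
        # power it off after roughly ten minutes of an empty queue
        idle=0
        while true; do
            if lpstat -o | grep -q .; then
                power_on                           # hypothetical helper
                idle=0
            else
                idle=$((idle + 60))
                [ "$idle" -ge 600 ] && power_off   # hypothetical helper
            fi
            sleep 60
        done

    Doing the same from inside CUPS would mean hooking the printer's backend or a filter instead of polling, which is exactly what the question is asking about.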

    Read the article

  • SQUID Transparent SSL proxy (no intercept)

    - by user974896
    I know how to make Squid work as a transparent proxy: you put it into transparent mode, then use your router or iptables to forward port 80 to the Squid port. I would like to do the same for SSL, but every guide I see mentions setting up keys on the Squid server. I do not want Squid to actually decrypt the SSL traffic and then establish its own connection with the server; rather, I would like Squid to simply forward the SSL traffic as-is. The only thing I want is to be able to check the SSL request for any offending IPs and drop the packets if the destination is one of them.
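
    Since the stated goal is filtering by destination IP rather than inspecting the TLS payload, this check can be expressed directly in netfilter, without Squid touching port 443 at all. A sketch, with a placeholder address from the documentation range:

        # drop HTTPS traffic destined for a known-offending address before it is forwarded
        iptables -A FORWARD -p tcp --dport 443 -d 203.0.113.45 -j DROP

    For a longer list of addresses, the same rule is usually pointed at an ipset instead of being repeated per IP.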

    Read the article

  • Cannot chown my own files from NFS

    - by valpa
    We have an NFS server providing home directories for many accounts, which are provided by a NIS server. I have accounts A and B. As user A, I tried to copy with "cp -a /home/B/somedir ~/". I then found that in /home/A/somedir all files are owned by user A. If I then do "chown -R B:B somedir", I get an "Operation not permitted" error. I am user A; "cp -a" didn't preserve the original owner (B), and now I cannot chown my own files. Any suggestion? I worked around the issue with "chmod 777 /home/A", "su - B", "cp -a somedir /home/A/", then "su - A" and "chmod 755 /home/A", but that is not a good solution.

    Read the article

  • Amazon EC2- many micro-instances vs single small/medium instance

    - by shashankaholic
    I have a chat application using a stack of Openfire, Tomcat 6 and MySQL. Currently I have all of these servers installed on a single Linux micro instance (613 MB memory). Even with a low user base of 10-20 I am seeing CPU overload, which is quite expected here. As I am new to Amazon EC2, can somebody suggest how to scale my architecture according to traffic? Should I use separate micro instances for each app server (Openfire, MySQL, Tomcat 6), or a single small or medium instance for the whole server stack? Some factors in context: heavy reliance on MySQL; high memory usage due to file transfer; the web application interacts with other Amazon services such as S3 and SES.

    Read the article

  • Application outside document root in Apache/CentOS

    - by liz
    I have a PHP application running in Apache on CentOS 6. The document root points to a specific app folder: /var/www/my-project/app. I'm trying to get phpMyAdmin running on the same server, but I don't want to put it in the application folder; instead I'd like to put it at /var/www/apps/phpmyadmin. I'm using a subdomain for the server. What's the easiest way for me to get access to phpMyAdmin? Another subdomain? A sub-subdomain? A redirect from a folder?
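
    One low-friction option (a sketch, not the only way) is to alias the phpMyAdmin directory into the existing virtual host rather than creating another subdomain; on CentOS 6 with Apache 2.2 that would look like:

        Alias /phpmyadmin /var/www/apps/phpmyadmin
        <Directory /var/www/apps/phpmyadmin>
            Order allow,deny
            Allow from all
        </Directory>

    A separate subdomain with its own VirtualHost and DocumentRoot works just as well; the Alias simply avoids touching DNS.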

    Read the article

  • Tomcat access logs - are failed requests included?

    - by Maxim Eliseev
    We have a RESTful web service (Java, hosted in Tomcat on Ubuntu on Amazon EC2). From time to time it fails (not every week). When it fails, Java CPU consumption goes to 100% and it takes all available memory; it does not recover by itself, and I have to restart the server. There is nothing suspicious in the Tomcat access logs. My guess is that one of our users submitted a very "heavy" request which brought the server down. Is it possible that this request is not in the Tomcat logs because it never finished?
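
    One detail worth noting: Tomcat's standard AccessLogValve writes its entry when a request completes, so a request that never finishes (or is still running when the JVM is killed) will not appear in the access log. For reference, a sketch of a valve configuration in server.xml that also records processing time, which makes unusually slow-but-finished requests stand out:

        <Valve className="org.apache.catalina.valves.AccessLogValve"
               directory="logs" prefix="access_log." suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b %D" />

    Here %D is the time taken to process the request, in milliseconds.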

    Read the article

  • LPR command won't recognize CUPS printer

    - by Datapimp23
    I have a CUPS server with one shared printer configured on it. It prints test pages without problems.

        printername (Idle, Accepting Jobs, Shared)
        Description: desc
        Location:
        Driver: Zebra ZPL Label Printer (grayscale, 2-sided printing)
        Connection: socket://172.20.50.26
        Defaults: job-sheets=none, none media=oe_w288h432_4x6in sides=one-sided

    This is the output from lpstat -t; it shows that the printer is idle and accepting requests:

        admin@SERVER:~$ lpstat -t
        scheduler is running
        no system default destination
        device for printername: socket://172.20.50.26
        printername accepting requests since Thu 26 Jan 2012 01:29:35 PM CET
        printer printername is idle. enabled since Thu 26 Jan 2012 01:29:35 PM CET

    Now when I send a print job to it via an lpr command, it won't recognize the printer:

        /usr/bin/lpr -P printername test.pdf

    Result:

        lpr: ttn_seg_zebra1: unknown printer

    What am I missing here?
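
    A couple of client-side checks worth sketching (my own suggestions; the server name is a placeholder). The error names ttn_seg_zebra1 rather than the destination passed with -P, which suggests the lpr being invoked is resolving a different default destination or talking to a different server than the CUPS box above:

        lpstat -p -d                                   # printers and default destination as this client sees them
        lpstat -h cupsserver:631 -p                    # ask the CUPS server itself
        lp -h cupsserver:631 -d printername test.pdf   # submit directly to that server

    Checking whether /usr/bin/lpr is actually the CUPS client, rather than an LPRng/BSD lpr reading names from /etc/printcap, would also be worthwhile.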

    Read the article

  • curl XPUT returning HTTP 500 error message

    - by pradeepchhetri
    I have added the following changes to my nginx configuration:

        server {
            listen 8080;
            root /usr/share/nginx/www;
            client_body_temp_path /tmp/;
            dav_methods PUT DELETE MKCOL COPY MOVE;
            create_full_put_path on;
            dav_access user:rw group:rw all:rw;
        }

    My nginx is also built with --with-http_dav_module. But when I run the command:

        $ curl -XPUT http://172.16.31.127:8080/test.html -d 'test'

    I get a 500 Internal Server Error. Can anyone help me solve this?
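
    Two things worth sketching here (suggestions of mine, not from the post): nginx's error log usually states the exact reason for the 500 (often permissions on the root or on client_body_temp_path), and curl's -T option is the more direct way to PUT a file body:

        # see why nginx returned 500 (path assumes the default error log location)
        tail -n 20 /var/log/nginx/error.log

        # upload a file with an explicit PUT body
        curl -T test.html http://172.16.31.127:8080/test.html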

    Read the article

  • Configuring Bind9 on ubuntu

    - by Jerry
    I am trying to configure a name server on Ubuntu, just for learning, and I have followed this tutorial. After configuring BIND9 I restarted it and it runs fine. I have no registered domain name or public IP, so I used a random domain name (khalidiitdu.com) that is not registered. When I dig khalidiitdu.com, it shows status: NXDOMAIN, and nslookup shows "** server can't find khalidiitdu.com: NXDOMAIN". Now the question is: is a registered domain mandatory to configure BIND9 within a LAN? If not, please suggest alternative approaches. Thanks.
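
    For what it's worth, a registered domain is not required for a LAN-only setup as long as the clients send their queries to this BIND server rather than to a public resolver; an NXDOMAIN usually means the query went to a resolver that has never heard of the zone. As an assumption about what the tutorial's zone declaration would look like (file names here are illustrative):

        // /etc/bind/named.conf.local
        zone "khalidiitdu.com" {
            type master;
            file "/etc/bind/db.khalidiitdu.com";
        };

    Querying the server directly, for example with dig @127.0.0.1 khalidiitdu.com run on the server itself, separates "the zone isn't loaded" from "the client is asking the wrong resolver".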

    Read the article

  • SFTP: How to keep data out of the DMZ

    - by ChronoFish
    We are investigating solutions to the following problem: we have external (Internet) users who need access to sensitive information. We could offer it to them via SFTP, which would provide a secure transport, but we don't want to keep the data on the server itself, since it would then reside in the DMZ. Is there an SFTP server with "copy on access", such that if the box in the DMZ were compromised, no actual data would reside on it? I am envisioning an SFTP proxy or SFTP pass-through. Does such a product currently exist?

    Read the article

  • How to access a site in IIS with no DNS mapping

    - by CiccioMiami
    In IIS 7.5 on Windows Server 2008 R2 I have several websites with no DNS address assigned. Take, for instance, the site named mySite in IIS. This site has the standard binding with no host name. Suppose my server's IP address is, for instance, 101.22.23.01. It therefore seemed logical to me that to access the website I should type [IP_address]/[sitename] into the browser's address bar, in this case 101.22.23.01/mySite, but it does not work. Should I specify something else in the bindings?
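
    As background for a sketch (these are assumptions about the intent, and mysite.test is a hypothetical host name): the IIS site name is an administrative label and never appears in the URL. With no host name in the binding, a site is selected only by IP address and port, so the options are to give each site its own port or IP, or to add a host-name binding and map that name to the server on the client:

        rem on the server: add a host-name binding to the site (elevated prompt)
        %windir%\system32\inetsrv\appcmd.exe set site /site.name:"mySite" /+bindings.[protocol='http',bindingInformation='*:80:mysite.test']

        rem on a client without DNS: point the name at the server in the hosts file
        echo 101.22.23.01  mysite.test >> %windir%\System32\drivers\etc\hosts

    Browsing to http://mysite.test/ should then reach that site without any DNS record existing.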

    Read the article

  • Determine process using a port, without sudo

    - by pat
    I'd like to find out which process (in particular, the process ID) is using a given port. The one catch is that I don't want to use sudo, nor am I logged in as root. The processes I want this to work for are run by the same user I'm querying as, so I would have thought this was simple. Neither lsof nor netstat will tell me the process ID unless I run them with sudo, although they do tell me that the port is in use. As some extra context: I have various apps all connecting via SSH to a server I manage and creating reverse port forwards. Once those are set up, my server does some processing using the forwarded port, and then the connection can be killed. If I can map specific ports (each app has its own) to processes, this becomes a simple script. Any suggestions? This is on an Ubuntu box, by the way, but I'm guessing any solution will be standard across most Linux distros.
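
    A sketch of the usual candidates, with a placeholder port; whether the PID is visible without root varies between distributions and kernel settings, but for sockets owned by the querying user these normally work:

        # lsof narrowed to one listening TCP port, run as the owning user
        lsof -nP -iTCP:8022 -sTCP:LISTEN

        # ss reports the owning process for sockets belonging to the current user
        ss -ltnp 'sport = :8022'

    If neither shows the PID, the unprivileged fallback is to walk /proc/[pid]/fd for your own processes and match the socket inode listed in /proc/net/tcp.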

    Read the article

  • how to forward IP request to a specific port

    - by Jeremy Talus
    I have two servers. The first (SRV01) runs BIND and other web apps; the second (SRV02) runs two Minecraft servers. In BIND I have two A records for the Minecraft servers: s1.domain.tld A SRV02IP and s2.domain.tld A SRV02IP. The two Minecraft servers run on two different ports, 25565 and 25566. I want requests to s1.domain.tld:25565 to go to SRV02IP:25565 and requests to s2.domain.tld:25565 to go to SRV02IP:25566. I think I need to do this with iptables on SRV02. I have looked at some topics about iptables but found nothing that fits my case. Could you help me? Regards.
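
    One caveat before any sketch: iptables operates on IP addresses and ports only and never sees the host name, so with s1 and s2 resolving to the same address it cannot tell the two names apart on the same port; the two Minecraft servers have to be distinguished by port (or by separate IPs). Purely to illustrate the redirect syntax on SRV02, with 25567 as a hypothetical extra listening port:

        # on SRV02: redirect connections arriving on port 25567 to the second server on 25566
        iptables -t nat -A PREROUTING -p tcp --dport 25567 -j REDIRECT --to-ports 25566

    With that in place, s2.domain.tld:25567 would reach the server listening on 25566, while s1.domain.tld:25565 continues to hit the first server directly.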

    Read the article

  • Samba does not reload user group members

    - by xato
    I am running a simple Samba setup where users connect to a share containing folders for specific user groups. The folders are chmod 2770, so only users in the correct group can read/write in them. The problem is that if I change group memberships (i.e. remove a user from a group or add a user to a group; the changes are in sync between clients and server!), Samba does not automatically reload the group memberships for the user, so they can still write to folders of groups they are no longer a member of, and so on. I either have to reconnect to the share or restart Samba to apply the changes. Is there any way to prevent group caching and/or enable group-membership reload in Samba? My smb.conf: https://gist.github.com/anonymous/ca7c10a3b3e2168d7a03

    Read the article

  • rdiff-backup failed due to target machine being down, but is unkillable

    - by Markus
    My backup script was invoked by cron and uses rdiff-backup to back up the user files onto a target system on the network. That target computer went down at some point, yet still appeared as mounted on the server. rdiff-backup didn't do anything, but still appears as a process, and kill-ing it doesn't stop it. Similarly, running rdiff-backup for other directories works but doesn't exit properly and remains in the process list. Is there anything short of rebooting the server that I can try?

    Read the article

  • Puzzled about PHP file permission and shared webhosting - what are some explanations?

    - by extrakun
    I have this issue across different web hosts, in particular with upload scripts that can only write to a folder if it has 777 permissions (which is risky). On the test server (a different host), 755 works fine. On another host, log files generated by PHP file functions sometimes cannot be written to, while other files are mysteriously unaffected (for instance, the log files for the rest of the week are 655 and work well, but just today's log file doesn't work unless it is set to 777). I am more of an application developer than a server backend expert, so these behaviours puzzle me to no end. Why are they happening? What can be done?
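
    What usually decides this is which account PHP runs as on each host (a per-user account under suPHP/FastCGI versus a shared web-server account under mod_php) compared with who owns the directory. A quick check to sketch, with placeholder paths:

        # which account is the web server / PHP running as?
        ps -eo user,comm | egrep 'httpd|apache|php' | sort -u

        # who owns the upload and log directories, and with what permissions?
        ls -ld /path/to/uploads /path/to/logs

    When the PHP process user matches the owner, 755 on directories is enough; when it is a different account, only the "other" permission bits apply, which is why some hosts appear to demand 777.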

    Read the article

  • php crashes when I try to add extensions

    - by Christy
    Hi all, I am trying to install phpMyAdmin on my web server running Windows 2008 and IIS 7. PHP is running fine and several sites rely on it. When installing phpMyAdmin, it shows errors at the bottom saying that mcrypt and mbstring are not properly installed. When I try to add php_mcrypt.dll and/or php_mbstring.dll to the php.ini file (I verified the location and the right file through phpinfo), PHP crashes: I get a 500 error on all the websites and an error on the server saying FastCGI has failed. Does anyone know how to fix this or why it is happening? Shouldn't I be able to add extensions? The DLL files are in the extension folder, which is referenced in php.ini, and other extensions (installed previously) work as expected. Other info: PHP version 5.2.8, PDO driver for MySQL version 5.0.51a. Thanks in advance!
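
    For reference, a sketch of the php.ini lines involved for a PHP 5.2 build on Windows; the extension_dir path here is an assumption and must match what phpinfo() reports:

        ; php.ini (PHP 5.2.x on Windows)
        extension_dir = "C:\php\ext"
        extension = php_mbstring.dll
        extension = php_mcrypt.dll

    A frequent cause of FastCGI dying the moment an extension line is added is a DLL taken from a different PHP version or a mismatched thread-safety (TS vs NTS) build, so it is worth confirming the two DLLs came from the exact 5.2.8 package in use.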

    Read the article
