Search Results

Search found 120660 results on 4827 pages for 'http server'.


  • Blackberry Access to Powered Down Exchange

    - by Sam Cogan
    I work with a company (it's more of a charity really) that has a single Exchange server and a few Blackberry users who download their email via POP3 or IMAP. They are in a developing country, so every night they turn off the Exchange server to save power. However, they now want the Blackberry users to be able to get mail at night. They have a Linux server (a rented VPS), so they are considering having the mail delivered there and then pulling it into Exchange via a POP connector. That way the BES users (who will now be pulling their POP email from the Linux server) can still get their mail at night. Can anyone think of a better solution to this problem that I may have missed? Unfortunately there is no convincing the company to leave the machine on overnight.

    Read the article

  • What is a proper MySql replication configuration for frequent db updates and rare selects?

    - by serg555
    We currently have one master db on its own server and a slave db on the app server. The app executes very frequent but light updates (like increasing counters), and occasional (once every few minutes) heavy selects, which are the most important part of the app. When the app was connected only to the master db there were no performance issues. After introducing the slave db, the CPU load average on the app server increased to about 6-10 during the heavy select periods (from 3-4 before). When the server isn't running those frequent updates, select performance seems to stay within limits. So I have a feeling that those updates are what is causing the performance drop (these frequent updates are also not critical, so it would be fine if the slave fell out of sync with the master for a while). What would be a good db replication setup for this kind of app? What replication parameters could we tweak? Thanks.
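
    One direction worth sketching, purely as illustration: MySQL lets a replica skip given tables and relax durability. This is a hedged my.cnf fragment for the slave; the db/table name app_db.counters is a hypothetical placeholder, not taken from the question.

        [mysqld]
        replicate-ignore-table = app_db.counters   # don't replay the hot counter updates
        innodb_flush_log_at_trx_commit = 2         # relax fsync-per-commit on the replica
        sync_binlog = 0                            # the replica need not fsync a binlog

    Whether filtering is acceptable depends on whether the heavy selects ever read the counter tables on the slave.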

    Read the article

  • localhost works 127.0.0.1 does not IIS

    - by NickatUship
    Very weird problem on IIS, never had it before: localhost works, but 127.0.0.1 does not. localhost pings to 127.0.0.1. www.mydomain.com also pings to that IP, which is set up in the hosts file, but that doesn't work locally either. I've run ipconfig /flushdns without success. I've even restarted the server. Another server set up the exact same way works fine. Any ideas? To be clear, I'm accessing the URLs in IE like this: http://localhost, http://127.0.0.1, http://www.mydomain.com. I can telnet to port 80 without a problem for all three.
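
    A hedged diagnostic sketch: since telnet connects on all three, something is listening, which points at per-site bindings rather than the TCP layer - a site bound to the host name "localhost" answers http://localhost but not http://127.0.0.1. These are standard Windows commands; nothing here is taken from the question.

        rem Compare the bindings (IP, port, host header) against the working server;
        rem a binding of *:80:localhost matches only the literal name "localhost".
        %windir%\system32\inetsrv\appcmd list sites

        rem Also worth a glance: the addresses http.sys is told to listen on.
        rem An empty list means "all addresses".
        netsh http show iplisten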

    Read the article

  • How to map email addresses on subdomains

    - by Glen Little
    Is it possible to create email addresses like these: [email protected], [email protected], [email protected], and have them all handled by one mail server as three different mailboxes? (Many examples I've seen talk about directing mail to [email protected] into the same mailbox as [email protected] - but this is not what I'm looking for.) I haven't specified the server technology being used because I'm wondering if this is generally possible. If you know that server X can do this, please mention it in your answer! Is it correct that MX records can be set to direct email for all subdomains (*.mydomain.com) to one mail server? Is that still true if there are also web sites at those subdomains (using A records)? Thanks!
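
    To make the MX half concrete, a minimal zone-file sketch; the subdomain labels and IPs are hypothetical placeholders, since the real addresses are redacted above. An MX record and an A record coexist on the same name, so web sites at the subdomains don't interfere.

        ; every subdomain gets an MX pointing at the same mail host
        a.mydomain.com.     IN  MX  10  mail.mydomain.com.
        b.mydomain.com.     IN  MX  10  mail.mydomain.com.
        a.mydomain.com.     IN  A   203.0.113.20    ; web site on the same name
        mail.mydomain.com.  IN  A   203.0.113.10

    Whether that one server then treats user@a.mydomain.com and user@b.mydomain.com as distinct mailboxes is up to its virtual-domain support; Postfix, for one, can key mailboxes on the full address via virtual_mailbox_domains.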

    Read the article

  • Nginx proxy SOAP request

    - by user2606078
    I'm looking for the right way to accomplish the following: there is an app that has URL(1) hardcoded, with no way/time to change it in the source:

        http://dev.server.com/example.com/admin/soap/action/index?pr=1

    and it should use (and get its response from) URL(2):

        http://example.com/admin/soap/action/index?pr=1

    What should I configure in the Nginx conf (Apache is used as backup) on dev.server.com so that when the app asks for URL(1) it gets the answer from URL(2)? On dev.server.com, Apache has the virtual host dev.server.com enabled. I've also tried to proxy in Apache instead of nginx by using ProxyPass:

        <Directory /var/www/dev>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order allow,deny
            allow from all
        </Directory>
        <Location /example.com/admin/soap>
            ProxyPass http://example.com/admin/soap
        </Location>
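
    For the nginx side, a minimal sketch of the equivalent proxy rule, assuming nothing beyond the two URLs in the question:

        location /example.com/admin/soap/ {
            proxy_pass       http://example.com/admin/soap/;
            proxy_set_header Host example.com;   # so the upstream virtual host matches
        }

    Because proxy_pass carries a URI part here, nginx swaps the matched prefix for it, so the rest of the path and the query string pass through unchanged.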

    Read the article

  • FTP from batch file

    - by Buzkie
    I'm trying to use a batch file to download a package off my FTP server.

        echo username >ftp.txt
        echo >>ftp.txt
        echo cd directory >>ftp.txt
        echo get filename >>ftp.txt
        ftp -s:ftp.txt server.com

    The server is set to allow anonymous logins on username, but when I run the script I get an error:

        331 Password required for username

    If there is any other useful information, let me know. -Alex
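
    A hedged guess at a working variant: a bare echo prints "ECHO is off." rather than a blank line, so the script never supplies a usable password. Classic anonymous FTP expects the literal user anonymous with an email-style password - the address below is a placeholder. (The redirects are written without a space before > so the script lines carry no trailing blanks.)

        echo anonymous>ftp.txt
        echo guest@example.com>>ftp.txt
        echo cd directory>>ftp.txt
        echo get filename>>ftp.txt
        echo bye>>ftp.txt
        ftp -s:ftp.txt server.com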

    Read the article

  • Is it possible to use bittorrent for a fileserver

    - by sris
    I would like to set up a file server that is searchable, preferably via the web. I'm wondering if it would be possible to achieve this using the BitTorrent protocol and have a single client sharing every torrent on the server. I guess I could use an available tracker solution for the web interface or write one myself. My main concern is whether there are any limits to the number of torrents a single client can share, since this may potentially be 10k torrents. The number of downloading clients is very small, only myself and my relatives. The idea is to have a single place to host everything from vacation photos to musical creations. Are there any other options for this kind of file server? It should also be easy to upload files to the server.

    Read the article

  • apt-get commands pausing at 'Waiting for headers'

    - by Matt
    I have a VM running Ubuntu Server 9.10 with a basic web server setup. Whenever I run an apt operation it pauses for around a minute at 'Waiting for headers...'. It eventually clears and continues as normal, but it is a bit of an annoyance. Everything else on the server seems to run fine. Any ideas?
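
    Two cheap things to rule out, offered as a hedged sketch - 'Waiting for headers' is apt waiting on an HTTP response, so a forgotten proxy setting or a corrupt cached list are the usual suspects:

        grep -ri proxy /etc/apt/             # any stale Acquire::http::Proxy entry?
        sudo rm -rf /var/lib/apt/lists/*     # throw away the cached package lists
        sudo apt-get update                  # and rebuild them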

    Read the article

  • How can we implement network search for Windows AND OS X clients?

    - by michielvoo
    We have a network with Windows 7 and OS X (10.5 and 10.6) computers. Our servers run Windows Server 2003 (1 Small Business Server, 2 Standard). We need to be able to search through about 15,000-30,000 documents in our archives. The best solution would be if users could search directly from the Start menu (on Windows 7) or the Spotlight menu (on OS X 10.5 and 10.6). Also good would be if users could search directly from the search bar in their browsers, or by first visiting a site with a search form. If users search through the browser, it's important that they are able to open a file in the search results just by clicking on it. I have tested Microsoft Search Server Express, but it doesn't meet the requirements (no OS X support, and results in the browser can't be opened by clicking in anything but Internet Explorer). I have looked at Spotlight Server, but that only supports OS X. Thanks!

    Read the article

  • IT merger - self-sufficient site with domain controller VS thin clients outpost with access to termi

    - by imagodei
    SITUATION: A larger company acquires a smaller one and the IT infrastructure has to be merged. There are no immediate plans to change the current size or role of the smaller company - the offices and production remain. It has a Win 2003 SBS domain server, a Win 2000 file server, a Linux server for SVN and an internal wiki, 2 or 3 production machines, and an LTO backup solution. The servers are approx. 5 years old. The network equipment is Cisco (switches, wireless, ASA). Mail is a hosted Exchange solution. There are approx. 35 desktops and laptops in the company.

    IT infrastructure unification - there are 2 merging proposals:

    1.) Replace the old servers, install a Win Server 2008 domain controller, and set up either a subdomain or a domain trust to the larger company. The file server and other servers remain local, and synchronization is set up to a centralized location at the larger company. Similarly with the backup - it remains local and, if needed, is replicated to a centralized location. Licensing is managed by the smaller company.

    2.) Move all servers to a centralized location at the larger company. As many desktop machines as possible are replaced by thin clients; the actual machines are virtualized and hosted on a terminal server at the same central location, using Citrix solutions. Only a router and a site-to-site VPN connection remain at the smaller company. A backup internet line is needed to ensure near-100% availability. Licensing is mainly managed by the larger company; only specialized software for the PCs that will not be virtualized is managed by the smaller company.

    I'd like to ask you to discuss both solutions a bit. In your opinion, which is better from the operational point of view? Which is more reliable and cheaper in the long run? Easier to manage from the system administrator's point of view? Easier on the budget and easier to maintain from the IT department's point of view? Does anybody have experience with the second option, and how does it perform in a production environment? Pros and cons of both? Your input will be of great significance to me. Thank you very much!

    Read the article

  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to, and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (e.g. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 file server, so to back up anything to tape, it has to be funneled through the file server. With the increase that we have planned, I don't think that funneling everything through the file server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas:

    1) Doing HDD instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so the media don't need years and years of shelf life, and I'm wondering if something HDD-based would be better.

    2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our file server is a 12-disk RAID 6 setup; I was thinking something like that, but with no RAID involved - all disks stand-alone so they can be used/installed/removed on an individual basis. Does any such thing exist?

    Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

    Read the article

  • Create a redirect for IIS7

    - by justSteve
    When I created my (Server 2008/IIS 7) server's SSL certificate I mistakenly defined the common name as 'myDomain.com' instead of 'www.myDomain.com', meaning that https://www.myDomain.com comes up untrusted. I understand that I can create a server redirect to correct the problem, but I don't see where/how to do that from IIS's server manager. thx
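
    IIS Manager only shows a rewrite/redirect UI once the URL Rewrite module is installed (it isn't part of a stock IIS 7 setup). A hedged web.config sketch that sends www requests to the name the certificate actually covers:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="www to bare domain" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^www\.mydomain\.com$" />
                </conditions>
                <action type="Redirect" url="https://mydomain.com/{R:1}"
                        redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    One caveat: on an https request the certificate warning appears before any redirect is sent, so this only spares plain-http visitors; the clean fix is reissuing the certificate with the right common name (or a www subject alternative name).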

    Read the article

  • Combine several locations with regex in nginx

    - by AlexAtNet
    I have a dynamic number of Joomla installations in subfolders of the domain. For example:

        http://site/joomla_1/
        http://site/joomla_2/
        http://site/joomla_3/
        ...

    Currently I have the following config that works:

        index index.php;
        location / {
            index index.php index.html index.htm;
        }
        location /joomla_1/ {
            try_files $uri $uri/ /joomla_1/index.php?q=$uri&$args;
        }
        location /joomla_2/ {
            try_files $uri $uri/ /joomla_2/index.php?q=$uri&$args;
        }
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm/joomla.sock;
            ...
        }

    I'm trying to combine the joomla_N rules into one:

        location ~ ^/(joomla_[^/]+)/ {
            try_files $uri $uri/ /$1/index.php?q=$uri&$args;
        }

    but the server starts to return index.php as-is (it does not call php-fpm). It looks like nginx stops processing the regex rules after the first match. Is there any way to combine these rules with something like a regex?
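
    The observed behaviour matches nginx's matching rules: among regex locations the first match in file order wins, so once the joomla_N rule is a regex it also captures requests for .php files. A hedged sketch of the simplest fix - declaring the PHP handler first:

        # regex locations are tried in written order; with the PHP handler
        # first, *.php never falls into the catch-all Joomla rule
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm/joomla.sock;
            # ... the remaining fastcgi_* directives from the original config
        }

        location ~ ^/(joomla_[^/]+)/ {
            try_files $uri $uri/ /$1/index.php?q=$uri&$args;
        }

    The try_files fallback performs an internal redirect to /joomla_N/index.php, which restarts location matching and now lands in the PHP block.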

    Read the article

  • Nginx not working properly on subdomains

    - by javipas
    I've been trying to set up a SugarCRM instance. I've got a domain that has its main site on one server (www.domain.com) and I've created a subdomain (sugar.domain.com), but I want this subdomain to be hosted on another server. This second server has nginx installed, and there's a working WordPress blog there on a virtualhost, so I need to set up a second site. To do this I've created the directory structure, and I've created a /etc/nginx/sites-enabled/sugar.domain.com configuration file containing the following:

        server {
            listen 80;
            server_name sugar.domain.com *.domain.com;

            access_log /var/www/sugar/log/access.log;
            error_log /var/www/sugar/log/error.log info;

            location / {
                root /var/www/sugar;
                index index.php;
            }

            location ~ .php$ {
                fastcgi_split_path_info ^(.+\.php)(.*)$;
                fastcgi_pass backend;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/sugar/$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_param QUERY_STRING $query_string;
                fastcgi_param REQUEST_METHOD $request_method;
                fastcgi_param CONTENT_TYPE $content_type;
                fastcgi_param CONTENT_LENGTH $content_length;
                fastcgi_intercept_errors on;
                fastcgi_ignore_client_abort on;
                fastcgi_read_timeout 180;
            }

            ## Disable viewing .htaccess & .htpassword
            location ~ /\.ht {
                deny all;
            }
        }

        upstream backend {
            server 127.0.0.1:9000;
        }

    As far as I know, I need the *.domain.com parameter on the server_name directive, but something is breaking here: I either get a 403 Forbidden error, or I get PHP code (I can read the PHP file's code in the browser, like normal text) that somehow is not executed. I've tried setting permissions to 755 inside the /var/www/sugar/ directory, and I've also set the owner:group with chown -R www-data:www-data /var/www/sugar/. The thing is, I don't know if my mistake is in the nginx site configuration, in my folder permissions, or somewhere else :( Could it be because the main domain (www.domain.com) is hosted on another server? Do they necessarily have to be together?
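
    A hedged debugging step: before touching permissions, confirm which server block actually answers for the subdomain, bypassing DNS entirely (the IP below stands in for the second server):

        curl -i -H "Host: sugar.domain.com" http://203.0.113.30/index.php

    Raw PHP in the response means this block matched but the request never reached the PHP location; a WordPress page means it fell through to the other vhost. Separately, the dot in "location ~ .php$" is unescaped and so matches any character; "location ~ \.php$" is the conventional form.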

    Read the article

  • Debian PPTP VPN can't get out to the internet

    - by phidah
    I've set up a Debian PPTP server which seems to be running fine. I can successfully access local services on the server when connected to the VPN, but I cannot get out of the LAN, i.e. I cannot reach any server out on the internet. I guess this is some kind of routing issue that won't allow me to use the server as a gateway? I couldn't really find any articles or similar that could tell me how to set this up properly. Thanks in advance.
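
    A minimal sketch of the pieces most often missing, assuming VPN clients arrive on ppp interfaces and eth0 faces the internet - the interface name is an assumption, not from the question:

        # let the kernel route between interfaces
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # (persistently: net.ipv4.ip_forward=1 in /etc/sysctl.conf)

        # NAT the VPN clients' traffic out of the public interface
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE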

    Read the article

  • Using nginx as a reverse proxy for tomcat results in new jsessionids for every ssl request

    - by user439407
    I am using nginx as a reverse proxy in front of a Tomcat setup, and for the MOST part everything works fine. The only issue I am having is that every request to an https address results in a new JSESSIONID being created (this doesn't happen over plain http). Here is the relevant part of the nginx configuration:

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto https;
            proxy_redirect off;
            proxy_connect_timeout 240;
            proxy_send_timeout 240;
            proxy_read_timeout 240;
            proxy_pass http://localhost:8080;
        }

    Any idea why I am constantly generating new jsessionids?
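
    One plausible cause, offered as a hedged sketch: Tomcat ignores X-Forwarded-Proto out of the box, so it believes the connection is plain http, and anything it derives from the scheme (secure cookie flags, redirects) can end up mismatched against the https origin - at which point the browser stops returning the cookie and every request looks session-less. Assuming a reasonably recent Tomcat (6.0.24+), the RemoteIpValve in server.xml (inside the Engine or Host element) teaches it to trust the proxy's headers:

        <Valve className="org.apache.catalina.valves.RemoteIpValve"
               remoteIpHeader="X-Forwarded-For"
               protocolHeader="X-Forwarded-Proto"
               protocolHeaderHttpsValue="https" />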

    Read the article

  • If I let Google handle my emails for my domain, my Wordpress site won't send out emails anymore

    - by Fulvio
    Since I decided to let Google handle all email for my domain, while the domain is hosted on a 3rd-party server, emails sent out by a WordPress installation no longer work. My supposition is that since all email is being routed to Google, my specific account on that server for that domain is unable to send out emails. I definitely wish to keep using Google services for handling my email, since it comes with all the advantages of a Google account, but I need my WordPress installation to send out administrative emails. I run my server with cPanel. How do I configure that specific account and/or WordPress so it can still send out emails? I don't need people to answer the emails sent from the server (though I might eventually set a reply-to address). Thanks
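
    One way to bypass the local MTA entirely, sketched with placeholder credentials: WordPress sends mail through PHPMailer, and the phpmailer_init hook lets a small plugin (or the theme's functions.php) point it at Google's SMTP servers instead.

        // hypothetical snippet - the account and password are placeholders
        add_action('phpmailer_init', function ($phpmailer) {
            $phpmailer->isSMTP();                      // SMTP instead of the local MTA
            $phpmailer->Host       = 'smtp.gmail.com';
            $phpmailer->Port       = 587;
            $phpmailer->SMTPSecure = 'tls';
            $phpmailer->SMTPAuth   = true;
            $phpmailer->Username   = 'admin@mydomain.com';
            $phpmailer->Password   = 'account-password';
        });

    In cPanel it is also worth checking that the domain's Email Routing is set to use the remote mail exchanger once Google handles delivery, so locally generated mail to the domain isn't delivered to the server's own mailboxes.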

    Read the article

  • NDR for meeting requests

    - by Adam
    We've got a mailbox for each department (e.g. [email protected] and [email protected]) and everyone in that department has access to it; access is granted using the Exchange Management Console. If I send a calendar invite to [email protected], I get an undeliverable report:

        Delivery has failed to these recipients or groups:

        User_A
        User_B
        User_C

        The e-mail address you entered couldn't be found. Check the address
        and try resending the message. If the problem continues, please
        contact your helpdesk.

    (The same message is repeated for each of the three users.) The users are no longer in AD or Exchange, but we cannot find any mention of them in any delegates or permissions anywhere. We only started to get this problem AFTER we upgraded our DCs from Windows Server 2003 to Windows Server 2008, and our Exchange server from Windows Server 2003 with Exchange 2003 to Windows Server 2008 with Exchange 2010.

    Read the article

  • Choose IP Address for Process to use on launch [duplicate]

    - by user1436026
    This question already has an answer here: How to set which IP to use for a HTTP request? (2 answers)

    Say my server has the following IP addresses:

        123.456.78.0
        123.467.79.1
        123.456.77.1
        123.456.68.0
        etc...

    Say I want to launch a process, say wget, from the command line. Normally I would do something like this:

        wget http://www.google.com/

    except that I would like to choose the IP address that my server uses to make this request. Is there a way to use wget, or to launch another command, with a choice of one of my own IP addresses, like the following pseudo-command:

        with-ip 123.456.68.0 wget http://www.google.com/
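
    For wget this exists as a built-in flag, so a sketch is straightforward (the addresses are the question's own placeholders):

        # bind the outgoing connection to a chosen local address
        wget --bind-address=123.456.68.0 http://www.google.com/

        # curl offers the same under a different name
        curl --interface 123.456.68.0 http://www.google.com/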

    Read the article

  • Problems restoring old backups in NetBackup 6.5

    - by gharper
    I had a server that was decommissioned and replaced last year, and since the server was no longer in use, I deleted its client and backup policy from the NetBackup Admin Console shortly afterwards. I recently got a request to restore a file from the old server, but when I specify the source client for the restore, I get an error message saying:

        WARNING: server (backupserver) does not contain any backups for client
        (oldserver) using the specified policy type (Standard) as requested by
        client (backupserver). [Ok]

    In addition to that error, I can't seem to run a Client Backup report on the old client any more to determine what tapes I need to recall in order to re-index and restore the files. My questions: Does deleting the client somehow remove NetBackup's ability to ever restore files from the old system, even if the backups have a retention period of infinity? Is there a way to restore the file from tape, assuming I can figure out which tape I need?
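
    A hedged pointer for the "which tape" half: NetBackup's image catalog lives apart from the client definition, so the command-line tools may still list the old images even where the GUI reports refuse. A sketch with placeholder names and dates:

        # on the master server: list catalogued images for the old client
        /usr/openv/netbackup/bin/admincmd/bpimagelist -client oldserver \
            -d 01/01/2009 -e 12/31/2009 -L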

    Read the article

  • Understanding Unix Permissions (w/ ACL)

    - by Dr. DOT
    I am trying to set permissions on my server properly. Currently I have a number of directories and files chmod'd to 0777, but I am not comfortable with them being this way. So at the advice of a Server Fault specialist, I had my hosting provider install ACLs on my shared virtual server. When I FTP to the server as my FTP user account "abc", I can do everything I need to do (and rightfully so) because all my dirs and files are owned by "abc", the group is "abc", and the first octet is set to 7 (rwx). That much I get. But here's where it gets dark gray for me. PHP runs as user "nobody", so when someone browses one of my web pages that either ends in .php or has some embedded PHP, I assume the last octet controls the access. Because all my dirs and files are owned by "abc" and assigned to group "abc", if the last octet were a 4 (r--) then the server would let the browser read the file. If it were a 6 (rw-), then the server would also let the browser write to the file or directory, correct? What if the web document does not end in .php or does not have any PHP embedded - what is the user then? How can I use ACLs so that I don't have to set the permission to 6 (rw-) or even 7 (rwx)? [not sure what execute does or means] I'm just looking for some sort of policy settings to best lock down my dirs and files while allowing my PHP scripts to do uploads and write to files (so my users don't call me to say "permission denied"). Ok, thanks to anyone out there willing to lend me a hand. It is greatly appreciated.
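
    A minimal sketch of the ACL route, assuming PHP really does run as "nobody" and that uploads land in one directory - the path is a placeholder:

        chmod 750 /home/abc/public_html/uploads                   # "others" stay locked out
        setfacl -m u:nobody:rwx /home/abc/public_html/uploads     # PHP user gets in via ACL
        setfacl -d -m u:nobody:rwx /home/abc/public_html/uploads  # default ACL: new files inherit it
        getfacl /home/abc/public_html/uploads                     # verify

    On a directory, execute means "may traverse into it", which is why the directory grant is rwx while an individual file generally needs only rw.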

    Read the article

  • Apache httpd + FreeTDS hangs until restarted

    - by Jordan Reiter
    Every so often, requests to a Linux server (say, linux.example.org) where the web app (Django) pulls in data from a SQL Server database via FreeTDS will hang. Requests on other servers pointing to the database still work, as do requests on linux.example.org that use local MySQL databases. Only this server plus FreeTDS appears to be affected. Restarting httpd makes the database connections work correctly again. What could cause this problem?

    Using:

        CentOS 5.9
        FreeTDS 0.91
        Apache httpd 2.2.3

    /etc/odbc.ini:

        [DSN]
        Description = SQL Server 2005
        Driver = FreeTDS
        ;Database = dbname
        Servername = SERVERNAME
        ;TDS_Version = 8.0

    /etc/freetds.conf:

        [SERVERNAME]
        driver = /usr/lib64/libtdsodbc.so
        host = db.example.org
        port = 1433
        tds version = 8.0
        client charset = UTF-8
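
    When it next hangs, a hedged way to separate FreeTDS from Apache: FreeTDS ships a bare test client, and unixODBC ships isql, so if these connect while httpd is wedged, the fault more likely sits in the hung worker processes or their pooled connections than in FreeTDS itself. Credentials are placeholders.

        # straight through FreeTDS, bypassing Apache/Django/ODBC
        tsql -S SERVERNAME -U testuser -P 'secret'

        # through the ODBC layer, using the DSN from odbc.ini
        isql DSN testuser secret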

    Read the article

  • Why S3 website redirect location is not followed by CloudFront?

    - by ychaze
    I have a website hosted on Amazon S3. It is the new version of an old website hosted on WordPress. I have set up some files with the Website Redirect Location metadata to handle the old locations and redirect them to the new website pages. For example: I had http://www.mysite.com/solution that I want to redirect to http://mysite.s3-website-us-east-1.amazonaws.com/product.html, so I created an empty file named solution inside my bucket with the correct metadata: Website Redirect Location = /product.html. The S3 redirect metadata is equivalent to a 301 Moved Permanently, which is great for SEO. This works great when accessing the URL directly on the S3 domain. I have also set up a CloudFront distribution based on the website bucket, but when I try to access through my distribution, the redirect does not work; i.e. http://xxxx123.cloudfront.net/solution does not redirect, but downloads the empty file instead. So my question is: how do I keep the redirection through the CloudFront distribution? Or any idea on how to handle the redirection without hurting SEO? Thanks
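
    A likely explanation, based on how S3 exposes buckets: Website Redirect Location is honoured only by the bucket's website endpoint, not by its REST endpoint, and a CloudFront origin created by picking the bucket from the console list uses the REST endpoint. Re-pointing the origin at the website hostname, entered as a custom origin, should carry the redirect (and its 301) through:

        # origin that ignores the redirect metadata (REST endpoint):
        #   mysite.s3.amazonaws.com
        # origin that honours it (website endpoint, as a custom origin):
        #   mysite.s3-website-us-east-1.amazonaws.com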

    Read the article

  • Reliability of VMware ESXi for backup

    - by Laurent
    Currently I'm using a server as an online backup and to run some VMs with VMware Server. I'm interested in converting it to VMware ESXi, but have some concerns about the possible corruption of my VMDKs if I choose to store my data on them. I was also thinking of storing the data directly on the datastore, but can't find any way to mount a VMFS volume from a live CD if ESXi is unable to start. What are my options? Is continuing to use VMware Server a good idea, knowing that I DO want to use the server for both virtualization and backup purposes? Thanks.

    Read the article
