Search Results

Search found 98447 results on 3938 pages for 'sql server denali'.

  • Logfiles filling with iptables logging

    - by Peter I
    OS: Debian 6 (server). I have several log files that are filling up:

        user@server:/var/log$ ls -lahS | head
        total 427G
        -rw-r--r-- 1 root root 267G Nov  2 17:29 bandwidth
        -rw-r----- 1 root adm   44G Nov  2 17:29 kern.log
        -rw-r----- 1 root adm   27G Nov  2 17:29 debug
        -rw-r----- 1 root adm   23G Oct 27 06:33 kern.log.1
        -rw-r----- 1 root adm   17G Nov  2 17:29 messages
        -rw-r----- 1 root adm   14G Oct 27 06:33 debug.1
        -rw-r----- 1 root adm   12G Nov  2 17:29 syslog
        -rw-r----- 1 root adm   12G Nov  1 06:26 syslog.1
        -rw-r----- 1 root adm  9.0G Oct 27 06:33 messages.1

    So I looked at /etc/iptables.up.rules, which contains these lines:

        -A FORWARD -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT:
        -A FORWARD -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN:
        -A OUTPUT  -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT:
        -A INPUT   -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN:

    Deleting those lines would solve my problem, but how can I edit them without losing their functionality?
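
    One hedged approach, assuming the BANDWIDTH_* lines exist only for rough traffic visibility: keep the rules but add the limit match, which caps how often each rule may log (the rate below is illustrative, not from the question):

        -A FORWARD -o eth0 -m limit --limit 5/min -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT:
        -A FORWARD -i eth0 -m limit --limit 5/min -j LOG --log-level 7 --log-prefix BANDWIDTH_IN:

    If the logs are parsed to account for every packet, rate-limiting would break the accounting; in that case the kernel's own per-rule packet and byte counters (iptables -L -v -n -x) are a much cheaper source of totals than syslog.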

  • Sendmail: external alias not receiving relayed mail under certain circumstances.

    - by ben
    I have set up an alias in /etc/mail/aliases like this: user: [email protected]. This relay DOES work when I telnet to example.com 25 and send mail to [email protected] (where example.com is my domain); it does turn up in [email protected]'s inbox. Mail sent from my server at example.com is also generally deliverable to this same address, [email protected]. HOWEVER, the relay DOES NOT work when I send mail from [email protected] to [email protected], expecting it to be relayed back to [email protected]. The mail.log shows it being received and sent just fine, so I guess it is being blocked by Gmail for some reason. Why, though? As I said, Gmail generally does accept mail from this server.
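
    For reference, a minimal sketch of the alias setup described (the addresses are the question's redacted placeholders); after editing the file, the alias database has to be rebuilt or sendmail keeps using the old one:

        # /etc/mail/aliases
        user: [email protected]

        # rebuild the aliases database
        newaliases

    One known quirk worth ruling out: Gmail deduplicates a message that arrives back at the very account that sent it (it is treated as the copy already in Sent and never shown in the inbox), which matches the one failing case here exactly.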

  • How to rate-limit concurrent sessions with nginx or haproxy?

    - by bantic
    I'm currently using nginx to reverse-proxy requests from web clients that are doing long-polling to an upstream. Since we're doing long polling (as opposed to websockets), when a client connects it will make multiple HTTP connections to the server in serial, re-establishing a connection every time the server sends it some data (or timing out and re-establishing if the server has nothing to say for 10 seconds).

    What I'd like to do is limit the number of concurrent web clients. Since the clients are constantly making new HTTP requests instead of keeping a single request open, it's a little tricky to count the total number of web clients (it's not the same as the total number of concurrently connected HTTP clients). The method I've come up with is to track HTTP requests by originating IP address and store the IP address somewhere with a TTL of 20 seconds. If a request comes in whose IP isn't recognized, we check the total number of unexpired stored IP addresses; if that's less than the maximum, the request is allowed through. If a request comes in with an IP address that is in the look-up table and hasn't yet expired, it is allowed through as well. All requests that are allowed through have their IPs added to the table (if not already there) and the TTL refreshed to 20 seconds.

    I had actually whipped something together that worked correctly this way using nginx along with the Redis 2.0 Nginx Module (and the nginx Lua module to simplify the conditional branching), using Redis to store my IP addresses with a TTL (the SETEX command) and checking the table size with the DBSIZE command. This worked, but the performance was horrible: nginx and Redis ended up using lots of CPU and the machine could only handle a very small number of concurrent requests.

    The new stick-table and tracking counters that were added to HAProxy in version 1.5 (via a commission from serverfault) seem like they might be ideal for implementing exactly this sort of rate limiting, because the stick-table can track IP addresses and automatically expire entries. However, I don't see an easy way to get a total count of the unexpired entries in the stick-table, which I would need in order to know the number of connected web clients. I'm curious whether anyone has any suggestions, for nginx or HAProxy, or even for something else not mentioned here that I haven't thought of yet.
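
    A hedged sketch of the HAProxy 1.5 direction described (the names are illustrative, and syntax details vary between 1.5 development snapshots):

        frontend longpoll
            bind :80
            # remember each client IP for 20s after its last request
            stick-table type ip size 200k expire 20s store conn_cnt
            tcp-request connection track-sc0 src
            default_backend pollers

    The count of unexpired entries isn't exposed as an ACL fetch, but it can be read from the admin socket:

        echo "show table longpoll" | socat stdio /var/run/haproxy.stat

    The header line of that output includes used:<n>, the current number of live entries, so enforcing a global cap would take a small sidecar script polling that value rather than a pure in-config rule.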

  • Import VirtualBox appliance to KVM

    - by Hugo Bulnes
    I'm new to using KVM as a virtualization solution. Currently I use VirtualBox to manage the virtual machines on my personal computer, but I'm moving my virtualization to a server, so I set up a Linux server with KVM. Now I'm trying to import a VirtualBox VM into KVM, and so far I couldn't make it work. I already converted the .ova file from VirtualBox to a format more familiar to KVM (qcow2), and I tried to create a new virtual machine with the virt-install command, setting the new machine's hard drive to the .qcow2 file. Is there anyone who could help me? Thank you!
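
    A sketch of the conversion path described, with hypothetical file and machine names:

        # an .ova is a tar archive containing the .vmdk disk
        tar -xf myvm.ova
        # convert the disk image to qcow2
        qemu-img convert -O qcow2 myvm-disk1.vmdk myvm.qcow2
        # boot it as a KVM guest; --import skips the installer phase
        virt-install --name myvm --ram 2048 --vcpus 2 \
            --disk path=/var/lib/libvirt/images/myvm.qcow2,format=qcow2 \
            --import --network bridge=br0

    A common gotcha with this route: a guest installed under VirtualBox usually has no virtio drivers, so the first boot may need emulated IDE/SATA disk and NIC models before switching to virtio.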

  • Dell PowerEdge R515 - Replacing a Bad Hard Drive in a RAID

    - by LonnieBest
    I've ordered a new hard drive to replace a bad one in a Dell PowerEdge R515. The manual covers the obvious topics around physically replacing hard drives, but I've never done this before on a production server where RAID is involved. I've heard that some servers have RAID controllers smart enough to let you just hot-swap in the new drive, after which the server automatically rebuilds it to be what the old drive was to the system. Where do I find the proper procedure for replacing a failed hard drive on a live production Dell PowerEdge R515? Can someone with experience tell me how easy or hard this usually is?
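
    If Dell's OpenManage Server Administrator is installed, something like the following can confirm the controller sees the failed disk and, after the swap, report rebuild progress (the controller ID is illustrative):

        # list physical disks and their states on controller 0
        omreport storage pdisk controller=0
        # watch the virtual disk state go Degraded -> rebuilding -> Ready
        omreport storage vdisk controller=0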

  • No response from example.com using Apache

    - by stevens-G
    I am unable to reach example.com either by domain name or by local IP after restarting the server. I checked that the httpd service is running, and I looked at the error_log in /var/log/httpd and found nothing. Restarting httpd again reports OK. I'm not sure where else to check. I did move DocumentRoot from /var/www to /web-root, and it worked before restarting the server; I tried pointing it back to /var/www and still cannot view the page. iptables has not changed. Any suggestions?
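
    A few hedged checks that usually narrow this down on a CentOS-style Apache setup:

        # is httpd actually listening on port 80?
        netstat -tlnp | grep httpd
        # which vhosts and roots did Apache actually parse?
        httpd -S
        # syntax-check the config after the DocumentRoot moves
        apachectl configtest
        # SELinux often blocks a DocumentRoot outside /var/www
        getenforce

    If getenforce reports Enforcing, the relocated /web-root would need the right file context (chcon or semanage fcontext); denials are recorded in /var/log/audit/audit.log, which is worth checking alongside error_log.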

  • Added autossh in rc.local, but the dynamic port forwarding won't work

    - by rankjie
    I am using Raspbian on my newly arrived Raspberry Pi and decided to make it my own proxy server. I need to set up an SSH tunnel from the Pi to my Linode server and make it start automatically with the system. What I did: added this line to /etc/rc.local:

        autossh -f theRemoteServer -N -D 5555 -L 1234:localhost:22

    After rebooting, I found that I can't use localhost:5555 as a SOCKS proxy. If I run ps -A | grep ssh, I can see autossh and ssh running:

        pi@raspberrypi ~ $ ps -A | grep ssh
        2018 ?        00:00:00 sshd
        2116 ?        00:00:00 autossh
        2119 ?        00:00:00 sshd
        2195 ?        00:00:00 sshd
        3173 ?        00:00:00 ssh

    (I've installed autossh, and the command works if I type it manually. I use passwordless key authentication, so I don't have to enter a password.) Much appreciated, and sorry for my poor English.
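
    One hedged explanation: /etc/rc.local runs as root, so autossh looks for root's SSH keys and known_hosts, not pi's, even though the same command works when typed as pi. A sketch of an rc.local line that runs the tunnel as the pi user (the host name is the question's own placeholder):

        su - pi -c "autossh -M 0 -f -N -D 5555 \
            -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
            theRemoteServer"

    -M 0 disables autossh's monitoring port in favour of the ServerAlive options. If other machines on the LAN should use the proxy, the bind address can be widened, e.g. -D 0.0.0.0:5555.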

  • Serving static web files off a non-standard port

    - by Nimmy Lebby
    I'm close to deploying a Django project to production and am looking over some infrastructure decisions. Something that came up was serving static files with a different server such as lighttpd. However, we're starting off with a single dedicated server, so our only option would be to use a non-standard port for the static-file webserver. Is there precedent for this? That is, does anyone "big" do this? Is there any particular port I should use or shy away from? Can anyone think of downsides to going this route?
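
    For illustration, a minimal lighttpd configuration serving only static files on an alternate port (the port and path are assumptions, not from the question):

        server.port          = 8081
        server.document-root = "/srv/www/static"
        # static only: no CGI/FastCGI modules loaded

    The classic downside of any non-80/443 port is that some corporate and hotel networks only allow outbound web traffic on 80 and 443, so a fraction of visitors would load the pages but none of the static assets.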

  • Identify Long Running or Slow PHP Scripts

    - by Kirk
    I have a web server getting around 25K visits a day at yougetsignal.com, and sometimes the site feels a bit sluggish. I am hosting it on nginx with php5-fpm. Is there a way for me to see a list of all the long-running requests coming to the site? I'd love a real-time list of all the active requests PHP is handling and how long each has been running: kind of like top, but just for the web server. This would let me know how long requests are taking and which script is the culprit. Does anyone have ideas on how to do this?
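
    php5-fpm has per-pool settings that cover both halves of this; a hedged sketch of the pool configuration (the paths are illustrative):

        ; pool config, e.g. /etc/php5/fpm/pool.d/www.conf
        ; log a stack trace for any request running longer than 5s
        request_slowlog_timeout = 5s
        slowlog = /var/log/php5-fpm.slow.log

        ; live status page: active workers, request URI and duration
        pm.status_path = /status

    Once /status is exposed through an nginx location, requesting it with ?full lists each worker's current request and how long it has been running, which is close to the "top for the web server" asked for.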

  • 13 IP addresses, how to add them to domain SPF?

    - by Willy
    Hi all, let's say I have these IP addresses on my server:

        170.120.210.209   gateway
        170.120.210.210   server IP
        170.120.210.211
        170.120.210.212
        170.120.210.213
        170.120.210.214
        170.120.210.215
        170.120.210.216
        170.120.210.217
        170.120.210.218
        170.120.210.219
        170.120.210.220
        170.120.210.221
        170.120.210.222

    I now want to set up an SPF record for my domain but don't want to list each IP one by one. Is there a shorter way to write this? How can I convert these IP addresses into CIDR notation, and is 170.120.210.210/28 correct? Thanks for your help.
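
    A worked example for the CIDR question: a /28 spans 16 addresses aligned on a multiple of 16, so .209 through .222 all fall inside 170.120.210.208/28, which covers .208-.223. Writing 170.120.210.210/28 typically resolves to the same block because the host bits are ignored, but the canonical form uses the network address. A hedged SPF record along those lines, with example.com standing in for the real domain:

        example.com.  IN TXT  "v=spf1 ip4:170.120.210.208/28 -all"

    -all is a hard fail for everything outside the range; ~all is the softer variant commonly used while testing.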

  • Set nginx.conf to deny all connections except to certain files or directories

    - by Ben
    I am trying to set up Nginx so that all connections to my numeric IP are denied, with the exception of a few arbitrary directories and files. So if someone goes to my IP, they are allowed to access the index.php file and the phpmyadmin directory, for example, but should they try to access any other directory, they will be denied. This is my server block from nginx.conf:

        server {
            listen       80;
            server_name  localhost;

            location / {
                root   html;
                index  index.html index.htm index.php;
            }

            location ~ \.php$ {
                root           html;
                fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME  /srv/http/nginx/$fastcgi_script_name;
                include        fastcgi_params;
            }
        }

    How would I proceed? Thanks very much!
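
    One hedged way to do it, keeping the question's fastcgi settings: exact-match and prefix locations for the allowed paths, with a catch-all that denies everything else. The phpmyadmin path comes from the question; the rest is a sketch, not a tested config:

        server {
            listen       80;
            server_name  localhost;
            root         html;

            # allowed: the bare / request, served by the index file
            location = / {
                index  index.php;
            }
            location = /index.php {
                fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME  /srv/http/nginx/$fastcgi_script_name;
                include        fastcgi_params;
            }

            # allowed: the phpmyadmin directory
            # (PHP files under it would need their own fastcgi block)
            location ^~ /phpmyadmin/ {
                index  index.php;
            }

            # everything else is refused
            location / {
                deny all;
            }
        }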

  • Mapping SkyDrive as a network drive in Mac OS

    - by vittore
    As you probably know, if you have a Windows Live account you can use the free 25 GB of SkyDrive storage. What's more, a lot of people know that if you go to your SkyDrive in a browser and copy the cid query parameter value (https://...live.com/...&cid=xxxxxxxx), you can map SkyDrive as a network drive in Windows using the network path \\[cid].docs.live.net\[cid]\. I do know that a network share like \\server\folder can be mapped in Mac OS too, as smb://server/folder; however, that doesn't seem to be the case with SkyDrive: when I try to map it as smb://[cid].docs.live.net/[cid], Finder says it can't connect. Does anyone know how to map it?
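
    A hedged note: the Windows mapping to docs.live.net works over WebDAV, not SMB, which would explain why the smb:// attempt cannot connect. Finder speaks WebDAV when given an https:// URL (Go > Connect to Server), so the equivalent attempt, using the question's own placeholder, would be:

        https://[cid].docs.live.net/[cid]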

  • Dell PowerVault MD3000 - Not sharing files between servers

    - by Kevin
    I'm a developer who has to set up a Dell PowerVault MD3000 due to lack of resources. I have connected the PowerVault to two Dell 2950 servers via the SAS cables and performed the setup using Dell's MD Storage Manager software (4 disks, RAID 5 with hot spare). Then I added the disks using Windows 2003 Disk Management (basic, not dynamic disks, formatted with NTFS). When I add files to the array from one server, they are not visible on the other server (and vice versa). Is the error in the Windows Disk Management configuration?

  • pfSense - DHCP Relay

    - by Patrick
    I have 3 pfSense boxes acting as routers on a single subnet (172.22.12.0/26):

        Router A - 172.22.12.1
        Router B - 172.22.12.17
        Router C - 172.22.12.33

    I want Router A to be the only DHCP server. Router C has DHCP relay enabled, pointing to Router B. Router B then has DHCP relay enabled, pointing to Router A. Like this:

        Router C -- Router B -- Router A (DHCP server)

    Router B gets an IP from Router A, but Router C does not. Any ideas why this configuration isn't working? Thanks.

  • Citrix and WPF, blue window

    - by Ian
    We are building WPF applications which will be deployed on Citrix. Currently you simply see a blue window under Citrix, although the app runs fine on the server itself. There do seem to be some issues detailed on the net: a Citrix forum discussion and a Microsoft hotfix. We've applied the hotfix, but it does not appear to fix the problem, for us at least. I also found an identical question on this site, but it had been removed by the author, so no answers there. I'm running Citrix 4.5 on a Windows 2003 server. I am trying to publish a WPF app (any WPF app has this problem) and all I get is a blue rectangle where the app is supposed to be. The rectangle is the exact size and shape of the window I expect, but it is just blue (it looks like the color of the Citrix desktop background). Any ideas?

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK, and it has all been running extremely well. However, at around 4 PM Monday to Friday we see the same issue arise: one or two PHP threads spike to 100% CPU and gradually eat up RAM until the server(s) fall over.

    98%+ of our requests are HTTPS. These come into our layer-7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header, and forwards the data to an application server (we have two at the moment) on port 80. Our application servers run Varnish on port 80, which takes in the request from the load balancer and passes it through to nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup.

    Throughout the day the load typically goes no higher than 0.8 on either application server, but at around 4 PM our problem arises. I've managed to run strace on a few of the offending threads, and I always see the same thing:

        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0

    This repeats infinitely and never stops until you SIGKILL the process or the OOM killer gets it. There are no cron jobs scheduled for that time, and I don't have any way of seeing exactly which nginx request is associated with the PHP process in question. We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem.

    This issue has been going on for a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release that may have started the problem, especially as we do not know the date of the first occurrence. Varnish is version 3.0.1, nginx is 1.0.6 (which I understand is about a year old now), and our servers run CentOS release 5.7 (Final) with Intel i3 540s at 3.07 GHz and 8 GB of RAM. There's a discussion on the Debian mailing list about something very similar. Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!
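
    One hedged lead from the strace output: PHP stats the zoneinfo database whenever it has to derive the timezone itself, so an unset date.timezone (an assumption; the question doesn't show php.ini) fits a tight loop of identical stat calls. Pinning it is cheap to try:

        ; php.ini
        ; stop PHP from consulting /usr/share/zoneinfo on date calls
        date.timezone = "Europe/London"

    This wouldn't by itself explain the 4 PM trigger, but it removes the syscall churn and sometimes unmasks the real loop in application code.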

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:

        Backbone.js frontend
        Rails 3.2
        PostgreSQL
        Resque + S3 for storage

    The flow of the app is as follows:

        1) Request from frontend; upload a video.
        2) Store the video.
        3) Query external APIs.
        4) Process / encode the video.
        5) Post back to the frontend.

    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating the app across several instances), but since I don't really have expertise in backend system administration, there could be some fundamental mistakes. I would also rather have something scalable. I wonder if anyone can give some feedback on the following plan:

        A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts.
        B) Backend server (BS) with the main database. Gets request from 1), posts to 2), saves uploads to 3).
        C) S3 storage.
        D) Server for querying external APIs: basically just Resque workers that post info back to 2).
        E) Server for video encoding: processes videos uploaded on 3) and uploads them back.

    So I will have:

        A)frontend
            \
             \
              B)MAIN_APP/DB ----- C)S3 Storage (Files)
              /    \             /
             /      \           /
        D)ExternalAPI_queries  E)Video_Processing
          (redundant DB)         (redundant DB)

    All of this will supposedly talk to each other via HTTP requests. My reasoning is that the video-processing part is by far the most resource-intensive, and I would run a barebones application there that just accepts requests and starts processing them. Questions:

        1) In this setup the main database lives on B, and all other servers communicate with it via HTTP requests (and also keep duplicate databases, I guess for safety reasons). Is that the right approach, or should I have one database that everyone connects to (and if so, how)?
        2) Is it a good idea to separate the API queries from the video processing? Logically they are very close (processing is determined by the results of the API queries), but resource-wise video processing is far more intensive.
        3) What should I use to distribute calls between backend apps based on load?

  • Network backup software with file versioning and web interface

    - by dlang
    Dear all, I would like to back up our business data to a remote backup server. We would like to set up our own backup server running on any operating system (Windows appreciated) that comes with a web interface enabling us to restore individual versions of a file. Because the budget is limited, open-source software, or at least something cheap, is a must! Unfortunately I couldn't find a single piece of software that fulfills both requirements: file versioning and a web interface for single-file restore. Have any of you already set up such a system? Best regards, Daniel Lang

  • iPad Synching with Exchange 2007 is Losing Contacts

    - by Christopher
    We have a user whose iPad is syncing to our Exchange 2007 SP1 server. She reports that her contacts are being "eaten", which we take to mean they are slowly deleted over time. This user also has a BlackBerry that syncs through a BlackBerry Enterprise Server. I have two questions: 1) Has anyone run into this situation of "self-deleting" contacts, or does anyone have any idea what is going on? 2) Can anyone give insight into the use of iPads in their Active Directory/Exchange environment?

  • How to get rid of NAT in a LAN?

    - by Alberto
    Currently the LAN I manage is organized as follows: an internal network (192.168.1.0) uses a Linux server as a gateway (internal address 192.168.1.1 on interface br0, external address 10.0.0.2 on interface br1) through NAT; the 10.0.0.0 network then has another gateway (10.0.0.1) which, through another layer of NAT, connects the whole thing to the internet. What I would like to achieve is to configure the Linux server so that the first layer of NAT is no longer necessary, so that for example a computer on the 10.0.0.0 network can ping every computer on the 192.168.1.0 network. I deleted this iptables rule:

        iptables -t nat -A POSTROUTING -o br1 -j SNAT --to-source 10.0.0.2

    but of course now computers on 192.168.1.0 cannot reach the internet; IP forwarding is of course enabled. What's missing here? Thanks
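
    What is typically missing once SNAT is removed is a return route: the upstream gateway has no idea that the internal network lives behind 10.0.0.2, so its replies go out its own default route and die. A hedged sketch, assuming the 10.0.0.1 gateway is also Linux and the internal network is a /24:

        # on the 10.0.0.1 gateway
        ip route add 192.168.1.0/24 via 10.0.0.2

    The outer gateway's NAT must also cover 192.168.1.0/24 (not just the 10.0.0.0 range), otherwise internet access from the inner LAN stays broken even with the route in place.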

  • OpenVPN Permission Denied Error

    - by LordCover
    I am setting up OpenVPN and I'm at the stage of adding users. Details:

        Host system: Windows Server 2003 32-bit
        Guest system: Ubuntu Linux with OpenVPN already installed (I downloaded it from OpenVPN.net)
        Virtualization: VMware v7.0

    Problem: I can access the Access Server web portal (on port 5480), but when I log in to http://host_ip:943/admin and enter my (correct) login details, it shows a page saying "You don't have enough permissions". I am the (root) user, which is really weird! Note: if I enter a wrong login it reports an incorrect login, so I am authenticating successfully and the problem comes after the login step. What I tried: after logging in to the Linux shell as (root), I created another user with the useradd command, but the same thing happened.

  • How do I configure Shrewsoft's VPN client to only route traffic to a certain IP address through the VPN?

    - by dommer
    We're using Shrew Soft's VPN client to connect to a third-party development server, but it seems to be configured to send all traffic or none through the VPN, so the devs have to disconnect from the VPN to get email/internet access back. The server that needs to be reached via the VPN is on a specific local (10.x.x.x) IP address and specific ports. Can we configure the Shrew Soft client to route only the traffic for that one address and/or port through the VPN, and route everything else through the usual channels? If so, how is it done? I'm not a VPN specialist and the options are confusing. In the absence of any Shrew Soft-specific advice, what should I be searching for? Split tunneling?

  • Empty sshd_config file

    - by Thomas
    I run a CentOS 5 server with a LAMP stack. I was told this morning that the server was down and not serving web content. I tried to restart httpd, but it failed because another process was listening on port 443. I checked with netstat which process was on 443: it was sshd. I then looked at the sshd_config file to check which ports sshd was using, but the file was completely blank. I ran chkrootkit and it flagged nothing suspicious. What could have caused the sshd_config file to be blank and the sshd service to be restarted? I would really value your thoughts. All the best.
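
    Given the symptoms, a few hedged checks before trusting the box (an emptied sshd_config plus a service squatting on 443 is worth treating as a possible compromise, whatever chkrootkit says):

        # which binary actually owns port 443
        netstat -tlnp | grep ':443'
        # verify openssh-server's files against the RPM database
        # (S = size, 5 = checksum, T = mtime differ from the package)
        rpm -V openssh-server
        # how the running sshd was started (-f / -p arguments)
        ps -ef | grep sshd

    A stock sshd with an empty config would fall back to compiled-in defaults and listen on 22, so whatever is on 443 was either started with explicit options or is not the distribution's sshd at all.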

  • Instructions to set up a domain controller

    - by Robert Koritnik
    Where can I get good step-by-step instructions (with some simple explanations) for setting up a domain controller on Windows Server 2008 R2 Server Core? I don't know exactly what I need: do I need DNS as well as AD, and so on and so forth? I don't know enough about these things, but I need to set them up to prepare a development environment. I would also like to know how to configure the firewall on the DC machine to make it visible to other machines, because I've set up a DC somehow but I can't connect to it. This is my hardware configuration:

        Linksys internet router with DHCP
        my dev machine is Windows 7
        my DC machine is a VM inside the dev machine
        the dev machine has a network adapter to the Linksys router and a virtual adapter to the DC
        the DC machine has two network adapters: one to the Linksys router (to be internet-connected) and one to the host (my Windows 7 dev machine)
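
    On Server Core there is no dcpromo wizard, so promotion is usually driven by an answer file; a minimal hedged sketch for a brand-new forest (the domain name and password are placeholders):

        ; C:\unattend.txt
        [DCInstall]
        ReplicaOrNewDomain=Domain
        NewDomain=Forest
        NewDomainDNSName=dev.example.local
        InstallDNS=Yes
        SafeModeAdminPassword=ChooseAStrongOne1!
        RebootOnCompletion=Yes

    Run it with dcpromo /unattend:C:\unattend.txt. For the visibility problem, two things usually matter: the other machines must use the DC's IP as their DNS server (AD lookups are DNS-based), and the DC's firewall can be opened per built-in rule group, e.g.:

        netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes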

  • Nginx server_name is set to mydomain.com, so why is www.mydomain.com getting served too?

    - by Lorenz Forvang
    I have my Nginx conf set up as follows:

        server {
            listen 443 ssl;
            server_name mydomain.com;
            ...
        }

    When I load https://mydomain.com, the site loads fine. But when I load https://www.mydomain.com, the site loads as well. Why is this happening? I set up the DNS records using Amazon Route 53 as:

        A      mydomain.com      xxx.xxx.xxx.xxx  (IP)
        CNAME  www.mydomain.com  mydomain.com

    So is a request to www.mydomain.com arriving at Nginx as a request to mydomain.com? If so, how do I differentiate requests to www.mydomain.com and mydomain.com at my server?
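
    A hedged explanation: a CNAME only affects DNS lookup; the browser still sends Host: www.mydomain.com, and when no server block matches the Host header, nginx hands the request to the default server for that port, which with a single block is that block. A sketch that makes the two names explicit:

        # catch www and redirect it to the bare domain
        server {
            listen 443 ssl;
            server_name www.mydomain.com;
            return 301 https://mydomain.com$request_uri;
        }

        # the real site; also the default for unmatched Host headers
        server {
            listen 443 ssl default_server;
            server_name mydomain.com;
            ...
        }

    Both blocks need a certificate valid for the name they serve, so the redirect block assumes the cert covers www.mydomain.com as well.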
