Search Results

Search found 13411 results on 537 pages for 'proxy servers'.

  • FTP Server with MySQL access, and POST notification

    - by TIW
    I'm looking for an FTP server solution that we can host either internally on a dedicated server or on Rackspace Cloud/AWS, and that sends an HTTP POST notification when a file is uploaded and allows user accounts to be created either through an API or a MySQL database. There are several offerings that provide email notification, but has anyone come across anything that matches the above requirements? BrickFTP, being an IaaS system, is an option, but we would prefer something hosted in house. I don't believe the standard FTP servers provided with Apache can do the above ... can they?
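
    One possible direction (an assumption, not something from the original post): pure-ftpd supports MySQL-backed user accounts, and its pure-uploadscript helper runs a command after each completed upload, which can be pointed at a small hook script. A minimal sketch, with the endpoint URL as a placeholder:

        #!/bin/sh
        # upload-hook.sh -- invoked by pure-uploadscript with the uploaded
        # file's full path as $1; forwards it to a web app as an HTTP POST.
        curl -fsS -X POST --data-urlencode "path=$1" \
            https://example.com/ftp-upload-hook

    The helper would run as pure-uploadscript -B -r /usr/local/bin/upload-hook.sh, and pure-ftpd itself needs upload-script support enabled (the -o flag, if memory serves).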

  • How to NFSv4 share a ZFS file system on FreeBSD?

    - by Sandra
    I'm using FreeBSD 9 and created a ZFS file system like so:

        zfs create tank/project1
        zfs set sharenfs=on tank/project1

    There are many howtos on setting up NFSv3 on FreeBSD on the net, but I can't find any covering NFSv4, especially not when the NFS share is done with ZFS. E.g. this howto says I have to restart the (NFSv3) server with nfsd -u -t -n 4, but I don't even have nfsd. When I do

        # echo /usr/ports/*/*nfs*
        /usr/ports/net-mgmt/nfsen /usr/ports/net/nfsshell /usr/ports/net/pcnfsd /usr/ports/net/unfs3 /usr/ports/sysutils/fusefs-chironfs /usr/ports/sysutils/fusefs-funionfs /usr/ports/sysutils/fusefs-unionfs
        #

    I don't see any NFSv4 servers that I could install with pkg_add. Question: how do I install and set up NFSv4 so I can mount the share from e.g. a Linux host?
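
    A sketch of the usual route (from memory, so verify against the FreeBSD 9 handbook): the NFSv4 server lives in the base system, not in ports, which is why nothing shows up under /usr/ports. It is enabled through rc.conf plus an NFSv4 root declaration in /etc/exports:

        # /etc/rc.conf
        nfs_server_enable="YES"
        nfsv4_server_enable="YES"
        nfsuserd_enable="YES"

        # /etc/exports -- declares the NFSv4 root
        V4: /tank

        # then: service nfsd start
        # on a Linux client: mount -t nfs4 server:/project1 /mnt

    The V4: line only sets the root of the NFSv4 tree; the ZFS sharenfs property still controls what is exported beneath it.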

  • VMware peaks NFS load every 30 seconds

    - by gtirloni
    We were troubleshooting a performance problem on one of our storage servers, and after investigating almost everything in sight we saw that every 30 seconds VMware would go from 10k IOPS (NFS) to 30k, 50k, 100k or whatever the server could handle. Most of it was reads. What could cause this rise in NFS operations per second every 30 seconds? The virtual machines are managed by external customers and there isn't much in common between them. When breaking utilization down by filename, we discovered 5-10 virtual machines that contributed most to those peaks, but that still doesn't explain why it happens every 30 seconds. There are no other peaks outside that 30-second period (i.e. it stays at an almost constant average). Is there an NFS tweak in VMware to change that 30-second period? If it's really necessary, we would like to introduce some variation so all that workload isn't dropped on the storage at once. It's causing NFS timeouts on the ESX 3.5/4.0 hosts when the storage gets overloaded.

  • Which free open source cPanel and WHM alternatives do you recommend/use?

    - by Keyframe
    I have been using Webmin for some time now, but I miss the elegance and ease of the WHM/cPanel combo I had on shared hosting (and later dedicated hosting) platforms. Looking around the web, all I have found that is somewhat at the level of WHM/cPanel was Webmin, and WHM/cPanel it is not. Since I'm using this only for our projects, it doesn't really matter in the end. However, we do put our new customers on our servers too, so some sort of cPanel might be easier for them to cope with (mostly for managing email accounts and the like). Currently my stack is LAMP (CentOS and Ubuntu Server on several machines, probably ditching CentOS soon in favor of Ubuntu). There is a prospect of Python/Django instead of PHP, but that might take a while.

  • Checking that tasks are executed

    - by homer5439
    I'm not sure how to explain this. Once one starts having dozens or hundreds of servers, each running some sort of periodic jobs (mostly from cron), there is a problem of making sure (or as sure as possible) that these tasks actually run. I mean, I get an email if a job fails, and no mail if it succeeds, but also no mail if it doesn't run for whatever reason. Sure, I could change the jobs and have them send a "successfully ran" email, only to be flooded by mails that most of the time I don't want to see. Basically, I want to be notified only if: a task ran and failed, or a task didn't run at the expected time. Is there a way to do this?
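
    One common pattern that fits exactly these two conditions (a sketch under assumptions, not the only answer): a dead-man's-switch monitor. Each job checks in with a monitoring endpoint only on success, and the monitor alerts when a check-in is late or missing; the wrapper below assumes a hypothetical per-job check-in URL.

        #!/bin/sh
        # cron-wrap: run the real job; ping the dead-man's switch only on
        # success. A failed job sends no ping, and a job that never ran
        # sends no ping either -- both cases make the monitor alert.
        HEARTBEAT_URL="$1"; shift
        "$@" && curl -fsS -m 10 "$HEARTBEAT_URL" > /dev/null

    A crontab entry might look like: 0 3 * * * /usr/local/bin/cron-wrap https://monitor.example/ping/backup /usr/local/bin/backup.sh, with the monitor configured to expect one ping every 24 hours.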

  • Random Connections to MySQL refused (Error 111)

    - by joatis
    A Perl/CGI webapp that has been running fine for almost a year has started randomly failing to connect to a remotely hosted MySQL. The error thrown is:

        Can't connect to MySQL server on 'xx.x.xxx.xx' (111)

    Reloading the page often solves the problem. The client uses Perl, DBI and SSL to connect to MySQL, with the same configuration file each time. The server:

        MySQL 5.0 running on RHEL 5
        Quad-Core AMD Opteron(tm) Processor 2374 HE, 8 cores
        Real memory: 15.73 GB total, 11.81 GB used
        Networking is allowed in my.cnf
        max-connections is not being reached
        Load is low

    The server's firewall is open to the client's subnet, and the MySQL user has permissions from the client's subnet. I have my host looking into the problem, but so far we're all stumped as to why the occasional connection is refused (and increasingly so). Any advice on what to check that could cause the random refusal of connections?
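
    Not a root-cause fix, but a mitigation sketch in the spirit of "reloading often solves it": retry the connect a few times with a short pause. The DSN details and credentials below are placeholders.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use DBI;

        # Placeholder DSN; mysql_ssl enables SSL in DBD::mysql.
        my $dsn = "DBI:mysql:database=app;host=xx.x.xxx.xx;mysql_ssl=1";
        my $dbh;
        for my $attempt (1 .. 3) {
            $dbh = DBI->connect($dsn, 'user', 'secret',
                                { RaiseError => 0, PrintError => 0 });
            last if $dbh;
            warn "connect attempt $attempt failed: $DBI::errstr\n";
            sleep 1;    # brief pause before retrying
        }
        die "MySQL unreachable after 3 attempts\n" unless $dbh;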

  • Apache with mod_perl eating memory when idle

    - by syneticon-dj
    An Apache webserver running a mod_perl application is exhibiting abnormal memory usage: after the "day load" ceases, the system's memory is exhausted by the Apache processes and oom_killer is invoked. As the load returns the following morning, memory usage normalizes, probably because Apache workers get recycled periodically once a sufficient number of hits is generated (the original post included graphs of memory usage and of Apache hits per second to correlate). The remaining 2 hits per second throughout the night are induced by HAProxy checks: it runs HEAD http://mydomain.example.com/running HTTP/1.0 requests against the server every half second, with "running" being a static file (i.e. not invoking any Perl code). Disabling these checks seems to remedy the memory usage problem, but that obviously cannot be the solution. All 3 similarly configured servers (behind HAProxy) show this behavior. The OS is Ubuntu 10.10, Apache version 2.2.16. This looks like a memory leak, but I have no idea how to start debugging it. Any hints?
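
    A hedged stopgap while debugging, based on the observation that recycled workers return the memory: force recycling independently of traffic. MaxRequestsPerChild is a stock Apache 2.2 directive; Apache2::SizeLimit is a CPAN module, and the thresholds shown are arbitrary assumptions to tune.

        # httpd.conf (Apache 2.2 prefork) -- recycle each worker after a
        # fixed number of requests so leaked memory is reclaimed even
        # overnight when traffic is low.
        MaxRequestsPerChild 10000

        # Optionally also cap per-process size (CPAN Apache2::SizeLimit):
        PerlModule Apache2::SizeLimit
        PerlCleanupHandler Apache2::SizeLimit
        <Perl>
            Apache2::SizeLimit->set_max_process_size(200 * 1024);  # in KB
        </Perl>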

  • Speeding up the fix for an OpenSSL bug with 8192-bit keys [on hold]

    - by rubo77
    This is related to the bug report at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=747453. OpenSSL contains a set of arbitrary limitations on the size of accepted key parameters that make unrelated software fail to establish secure connections. The problem was found while debugging an XMPP s2s connection issue where two servers with long certificate keys (8192-bit RSA) failed to establish a secure connection because OpenSSL rejected the handshake. This seems like a small problem to fix, and although an easy patch is available in that bug report, there has been no reaction so far. The last patch that broke the 2048-bit barrier took 2 years to be implemented and only resulted in an increase to 4096 bits, which seems like a bad joke. Where would we have to report this to speed up the fix for such an issue?

  • Setting up a local mail server

    - by KriiV
    This is what I want, and I am having trouble finding a solution. I have a number of websites (around 5), each with an email account. I have a server at my office and I would like to centralize email there. I also have a workstation. What I want is for the server to receive all emails for all those websites (from the web servers), and then to connect my workstation to my local server and grab the emails from there. As the server downloads the emails, I would like them to be stored. Also, if I connect another workstation, I want the two workstations to sync: if an email is read on one, it shows up as read on the other. Ideas? I am able to virtualize a Linux environment if that helps.
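
    One common way to get exactly these semantics (a sketch; hostnames and credentials are placeholders): pull mail from each remote mailbox with fetchmail and serve it locally over IMAP (e.g. with Dovecot). Because IMAP keeps message state on the server, read/unread flags sync across workstations automatically.

        # ~/.fetchmailrc -- poll each website's mailbox every 5 minutes
        # and deliver into the local mail store that Dovecot then serves.
        set daemon 300
        poll mail.site1.example proto pop3
            user "info@site1.example" pass "secret" ssl
        poll mail.site2.example proto imap
            user "info@site2.example" pass "secret" ssl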

  • need help setting up a VPN for remote computer connection

    - by Chowdan
    I am on a low budget right now, as I am in the process of starting a computer company. I need a VPN so I can run the Dameware tools for working on customers'/partners' computers remotely. I will be working with Windows and some Apple and Linux machines. I have a desktop with an AMD Phenom II 965BE (currently running stable at 3.8 GHz), 8 GB of RAM, a Radeon HD 6870 (I know graphics aren't too useful here) and about 1.5 TB of HDD space. I am attempting to build this out of my office, all on one machine, in a way that lets me securely connect to my partners' computers so that when they have issues I can diagnose and repair them remotely. What types of servers besides a VPN server would I need to create this? I have access to all Microsoft products, so I can run Windows Server 2012, Windows Server 2008 R2, or any other Microsoft software. Thanks for the help, all.
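
    For the VPN piece on a budget, one option (a sketch under assumptions, not a definitive recommendation) is OpenVPN on that single machine. A minimal routed server config looks roughly like this, with all certificate files generated separately (e.g. with easy-rsa):

        # /etc/openvpn/server.conf -- minimal routed VPN; clients get
        # addresses in 10.8.0.0/24 and can then be reached by Dameware.
        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key
        dh dh2048.pem
        server 10.8.0.0 255.255.255.0
        keepalive 10 120
        persist-key
        persist-tun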

  • Understanding RAM usage on Linux

    - by stebbo
    I'm completely new to Linux and I'm just trying to understand where all my RAM is going. I've got a pretty fresh install of Xubuntu running as a VMware guest, and I've given it 1.5 GB of RAM to play with. After running only two apps starting Tomcat servers, plus Firefox, I've got hardly anything left: 160 MB according to free -m. Looking at the output from top, I see java appearing twice, each stealing about half a gig of resident memory. Both Tomcat instances use the same JDK; I would have thought I'd only see java there once. What's the story? I had a screenshot but unfortunately couldn't post it, being under 10 rep.

    Update: the free -m output requested:

                     total       used       free     shared    buffers     cached
        Mem:          1419       1380         39          0          8        111
        -/+ buffers/cache:       1259        160
        Swap:          509         68        441

    Top (coming)

  • Other options to "balance source" in haproxy

    - by PriceChild
    I have haproxy listening on several ports and pointed at several backend servers. Ideally, I would like repeated communications to the same port to be directed to the same backend. "balance source" isn't workable because the requests often come from the same source. Is this doable? I'm also open to non-haproxy solutions. The protocol being used isn't important, but it is definitely not HTTP. Just assume it's ssh and you shouldn't go wrong.
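
    Reading "same port, same backend" literally, one haproxy-only sketch is to skip balancing entirely and pin each listening port to a fixed server in TCP mode (ports and addresses below are placeholders):

        listen svc_2201
            bind :2201
            mode tcp
            server backend1 10.0.0.1:22

        listen svc_2202
            bind :2202
            mode tcp
            server backend2 10.0.0.2:22

    This sacrifices per-port failover; if balancing with stickiness is still wanted, a stick-table keyed on the destination port may be worth investigating, though that is an assumption to verify against the haproxy docs for the version in use.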

  • In-House Dropbox

    - by beardedlinuxgeek
    Dropbox is perfect, but as a company we can't host anything worthwhile on servers we don't control. So I've been tasked with coming up with a Dropbox alternative, something in house. GlusterFS is nice, but there is no offline access. SparkleShare uses Git, which isn't great for large files, and it doesn't have a Windows port. Any other options? If I were to roll my own from scratch, what do you think the best way to go about it would be?

  • Haproxy, configure for one host

    - by Michal K.
    I have to use haproxy on one machine. I want to redirect requests from an IP to the same IP (on another port). My configuration (which doesn't work):

        global
            maxconn 4096    # Total Max Connections. This is dependent on ulimit
            daemon
            nbproc 1        # Number of processing cores. Dual Dual-core Opteron is 4 cores for example.

        defaults
            mode http
            clitimeout 600000000
            srvtimeout 600000000
            contimeout 400000000
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            option httpclose    # Disable Keepalive

        listen http_proxy 127.0.0.1:8080
            balance leastconn    # Load Balancing algorithm
            acl acl_apache path_end .avi .jpeg
            #option httpchk
            option forwardfor    # This sets X-Forwarded-For
            ## Define your servers to balance
            server DE2 127.0.0.1:8080 weight 1 maxconn 15 check
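
    One observation on the config as posted (an assumption about intent, not from the original): the listen address and the backend server are both 127.0.0.1:8080, so haproxy would forward traffic to itself in a loop. A minimal corrected sketch, assuming the real service actually listens on port 8081:

        listen http_proxy 127.0.0.1:8080
            balance leastconn
            option forwardfor
            server DE2 127.0.0.1:8081 weight 1 maxconn 15 check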

  • Best cloud based IT Systems management services out there?

    - by Ryk
    Our startup organisation is growing fast across 2 different office locations. That brings new challenges and headaches. Our entire company is cloud based, and I am looking for a good product to manage our remote systems. Currently we do not have on-site AD servers; we are using the Windows Azure AD services, so we cannot rely on group policies at this stage. I would like to be able to achieve the following (they are all laptops):

        Remote desktop support
        Patch management
        Lock down software on machines (restrict them)
        Monitor and manage systems

    Other benefits would be good, but if I can achieve the ones listed above, it will go a long way. We have a combination of Windows 7 Pro and Windows 8 & 8.1 machines. I am currently using Windows Intune, but it is really limited: really just a glorified patch enforcer. Thank you in advance for your help.

  • Write but not delete

    - by hunix
    Hi, we are using GlusterFS for our cloud storage needs. Since the partition is open to many servers, we would like to disable file deletion, as we never delete or overwrite any file. GlusterFS does not have ACLs, so I need to implement this solution outside of GlusterFS. Perhaps I could mount the disks read-and-write-only (with deletion disabled), but I could not find any solution. setfacl etc. does not work on the partition. How can we disable file deletion, at least on the client machines? Thanks,
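
    One trick that may fit (an assumption to test carefully, not a GlusterFS feature): ext-family filesystems have an append-only inode attribute; set on a directory, it generally allows creating new files while blocking unlink and rename of existing entries. Applied to the brick directories it would be enforced server-side. The brick path below is a placeholder.

        # Mark the brick's directories append-only (root required).
        # New files can be created; deleting or renaming existing
        # entries fails with EPERM. Verify through a Gluster mount first.
        chattr -R +a /export/brick1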

  • "this network location can't be included because it is not indexed" on Windows 2008R2 Remote Desktop Services Hosting

    - by ChrisNZ
    I'm setting up a new terminal server for our users on Win2008 R2 (I guess I should call it Remote Desktop Services now!). When I try to change the location of "Documents" (by removing the default Documents library and adding a new one) to use the file server, i.e. \\fileserver\username\Documents, I get the message:

        This network location can't be included because it is not indexed

    I certainly don't want to make folders available offline, and in fact I have set the GPO to prohibit offline folders on the terminal servers. What is the best practice for document libraries on a terminal server with network file shares?

  • LVM incorrectly reported missing after power failure

    - by mensi
    We had a major power failure in the data center. We use a set of servers for our storage needs. The main server has several pairs of disks mirrored with mdadm. The resulting /dev/mdX devices are LVM physical volumes and belong to one big volume group with all our data. After the power loss, one of the mdadm devices was not auto-detected due to a missing entry in mdadm.conf. As a consequence, the volume group had inactive logical volumes due to the missing PV. We were able to fix the mdadm config and reboot. pvscan shows all expected PVs, but one LV still does not come up. vgdisplay shows:

        [...]
        Cur PV: 3
        Act PV: 2
        [...]

    Neither vgscan nor pvscan show any missing devices. What went wrong? How can we force LVM to activate all PVs?
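
    A recovery sketch for this situation using standard LVM tooling (the volume group name is a placeholder): re-scan, explicitly activate, then verify which device each LV sits on.

        pvscan                  # re-detect physical volumes
        vgscan                  # re-scan volume group metadata
        vgchange -ay datavg     # activate every LV in the volume group
        lvs -o +devices datavg  # confirm all LVs are active and where they live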

  • Best Practice for upgrading PHP On Production Systems

    - by Demic
    We have two load-balanced web servers running PHP 5.3. I've been asked by our dev team to upgrade PHP to 5.4 because they need certain functionality it brings. The main issue is that 5.3 is the latest version built into the distro's repository, so to upgrade using the package manager I'll need to add a 3rd-party repo. I don't have a problem with this per se, but I'm concerned about using a package from a "non-official" source. The other option is to compile PHP from source, but I guess this will prevent me from using the package manager to upgrade at any stage in the future? So I'm just looking for some guidance on which way to go: compile from source, or install from any old repo that purports to supply PHP 5.4? Or perhaps there's a third option I haven't considered? Thanks in advance, Demic

  • XP clients can't copy to network share

    - by chewbacca76
    I have a Windows 2003 domain with a strange problem. One of our file shares is on a 2003 R2 domain controller; XP clients trying to copy files to the share always get the error:

        Error copying file or folder: <filename> could not be copied. Path too long.

    while Windows 7 clients work fine. Nothing unusual is found in the event log on either the server or the client. It doesn't matter whether I access the share by FQDN or IP, and the path including the filename is shorter than 20 characters, i.e. \path\share\file.txt. Copying files to other servers is fine, and reading from the shares is OK too. This happened from one day to the next; one Windows update that was installed that day (KB2736233) was removed, but nothing changed. Thanks for any tips.

  • Nightly backups (and maybe other tasks) causing server alerts

    - by J. Pablo Fernández
    I have two independent alert notification systems for my servers. The server is a virtual machine on Linode, and one of the alerts comes from Linode; the other monitoring system we use is New Relic. They are both watching IO utilization. Every night I get alerts from both of them because the server is using too much IO. I run quite a few tasks in the middle of the night, but the one I have confirmed can cause IO warnings is the backup, done by s3cmd sync. I tried ionice, but it still generates the warnings. Getting warnings every night reduces the efficacy of warnings when they happen for real. For Linode I could raise the level at which a warning is issued, but that might make the whole thing useless if the level ends up too high. What would be the proper solution for this?
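
    One detail worth checking (an assumption about why ionice appeared ineffective): the idle class used by ionice -c3 only takes effect under the CFQ IO scheduler. A sketch, with the device name a guess for a Linode VM and the bucket path a placeholder:

        cat /sys/block/xvda/queue/scheduler        # show the active scheduler
        echo cfq > /sys/block/xvda/queue/scheduler # switch to CFQ (as root)
        ionice -c3 nice -n 19 s3cmd sync /data s3://backup-bucket/data/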

  • Some Domain Clients unable to access certain websites

    - by Shaunie
    I have a small domain of around 20 clients with a 2003 R2 SP2 DC. Most of my clients can browse the internet freely and don't have a problem. However, a couple are reporting problems accessing certain sites, e.g. Hotmail, Skyscanner, BBC News. They can browse the sites sometimes; other times they get 408/409 errors. Other machines in the domain can access these sites. I have cleared the DNS cache on these machines and modified the external DNS servers on the DC, still to no avail. The main issue is that the person unable to access Skyscanner uses it several times a day to book flights for employees going on leave or returning to work. Both clients are running XP SP3, though one machine is being swapped for one running Win7 shortly. Any advice greatly appreciated. Thanks.

  • Get SMTP to work

    - by user664408
    We upgraded to Exchange 2010 and this broke an old Java-based script that connected and sent out e-mail messages. Many hours later we still can't get Exchange to work the way Exchange 2003 did. That hope was abandoned, and we decided to create a Linux Postfix server to relay the e-mail from the old system to Exchange, taking Exchange out of the picture on the Java side. This still doesn't work, with similar errors. I need help figuring out what is different between Exchange 2003 with SSL and authentication and the new servers, both Linux and Exchange 2010. My guess is that both offer TLS, and for some reason the Java code won't fall back to the older version of SSL; instead it just fails. Can someone help me either set up Exchange 2010 to work like 2003 used to, or set up Postfix to mandate SSL 2.0 instead of TLS? Unfortunately no one knows anything about the Java code, and apparently it can't be decompiled. Any help is appreciated.
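
    For the Postfix leg, a relay sketch (parameter values are assumptions to adapt, not a confirmed fix for the legacy client): accept mail from the old system's subnet without requiring TLS, and forward everything to Exchange.

        # /etc/postfix/main.cf (excerpt)
        mynetworks = 127.0.0.0/8, 10.0.0.0/24     # subnet of the old system
        relayhost = [exchange2010.example.com]:25 # hand everything to Exchange
        smtpd_tls_security_level = may            # offer TLS, never require it
        smtpd_tls_cert_file = /etc/ssl/certs/relay.pem
        smtpd_tls_key_file  = /etc/ssl/private/relay.key

    Whether the Java side can talk to this depends on which handshake it attempts; note that modern OpenSSL builds no longer enable SSL 2.0 at all, so mandating SSLv2 specifically may simply not be possible.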

  • Will learning to use Fedora also teach me my way around Redhat (CentOS)?

    - by Matt Untsaiyi
    I want to dive into the open-source world and start using a Linux distro while learning to program. I've looked over the options and it pretty much boils down to Fedora or CentOS. The reasoning is that I'm hoping to kill two birds with one stone: Red Hat seems to be "the choice" for servers, so I figure that as I learn to program I can also learn my way around Linux, or Red Hat more specifically, and get that under my belt too. I want to use Fedora and be on the frontier of new software (since I'm not doing anything critical), but if it's completely different from Red Hat I'd rather just use CentOS. So is it? Or can I use one and know the other?

  • Cisco ASA Multiple Public IP

    - by KGDI
    I have a Cisco ASA 5510, and the articles I've found about the ASA and multiple public IPs say this can't be done. My question is how best to solve a scenario like this. I have 3 zones: Outside (the internet), Inside (client machines) and a DMZ (a zone for servers providing external and internal services). My actual scenario is a bit more complex, but to keep things simple this will do: I want to place an Exchange server and a web server, both externally reachable, in the DMZ zone. The web server uses TCP 80/443; the Exchange server uses 443. So to the problem: with the ASA having only one public IP, how would you DNAT port 443 to both internal hosts behind that one public IP? Usually, when I do this kind of scenario with Linux boxes, I use alias interfaces like eth0:0 and eth0:1 and put one public IP on each. This must be a pretty common scenario; any ideas on how to solve it with an ASA? /KGDI
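
    A sketch of the usual trade-off (assumptions: ASA 8.3+ object-NAT syntax, placeholder addresses): two hosts cannot share tcp/443 on a single public IP, so either one service moves to an alternate outside port, or a spare public IP from the outside subnet is statically mapped, which the ASA supports even though only one IP sits on the interface itself.

        ! Option A: port-forward on the single interface IP;
        ! Exchange answers on the non-standard outside port 4443.
        object network dmz_web
         host 172.16.1.10
         nat (dmz,outside) static interface service tcp 443 443
        object network dmz_exch
         host 172.16.1.20
         nat (dmz,outside) static interface service tcp 443 4443
        !
        ! Option B: map a spare public IP to the Exchange host instead.
        ! object network dmz_exch
        !  nat (dmz,outside) static 203.0.113.12
        !
        access-list outside_in extended permit tcp any object dmz_web eq 443
        access-list outside_in extended permit tcp any object dmz_exch eq 443
        access-group outside_in in interface outside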
