Search Results

Search found 29037 results on 1162 pages for 'cold start'.


  • Postfix relay all mail through SES except for one sending domain / address

    - by Kevin
    I'm thinking this is really, really simple, but I can't figure out what I need to do. I don't mess with Postfix much (I just let it run and do its thing), so I've got no idea where to even start with this. We have Postfix currently configured to relay all mail out through SES using the configuration below. We need to modify this so that emails sent from one of our domains (domain.com) DO NOT go through SES. Everything else should continue to flow out through the SES connection. I'm assuming this is a one-line thing, but my Google skills are not helping me at all.

        relayhost = email-smtp.us-east-1.amazonaws.com:25
        smtp_sasl_auth_enable = yes
        smtp_sasl_security_options = noanonymous
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_use_tls = yes
        smtp_tls_security_level = encrypt
        smtp_tls_note_starttls_offer = yes
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        smtp_destination_concurrency_limit = 450

    Update: I have created a sender_transport file in /etc/postfix. In it is:

        @domain.com smtp:

    I then ran this through postmap and placed

        sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

    above the block of code shown earlier and restarted Postfix, but still all email is going out through SES. Log after sending:

        Oct 22 14:38:48 web postfix/smtp[19446]: 4B19D640002: to=<[email protected]>, relay=email-smtp.us-east-1.amazonaws.com[54.243.47.187]:25, delay=1.4, delays=0.01/0/0.92/0.44, dsn=2.0.0, status=sent (250 Ok 00000141e21b181f-ee6f7c4f-f0f5-4b0f-ba69-2db146a4f988-000000)
        Oct 22 14:38:48 web postfix/qmgr[19435]: 4B19D640002: removed

    I don't think this log is what you're looking for, but it's the only thing that is logged when mail goes out, and this is with me running /usr/sbin/postfix -v start manually and not with the init script.
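    A minimal sketch of the sender-dependent transport setup being attempted, assuming Postfix 2.7 or later; domain.com stands in for the real sending domain, and the reading that an empty nexthop ("smtp:") makes Postfix deliver straight to the recipient's MX rather than the SES relayhost follows the usual interpretation of the docs, so verify with a test message:

        # /etc/postfix/sender_transport -- keyed on the envelope sender domain;
        # "smtp:" with no nexthop means: deliver directly, not via relayhost
        @domain.com    smtp:

        # /etc/postfix/main.cf -- everything else still relays through SES
        sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport
        relayhost = email-smtp.us-east-1.amazonaws.com:25

        # rebuild the hash map, confirm the lookup matches, then reload
        postmap /etc/postfix/sender_transport
        postmap -q "@domain.com" hash:/etc/postfix/sender_transport   # should print: smtp:
        postfix reload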

    Read the article

  • Incorrect durations in mp4 files created by ffmpeg (avconv)

    - by Ruslan Sharipov
    Example usage:

        avconv -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af -vcodec copy -an -sn -map 0 -f segment -segment_format mp4 -segment_time 60 -y %05d.mp4

    Console output:

        avconv version 0.8.3-6:0.8.3-1+b1, Copyright (c) 2000-2012 the Libav developers
          built on Jun 15 2012 13:54:35 with gcc 4.7.0
        HandShake: client signature does not match!
        Metadata:
          height                480.00
          remote_addr: sdp_session {sdp_session,0,
              {sdp_o,"-","1289703354974145","1289703354974145",inet4,"10.1.12.99"},
              "Media Presentation", {inet4,"0.0.0.0"}, {0,0},
              [{"control","*"},{"range","npt=0.0 start 30400239.52
          timeshift_duration    319250.58
          timeshift_size        120000.00
          width                 640.00
        [flv @ 0x1d36a40] Estimating duration from bitrate, this may be inaccurate
        Input #0, flv, from 'rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af':
          Duration: N/A, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: h264 (Baseline), yuvj420p, 640x480 [PAR 1:1 DAR 4:3], 1k tbr, 1k tbn, 2k tbc
        Output #0, segment, to '%05d.mp4':
          Metadata:
            encoder         : Lavf53.21.0
            Stream #0.0: Video: libx264, yuvj420p, 640x480 [PAR 1:1 DAR 4:3], q=2-31, 1k tbn, 1k tbc
        Stream mapping:
          Stream #0:0 -> #0:0 (copy)
        Press ctrl-c to stop encoding
        ^Cframe= 9566 fps= 36 q=-1.0 Lsize=      -0kB time=318.25 bitrate=  -0.0kbits/s
        video:30348kB audio:0kB global headers:0kB muxing overhead -100.000071%
        Received signal 2: terminating.

    Result:

        serafim@yard:~/video2$ ls
        00000.mp4  00001.mp4  00002.mp4  00003.mp4  00004.mp4  00005.mp4

    Now play the files in a player such as VLC. This is what happens: the first fragment (00000.mp4) plays fine, no problems, but from the second fragment (00001.mp4 and beyond) the bug shows up: 00001.mp4 shows a black screen for its first 60 seconds and only starts playing video from second 61. Attachments: https://dl.dropbox.com/u/760901/rtmp_and_mp4.zip How can I get rid of the black-screen delay at the beginning of the segments? Can I pass ffmpeg some parameters, or is there third-party software that can correct the resulting mp4 segments?
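    Not the poster's solution, just a sketch of one common workaround: with -vcodec copy the segment muxer can only cut wherever the input happens to have keyframes, so segments often begin with frames the player cannot decode yet (hence the black lead-in). Re-encoding and forcing a keyframe at every segment boundary avoids that, at the cost of CPU. STREAMKEY is a placeholder, and the flag set assumes a reasonably recent ffmpeg; avconv 0.8 may not support all of these options:

        # force a keyframe every 60 s, aligned with -segment_time, and reset
        # timestamps so each segment starts at t=0
        ffmpeg -i rtmp://maps.lo.ufanet.ru/live/STREAMKEY \
               -an -sn -map 0 \
               -c:v libx264 -preset veryfast \
               -force_key_frames "expr:gte(t,n_forced*60)" \
               -f segment -segment_format mp4 -segment_time 60 \
               -reset_timestamps 1 \
               -y %05d.mp4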

    Read the article

  • Linux wireless disconnect every 20 minutes

    - by james
    My laptop runs CentOS 6.3 with kernel 2.6.32-279.el6.x86_64. My wireless adapter is an Intel Corporation Centrino Wireless-N 1000. The wireless connection always drops after about 20 minutes. The network applet shows the connection is still up with good signal strength, but I just cannot load any web pages, not even the configuration page of the wireless router. The problem persists until I disable and reconnect the wireless. Other devices, like my cell phone, use the same wireless network without the problem, and just yesterday I was using the same laptop with Fedora 17 without this problem either. I also searched the internet, and someone said that running the NetworkManager and network services simultaneously may be a problem. But I cannot stop either one of them: if I stop network and start NetworkManager, the network service will start again automatically; if I stop NetworkManager and run network, it says "Device does not seem to be present, delaying initialization." when trying to bring up the wireless. What shall I do to get rid of the problem? Thank you very much!
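    A sketch of the usual way on CentOS 6 to let exactly one of the two services manage the interface; the interface name wlan0 and the ifcfg file name are assumptions (the real file may be named after the SSID or MAC):

        # Option A: let NetworkManager own the device and stop the legacy
        # network service from being started at boot
        chkconfig network off
        chkconfig NetworkManager on
        service NetworkManager restart

        # Option B: the other way around -- mark the interface as not
        # NM-controlled in its ifcfg file and use the network service only
        # /etc/sysconfig/network-scripts/ifcfg-wlan0
        NM_CONTROLLED=no
        ONBOOT=yes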

    Read the article

  • VMware vSphere cluster design for site redundancy

    - by Stefan Radovanovici
    I have a question about the best design for site redundancy when using vSphere clusters. A bit of background about our situation first, though. We are a medium-sized company with two main offices, located in different countries. Our networks are linked by a Layer 2 150 Mbps leased line which is currently underused. We have a variety of services running for internal use within the company, some on physical servers and some on existing vSphere clusters. In our department we also run several services (almost all running under various forms of Linux) like NTP, syslog, jump servers, monitoring servers and so on. We now have the requirement that those servers need to be redundant within each location (which they are not at the moment) and also site redundant (which they are to some extent; the servers are duplicated in the second location, with configurations kept in sync via various methods at the application layer).

    There is no SAN available for us, at least not one that we can use at the moment. Cost is also an issue: while we do have some budget available for this, we can't afford to buy SANs for both locations, for example. I looked at the VSA feature and it seems that this could be something for us, but I am unsure how to solve the site-redundancy requirement. At the moment, for testing purposes, I am setting up vSphere 5 with VSA on two ESXi hosts in a lab. I am currently using the Essentials Plus kit with a VSA license, which allows me to build a VSA cluster on up to 3 hosts, together with a vCenter license to manage them. The hosts each have two dual-port network cards and two 600GB drives running in RAID 1. Hardware-wise this will be enough for us to run all the services we need as VMs and will provide redundancy within the site. At the moment I see only two options for site redundancy:

      - build an identical VSA cluster in the second location and keep the various services synced at the application layer (database sync, rsync and so on);
      - simply move one of the hosts from the existing cluster to the second location, basically having the VSA cluster span the 150 Mbps link between the sites.

    I would very much prefer the second option, but I am unsure how well it will work, if it can work at all. Technically it should: we can span the needed VLANs across the leased line and have them available in the second location. The advantage would be that we don't need to worry at all about syncing databases and the like. But I have the feeling that the bandwidth will not be enough, and I have no way of knowing how much traffic the VSA cluster will generate between the hosts. I realize that this will most likely depend on the individual usage of the VMs, but still, I have no idea how VSA replicates data between the ESXi hosts.

    Are these my only options, or can my goals be achieved in some other way? Is there perhaps a way to have some sort of "cold standby" cluster in the second location where the VMs would be synced once per night from the main location? The idea is that in case the first site becomes unavailable, we would be able to bring all those VMs online there. We would be OK with the data being one day old. Any answers are appreciated. Best regards, Stefan

    Read the article

  • Execute encrypted files but don't let anybody read them.

    - by Stebi
    I want to provide a virtual machine image with an installed web application. The user should be able to boot the VM (not log in, just boot) and a webserver should start automatically. The point is that I want to hide the (Ruby) source code of the web application from everyone, as there is no obfuscator for Ruby. I thought I could use file system encryption to encrypt the directory with the source code (or even a whole partition), but the webserver user must be able to read it automatically after booting. Nobody is allowed to log in as the webserver user (or any other user), so nobody else can read the contents. My questions are now: Is this possible? Because I give away the whole VM, everybody could mount its virtual disks and read them (except the encrypted one). Is it then possible to find the key the webserver user needs to decrypt the files, and decrypt them manually? Or is it safe to give such a VM away? The problem is that everything needed to decrypt the files must be included somewhere in the VM, otherwise the webserver cannot start automatically. Maybe I'm completely wrong and you have another tip for me for securing the source code.
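    For reference, a minimal sketch of the kind of setup being described: a LUKS-encrypted volume whose keyfile lives inside the VM so it can be unlocked at boot without any login. This only illustrates the mechanism; as the question suspects, anyone who can mount the other virtual disks can also read the keyfile, so it obscures rather than protects. Device names and paths are assumptions:

        # one-time setup: encrypt a partition and enrol a keyfile
        dd if=/dev/urandom of=/root/app.key bs=4096 count=1
        cryptsetup luksFormat /dev/sdb1 /root/app.key
        cryptsetup luksOpen --key-file /root/app.key /dev/sdb1 appsrc
        mkfs.ext4 /dev/mapper/appsrc

        # unlock and mount automatically at every boot
        # /etc/crypttab
        appsrc  /dev/sdb1  /root/app.key  luks
        # /etc/fstab
        /dev/mapper/appsrc  /srv/webapp  ext4  defaults  0 2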

    Read the article

  • Reading log files from web application

    - by Egorinsk
    I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume that the log files can be really large, like hundreds of megabytes. I have some ideas:

      - Write a shell script that is run via sudo and tails the last 512 KB of the log into a separate file that can be read by the application; that's inefficient, because it forks a new process and the data has to be read twice.
      - Add www-data to the adm group (which can read the logs); that's insecure.
      - Start a PHP process via cron every minute to read the logs; that's not very good, because it doesn't allow real-time monitoring. Also, this script would run even when I'm not reading logs and consume CPU time (the server is in the cloud, and I'll have to pay for it).
      - Create hardlinks to all the log files with lowered permissions; I guess that won't work, because logrotate could recreate the log files and they would change inode numbers.
      - Start a separate nginx/Apache server under a privileged user that may read the logs.

    Maybe anyone has a better solution?
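    A sketch of the first idea without the intermediate file, kept deliberately narrow: sudo is allowed only for one exact tail command, and the PHP side would simply capture its output with something like shell_exec('sudo /usr/bin/tail -c 524288 /var/log/syslog'). The byte count and log path are assumptions:

        # /etc/sudoers.d/log-viewer  (edit with: visudo -f /etc/sudoers.d/log-viewer)
        # allow www-data to run exactly this command, nothing else, no password
        www-data ALL=(root) NOPASSWD: /usr/bin/tail -c 524288 /var/log/syslog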

    Read the article

  • vmware player won't run on CentOS due to missing /dev/vmmon, what could be the problem?

    - by Graphics Noob
    So I've tried installing VMware Player 3.1.4 and 3.1.3, and both times had the same problem: when I try to load a VM I get the error "Could not open /dev/vmmon". When I ls /dev/ I can see there is no "vmmon" device present. When I try running sudo /etc/init.d/vmware start I get the output:

        Starting VMware services:
        VMware USB Arbitrator                              [  OK  ]
        Virtual machine monitor                            [FAILED]
        Virtual machine communication interface            [  OK  ]
        VM communication interface socket family           [  OK  ]
        Blocking file system                               [  OK  ]
        Virtual ethernet                                   [FAILED]

    which shows that the virtual machine monitor fails to load. I tried following the advice on this site and ran vmware-modconfig --console --install-all. I notice there are no errors during the compilation, but at the end I get the message:

        Starting VMware services:
        VMware USB Arbitrator                              [  OK  ]
        Virtual machine monitor                            [FAILED]
        Virtual machine communication interface            [  OK  ]
        VM communication interface socket family           [  OK  ]
        Blocking file system                               [  OK  ]
        Virtual ethernet                                   [  OK  ]
        Unable to start services

    Out of curiosity I tried sudo /sbin/insmod /lib/modules/2.6.18-238.9.1.el5xen/misc/vmmod.ko but got the error message:

        insmod: error inserting 'vmmon.ko': -1 Invalid module format

    I have a feeling this may be the root of the problem, but I don't know what could be causing it or how to fix it.
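    "Invalid module format" usually means the module was built against different kernel headers than the running kernel, and the running 2.6.18-...el5xen kernel is a Xen kernel, which VMware's vmmon generally cannot be built for at all. A hedged sketch of how one might confirm the mismatch before deciding whether to boot the non-Xen kernel instead; package names assume CentOS/RHEL 5:

        # compare what the module was built for with what is actually running
        uname -r
        modinfo /lib/modules/$(uname -r)/misc/vmmon.ko | grep vermagic

        # make sure headers for the *running* kernel are installed, then rebuild
        yum install gcc kernel-devel-$(uname -r)    # kernel-xen-devel-... on a Xen kernel
        vmware-modconfig --console --install-all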

    Read the article

  • Having problems VPN'ing into our Windows server network.

    - by Pure.Krome
    Hi folks. When two people (on their notebooks) try to VPN to our office, only the first user gets a connection; the second user always times out. Is it possible for VPN to allow two or more people, using/sharing the same external public IP, to connect and authenticate? Now for some specifics (because those two statements are very broad). I'm not in the IT dept, I'm a developer. Our IT dept don't really care (sigh), so it's up to me to fix this crap. Our office is all Microsoft shop stuff, servers and clients. We also have a firewall (WatchGuard brand?) and some other crazy setups (yes I know, it's very vague :( ). So I'm wondering: is it possible for multiple users, from the same public IP, to connect via VPN to a Windows server? I'm under the impression that it is. But is it possible that this only works when the clients (who are all behind the single public IP; otherwise they would have their OWN IPs) have UPnP running or something? This is killing me and I need to start asking the right questions, because these guys don't know what they are doing and I can't work without this working. I know this is a vague question with so many 'if-what's-etc', but maybe some questions/suggestions from you guys might start to lead to solving this problem. EDIT: Network Connection: WAN Miniport (PPTP)

    Read the article

  • Control Panel as menu includes a blank item

    - by Matthew Ferreira
    When viewed as a menu attached to the Start Menu in Windows 7 Ultimate x64, the Control Panel contains a blank item (screenshot omitted). This item cannot be deleted or removed. I also cannot create a shortcut to it; no error message is displayed, instead simply nothing happens. I've tried using Shell Object Editor (using Run as Administrator) to find out if there is an errant entry in the Control Panel, but many entries (almost two dozen) are blank. There are several valid entries as well. I've looked through the registry and through C:\Windows, \system32, and \SysWOW64, but have had no success. I looked at this question, but I am not using Windows XP and thus have no option to use Tweak UI's Rebuild Icons function. Please note that there is no empty entry in the Control Panel when it is opened normally, only when it is attached to the Start Menu as a menu. I have compared the list of entries in the attached menu to the normal Control Panel and, other than the blank entry, they are exactly the same; nothing is missing from one or the other. I've also compared the menu and the normal view to reference images and lists of Control Panel items and have found no irregularities. Is anyone familiar with this problem, or does anyone know of a solution? I've performed virus and malware scans and found nothing. I've used CCleaner with no change. Nothing with Shell Object Editor. Nothing with Registry Editor. Certainly someone here knows how to fix this. My only guess is the many blank entries visible in Shell Object Editor, but I am reluctant to delete that many items without further analysis and guidance. I appreciate your time and consideration.
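    One way to look for the culprit without a third-party tool, offered only as a hedged diagnostic idea: the Start Menu builds that fly-out from the Control Panel namespace registrations, so listing them and checking each registered CLSID for a missing or empty default value can reveal an orphaned entry. The exact keys worth inspecting are an assumption; {CLSID-FROM-LIST} is a placeholder for a value taken from the first query:

        rem list Control Panel namespace extensions (64-bit and 32-bit views)
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel\NameSpace" /s
        reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel\NameSpace" /s

        rem then look up each listed CLSID to see whether it still resolves to a name
        reg query "HKCR\CLSID\{CLSID-FROM-LIST}" /ve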

    Read the article

  • In Stud, which Private RSA Key should be concatenated in the x509 SSL certificate pem file to avoid "self-signed" browser warning?

    - by Aaron
    I'm trying to implement Stud as an SSL termination point in front of HAProxy, as a proof of concept for WebSockets routing. My domain registrar Gandi.net offers free 1-year SSL certs. Through OpenSSL, I generated a CSR, which gave me two files: domain.key and domain.csr. I gave domain.csr to my trusted authority and they gave me back two files: domain.cert and GandiStandardSSLCA.pem (I think this is referred to as the intermediate cert?). This is where I encountered friction: Stud, which uses OpenSSL, expects there to be an "RSA private key" in the "pem-file", which it describes as "SSL x509 certificate file. REQUIRED." If I add domain.key to the bottom of Stud's pem-file, Stud will start, but I receive the browser warning saying "The certificate is self-signed." If I omit domain.key, Stud will not start and throws an error triggered by an OpenSSL function that appears intended to determine whether or not my "pem-file" contains an "RSA private key". At this point I cannot determine whether the problem is:

      - the free SSL cert will always be self-signed and will always cause the browser to present a warning;
      - I'm just not using Stud correctly;
      - I'm using the wrong "RSA private key";
      - the CA domain cert, the intermediate cert, and the private key are in the wrong order.
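    For what it's worth, a sketch of the pem-file layout commonly used with stud (and its successor hitch): the leaf certificate, the intermediate, and the private key that generated the CSR, all concatenated into one file. The order shown is what most guides use, but treat it as an assumption and re-test after reordering; a "self-signed" warning often just means the browser never received the intermediate. The output path is a placeholder:

        # combined pem for stud: leaf certificate, CA chain, then the private key
        cat domain.cert GandiStandardSSLCA.pem domain.key > /etc/stud/domain-combined.pem

        # sanity check: the cert and the key must share the same modulus
        openssl x509 -noout -modulus -in domain.cert | openssl md5
        openssl rsa  -noout -modulus -in domain.key  | openssl md5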

    Read the article

  • Starting multiple Chrome full screen instances on multiple monitors from (batch) script

    - by Bob Groeneveld
    My goal is to show different web content full screen on multiple monitors automatically after booting from a single computer. The browser I would like to use is Chrome. If Chrome does not support this and Firefox does that would be fine. The OS I would prefer is Windows, if it turns out that Linux is possible that would be fine. On Windows it is possible to set the position of the Chrome browser window (--window-position=) and make Chrome start in full screen mode (--kiosk). Using these options combined you can start Chrome full screen on any of the desktops/screens that you have connected to your computer. I have managed to get this working. However, if I then try to do the same thing a second time to have Chrome full screen on a second screen the second Chrome window will open over the first window, no matter the coordinates I use for the --window-position parameter. I have tried using Chrome profiles and copying the Chrome directory and starting the second chrome.exe. All these things result in the same behaviour.
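    A sketch of the batch-file approach that is usually suggested for this: give each Chrome instance its own --user-data-dir, because two kiosk windows sharing one profile are treated as a single browser and the second window ignores --window-position. Paths, URLs and monitor coordinates below are assumptions:

        @echo off
        rem first monitor assumed to start at 0,0 -- dedicated profile directory
        start "" "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" ^
            --user-data-dir=C:\kiosk\profile1 --kiosk --window-position=0,0 http://example.com/screen1

        rem second monitor assumed to start at x=1920
        start "" "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" ^
            --user-data-dir=C:\kiosk\profile2 --kiosk --window-position=1920,0 http://example.com/screen2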

    Read the article

  • lighttpd: why does using a port >= 9000 not work properly?

    - by yejinxin
    I have a lighttpd server which works normally: I can access the website from outside (not localhost) via http://vm.aaa.com:8080. Let's just assume that it's a simple static website, without PHP or MySQL. Now I want to copy this website as a test site (using another port) on the same machine, and I do not want to use a virtual host. So I just copied all the files of the original server, including lighttpd's bin/, conf/, htdocs/ and lib/ folders, and made the required changes, including changing lighttpd.conf. Now what confuses me is this: if I change the port to a number less than 9000, it works perfectly. But if the port is changed to a number equal to or greater than 9000, lighttpd can start, but I cannot access the new website from outside, while I CAN access it from INSIDE (I mean on the same LAN or localhost). The access log from inside is like below:

        vm.aaa.com:9876 10.46.175.117 - - [08/Oct/2012:13:18:47 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.12.1 (x86_64-redhat-linux-gnu) libcurl/7.12.1 OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6"

    The command I use to start lighttpd is:

        bin/lighttpd -f conf/lighttpd.conf -m lib/ -D

    My lighttpd.conf is like:

        server.modules = (
            "mod_access",
            "mod_accesslog",
        )
        var.rundir           = "/home/work/lighttpd_9876"
        server.port          = 9876
        server.bind          = "0.0.0.0"
        server.pid-file      = var.rundir + "/log/lighttpd.pid"
        server.document-root = var.rundir + "/htdocs/"
        var.cronolog_path    = "/home/work/lighttpd_9876/cronolog/sbin/cronolog"
        server.errorlog      = ...
        accesslog.filename   = ...
        ...

    So why is this happening? I've tried several different ports, still the same. Aren't all ports between 8000 and 65535 treated the same?
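    Behaviour that differs between inside and outside access while the daemon itself listens fine usually points at a packet filter rather than at lighttpd. A hedged sketch of what one might check on the server; whether iptables (or an upstream firewall) is actually the culprit here is an assumption:

        # is lighttpd really listening on the new port?
        netstat -tlnp | grep 9876

        # look for a policy that only opens a whitelist of ports
        iptables -L -n -v --line-numbers

        # if so, open the test port (insert before any blanket REJECT/DROP rule)
        iptables -I INPUT -p tcp --dport 9876 -j ACCEPT
        service iptables save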

    Read the article

  • Fresh 12.04 install - MySQL not starting

    - by Lee Armstrong
    I have a freshly installed Ubuntu 12.04 x64 server and I installed Percona Server from their official repositories. Trouble is, it will not start! mysql-error.log shows nothing obvious:

        121129 12:16:54 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql/
        121129 12:16:54 [Note] Plugin 'FEDERATED' is disabled.
        121129 12:16:54 InnoDB: The InnoDB memory heap is disabled
        121129 12:16:54 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121129 12:16:54 InnoDB: Compressed tables use zlib 1.2.3
        121129 12:16:54 InnoDB: Using Linux native AIO
        121129 12:16:54 InnoDB: Initializing buffer pool, size = 12.0G
        121129 12:16:54 InnoDB: Completed initialization of buffer pool
        121129 12:16:54 InnoDB: highest supported file format is Barracuda.
        121129 12:16:55 InnoDB: Waiting for the background threads to start
        121129 12:16:56 Percona XtraDB (http://www.percona.com) 1.1.8-rel29.1 started; log sequence number 1598476
        121129 12:16:56 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
        121129 12:16:56 [Note] - '0.0.0.0' resolves to '0.0.0.0';
        121129 12:16:56 [Note] Server socket created on IP: '0.0.0.0'.
        121129 12:16:56 [Note] Event Scheduler: Loaded 0 events
        121129 12:16:56 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.5.28-29.1-log'  socket: '/var/run/mysqld/mysql.sock'  port: 3306  Percona Server (GPL), Release 29.1
        121129 12:16:56 [Note] Event Scheduler: scheduler thread started with id 1

    And the syslog shows:

        Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
        Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: #007/usr/bin/mysqladmin: connect to server at 'localhost' failed
        Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!

    The socket file is being created, and I can access the server without using the socket, with mysql -h 127.0.0.1 -P 3306 -u root -pPASSWORD.
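    Note that, per the two logs above, the server announces its socket as /var/run/mysqld/mysql.sock while the Debian init script's health check looks for /var/run/mysqld/mysqld.sock, so mysqld is probably running fine and only the check fails. A hedged sketch of aligning the two; the file sections are assumed to follow the usual Debian/Percona layout:

        # /etc/mysql/my.cnf -- make server and clients agree on one socket path
        [mysqld]
        socket = /var/run/mysqld/mysqld.sock

        [client]
        socket = /var/run/mysqld/mysqld.sock

        # then restart and re-run the same check the init script uses
        service mysql restart
        mysqladmin --defaults-file=/etc/mysql/debian.cnf ping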

    Read the article

  • What's the easiest route to trying out Mono 2.6?

    - by E J
    We have several web applications built on Microsoft technologies (ASP.NET + MVC framework, built using VS2008, MS SQL Server). I have recently been playing with Ubuntu (9.10), installed using Wubi, and wanted to see if I can get our apps running on a FOSS software stack. I have got the hang of the very basics of PostgreSQL, and I have read that there is some support for LINQ to SQL in Mono (as of 2.6), as well as ASP.NET/MVC. However, I am unsure how to go about getting Mono 2.6 up and running. Here is what I have discovered so far:

      - Ubuntu is not meant for the 'cutting edge'; it is designed to be stable, hence it sometimes takes a release cycle or two for new software to make it to the repositories.
      - Mono is already installed by default, but it is likely to stay at version 2.4 for at least the 10.04 release.
      - You can install parallel environments of Mono, if you know what you're doing.

    I have had a go at setting up parallel environments, but haven't had any luck yet. (And to be honest, I am not certain that that will do what I think it's going to do.) (tl;dr start here) Is there a distribution of Linux similar enough to Ubuntu that I wouldn't have to start the learning curve all over again, but that will let me install Mono 2.6, PostgreSQL (and possibly MonoDevelop 2.4)? Or should I persist with Ubuntu?

    Read the article

  • Small business server 2011 standard - applications randomly closing for remote desktop users

    - by Ash King
    I have an issue where, when you are connected through Remote Desktop (it doesn't matter whether you have administrative rights or not), any application that you run (Outlook, Word, Excel, Notepad, cmd, etc.) will randomly crash and produce an error such as:

        Faulting application name: EXCEL.EXE, version: 14.0.6112.5000, time stamp: 0x4e9b2b30
        Faulting module name: ieframe.dll, version: 8.0.7600.16930, time stamp: 0x4eeb0187
        Exception code: 0xc0000005
        Fault offset: 0x0000000000131e03
        Faulting process id: 0x3d4c
        Faulting application start time: 0x01cecf3491388e43
        Faulting application path: C:\Program Files\Microsoft Office\Office14\EXCEL.EXE
        Faulting module path: C:\Windows\System32\ieframe.dll
        Report Id: 1c06abd4-3b2b-11e3-bd8d-001999b270e9

    I noticed the ieframe.dll, but it's not constant for every application that crashes, e.g.:

        Faulting application name: OUTLOOK.EXE, version: 14.0.6109.5005, time stamp: 0x4e79b6c0
        Faulting module name: PSTOREC.DLL_unloaded, version: 0.0.0.0, time stamp: 0x4a5be02a
        Exception code: 0xc0000005
        Fault offset: 0x000007fef39c7158
        Faulting process id: 0x43f8
        Faulting application start time: 0x01cecf33fe5eec26
        Faulting application path: C:\Program Files\Microsoft Office\Office14\OUTLOOK.EXE
        Faulting module path: PSTOREC.DLL
        Report Id: 0c0f5934-3b2b-11e3-bd8d-001999b270e9

    I am unable to perform an sfc /scannow command because cmd.exe crashes as well. I have performed a virus scan on the server, which did originally pick up five items:

      - riskware.tool.ck -> File
      - riskware.tool.ck -> Memory Process
      - trojan.agent.bdavgen -> File
      - trojan.agent -> File
      - HiJack.comsysapp -> Registry Data

    But after removing these and rebooting the machine we have had no luck. Has anyone else ever come across this issue before? To elaborate, it is happening as frequently as every minute.
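    Since cmd.exe itself crashes inside the session, one hedged way to still get an integrity check done is to run sfc offline from the Windows Recovery Environment (boot the SBS 2011 media, choose Repair your computer, open a command prompt). The drive letter as seen from WinRE is an assumption and often differs from C::

        rem from the WinRE command prompt; D: is assumed to be the installed Windows volume
        sfc /scannow /offbootdir=D:\ /offwindir=D:\Windows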

    Read the article

  • Borked Ubuntu uninstall - need to delete boot partition (I think)

    - by Max Williams
    I just got a new PC laptop with Windows 7 and wanted to install Ubuntu on it. Which I did, no problem there, by downloading the installer, burning it to DVD, then booting off the DVD and installing. Then I realised that the new Ubuntu 12.04 uses the Unity desktop, which I immediately disliked and, after some research, began to hate. So I decided (after a little googling) to install Linux Mint instead. Thinking I'd better start from scratch, I went to the Windows 7 disk manager and wiped the Ubuntu partition that had been created. Now, when I start up, I get an error from GRUB, the Ubuntu boot manager:

        error: unknown filesystem
        grub rescue> _

    and a blinking cursor where I can enter commands. I suspect that what I've done is deleted the main Ubuntu partition but NOT deleted another partition which is a boot partition, or something like that? Can anyone tell me how I can rescue or unbork this? I'd like to either a) get back to my original Windows-only setup, OR b) install Linux Mint off DVD (which I have) into the empty partition, fixing any GRUB confusion in the process. Any suggestions? Thanks, Max. BTW please don't answer if you're just going to tell me to stick with 12.04, or install a different distro or something. I definitely want Mint and just want to fix this mess - thanks :)
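    For option (a), the usual route is to boot from Windows 7 install or repair media and put the Windows boot code back into the MBR so GRUB is no longer involved; option (b) normally sorts itself out because the Mint installer writes a fresh GRUB. A sketch of the Windows-side commands, run from the recovery environment's command prompt (whether /fixboot and /rebuildbcd are also needed depends on the setup):

        rem Windows 7 DVD -> Repair your computer -> Command Prompt
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd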

    Read the article

  • need some help figuring out clamav & monit monitoring error...unixsocket...

    - by Ronedog
    I need a bit of help figuring something out. First off, I'm not very well versed with FreeBSD servers, etc., but with some direction hopefully I can get this fixed. I'm using FreeBSD and installed Monit so I could monitor some of the processes that run Tomcat, Apache, MySQL, Sendmail and ClamAV. So far I'm only successful in getting Apache and MySQL monitored. I'm getting this error for ClamAV in the log file /var/log/monit.log:

        'clamavd' failed, cannot open a connection to UNIX[/usr/local/etc/rc.d/clamav-clamd]

    My config for ClamAV in /etc/monitrc is:

        ####################################################################
        # CLAMAV Virus Checks
        ####################################################################
        check process clamavd with pidfile /var/run/clamav/clamd.pid
            group virus
            start program = "/usr/local/etc/rc.d/clamav-clamd start"
            stop program = "/usr/local/etc/rc.d/clamav-clamd stop"
            if failed unixsocket /usr/local/etc/rc.d/clamav-clamd then restart
            if 5 restarts within 5 cycles then timeout

    Honestly, I really don't know much of what's going on here. My host, who helped me get the box set up, basically installed ClamAV but doesn't offer this kind of detailed support, so I'm left to figure this stuff out on my own; I own the box, but they provide the ISP service. Is there anyone who can help me troubleshoot this? Thanks for your help in advance.
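    One thing that stands out, offered as a hedged sketch rather than a confirmed fix: the "if failed unixsocket" test points at the rc.d start script, not at clamd's actual UNIX socket. The socket path is whatever LocalSocket is set to in clamd.conf (the path below is an assumption, so check the file); with that substituted the stanza would look like:

        check process clamavd with pidfile /var/run/clamav/clamd.pid
            group virus
            start program = "/usr/local/etc/rc.d/clamav-clamd start"
            stop program  = "/usr/local/etc/rc.d/clamav-clamd stop"
            # test the socket clamd really listens on (LocalSocket in clamd.conf)
            if failed unixsocket /var/run/clamav/clamd.sock then restart
            if 5 restarts within 5 cycles then timeout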

    Read the article

  • configuring apache with mod_mono for .net app

    - by Mystere Man
    I'm having a huge problem getting mod_mono and Apache configured to work correctly. I've had this working at one time, but I can't seem to figure out where I'm going wrong. I'm using mono-server4, and I'm trying to use a separate port from the main website. So in /etc/apache2/sites-available (with a link from sites-enabled) I have a vhost configuration that looks like this:

        <VirtualHost *:9999>
            ServerName XXX
            ServerAdmin web-admin@XXX
            DocumentRoot /var/xxx
            MonoServerPath XXX "/usr/bin/mod-mono-server4"
            MonoDebug XXX true
            MonoSetEnv XXX MONO_IOMAP=all
            MonoApplications XXX "/:/var/xxx"
            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias XXX
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>
            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
        </VirtualHost>

    I used mono-server4-admin to create the application:

        mono-server4-admin --path=/var/xxx --app=/XXX --port=9999

    When I start Apache, it gives the error:

        Syntax error on line 13 of /etc/apache2/sites-enabled/xxx: Server alias 'XXX, not found.

    This corresponds with the MonoSetServerAlias statement. So I commented it out, and when I do that Apache starts. However, when I try to access the site, I get a 500 error. The access log indicates that it's trying to access the app on port 80 rather than 9999. I'm not sure what the problem is here. Can anyone help me figure out where I went wrong? My mono-server4-hosts.conf contains this:

        # start /etc/mono-server4/conf.d/RMRSite/10_XXX
        Alias /XXX "/var/xxx"
        AddMonoApplications default "/XXX:/var/xxx"
        <Directory /var/xxx>
            SetHandler mono
            <IfModule mod_dir.c>
                DirectoryIndex index.aspx
            </IfModule>
        </Directory>
        # end /etc/mono-server4/conf.d/XXX/10_XXX

    Also, my /etc/mono-server4/conf.d/XXX/10_XXX (the configuration file for the XXX virtual host) contains this:

        path = /var/xxx
        alias = /XXX
        vhost = localhost
        port = 9999
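    A hedged sketch of the naming consistency mod_mono expects when the vhost is configured by hand: the alias given to MonoServerPath and MonoApplications must be exactly the string later referenced by MonoSetServerAlias (the quoted error suggests they do not match, perhaps because of a stray quote or character), and the packaged, auto-generated mono-server4-hosts.conf may also be claiming the same application, which could explain requests being handled under the port-80 site instead. Names and paths below are placeholders, not the poster's real values:

        # /etc/apache2/sites-available/xxx  (sketch, not the original file)
        <VirtualHost *:9999>
            ServerName xxx.example.com
            DocumentRoot /var/xxx
            # one alias name, repeated verbatim on every Mono* directive
            MonoServerPath   myapp "/usr/bin/mod-mono-server4"
            MonoApplications myapp "/:/var/xxx"
            <Location "/">
                MonoSetServerAlias myapp
                SetHandler mono
            </Location>
        </VirtualHost>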

    Read the article

  • Hard drive degradation from large memory usage and paging files?

    - by Stephen R
    I've had a question (or a few) regarding computer degradation going through my head for a while and haven't found many good resources for researching it.

      1) First off, when is the virtual RAM/paging file on a hard drive used by Windows? Is it used when the RAM is full? Or does Windows use the virtual RAM/paging file as intermediate caching between the RAM and actual hard drive space all the time?
      2) If I were to run many applications on my computer at the same time, and have a bad habit of doing this for the entire lifetime of the computer, does it use more of the virtual RAM/paging file than if I were to have fewer programs running? Just to note, the RAM never fills up on my computer, but it is used heavily.
      3) By extension of question 2, if the virtual RAM/paging file is used more heavily, would that result in rapid hard drive degradation?

    I have seen a pattern among all of the computers that I have owned or used in the past 5 years. I am the kind of person to leave my web browser up with 40 tabs, among other programs, which will typically eat up 40% of my memory. Over time my computer will slow down, browsers start crashing, programs start seizing up or crashing themselves, and eventually the computer becomes essentially unusable. I have been trying to rack my mind to come up with a solution other than purchasing a new PC only to have it die on me in the next couple of years as well. This is the only thought that has come to mind that might have a simple hardware fix... Windows ReadyBoost... maybe? I'd like to be able to discuss this so I can learn something about all of the above. Thanks.

    Read the article

  • Unusable network, packet losses between router and NIC

    - by KáGé
    I have this setup:

      - Gigabyte P35-DS3P motherboard
      - Asus NX1101 PCI network card (the one on the motherboard got fried a few years ago by a power surge)
      - Asus RT-N16 router
      - Windows 7 x64

    I think the other specs are irrelevant here, but I'll post them if you say so. Until a week ago everything was fine, but then my network became unusable: websites start loading but time out before anything comes through (true for the web interface of the router as well), I can't reach the computer from my notebook, and Windows' ping utility measures a ~50% packet loss between the computer and the router. Pinging localhost is good. The router works completely fine when wired to my notebook. I also tested different ports on the router, different cables, a different router, and connecting directly to the modem, but it's still the same. Sometimes it works for a few minutes right after turning on the machine, but then it becomes crap again; mostly it's useless from the start. I've tried updating the firmware on the router, updating the driver for the network card (after which I started getting BSoDs every 15 minutes), reinstalling Windows, and swapping to Fedora 15, but none of them changed anything. Does this mean that the network card is dying, or could it be something else? If it's the card, what model do you recommend as a replacement? (Could be PCI or PCI-E x1.) Thanks for your help.

    Read the article

  • Create account for service

    - by Andy
    I am configuring a new server. The server runs Hudson, which is going to copy some files from this server to another. The other server is a virtual machine. Both run Windows Server 2012. Hudson is started on server A with the "Local System" log-on. When I come to the copy phase it says "Access denied". Changing the log-on to "Administrator" works; however, I guess this is bad. I do not have much experience with user management. I tried to create a dedicated hudson account on both servers A and B, and I tried to make the service log on as that hudson account in the services management console, but then it doesn't start. How would you create an account for this particular service that has access to the shared folder on server B and can be used to start the service on server A? I guess I need two accounts with the same username and password on server A and server B? The folder on server B is shared with Everyone and the guest account is enabled.
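    With two standalone (non-domain) servers, the usual pattern is indeed a pair of local accounts with the same name and password: the service on A runs as that account, and the account on B is granted rights on the share. A hedged sketch follows; the service name "hudson", the password and the share path are assumptions, and the account also needs the "Log on as a service" right on server A, which "sc config" does not grant by itself (setting the log-on in the Services MMC does):

        rem on BOTH servers: create matching local accounts
        net user hudson S0meStr0ngP@ss /add /passwordchg:no

        rem on server A: run the Hudson service as that account
        sc config hudson obj= ".\hudson" password= "S0meStr0ngP@ss"
        sc stop hudson
        sc start hudson

        rem on server B: give the account access to the shared folder
        icacls D:\Builds /grant hudson:(OI)(CI)M
        net share Builds=D:\Builds /grant:hudson,CHANGE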

    Read the article

  • Move files from ftp server to s3

    - by lev
    I would like to set up an FTP server where users will upload files, and for each file, put it on S3 storage and delete it from the FTP server. (The server runs on an EC2 Ubuntu instance.) Here is what I have already tried, with no success:

      - Mount the S3 bucket using s3fs. I followed those instructions, but there is a bug in the latest version of s3fs that prevents it from working. The bug was fixed on the develop branch, but I don't want to use an unstable version in production.
      - Use vsftpd and sync the files periodically with s3cmd sync via cron. The problem with that approach is that s3cmd can start running in the middle of a file upload and start syncing the incomplete file. Also, s3cmd doesn't give any feedback if the sync fails, so I have no way of knowing whether I can delete the files after the sync command has finished running.
      - Use pure-ftpd's upload script feature (which allows running a script after a file has finished uploading), but I noticed that if the file upload failed in the middle, the script runs anyway, and I have no way of knowing whether the upload was successful or not.

    I've been at it for a few days now, and I'm at a loss here. Any suggestions will be welcomed.
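    For the third approach, a minimal sketch of what the upload hook could look like: pure-uploadscript hands the uploaded file's full path to the script as its first argument, and the file is only removed if the copy to S3 reports success. The bucket name, script path and the choice of s3cmd are assumptions, and this does not by itself address the concern about partially transferred files:

        #!/bin/sh
        # /usr/local/bin/push-to-s3.sh
        # pure-ftpd must have upload-script support enabled (CallUploadScript),
        # with the helper running as: pure-uploadscript -B -r /usr/local/bin/push-to-s3.sh
        FILE="$1"
        BUCKET="s3://my-upload-bucket"    # assumption

        # copy first, delete only if s3cmd exited cleanly
        if s3cmd put "$FILE" "$BUCKET/"; then
            rm -f -- "$FILE"
        else
            logger -t push-to-s3 "upload failed for $FILE, keeping local copy"
        fi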

    Read the article

  • Windows 7 hangs on black screen for a while after log in

    - by steini
    I get the welcome screen. I click on my user and get the "logging on" screen. After that, all I get is a black screen with a mouse cursor. I can't even start Task Manager: no Ctrl+Alt+Del or Ctrl+Shift+Esc. It stays like this for about 10 minutes, then the desktop finally starts loading. According to the HDD LED on my case, Windows isn't even trying to access the hard drive for that whole time; it's just hanging, doing nothing it seems. What I have tried:

      - Uninstalled the video driver and removed leftovers with Driver Sweeper
      - Disabled all startup programs and non-Microsoft services
      - Loaded "last known good configuration"
      - Ran the alleged "black screen fix" from Prevx against my better judgement (I don't really like running random EXEs without knowing what they do at all)

    None of that works. I can boot into safe mode normally. My specs:

      - i7 920
      - Gigabyte X58-UD3R
      - Gigabyte HD5870 1GB
      - 12GB Mushkin Silverline 1333MHz
      - Windows 7 Ultimate x64

    I'm also having another problem which I suspect is related. After I have gotten the computer up and running, everything works perfectly, but when it's been on for a while it starts behaving strangely when changing display modes. When I start a game or anything that changes the screen resolution, the computer freezes for about a minute, every time, until I reboot again. I think this is probably related to the black screen problem. Just thought I'd check to see if anyone has had the same problem. Let me know if I should post any more details about my system to help diagnose this. Thanks in advance.

    Read the article

  • How do I copy files between hard drives on the Ubuntu CLI?

    - by ed209
    I have a dedicated server with a 120GB main SSD. The server happens to come with a couple of 3000GB hard drives. I'd like to use them to back up my main drive. Preferably, I'd like one as an exact copy of the main SSD and the other with incremental backups of the MySQL database and a user-uploads folder. These are the drives I have:

        Disk /dev/sda: 120.0 GB, 120034123776 bytes
        255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000f2e18

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            2048     4196352     2097152+  83  Linux
        /dev/sda2         4198400     5246976      524288+  83  Linux
        /dev/sda3         5249024   234441647   114596312   83  Linux

        Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdb doesn't contain a valid partition table

        Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdc doesn't contain a valid partition table

    The first problem I have is that I have no idea how to copy from one drive to another. Kind of embarrassing, I know, but I don't know where to start. I'm thinking of this in terms of the Mac OS CLI, where I'm able to copy between /Volumes; is there an equivalent? (There is nothing under /mnt or /media.)
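    A sketch of the basic workflow, assuming /dev/sdb is the drive meant to mirror the SSD and that it still needs a partition table and filesystem. The device name is taken from the fdisk output above; everything else (mount point, filesystem, exclude list) is an assumption, and mkfs is destructive, so double-check the device first:

        # partition and format the first backup drive (DESTROYS its contents)
        parted /dev/sdb --script mklabel gpt mkpart primary ext4 0% 100%
        mkfs.ext4 /dev/sdb1

        # mount it; this is the rough equivalent of /Volumes on macOS
        mkdir -p /mnt/backup
        mount /dev/sdb1 /mnt/backup

        # copy the system, excluding pseudo-filesystems; rerunning this later
        # only transfers what has changed
        rsync -aAXv --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*"} / /mnt/backup/

        # mount at every boot
        echo '/dev/sdb1  /mnt/backup  ext4  defaults  0 2' >> /etc/fstab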

    Read the article

  • Performance decrease in every game and application

    - by Márk Vincze
    When I start a game, it initially runs smoothly, but after a couple of minutes the performance gradually decreases to the point of being unplayable (1-2 FPS). The sound also starts to lag at that point. This does not happen every time I start my PC; usually exiting the game, rebooting, then starting the game again solves the problem, and I can play with perfect FPS for as long as I want. I could not find any deterministic pattern for when this happens and when it doesn't. It happens in every game I have tried (SWTOR, Diablo 3, Skyrim), and not only games: simple applications like a browser or the Control Panel can also become unusably slow. This is a brand new PC I bought three months ago, and this problem has occurred since the first day I've been using it. Could you provide any advice on how to further diagnose the problem? I have tried reinstalling Windows and tried different video card drivers, but it did not help. It would be important to know whether this is a hardware or software problem, because I can use the warranty if it is a hardware issue. (I did not want to return the PC yet, because I can't reproduce the issue deterministically.) Spec of the PC:

      - Motherboard: ASROCK H61M-HVS
      - CPU: INTEL Core i3-2120 3.30GHz 1155 BOX
      - Memory: KINGMAX 4096MB DDR3 1333MHz KIT
      - Video card: GIGABYTE GV-R685OC-1GD HD6850 1GB GDDR5 PCIE
      - HDD: SEAGATE 500GB Barracuda 7200rpm 16MB SATA3 ST500DM002

    I am using Windows 7 64-bit. Thanks a lot in advance!

    Read the article
