Search Results

Search found 19674 results on 787 pages for 'free wordpress plugins'.


  • Allowing outbound traffic with APF/iptables for an OpenVZ container

    - by David
    I have APF installed on an OpenVZ container (Proxmox 2.1). The config is pretty much vanilla and things are working: external services like SSH and HTTP are reachable. My problem is that all outbound traffic on HTTP/HTTPS is blocked. If I enable egress filtering by setting EGF to 1 like this, all inbound and outbound traffic gets blocked:

        EGF="1"
        EG_TCP_CPORTS="21,25,80,443,43,53"
        EG_UDP_CPORTS="20,21,53"
        EG_ICMP_TYPES="all"

    I opened a single outbound rule with the following:

        # /usr/local/sbin/apf -a downloads.wordpress.org

    How do I allow all outbound traffic on HTTP/HTTPS without blocking all traffic? And why would I want to allow all inbound SSH/HTTP traffic but block all outbound traffic?
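    For reference, a minimal egress sketch for /etc/apf/conf.apf, assuming a stock APF install (the port lists are illustrative, and APF must be restarted for the change to take effect):

        # Enable outbound (egress) filtering
        EGF="1"
        # Outbound TCP client ports: FTP, SMTP, WHOIS, DNS, HTTP, HTTPS
        EG_TCP_CPORTS="21,25,43,53,80,443"
        # Outbound UDP client ports: DNS
        EG_UDP_CPORTS="53"
        # Allow all outbound ICMP types
        EG_ICMP_TYPES="all"

        # Then reload APF so the new egress rules take effect:
        # /usr/local/sbin/apf -r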


  • Cannot boot Windows XP from a cloned hard disk - what can I do?

    - by Martin
    My configuration: a PC (a few years old) with an MSI K8N Neo4-F motherboard and 1 GB RAM. Disk 1 (Maxtor, SATA II, 250 GB) has two partitions: partition 1 (48 GB) holds Windows XP Professional (NTFS) and partition 2 (190 GB) holds data (NTFS). I wanted a larger and faster disk (the PC is incredibly slow and the disk rattles constantly when I open an application or during Windows startup), so I installed Disk 2 (Seagate, SATA II, 500 GB) in the PC, created a 400 GB partition at the end of the disk and cloned the data to it, which worked well. I then installed a swap partition and a partition for Ubuntu Linux 12.10 on the first part of the disk, so I was able to boot Linux and the old Windows XP from the Linux boot menu at startup. Next I wanted to move Windows XP to the new disk, so I deleted the Linux partitions, cloned Windows XP to the new disk (with the free EASEUS tool), left both disks in the PC and tried to select the new hard drive as the boot disk. This did not work; the PC refused to boot from the second disk. I tried many things, with no success:

    - making the boot partition on the 2nd drive "active" in the Windows system preferences
    - modifying the boot.ini file to boot from the second disk; booting then ended with an error message stating that it was not possible to boot from this disk because of a hardware failure or something similar
    - removing the original disk and plugging the new one into the same SATA port as the original one; booting also failed with an error message
    - repairing the MBR by booting into recovery mode from the Windows XP installation CD-ROM, selecting the second disk and running FIXMBR, which reported that the MBR was fine; after that the PC at least tried to boot from the newer disk, but startup then hung on the blue screen with the Windows logo
    - deleting the cloned partition and cloning again, this time with Macrium Reflect Free, with no more success during booting

    What am I doing wrong? What can I do to successfully clone my Windows XP partition so that the larger disk replaces the original one and is bootable? (A boot.ini sketch follows below.)
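    For the boot.ini attempt, an entry that boots XP from a second physical disk might look like this (a sketch only; the rdisk() number depends on how the BIOS orders the drives, and partition() counts from 1):

        [boot loader]
        timeout=5
        default=multi(0)disk(0)rdisk(1)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="Windows XP (cloned disk)" /noexecute=optin /fastdetect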


  • 500 Internal Server Error

    - by Rockr
    I am facing a 500.0 Internal Server Error quite frequently with my website. The error details are given below:

        HTTP Error 500.0 - Internal Server Error
        C:\PHP\php-cgi.exe - The FastCGI process exceeded configured activity timeout

        Module          FastCgiModule
        Notification    ExecuteRequestHandler
        Handler         PHP_via_FastCGI
        Error Code      0x80070102
        Requested URL   http://mydomain.com:80/index.php
        Physical Path   C:\HostingSpaces\coderefl\mydomain.com\wwwroot\index.php
        Logon Method    Anonymous
        Logon User      Anonymous

    When I contacted the support team, they said that my site is making heavy SQL queries. I am not sure how to debug this, but my site is very small and the database is optimized. I'm running WordPress as the platform. How do I resolve this issue?
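    The timeout named in the error is IIS's FastCGI activityTimeout. As a sketch, it can be raised with appcmd (assumes IIS 7+ and the php-cgi.exe path shown in the error; 600 seconds is an illustrative value):

        %windir%\system32\inetsrv\appcmd set config -section:system.webServer/fastCgi "/[fullPath='C:\PHP\php-cgi.exe'].activityTimeout:600" /commit:apphost

    Raising the timeout only hides a slow script, of course; it buys time to find whatever is actually making index.php stall.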


  • Backing up a Linux VPS to Vista with rsync

    - by Frank
    I've been working to set up a Linux VPS to host a couple of WordPress sites and, eventually, a Mercurial server. I've set up one site and things have gone well. However, before I move anything else to the VPS, I need a backup solution. My provider, Linode, suggests rsync (among a couple of other options) for backups. I've seen a few posts on this site that suggest other backup solutions, including the Amazon cloud, but that costs money, and the VPS is all I want to spend on this for the time being. So I want my backup machine to be my home desktop computer. Assuming I'm using rsync, is it possible to use my Vista-based home computer as the destination for the backup? And if so, what type of command or connection would I need to configure on the Vista machine? Any insight would be helpful. It's probably obvious, but I've never used rsync.
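    One common pattern is to install an rsync port such as cwRsync on the Windows machine and pull from the VPS over SSH, so nothing needs to listen on the Vista box. A sketch (assumes cwRsync is installed, key-based SSH auth is set up, and the hostname and paths are illustrative):

        rsync -avz --delete -e ssh user@your-vps.example.com:/var/www/ /cygdrive/c/Backups/vps-www/

    Scheduled via Windows Task Scheduler, that gives an incremental nightly copy; --delete mirrors removals, so drop it if you want deleted files kept.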


  • How to disable password authentication for specific users in sshd

    - by Nick
    I have read several posts about restricting ALL users to key authentication only; however, I want to force only a single user (svn) onto key auth, while the rest can use a key or a password. I read "How to disable password authentication for every user except several", but the "Match User" part of sshd_config arrived with OpenSSH 5.1, and I am running CentOS 5.6, which only has OpenSSH 4.3. I have the following repos available at the moment:

        $ yum repolist
        Loaded plugins: fastestmirror
        repo id    repo name                                                status
        base       CentOS-5 - Base                                          enabled:  3,535
        epel       Extra Packages for Enterprise Linux 5 - x86_64           enabled:  6,510
        extras     CentOS-5 - Extras                                        enabled:    299
        ius        IUS Community Packages for Enterprise Linux 5 - x86_64   enabled:    218
        rpmforge   RHEL 5 - RPMforge.net - dag                              enabled: 10,636
        updates    CentOS-5 - Updates                                       enabled:    720
        repolist: 21,918

    I mainly use EPEL; RPMforge is there for the latest version (1.6) of Subversion. Is there any way to achieve this with my current setup? I don't want to restrict the server to keys only, because if I lose my key I lose my server ;-)
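    For reference, the Match-based configuration this needs is a short sketch like the following, but it does require OpenSSH 5.1 or later, i.e. a newer sshd than stock CentOS 5 ships:

        # /etc/ssh/sshd_config
        PasswordAuthentication yes      # default for everyone else
        Match User svn
            PasswordAuthentication no   # svn may only log in with a key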


  • Nagios remote monitoring: NRPE Vs. SSH

    - by sam
    We use Nagios to monitor quite a few (~130) servers. We monitor CPU, disk, RAM and a few other things on each server. I've always used SSH to run the remote commands, purely because it requires little to no additional config on the remote server: just install nagios-plugins, create the nagios user and add the SSH key, all of which I've automated in a shell script. I've never actually considered the performance implications of using SSH over NRPE. I'm not too bothered about the load on the Nagios server (it's probably over-specced for what it does; it's never been over 10% CPU), but we run each remote check every 30 seconds and each server has 5 different checks performed. I assume SSH requires more resources per check, but is there a huge difference? (i.e., enough of a difference to warrant the switch to NRPE?) If it's any help, we monitor a mix of physical servers (normally with 8, 12 or 16 physical cores) and Amazon EC2 medium/large instances.
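    For comparison, the two styles differ only in transport; a sketch of equivalent Nagios command definitions (plugin paths, user and thresholds are illustrative):

        # Via SSH: each check forks ssh, authenticates, and runs the plugin remotely
        define command{
            command_name  check_disk_ssh
            command_line  $USER1$/check_by_ssh -H $HOSTADDRESS$ -l nagios -C "/usr/lib64/nagios/plugins/check_disk -w 20% -c 10%"
        }

        # Via NRPE: the lightweight NRPE daemon runs a command pre-configured in nrpe.cfg
        define command{
            command_name  check_disk_nrpe
            command_line  $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_disk
        }

    The per-check cost difference is mostly the SSH key exchange and process forking, which is why it tends to matter at a 30-second interval across ~650 checks.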


  • Startup Cassandra layout

    - by davidkomer
    We've got a relatively low-traffic site (~1K pageviews/day) hosted on a single server, and we expect it to grow significantly over the next few years. I'm thinking of moving to Rackspace Cloud Servers or EC2 and firing up 3 nodes (all on CentOS): 2 x web (Apache) behind a load balancer, and 1 x MySQL (for the WordPress-powered part). The question is where to put Cassandra right now. Should it sit on each web node, or on the MySQL node? My current thought is to put it on the web nodes. It's my understanding that Cassandra has the benefit of fault tolerance (i.e., if we take a node down, the site is still operational), so even with only 2 nodes we'd get that benefit, as opposed to just putting it on the MySQL node. Also, as we scale up and add another web node, a Cassandra instance can come along with it, and the PHP can always run its queries against localhost. Is this a good idea?


  • Video codec that can be played on clean installs of Windows, OS X and Ubuntu

    - by fmercille
    I have to make a video that will need to be watched on different operating systems. Is there a "universal" video codec that can be played on Windows, OS X and Linux without requiring additional plugins or players beyond those that come with a default clean install of each of those systems? Compression is not an issue; I'm merely looking for compatibility (e.g., for audio I would use WAV as a universal format). Note: I must assume that the video will be distributed in countries where software patents are enforced, and therefore I can't rely on the user to install non-free codecs on Linux. Thanks.


  • Why won't MediaMonkey add one particular folder of MP3s?

    - by ChrisF
    I'm using the latest version of MediaMonkey (free version) and it won't find the MP3s in one particular folder in my music tree. It can see all the other files in the tree, and the folder shows up when I click "Add/Rescan files to the library". I have full control over the folder and all the files in it. The files play in Windows Media Player, and they play in MediaMonkey if I right-click and play from the context menu. All the tracks are at least 2 minutes long and over 5 MB in size, and MediaMonkey is set to ignore files shorter than 20 KB and to include all files regardless of length. There was an issue in that the genre of the tracks was set to "Classical", and the option that lets you browse classical music independently of the other music isn't enabled in the free version; it's a Gold-only option. I hadn't spotted that my other classical music was also missing from the library (I have a rather large library). Once I retagged the music with a different genre and tried to add the files again, it reported that it added the tracks, but they still don't show up in the library.


  • Java Development in Linux

    - by Zac
    I'm a developer and brand new to Linux (Ubuntu). I'm wondering what best practices dictate about which FHS directories to install various tools into. Things I'll be installing: Eclipse and plugins, GlassFish, SVN, etc. I see that /opt is for holding additional ("optional") software packages, but I also see /usr described as a place for utilities and apps. In another post a user recommended I create an entire partition for /srv alone and do my staging there (I assume he meant that /srv is where GlassFish and other servers should go?). So basically: which FHS directories do Linux developers use for which types of tools? Thanks for any input.
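    One common convention, sketched below, is only that: a convention, not a rule, since the FHS leaves room for interpretation (all paths are illustrative):

        /opt/eclipse         # self-contained, manually unpacked tools and IDEs
        /opt/glassfish3      # app servers installed outside the package manager
        /srv/svn             # data served by this host, e.g. repositories
        /usr/local/bin       # small self-built utilities and launcher symlinks
        ~/workspace          # your own project checkouts and build output

    The rough rule of thumb: the package manager owns /usr, you own /usr/local and /opt, and /srv holds data the machine serves.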


  • Trying to delete a directory stored on a Windows server, from a Mac, containing files created on the Mac, getting "Directory not empty"

    - by AdamG
    I am trying to delete a directory stored on a Windows 2008 R2 server, mounted on a Mac as a network home (10.8.5). The directory was created by Safari and stores temporary internet files. I need to be able to delete this folder on logout from a Mac bash script. The Terminal on the Mac shows the directory as empty:

        36W-FacRm-02:History lwickham$ cd /home/lwickham/Library/Caches/Metadata/Safari/History
        36W-FacRm-02:History lwickham$ ls -al
        total 0
        drwx------ 1 lwickham CGPS\Domain Users 264 Nov  8 09:24 .
        drwx------ 1 lwickham CGPS\Domain Users 264 Nov  8 09:28 ..

    However, on the Windows server it has a single 0 KB file that doesn't start with a "." and yet is invisible to the Mac:

        E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History>dir
         Volume in drive E is FacultyUsers2
         Volume Serial Number is 8C17-4EF3

         Directory of E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History

        11/08/2013  09:24 AM    <DIR>          .
        11/08/2013  09:24 AM    <DIR>          ..
        11/07/2013  04:28 PM                 0 http?%2F%2Fwww.google.com%2Furl?sa=t&rct=j&q=&esrc=s&source=web&cd=6&ved=0CFsQFjAF&url=http%253A%252F%252Fwww.usbanklocations.com%252Fhsbc-bank-usa-96th-street-branch.html&ei=5vR7UtmXEPjfsATe0YCIBA&usg=AFQjCNF9ypKbpYbXRng00FY3W8Y6cF1Tiw&bvm=bv.56146854,d.
                       1 File(s)              0 bytes
                       2 Dir(s)  514,231,967,744 bytes free

    All my attempts to delete the directory from the Mac have failed:

        36W-FacRm-02:History lwickham$ rm -fr /home/lwickham/Library/Caches/Metadata/Safari/History/*
        36W-FacRm-02:History lwickham$ rm -frd /home/lwickham/Library/Caches/
        rm: /home/lwickham/Library/Caches//Metadata/Safari/History: Directory not empty
        rm: /home/lwickham/Library/Caches//Metadata/Safari: Directory not empty
        rm: /home/lwickham/Library/Caches//Metadata: Directory not empty
        rm: /home/lwickham/Library/Caches/: Directory not empty
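    Since the stray file is visible server-side, one hedged workaround (assuming admin access on the Windows server; the path is taken from the dir listing above, and the wildcard catches the URL-encoded filename) is to clear it from the Windows end:

        del /f /q "E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History\*"
        rmdir "E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History"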


  • Page Up/Down prints ~ instead of history search in Terminal

    - by Desmond
    I am on a MacBook Pro with Mac OS X 10.8.2. In Terminal's keyboard settings I have set (pressing Esc to get \033):

        page up:   \033[5~
        page down: \033[6~

    My ~/.xinputrc is:

        # Be 8 bit clean.
        set input-meta on
        set output-meta on
        set convert-meta off

        # Auto completion options
        set show-all-if-ambiguous on
        set completion-ignore-case on

        # Keybindings
        "\e[1~": beginning-of-line        # Home key
        "\e[4~": end-of-line              # End key
        "\e[5~": history-search-backward  # Page Up
        "\e[6~": history-search-forward   # Page Down
        "\e[3~": delete-char              # Delete key
        "\e[5C": forward-word             # Ctrl+right
        "\e[5D": backward-word            # Ctrl+left

    I am just following a guide I found on the internet (there are many really similar ones): http://macimproved.wordpress.com/2010/01/04/fix-page-updown-home-end-in-terminal/ Unfortunately, the only (terrific) result is that when I press Page Up (Fn + Up Arrow), just a "~" is printed in the terminal.
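    A couple of hedged diagnostics, assuming bash: readline only reads ~/.inputrc (or the file named by $INPUTRC), so the file name matters, and you can inspect from a bash prompt what was actually bound:

        # See what the key really sends: press Ctrl-V, then Page Up, and compare
        # the echoed sequence with the one in your bindings file.

        # List the history-search bindings readline currently has:
        bind -p | grep history-search

        # Load a bindings file by hand to test it without restarting the shell:
        bind -f ~/.inputrc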


  • How to force or redirect to SSL in nginx?

    - by Callmeed
    I have a signup page on a subdomain, like https://signup.mysite.com. It should only be accessible via HTTPS, but I'm worried people might somehow stumble upon it via HTTP and get a 404. My html/server block in nginx looks like this:

        html {
          server {
            listen 443;
            server_name signup.mysite.com;
            ssl on;
            ssl_certificate /path/to/my/cert;
            ssl_certificate_key /path/to/my/key;
            ssl_session_timeout 30m;
            location / {
              root /path/to/my/rails/app/public;
              index index.html;
              passenger_enabled on;
            }
          }
        }

    What can I add so that people who go to http://signup.mysite.com get redirected to https://signup.mysite.com? (FYI, I know there are Rails plugins that can force SSL, but I was hoping to avoid that.)
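    One common approach is a second server block that only redirects; a sketch (return with a URL needs nginx 0.8.42 or later; older versions can use a rewrite instead):

        server {
          listen 80;
          server_name signup.mysite.com;
          # Permanent redirect to the HTTPS site, preserving the request path
          return 301 https://signup.mysite.com$request_uri;
        }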


  • vim: sending tab-completion key against a mapped keystroke

    - by CDR
    To switch between buffers without installing any plugins, a good way is to type :b <Tab>, which shows all the current buffer names in the status bar so you can pick one with the cursor keys and Enter. But :b <Tab> is 5 keystrokes, and I would like to map it to a <Leader> mapping. However, the following is not working:

        :nnoremap <Leader>. :b <Tab>

    It shows ":b ^I" in the status bar and doesn't actually list the buffer names. Does anyone know why?
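    A hedged explanation and workaround: a literal <Tab> inside a mapping is inserted as ^I rather than triggering command-line completion. Vim's 'wildcharm' option exists for exactly this case; a sketch for the .vimrc (<C-z> is the conventional choice because it is otherwise unused on the command line):

        " Let <C-z> trigger wildmenu completion from inside mappings
        set wildcharm=<C-z>
        " Now the mapping opens :b and immediately starts buffer completion
        nnoremap <Leader>. :b <C-z>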


  • VPS Memory Exhausted Even With Light Settings

    - by user101570
    Linux noob here. I have a 256 MB VPS on Ubuntu 11.04 server, and when I run "free -m" the result shows all memory being used (including the second line re: buffers/cache). I found this very strange, considering I only have 5 Apache processes running, each chewing up about 20 MB, and MySQL taking up 30 MB. To my knowledge, and according to "top", I have no other memory hogs running. Settings that may be relevant:

        PHP    memory_limit = 32M
        MySQL  key_buffer   = 16M
        Prefork MPM MaxClients = 10

    When I reviewed these settings, I naturally thought MaxClients was too high, so I tried switching it to 5. Now not only does my memory still show as 100% used, my website loads much, much slower, despite getting no traffic aside from mine at the moment. I don't understand this. I thought a single Apache process handles all requests from a client received within the "KeepAliveTimeout" window, which I've set to 2 seconds. With my initial config of MaxClients 10, my page load times are around 0.3 ms, so a single process should handle that no problem, correct? So next I went to the extreme of MaxClients 1. My memory is still at 100% usage and my site loads painfully slowly. I'm a noob at a complete loss here. According to the many tutorials I've read on basic server setup, I should be good to go. Help! Please!

    Edit:

                     total       used       free     shared    buffers     cached
        Mem:           256        256          0          0          0          0
        -/+ buffers/cache:        256          0
        Swap:            0          0          0
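    A hedged aside: if the VPS is OpenVZ-based (common for small 256 MB plans, though that's an assumption here), "free" inside the container can be misleading, and the container's real limits and any allocation failures show up in the beancounters:

        # Non-zero values in the failcnt column indicate a resource limit being hit
        cat /proc/user_beancounters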


  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine (file size is arbitrary and can be adjusted as needed), and several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines, and a worker machine should be able to free its HDD of a file once it is done working with it. It is preferable that a file be uploaded to a worker only once, processed in place, and then deleted, without being copied anywhere else, to minimize the already high HDD load (the worker itself requires quite a bit of bandwidth). Please advise a solution that is not based on Java. None of the existing replication solutions I've seen can do the "free the HDD of the file once processed" part, but maybe I'm missing something. A preferable solution should work with files (from the point of view of our business logic code), not require the business logic to connect to some queue or other middleware. (Internally the solution may use whatever technology it needs to, except Java.)


  • Firefox - bizarre bug

    - by pulancheck1988
    So... this happens to my Firefox browser about 5 days after I install it; after I reinstall, it behaves normally again. It doesn't display all sites like this, and the same site in Explorer looks as expected. Restarting the browser doesn't seem to help. I do have 2 plugins or add-ons installed (Adblock and VideoHelper), but I'm almost sure they don't explain it, and I didn't mess with the settings. So... anyone?


  • Recommended drive encryption solution

    - by Chris Driver
    Hello, I will soon be purchasing a number of laptops running Windows 7 for our mobile staff. Due to the nature of our business I need drive encryption. Windows BitLocker seems the obvious choice, but it looks like I would need to purchase either the Enterprise or Ultimate edition of Windows 7 to get it. Can anyone offer suggestions on the best course of action:

    a) Use BitLocker, bite the bullet and pay to upgrade to Enterprise/Ultimate
    b) Pay for another, cheaper third-party drive encryption product (suggestions appreciated)
    c) Use a free drive encryption product such as TrueCrypt

    Ideally I am also interested in "real world" experience from people who are using drive encryption software, and any pitfalls to look out for. Many thanks in advance...

    UPDATE: Decided to go with TrueCrypt, for the following reasons:

    a) The product has a good track record
    b) I am not managing a large number of laptops, so integration with Active Directory, management consoles etc. is not a huge benefit
    c) Although eks did make a good point about Evil Maid (EM) attacks, our data is not desirable enough to consider them a major factor
    d) The cost (free) is a big plus, but not the primary motivator

    The next problem I face is imaging: Acronis/Ghost/etc. will not work on encrypted drives unless I perform sector-by-sector imaging. That means an 80 GB encrypted partition creates an 80 GB image file :(


  • Changing Vim Home Directory

    - by mcaaltuntas
    Previously I was using Vim without any problems, but a few months ago our company made some network and security updates. Since then, whenever I plug a network cable into my laptop it maps a network share as drive "H:", and when I open Vim it doesn't load the plugins and other things that are in my Vim home directory. I have found the reason, but I don't know how to solve it: the network updates changed our HOME directory. When I write

        echo $HOME

    it prints H. Before plugging in a network cable, my home was C:\Users\blabla. How can I change my HOME variable? When I run set, it prints:

        C:\Windows\System32>set | findstr /R "^HOME"
        HOMEDRIVE=H:
        HOMEPATH=\
        HOMESHARE=\\companyname\blabla\username$
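    A hedged sketch: on Windows, Vim prefers the HOME environment variable over HOMEDRIVE/HOMEPATH, so defining HOME explicitly for your user should pin it back to the local profile (the path is taken from the question; note this changes HOME for every program that reads it, not just Vim):

        rem One-time; persists for new sessions
        setx HOME "C:\Users\blabla"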


  • How can I write a mod_rewrite rule that changes https:// to http:// when the domain is not the main domain?

    - by Oudin
    I've set up a WordPress multi-site with a wildcard SSL certificate for example.com, so the admin area can be accessed securely. However, I'm also using domain mapping to map other domains to other sites, e.g. alldogs.com to alldogs.example.com. The problem is when I try to reach the front end of a site from the admin of a mapped domain, e.g. alldogs.com: clicking "Visit Site" links to https://alldogs.com, because of the forced SSL applied to the admin area. That produces a certificate warning, since the certificate is for example.com and not alldogs.com. How can I write a mod_rewrite rule that checks whether the requested host is not the main domain, e.g. example.com, and if so changes https:// to http://, so the site is accessed via port 80 and doesn't generate a certificate warning for mapped domains?
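    A hedged sketch of such a rule (assumes mod_rewrite is enabled and that it lives in the HTTPS vhost or .htaccess; example.com stands in for the real main domain):

        RewriteEngine On
        # Only act on HTTPS requests...
        RewriteCond %{HTTPS} =on
        # ...whose host is not the main domain or one of its subdomains
        RewriteCond %{HTTP_HOST} !(^|\.)example\.com$ [NC]
        # Send them back to plain HTTP, preserving host and path
        RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]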


  • Privoxy rule to block Facebook spying

    - by bignose
    Recently, my server's Privoxy rules to block Facebook's spying have failed. How can I block current Facebook spying links? Soon after the inception of Facebook's so-called “Open Graph” cross-site tracking widgets (those “Like” bugs on numerous websites), I blocked them by using this rule (in user.action) on our site's Privoxy server:

        { +block-as-image{People-tracking button.} }
        .facebook.com/(plugins|widgets)/(like|fan).*

    That worked fine; the spying bugs no longer appeared on any web page. Today I noticed that they're all making it past that filter [edit: no, they're not].

    SOLUTION: The proxy was being silently ignored, though this was not obvious in the client. The above rule continues to work fine.
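    Given that the actual fault was a silently bypassed proxy, a quick hedged check that traffic really flows through Privoxy (assumes the default listen address of 127.0.0.1:8118; adjust for a remote proxy host):

        # Privoxy intercepts this special URL and serves its own status page;
        # if the request reaches the real website instead, the proxy is being bypassed.
        curl -x http://127.0.0.1:8118 http://config.privoxy.org/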


  • Unknown Exception when entering particular keystrokes in Notepad++

    - by Dwza
    When I enter the keystrokes - in a normal file, everything works fine. When I put in <?php, save the file and use the same keystrokes again, an unknown exception appears. If I delete <?php and try again, everything works well again. What could this be? I searched the internet: no results. I tried reinstalling all my plugins: no result. I updated my Notepad++: no result. (At work I use the same version, and it works there!)


  • Pages partially load on rapid refresh

    - by user101570
    I recently set up a VPS slice with 256 MB to run a LAMP stack (Ubuntu 11.04, Apache2, MySQL, PHP5). So far I'm only running a simple WordPress site on an IP-based virtual host I set up. The performance is excellent, but I've noticed that if I send multiple HTTP requests from the same IP in a short period, only partial pages are rendered. Then, if I wait a bit and refresh, the entire page loads again. I noticed this behaviour when accessing the site from two browsers on my office desktop, but it also shows up if I quickly navigate the site from a single browser (any browser). I'm guessing this is an Apache phenomenon, as the pages render correctly except under the conditions above, but perhaps I'm wrong. Could it be my hosting company with some kind of DoS protection in place? As a relative Linux/server noob, I'd really appreciate any insight into which Apache settings could explain this behaviour and how I might go about changing them.


  • CentOS 6, local yum repo, and multiple versions of the same RPM

    - by Tom Skelley
    I'm trying to set up a really simple local repo. I want a basic repo with two versions of only one RPM, so I did:

        mkdir /packages/x64
        # copied the two RPMs to /packages/x64
        [root@repo x64]# createrepo --verbose /packages/x64
        1/2 - jre-6u37-linux-amd64.rpm
        2/2 - jre-7u9-linux-x64.rpm
        Saving Primary metadata
        Saving file lists metadata
        Saving other metadata

    and added the repo to /etc/yum.repos.d/local.repo. But when I do:

        [root@repo x64]# yum list jre
        Loaded plugins: fastestmirror, security
        Loading mirror speeds from cached hostfile
        Available Packages
        jre.x86_64    1.7.0_09-fcs    local

    it only shows the latest version. I know both are in the repo because I've run this:

        [root@repo x64]# rpm -qp jre-6u37-linux-amd64.rpm
        jre-1.6.0_37-fcs.x86_64
        [root@repo x64]# rpm -qp jre-7u9-linux-x64.rpm
        jre-1.7.0_09-fcs.x86_64

    and when I remove the latter version and run createrepo again, the former shows up. Most puzzling. What am I missing?
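    A hedged note: by default, yum list collapses each package to its newest version; this variant should show every version the repo carries:

        yum --showduplicates list jre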


  • How to update OpenSSL using PuTTY and the yum command

    - by JM4
    I am very new to updating server technologies, but we are trying to become PCI compliant and have to update some of them. One in particular is OpenSSL: we are currently running arch i686 0.9.8e, but we have to upgrade to at least 0.9.8g. When I run a yum update command, there are no updates available. If I run "yum info openssl", it says the available package is arch i386 0.9.8e, and the only difference is a smaller file size. I am running the following repositories:

        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirrors.netdna.com
         * atomic: www6.atomicorp.com
         * base: mirrors.igsobe.com
         * extras: mirror.vcu.edu
         * updates: mirror.vcu.edu

    Any help out there?
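    A hedged aside relevant to PCI scans: Red Hat and CentOS backport security fixes without bumping the upstream version string, so a 0.9.8e package may already contain the fixes a scanner expects from 0.9.8g. The package changelog is where that shows up:

        rpm -q --changelog openssl | head -25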

