Search Results

Search found 26004 results on 1041 pages for 'debian based'.

Page 348/1041

  • Switching PHP to FastCGI from mod_php broke AMFPHP

    - by wezzy
    Hi, I've just switched my Debian server from mod_php to FastCGI following this tutorial. Everything went fine, but now I've found that one of the hosted applications, which uses AMFPHP for Flash remoting, is broken. I'm trying to understand what happened. Looking at it with Firebug and FireAMF, the responses seem to have content, but the Flash callbacks never get called, and if I try to open the service browser it displays this error:

      (mx.rpc::Fault)#0
        errorID = 0
        faultCode = "Client.Error.RequestTimeout"
        faultDetail = "The request timeout for the sent message was reached without receiving a response from the server."
        faultString = "Request timed out"
        message = "faultCode:Client.Error.RequestTimeout faultString:'Request timed out' faultDetail:'The request timeout for the sent message was reached without receiving a response from the server.'"
        name = "Error"
        rootCause = (null)

    It's strange: the server seems to take a long time to respond, then (in the service browser) Flash makes a new call to the server and the old one gets a response. Some problem with sessions? Really no idea...
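    A quick way to confirm whether the slowness is server-side, independent of the Flash client, is to time the AMFPHP gateway directly from the shell. The gateway path below is only a guess, so substitute the real one:

        # hypothetical gateway path -- adjust to where AMFPHP is actually installed
        time curl -s -o /dev/null -w 'HTTP %{http_code} in %{time_total}s\n' http://localhost/amfphp/gateway.php

    If that call also hangs, the problem is in the FastCGI setup (or PHP session locking under concurrent requests) rather than in the AMF client.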

    Read the article

  • Avoiding syslog-ng noise from cron jobs [closed]

    - by Eyal Rozenberg
    Possible Duplicate: How can I prevent cron from filling up my syslog? On my small Debian squeeze web server, I have syslog-ng installed. Generally, my logs are nice and quiet, with nice -- MARK -- lines. My syslog, however, is littered with this kind of garbage:

      Sep 23 23:09:01 bookchin /USR/SBIN/CRON[24885]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete > /dev/null)
      Sep 23 23:09:01 bookchin /USR/SBIN/CRON[24886]: (root) CMD ( [ -d /var/lib/php4 ] && find /var/lib/php4/ -type f -cmin +$(/usr/lib/php4/maxlifetime) -print0 | xargs -r -0 rm > /dev/null)
      Sep 23 23:17:01 bookchin /USR/SBIN/CRON[24910]: (root) CMD ( cd / && run-parts /etc/cron.hourly)

    What's the clean way to avoid it?
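    One low-noise approach is to route the cron facility away from the main syslog destination in /etc/syslog-ng/syslog-ng.conf. A sketch follows; the source and destination names are placeholders, so reuse the ones already defined in your config and add the filter to the existing log statement that feeds your main syslog file:

        filter f_cron     { facility(cron); };
        filter f_not_cron { not facility(cron); };
        destination d_cron { file("/var/log/cron.log"); };
        log { source(s_src); filter(f_cron);     destination(d_cron); };
        log { source(s_src); filter(f_not_cron); destination(d_syslog); };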

    Read the article

  • How to analyse logs after the site was hacked

    - by Vasiliy Toporov
    One of our web projects was hacked. The attacker changed some template files in the project and one core file of the web framework (one of the well-known PHP frameworks). We found all the corrupted files with git and reverted them. So now I need to find the weak point. With high probability we can say that it was not FTP or SSH password theft. The hosting provider's support specialist (after log analysis) said it was a security hole in our code. My questions:
    1) What tools should I use to review Apache's access and error logs? (Our server distro is Debian.)
    2) Can you give tips for spotting suspicious lines in the logs, maybe tutorials or examples of useful regexps or techniques?
    3) How do I separate "normal user behavior" from suspicious behavior in the logs?
    4) Is there any way to prevent such attacks in Apache?
    Thanks for your help.
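    As a starting point for reviewing the Apache logs, a few shell one-liners like the sketch below often surface injection attempts and unusual POST targets. The patterns are illustrative, not exhaustive, and the log path may differ per vhost:

        # grep current and rotated logs for common attack signatures
        zgrep -iE 'union.+select|base64_decode|eval\(|\.\./\.\.|/etc/passwd' /var/log/apache2/access.log*
        # list POST targets sorted by frequency (combined log format)
        awk '$6 ~ /POST/ {print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head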

    Read the article

  • Very high memory usage, but not claimed by any process?

    - by SharkWipf
    While stress-testing LVM on one of our Debian servers, I came across an issue where memory usage climbs until the server runs out of memory, yet no process claims that memory. See http://i.imgur.com/cLn5ZHS.png, and see http://serverfault.com/a/449102/125894 for an explanation of the colors used in htop. Why is this happening? And is there any way to see what is using the memory? htop is configured not to hide any processes, so what is it missing? In this particular case, I can say fairly certainly that it is caused, directly or indirectly, by lvcreate, lvremove or dmsetup, as that is what I was stress-testing. Do note that this question is not about solving the LVM problem, but about why the memory isn't claimed by any process. Stopping all LVM commands does bring memory usage back down to <600MB.
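    Memory used by the kernel itself (slab caches, page tables, and similar) never shows up as any process's RSS, which fits this symptom. A quick place to look:

        # kernel-side memory that htop's per-process view cannot attribute to anyone
        grep -E 'Slab|SReclaimable|SUnreclaim|PageTables|VmallocUsed' /proc/meminfo
        slabtop -o | head -n 20    # one-shot slab overview, from the procps package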

    Read the article

  • Laptop authentication/logon via accelerometer tilt, flip, and twist

    - by wonsungi
    Looking for another application/technology: a number of years ago, I read about a novel way to authenticate and log on to a laptop. The user simply had to hold the laptop in the air and perform a short series of tilts and flips with it. By logging accelerometer data, this creates a unique signature for the user. Even if an attacker watched and repeated the exact same motions, the attacker could not replicate the user's movements closely enough. I am looking for information about this technology again, but I can't find anything. It may have been an actual feature on a laptop, or it may have just been a research project. I think I read about it in a magazine like Wired. Does anyone have more information about authentication via unique accelerometer signatures? Here are the closest articles I have been able to find:
      Knock-based commands for your Linux laptop
      Shake Well Before Use: Authentication Based on Accelerometer Data [PDF]
      Inferring Identity using Accelerometers in Television Remote Controls
      User Evaluation of Lightweight User Authentication with a Single Tri-Axis Accelerometer
      Identifying Users of Portable Devices from Gait Pattern with Accelerometers [PDF]
      3D Signature Biometrics Using Curvature Moments [PDF]
      MoViSign: A novel authentication mechanism using mobile virtual signatures

    Read the article

  • Cable installed - now my hub has no connection to the router/modem - what do I need to buy?

    - by bcmcfc
    My previous setup was as follows:

      [modem/router]------[switch]+------[pc1]
                                  +------[pc2]

    I've just moved and had cable installed, and I no longer have the option of running a lengthy LAN cable from the router to the switch to provide network and internet access to the two PCs. The cable company provided two wireless-N USB adapters. What do I need to buy, and where do I plug it in, to restore the network to its previous state? PC1 dual-boots Windows 7 and Ubuntu 12. PC2 runs Debian 6.
    Edit: the hardware involved:
      USB adapters - Netgear WNDA3200
      Switch - TP-Link TL-SF1008D 8-port Ethernet switch
      Cabling - various cat5e RJ45 patch cables
      Modem/Router - pretty standard cable company job, wireless
    The intention is something like:

      [modem/router] --wifi-- [some-new-hardware, or perhaps to pc1] ----[switch]---[pc1/2]
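    If no dedicated wireless bridge is purchased, one stop-gap matching the "perhaps to pc1" idea is to put one of the USB adapters in PC1 and let it share that connection to the switch. A minimal Linux-side sketch, where the interface names are assumptions and PC2 would then use PC1 as its gateway:

        # on PC1, assuming wlan0 is the Netgear USB adapter and eth0 faces the switch
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE

    A small wireless client bridge (or a second router that supports client/bridge mode) avoids having to keep PC1 on whenever PC2 needs the network.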

    Read the article

  • How can I remove OLD history from Google Chrome?

    - by Norman Ramsey
    I'm working on a laptop with a modest hard drive, and 500MB is taken up with Google Chrome "History Index" and "Thumbnails" files. Some of these files are a year old. Chrome offers me the option to remove recent history, but I want the opposite: I want to remove old history. (Ideally I would remove the least recently used history information, but I don't expect to be able to do that.) Anyone have any ideas? I'm running the standard Debian google-chrome-beta package.
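    On Linux these files live under the Chrome profile directory, and the old ones can simply be deleted while Chrome is closed; Chrome recreates what it needs, at the cost of full-text history search for the deleted months. A hedged sketch, assuming the default profile location and the "History Index YYYY-MM" naming (check the ls output first):

        cd ~/.config/google-chrome/Default
        ls -lhS Thumbnails* "History Index"* 2>/dev/null   # see what is actually taking the space
        rm -i "History Index 2009"*                        # delete indexes for a year you no longer need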

    Read the article

  • Upgrading from MySQL Server to MariaDB

    - by Korrupzion
    I've heard that MariaDB has better performance than MySQL Server. I'm running software that makes intensive use of MySQL, which is why I want to try upgrading to MariaDB. Please share your experiences doing this conversion, and any instructions or tips. Also, which files should I back up from MySQL Server, so that if something goes wrong with MariaDB I can roll back to MySQL without issues? I would use the following, but I'm not sure it's enough for a full backup of the MySQL Server configs and databases:

      mysqldump --all-databases
      backup of /etc/mysql

    My environment:
      uname -a (Debian Lenny): Linux charizard 2.6.26-2-amd64 #1 SMP Thu Sep 16 15:56:38 UTC 2010 x86_64 GNU/Linux
      MySQL server version: 5.0.51a-24+lenny4
      MySQL client: 5.0.51a
      Statistics: Threads: 25  Questions: 14690861  Slow queries: 9  Opens: 21428  Flush tables: 1  Open tables: 128  Queries per second avg: 162.666  Uptime: 1 day 1 hour 5 min 13 sec

    Thanks! PS: Rate my English :D
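    For the rollback safety net, a slightly fuller sketch than the two items above; credentials and backup paths are placeholders:

        # logical dump including stored routines and triggers
        mysqldump --all-databases --routines --triggers -u root -p > all-databases-$(date +%F).sql
        # keep a copy of the Debian MySQL configuration
        cp -a /etc/mysql /root/mysql-conf-backup-$(date +%F)
        # optionally, a raw copy of the datadir taken while mysqld is stopped:
        # /etc/init.d/mysql stop && cp -a /var/lib/mysql /root/mysql-datadir-backup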

    Read the article

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer which then goes to the competitor. The administration of such a matrix is a nightmare because (1) the matrix is based on department and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal, because a high view count does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.

    Read the article

  • AWS: Multi-region setup using single RDS instance

    - by Ion
    I'm trying to scale our web application (PHP, MySQL, memcache) to a multi-region scheme. Currently we are using a setup with two EC2 instances behind an ELB and an RDS instance, all of them in the US-EAST (Virginia) region. We would like to have a presence in the EU (Ireland) region as well. This means at least a new EC2 instance there (identical to the others, serving the same application). I have copied the desired AMI, set up the new instance, set up an identical ELB configuration (required for SSL termination) and configured latency-based routing in Route 53, and it works as intended. But clients from the EU have speed problems. This is due to the fact that the EU EC2 instance connects to the US-based RDS instance. As far as I know, Amazon has not yet enabled RDS multi-region replication. Do you have any suggestions on how to properly speed up the whole setup while using the single RDS instance? Also, any ideas in general on how to scale things up? Ideally we would like to continue using the RDS technology for various reasons. Nevertheless, I am open to suggestions (I guess the next idea would be to host our own MySQL servers).

    Read the article

  • IIS permissions issue pointing docroot to Samba share

    - by lalalalalalalambda
    I have an IIS project that is stored on a Samba share, network-mounted with the following line:

      X: \\my-samba-server\dev /user:freddie

    Connectivity is fine; I can read/write files on X:. In IIS, I'm trying to set the physical path to \\my-samba-server\dev\folder\to\my\files, which results in the following 500.19 error:

      Config Error | Cannot read configuration file due to insufficient permissions

    By default it tries to use pass-through authentication. If I try to set it to connect as the specific user freddie, I receive:

      The specified user does not exist

    What is the correct way to connect to a path that has been set up as described above? (The Samba man pages indicate version 3.6 is on the Debian host.)

    Read the article

  • Puppet: Could not find init script for 'squid'

    - by chris
    I'm using Puppet to install ufdbGuard, which requires Squid 2.7 (which is correctly installed and working properly). Here is the relevant class:

      class pns_client::squid {
        package { 'squid':
          ensure => present,
          before => File['/etc/squid/squid.conf'],
        }
        if $::ufdbguard_installed == "true" {
          $squidconf = 'puppet:///modules/pns_client/squid.conf_ufdbguard'
        } else {
          $squidconf = 'puppet:///modules/pns_client/squid.conf'
        }
        notify { $squidconf: }
        file { '/etc/squid/squid.conf':
          ensure => file,
          mode   => 644,
          source => $squidconf,
        }
        service { 'squid':
          ensure     => running,
          enable     => true,
          hasrestart => true,
          hasstatus  => true,
          subscribe  => File['/etc/squid/squid.conf'],
        }
      }

    When running, I get this error:

      err: /Stage[main]/Pns_client::Squid/Service[squid]: Could not evaluate: Could not find init script for 'squid'

    This happens on all freshly installed Debian 6 and Ubuntu 10.04/11.04 machines. Any ideas?
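    A few quick checks on an affected node help narrow down whether the init script is genuinely missing or Puppet is probing for the wrong name; these are generic diagnostics, not a confirmed fix:

        ls -l /etc/init.d/squid*            # does the package actually ship an init script?
        dpkg -L squid | grep init.d         # what the squid package installed
        puppet resource service squid       # the provider and state Puppet detects for this service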

    Read the article

  • Keepalived with apache unable to bind interface on Backup server

    - by davideagle
    I have two Debian 6 servers running keepalived 1.1.20, with one server acting as master and the other as backup. Both servers host Apache 2.4, which has a global listener on all interfaces on port 80 (Listen *:80); however, I have some sites that require a listener on port 443 (SSL), and that is configured per VirtualHost in the Apache config, since I do not want every VirtualHost to listen on port 443. The problem is that when I try to start Apache on the backup machine, which does not hold the virtual interface the VirtualHost is supposed to listen on, I get AH00072: make_sock: could not bind to address 1.1.1.1:443. I know this is expected behavior of Apache. The real question is: are there any known workarounds or solutions to this scenario?
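    One widely used workaround for exactly this keepalived scenario is to let the backup node bind addresses it does not currently hold, so Apache can start before the VIP fails over. A sketch, applied on the backup (or both nodes):

        # allow binding to non-local addresses such as the keepalived VIP
        echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
        sysctl -p

    The alternative is to have keepalived start or reload Apache from a notify_master script instead of keeping it running permanently on the backup.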

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. Problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host and CentOS 5.x as a guest OS (with which system: Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue. EDIT: The target audience / users of this kind of system would be developers, each of whom needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there to build your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
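    On the hardware-virtualization question, it is easy to audit the existing developer machines before choosing between KVM and the other options; a nonzero count from the one-liner below means the CPU advertises Intel VT-x or AMD-V:

        egrep -c '(vmx|svm)' /proc/cpuinfo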

    Read the article

  • VPS-like [load] graphs

    - by foober
    I investigated a couple of tools, but they were really annoying and not polished. kSar, for example, is supposed to graph sar output, but it doesn't work. There's a Perl script around (sar2rrd) that's supposed to convert sar output to RRD format and generate graphs. It doesn't work either (at least it doesn't like the output of "atsar" as shipped in the Debian/Ubuntu package). I tried Munin, but it wants to mess with HTTP servers, and for some reason it didn't really work either; it displayed errors in the web page generated by the HTTP server it put on port 4949. So, is there a simple install-and-forget tool to generate daily load, CPU, memory and network graphs? It seems strange to me that this problem has not been solved; maybe I'm looking in the wrong places.
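    One low-fuss candidate worth trying is collectd, which on Debian writes RRD files locally without needing a web stack (assuming the default rrdtool plugin is left enabled); graphs can then be produced on demand:

        apt-get install collectd rrdtool
        # data accumulates under /var/lib/collectd/rrd/<hostname>/ and can be graphed with rrdtool graph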

    Read the article

  • Xmodmap configuration

    - by Krishna S
    On my Debian Linux machine Ctrl+Alt+F1 is bound to a virtual terminal. I can see the corresponding entry by running xmodmap -pke:

      keycode 67 = F1 XF86_Switch_VT_1 F1 XF86_Switch_VT_1

    Per this thread, which I might add is consistent with what I've read elsewhere, the columns on the right-hand side of = correspond to key, Shift+key, AltGr+key and Shift+AltGr+key. Given that, I don't understand how the keycode mapping for F1 (above) works for Ctrl+Alt+F1. It seems it should really be either Shift+F1 or Shift+AltGr+F1? Here's the output of xmodmap -pm on my machine:

      shift       Shift_L (0x32),  Shift_R (0x3e)
      lock        Caps_Lock (0x25)
      control     Control_L (0x42),  Control_R (0x69)
      mod1        Alt_L (0x40),  Alt_R (0x6c),  Meta_L (0xcd)
      mod2        Num_Lock (0x4d)
      mod3
      mod4        Super_L (0x85),  Super_R (0x86),  Super_L (0xce),  Hyper_L (0xcf)
      mod5        ISO_Level3_Shift (0x5c),  Mode_switch (0xcb)

    Can anybody explain it?

    Read the article

  • Clone MySQL DB - errors with CREATE VIEW/SHOW VIEW privileges

    - by user43537
    Running MySQL 5.0.32 on Debian 4.0 (Etch). I'm trying to completely clone a WordPress MySQL database (structure and data) on the same server. I tried a dump to an .sql file and an import into a new empty database from the command line, but the import fails with errors saying the user does not have the "SHOW VIEW" or "CREATE VIEW" privilege. Trying it with phpMyAdmin doesn't work either. I also tried doing this as the MySQL root user (not named "root", though) and it shows an "Access Denied" error. I'm terribly confused as to where the problem is. Any pointers on cloning a MySQL DB and granting all privileges to a user account would be great (specifically for MySQL 5.0.32). Thanks!
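    If the import runs as a dedicated WordPress user, granting the view privileges on the new database first usually clears these errors. A minimal sketch, run as an account that is allowed to grant; the database and user names are placeholders:

        mysql -u root -p -e "GRANT ALL PRIVILEGES ON newdb.* TO 'wpuser'@'localhost'; FLUSH PRIVILEGES;"
        # or, more narrowly, just the missing ones:
        mysql -u root -p -e "GRANT CREATE VIEW, SHOW VIEW ON newdb.* TO 'wpuser'@'localhost';"

    If the dump contains DEFINER= clauses for another account, re-creating those views can additionally require the SUPER privilege or editing the dump.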

    Read the article

  • Route return traffic to correct gateway depending on service

    - by Marnix van Valen
    On my office network I have two internet connections and one CentOS server running a website (HTTPS on port 443). The website should be publicly accessible through the public IP of the first internet connection (ISP-1). The other internet connection, ISP-2, is the default gateway on the network. Both internet connections have routers (the household kind) with NAT, SPI firewalls, etc. The router on ISP-2 is a Netgear WNDR3700 (aka N600) with original firmware. The problem is that the website is unreachable. It looks like incoming traffic on ISP-1 reaches the server, but the returning traffic is routed through ISP-2, effectively making the site unreachable. As far as I can tell, I can't do port-based routing on the WNDR3700. What are my options to make this work? I've been looking at implementing an iptables/routing-based solution on the server itself but haven't been able to make that work. Update: Note that the server has one network interface connecting it to both routers.
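    The usual fix is policy routing on the CentOS server itself: mark the HTTPS replies and send them back out via ISP-1's router. A rough sketch, with all addresses, the interface name and the table number as placeholders:

        # give the alternative routing table a name (one-time)
        echo '100 isp1' >> /etc/iproute2/rt_tables
        # default route for that table goes via ISP-1's router
        ip route add default via 192.168.1.1 dev eth0 table isp1
        # packets the web server sends from source port 443 get mark 1...
        iptables -t mangle -A OUTPUT -p tcp --sport 443 -j MARK --set-mark 1
        # ...and marked packets use the isp1 table instead of the normal default gateway
        ip rule add fwmark 1 table isp1
        ip route flush cache

    With a single NIC behind both routers, reverse-path filtering may also need to be relaxed (net.ipv4.conf.all.rp_filter set to 0, or 2 on kernels that support loose mode) for the asymmetric paths to work.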

    Read the article

  • Throttling apache downloads selectively

    - by Synchro
    I have a Linux box running Debian Sarge (old, I know) and Apache 2.0.54. It serves two kinds of files: regular web pages and small images, and a lot of large podcast MP3s. The podcast downloads swamp the connection and make the rest of the site unresponsive, so I'm looking to throttle the data transfer rate (not the request rate) of just the podcasts. I've set up haproxy using this technique, which does what it says it will, but solves a different problem: even only five simultaneous podcast downloads are enough to saturate the link. In a perfect world, haproxy would support per-connection throttling, but it doesn't. So far I've looked at mod_bw (won't compile for me, seems unsupported), mod_cband (unsupported, widely reported as problematic) and tc driven by iptables. The iptables approach would allow me to throttle things, but would not be at all selective, slowing down everything on the server, not just the podcasts, so it would just move the bottleneck without changing overall behaviour. Ideas?
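    If the podcasts can be served from their own vhost on a separate port (or IP), tc can then shape only that traffic, which restores the selectivity the plain iptables idea lacks. A rough sketch, with the port, device and rates as assumptions:

        # cap traffic sourced from port 8080 (the podcast vhost) at 2 Mbit/s, leave everything else alone
        tc qdisc add dev eth0 root handle 1: htb default 20
        tc class add dev eth0 parent 1: classid 1:10 htb rate 2mbit ceil 2mbit
        tc class add dev eth0 parent 1: classid 1:20 htb rate 100mbit
        tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 8080 0xffff flowid 1:10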

    Read the article

  • Remote Software Solution that Acts as a Client

    - by Richard
    I am looking for something that I am not sure exists. I have a remote computer that will not allow incoming traffic because the ISP blocks ports (basically a double-NAT situation that I am unable to get around). I am wondering: if the computer acts as a client, is there any solution out there that will allow remote access to it? I do have other servers on the net with static IPs that the computer could initiate a connection with. I am thinking of using Debian Linux; however, the computer is not built yet, so the OS is not overly important at this point.
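    A reverse SSH tunnel is the classic fit here: the NATed machine dials out to one of the static-IP servers and keeps a port open there that leads back to itself. A minimal sketch; hostnames and ports are placeholders:

        # run on the NATed computer; forwards port 2222 on the static server back to local SSH
        ssh -N -R 2222:localhost:22 user@static-server.example.com
        # then, from the static server: ssh -p 2222 localhost   reaches the NATed box
        # (set GatewayPorts yes in its sshd_config to connect from elsewhere, and
        #  consider autossh or ServerAliveInterval to keep the tunnel alive)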

    Read the article

  • How can I create bootable DOS usb stick?

    - by Grzenio
    I need to use this utility to change one of the parameters of my new WD hard drive: http://support.wdc.com/product/download.asp?groupid=609&sid=113&lang=en It has truly unreadable instructions:

      1. Extract wdidle3.exe onto a bootable medium (floppy, CD-RW, network drive, etc.).
      2. Boot the system with the hard drive to be updated to the medium where the update file was extracted to.
      3. Run the file by typing wdidle3.exe at the command prompt and press enter.

    I understand that this bootable medium should be some version of DOS? How can I make my USB stick a bootable medium compatible with this utility (I don't have a diskette drive)? I have Windows 7 and Debian Linux installed.
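    One common route from the Debian side is to write FreeDOS to the stick and then copy wdidle3.exe onto it. A sketch, assuming UNetbootin is available in your release's repositories and the stick ends up with a mountable FAT partition:

        apt-get install unetbootin        # its distribution list includes a FreeDOS image
        # write FreeDOS to the USB stick with unetbootin, then mount the stick and copy the tool:
        cp wdidle3.exe /media/usbstick/

    Booting from the stick then drops you at a FreeDOS prompt where wdidle3.exe can be run.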

    Read the article

  • In Windows 7, why won't my display stay off despite the power settings saying it should?

    - by Jer
    I'm completely stumped by this. My simple use case is that when I'm in bed, I use a cordless mouse to browse the web, watch videos, etc. - the monitor is across the room. When I'm going to sleep, I want to shut the monitor off. I also want to be able to turn it back on in the morning. I just want to turn the monitor off and on using only the mouse. I thought of creating a power setting that turns the monitor off as soon as possible (the shortest amount of time is one minute; that's fine), and I have a power plan that does this. It worked great for almost a year on my old XP machine, and for about four months on my new Windows 7 laptop (which I essentially use as a desktop). All of a sudden, a couple of weeks ago, it just stopped working - my monitor won't turn off on its own anymore. I tried other options. Based on the advice here I tried nircmd. This seemed great. I created a shortcut with the command line:

      "C:\Program Files\nircmd\nircmd.exe" cmdwait 1000 monitor off

    I click this, and in one second the monitor goes off. However, about five seconds later it turns back on, and I've been extra careful to make sure the mouse isn't moving. I have no idea what's going on. Based on both of these things, my only guess is that something could be running in the background which somehow makes the computer think it's in use. I've tried killing as many programs as possible but I still get the same behavior. Any advice? I'm mainly curious about how to debug, but am open to other suggestions about turning the monitor off and on with just the mouse.
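    On Windows 7, the usual way to see what is vetoing display sleep is the power-request list, run from an elevated command prompt:

        powercfg -requests

    Anything listed under DISPLAY (a media player, a browser tab playing video, certain USB or input drivers) prevents the idle timeout from ever turning the screen off, which would explain a power plan that suddenly stops working.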

    Read the article

  • mod_rewrite not working for subdomain in Apache2

    - by Matt
    Hi, I'm having some trouble with mod_rewrite. I'm implementing it through .htaccess, and I can get it working on my main vhost, domain.com: what I want it to do is rewrite http:// domain.com to force it to https:// domain.com, which it does well. I want name-based vhosts for the one IP with the following redirects (I'm breaking up domain names with a space because otherwise serverfault recognises them as links):

      http:// domain.com          -->  https:// domain.com
      http:// staging.domain.com  -->  https:// staging.domain.com
      http:// test.domain.com     -->  https:// test.domain.com
      http:// beta.domain.com     -->  https:// beta.domain.com

    domain.com redirects to https:// domain.com, but staging.domain.com doesn't, although I can access https:// staging.domain.com. The .htaccess is identical for both, just with the domain name different. It doesn't seem to do any rewriting at all for staging.domain.com; I've tested this by trying to get it to rewrite to www.google.com. I have a wildcard DNS record, *.domain.com, which points to the domain's IP. Is there a particular way I should have the virtual hosts configured to allow this? I keep reading in the Apache documentation that it doesn't support multiple SSL name-based vhosts, but I can access both https:// domain.com and https:// staging.domain.com just fine. Any thoughts? Thanks to everyone for your help with this.
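    Since the .htaccess files differ only by hard-coded domain, one thing worth trying is a single host-agnostic rule (sketch below), combined with confirming that the port-80 VirtualHost answering for staging.domain.com really has AllowOverride enabled for that DocumentRoot; when rewriting does nothing at all, .htaccess is usually not being read:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]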

    Read the article

  • Routing application traffic through specific interface

    - by UnicornsAndRainbows
    Hello all! First question here, so please go easy. I have a Debian Linux 5.0 server with two public interfaces. I would like to route outbound traffic from one instance of an application via one interface and from the second instance via the second interface. There are some challenges: both instances of the application use the same protocol; both instances can access the entire internet (so I can't route based on destination network); and I can't change the application's code. I don't think a typical approach to load-balancing all traffic is going to work well, because relatively few destination servers are being accessed by the outbound traffic, and all traffic would really need to be distributed pretty evenly across those relatively few servers. I could probably run two virtualized servers on the box and bind each of them to a different external IP, but I'm looking for a simpler solution, maybe using iproute or iptables? Any ideas for me? Thanks in advance, and I'm happy to answer any questions.
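    One way to do this without touching the application is to run each instance as its own system user and combine the iptables owner match with policy routing. A sketch, where the user name, addresses, interface and table number are all placeholders:

        # packets generated by the user the second instance runs as get mark 2
        iptables -t mangle -A OUTPUT -m owner --uid-owner app2 -j MARK --set-mark 2
        # a routing table whose default route is the second interface
        echo '200 isp2' >> /etc/iproute2/rt_tables
        ip route add default via 203.0.113.1 dev eth1 table isp2
        ip rule add fwmark 2 table isp2
        # rewrite the source address so replies come back on the right link
        iptables -t nat -A POSTROUTING -o eth1 -m mark --mark 2 -j SNAT --to-source 203.0.113.10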

    Read the article

  • How to remotely install Linux via SSH?

    - by netvope
    I need to remotely install Ubuntu Server 10.04 (x86) on a server currently running RHEL 3.4 (x86). I'll have to be very careful, because no one can press the restart button for me if anything goes wrong. Have you ever remotely installed Linux? Which way would you recommend? Any advice on things to watch out for?
    Update: Thanks for your help. I managed to "change the tires while driving"! The main components of my method are drawn from HOWTO - Install Debian Onto a Remote Linux System, grub legacy: Booting once-only, grub single boot and kernel panic reboot, and Ubuntu Community Documentation: InstallationFromKnoppix. Here is the outline of what I did:

      1. Run debootstrap on an existing Ubuntu server
      2. Transfer the files to the swap partition of the RHEL 3.4 server
      3. Boot into the swap partition (the debootstrap system)
      4. Transfer the files to the original root partition
      5. Boot into the new Ubuntu system and finish up the installation with tasksel, apt-get, etc.

    I tested the method in a VM and then applied it to the server. I was lucky enough that everything went smoothly :)
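    For anyone repeating this, a condensed sketch of step 1; the release name, architecture, mirror and target directory are placeholders:

        # on a helper machine that already runs Ubuntu/Debian
        debootstrap --arch i386 lucid /srv/newroot http://archive.ubuntu.com/ubuntu
        tar -C /srv/newroot -czf newroot.tar.gz .
        # newroot.tar.gz is what then gets unpacked onto the remote server's swap partition
        # (reformatted as ext3) in step 2, with a one-shot GRUB entry pointing at it for step 3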

    Read the article
