Search Results

Search found 22139 results on 886 pages for 'security testing'.


  • How do I configure iptables so apt-get still works on a server?

    - by segaco
    I'm starting to use iptables (I'm a newbie) to protect a Linux server (specifically Debian 5.0). Before I configure iptables, apt-get works without a problem, but after I apply my rules it stops working. This is the script I use:

      #!/bin/sh
      IPT=/sbin/iptables
      ## FLUSH
      $IPT -F
      $IPT -X
      $IPT -t nat -F
      $IPT -t nat -X
      $IPT -t mangle -F
      $IPT -t mangle -X
      $IPT -P INPUT DROP
      $IPT -P OUTPUT DROP
      $IPT -P FORWARD DROP
      $IPT -A INPUT -i lo -j ACCEPT
      $IPT -A OUTPUT -o lo -j ACCEPT
      $IPT -A INPUT -p tcp --dport 22 -j ACCEPT
      $IPT -A OUTPUT -p tcp --sport 22 -j ACCEPT
      $IPT -A INPUT -p tcp --dport 80 -j ACCEPT
      $IPT -A OUTPUT -p tcp --sport 80 -j ACCEPT
      $IPT -A INPUT -p tcp --dport 443 -j ACCEPT
      $IPT -A OUTPUT -p tcp --sport 443 -j ACCEPT
      # Allow FTP connections @ port 21
      $IPT -A INPUT -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
      $IPT -A OUTPUT -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
      # Allow Active FTP Connections
      $IPT -A INPUT -p tcp --sport 20 -m state --state ESTABLISHED,RELATED -j ACCEPT
      $IPT -A OUTPUT -p tcp --dport 20 -m state --state ESTABLISHED -j ACCEPT
      # Allow Passive FTP Connections
      $IPT -A INPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED -j ACCEPT
      $IPT -A OUTPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED,RELATED -j ACCEPT
      #DNS
      $IPT -A OUTPUT -p udp --dport 53 --sport 1024:65535 -j ACCEPT
      $IPT -A INPUT -p tcp --dport 1:1024
      $IPT -A INPUT -p udp --dport 1:1024
      $IPT -A INPUT -p tcp --dport 3306 -j DROP
      $IPT -A INPUT -p tcp --dport 10000 -j DROP
      $IPT -A INPUT -p udp --dport 10000 -j DROP

    Then when I run apt-get I get:

      core:~# apt-get update
      0% [Connecting to ftp.us.debian.org] [Connecting to security.debian.org] [Conne

    and it stalls. What rules do I need to configure to make it work? Thanks
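
    A minimal sketch of the usual missing pieces, assuming the mirrors are reached over plain HTTP: with both policies set to DROP, the script above never lets DNS replies back in and never allows new outbound HTTP connections, so rules along these lines are typically what apt-get needs:

      # allow replies to connections this host initiates, plus outbound HTTP for the mirrors
      $IPT -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
      $IPT -A OUTPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

    The existing OUTPUT rules with --sport 80/443 only cover traffic from a local web server, not outbound client connections, which is why the download stalls.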

    Read the article

  • sudoers scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: virtual private server. I have a VPS that I'm looking to host multiple websites on while providing access to another web developer. I don't mind giving him fairly broad access, though I wouldn't mind isolating the site he'll be developing from the other sites on the server that I will develop.

    The problem: retaining control. Mainly I want to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote accounts and other administrative functions that don't deal with web software. If I make him an admin, he can run sudo su - to become root and take root control away from me, for example.

    I need him NOT to be able to:
      - take away other admins' permissions
      - change the root password
      - control other security/administrative functions

    I would like him to still be able to:
      - install software (through apt-get)
      - restart Apache
      - access MySQL
      - configure MySQL/Apache
      - reboot
      - edit web-development configuration files in /etc/

    Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings I'm hoping for above.

    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know whether that setup is possible. I'm sure people have solved this type of problem before, though, and I'd like to go with something somewhat tested rather than something homegrown.
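
    A hedged sketch of the kind of sudoers entries people use for this; the user name "webdev" and the Debian paths are assumptions, and anything like this should be edited with visudo:

      # /etc/sudoers.d/webdev (or add directly via visudo on older sudo versions) -- sketch only
      Cmnd_Alias WEBADMIN = /usr/bin/apt-get, /etc/init.d/apache2 *, /etc/init.d/mysql *, /sbin/reboot
      webdev ALL = (root) WEBADMIN

    Note that anyone who can run apt-get as root can ultimately install whatever they like, so a scheme like this guards against accidents more than against a determined co-admin.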

    Read the article

  • Why does my ftp(e)s server fail about half of the time?

    - by user1092608
    I'm having a discussion at work regarding our FTP server, which runs via vsftpd. Initially we opted to serve FTPES instead of SFTP because it seemed the most flexible and straightforward way to get secure file transfers on our server. Since then, the FTP server has become a source of issues for our end users: about half of the time, users complain that FTP connections don't work. I have tested our FTP through different infrastructures (in the field, at random times and places) and indeed, behind some configurations (no idea how they are set up, precisely because it's field testing) I receive errors. One of them is:

      Error: Failed to retrieve directory listing (FileZilla)

    Behind my basic home configuration, everything runs fine. I (think I) did all the basic configuration checks (passive mode? firewall open for all needed ports? ...) and can't find the source. Being a bunch of techies at a small office who know nothing about infrastructure, some colleagues have started suggesting that the FTPS protocol itself could be the source of the issues ("No, I've only heard of SFTP so far", "FTPS is not widespread"). I strongly doubt this hypothesis, since reading around on the web and asking on Server Fault, everyone seems to deny it. So, as I would like to avoid reconfiguring (which would mean messing around with our SSH service, our virtual-user setup and the FTP service), I could use some advice on:

      1) What could be the general cause?
      2) Do you have some general tips?
      3) Would you mind having a look at my configuration file?

      ----- General Settings -----
      write_enable=YES
      dirmessage_enable=YES
      nopriv_user=ftpsecure
      ftpd_banner="Welcome to XXXX FTP!"
      hide_ids=YES
      hide_file=.*
      max_per_ip=10
      max_clients=10
      local_enable=YES
      local_umask=022
      chroot_local_user=YES
      secure_chroot_dir=/usr/share/empty
      userlist_enable=NO
      userlist_deny=YES
      userlist_file=/etc/vsftp_deny_users
      guest_enable=YES
      guest_username=ftpvirtual
      virtual_use_local_privs=YES
      user_sub_token=$USER
      local_root=/srv/ftp/ftpvirtual/$USER
      anonymous_enable=NO
      syslog_enable=NO
      xferlog_enable=YES
      xferlog_file=/var/log/vsftpd_xfer.log
      connect_from_port_20=YES
      pam_service_name=vsftpd
      listen=YES
      listen_port=21
      pasv_enable=YES
      pasv_min_port=30000
      pasv_max_port=30030
      pasv_address=foo
      ssl_enable=YES
      rsa_cert_file=/etc/vsftpd.pem
      rsa_private_key_file=/etc/vsftpd.pem
      force_local_data_ssl=YES
      force_local_logins_ssl=YES
      ssl_tlsv1=YES
      ssl_sslv2=YES
      ssl_sslv3=YES
      ssl_ciphers=HIGH
      anon_mkdir_write_enable=NO
      anon_root=/srv/ftp
      anon_upload_enable=NO
      idle_session_timeout=900
      log_ftp_protocol=NO
      dsa_cert_file=/etc/vsftpd.pem

    Thanks
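
    For what it's worth, the usual culprit with FTPES is that TLS hides the PASV reply from NAT/firewall FTP helpers, so the passive range configured above (30000-30030) has to be reachable end to end. A sketch of the server-side firewall openings that setup would need (the tool and lack of other rules are assumptions):

      iptables -A INPUT -p tcp --dport 21 -j ACCEPT
      iptables -A INPUT -p tcp --dport 30000:30030 -j ACCEPT

    Client-side NAT devices that try to inspect FTP will still break the encrypted control channel, which would match the "only fails in the field" pattern. (Separately, ssl_sslv2=YES and ssl_sslv3=YES are generally worth turning off.)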

    Read the article

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as myself without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there is no root password; the only way to become root is via sudo. So I can neither give a password for "sudo rsync local root@remote:" nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems.

    EDIT: Files that belong to root are just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is copying my configured /root environment into a freshly installed system. The two systems are actually two VMs under a single host, so copying root-owned files between them is not a big security concern for me.

    EDIT 2: If I only want to copy my configured /root environment into the freshly installed system, I can use tar:

      sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C /

    But I do need rsync to update from time to time. Any easy way to make it happen?

    EDIT 3: To formulate the question formally: how do I rsync files that belong to root between two systems, as a normal unprivileged user, without specifying a password, under these conditions:
      - The root account is locked on both systems, i.e. there are no root passwords. The only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo).
      - I don't want a completely passwordless sudo, but I don't want to be typing passwords all the time either.
      - The normal unprivileged user has entered their ssh passphrase into the ssh agent.

    Thanks
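
    One common pattern, sketched under the assumption that a sudoers entry limited to rsync alone is acceptable on the remote side (user name and paths are placeholders):

      # on the remote host, via visudo: restrict passwordless sudo to the rsync binary only
      #   me ALL = (root) NOPASSWD: /usr/bin/rsync
      # from the local host; sudo -E keeps SSH_AUTH_SOCK so the ssh agent still works
      sudo -E rsync -avz -e ssh --rsync-path="sudo rsync" /root/ me@remote:/root/

    The local sudo still prompts once per its credential timeout, which may be an acceptable middle ground given the "not completely passwordless" requirement.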

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server (an AWS instance) that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the basic Linux ftp client after login:

      ---> SYST
      215 UNIX Type: Apache FtpServer
      Remote system type is UNIX.
      ftp> get outgoing/catalog.gz catalog.gz
      local: catalog.gz remote: outgoing/catalog.gz
      ---> PASV
      227 Entering Passive Mode (64,156,167,125,135,191)
      ---> RETR outgoing/catalog.gz
      150 File status okay; about to open data connection.

    That's it. It just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data:

      $ ss -nt dst 64.156.167.125
      State   Recv-Q  Send-Q  Local Address:Port      Peer Address:Port
      ESTAB   0       0       10.185.147.150:41190    64.156.167.125:21
      ESTAB   0       0       10.185.147.150:48871    64.156.167.125:48557

    The FTP server is not under my control, and passive-mode downloads from other FTP servers have worked. Active mode does not work because the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances with the same security group (not necessarily the same distro or config, though). I understand it may be an issue on the server side, but I want to know what it is about this particular machine that makes the transfer hang, when every other machine I can get my hands on works. Please let me know what the culprit on the client side could be, or what else to look at.
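
    Two quick client-side checks that are sometimes useful here, sketched with placeholder credentials: compare behaviour with a different client, and rule out an MTU/fragmentation problem on the data connection, a classic cause of "connects but never transfers":

      # the same transfer through curl, verbose
      curl -v --ftp-pasv -o catalog.gz ftp://user:pass@ftp.example.com/outgoing/catalog.gz
      # probe for a path-MTU problem: large pings with "don't fragment" set
      ping -M do -s 1472 64.156.167.125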

    Read the article

  • How do I get Tomcat 7 to start up faster in Linux CentOS kernel version 2.6.18?

    - by user1786833
    I am experiencing a problem with slow start-up times for Tomcat 7. I have done some testing by tweaking configuration parameters on both Linux CentOS (kernel 2.6.18) and Windows 7, using this link as my primary guide: http://wiki.apache.org/tomcat/HowTo/FasterStartUp, and managed only a modest improvement. The improvements seemed to come from adding the metadata-complete="true" attribute to the web-app element of my WEB-INF/web.xml file, and from adding the names of almost all the jars we use to the tomcat.util.scan.DefaultJarScanner.jarsToSkip property in conf/catalina.properties. I've also used this JAVA_OPTS in setenv.sh:

      JAVA_OPTS="$JAVA_OPTS -server -Xms1536m -Xmx1536m -XX:MaxPermSize=256m -XX:NewRatio=2 -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true "

    but actually saw my start-up times increase slightly. Our QA and production environments are on Linux CentOS, so I'm hoping to get more information on improving Tomcat 7 start-up times in that environment. My primary role is Java developer and I don't have much system administration experience, so I appreciate any input. Thank you for your time and suggestions.
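
    One cause worth checking on Linux (it does not affect Windows, which would fit the pattern described): Tomcat's session-ID generator can block for a long time waiting on /dev/random when the machine is short on entropy. A hedged addition to setenv.sh:

      # point the JVM's SecureRandom at the non-blocking device (the extra /./ is intentional)
      JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"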

    Read the article

  • Svchost.exe connecting to different IPs with remote port 445

    - by Coll911
    I'm using Windows XP Professional SP2. Whenever I start Windows, svchost.exe starts connecting to all the possible IPs on the LAN, from 192.168.1.2 up to 192.168.1.200. The local port ranges from 1000-1099 and the remote port is always 445. After it's done with the local IPs, it starts connecting to other random IPs. I tried blocking connections to port 445 using the local security policies, but it didn't work. Is there any way I could prevent svchost from connecting to these IPs without installing a firewall? My PC slows down under the load. I scanned the PC with MalwareBytes and found out it was infected with a worm; it's deleted now, but svchost is still connecting to the IPs. I also found out that in my Windows Firewall settings, under Internet Control Message Protocol (ICMP), there's a tick on "allow incoming echo request" (usually disabled) which is locked so I can't disable it. Its description is as follows: "Messages sent to this computer will be repeated back to the sender. This is used for troubleshooting, e.g. to ping a machine. Requests of this type are automatically allowed if TCP port 445 is enabled." Any solutions? I can't bear going through the reinstalling-Windows phase again.
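
    To pin down which service inside svchost owns those connections before deciding between cleanup and reinstall, something along these lines works on XP (the PID is a placeholder taken from the netstat output):

      netstat -ano | find ":445"
      tasklist /svc /fi "PID eq 1234"

    Sweeping the whole subnet on port 445 right after boot is classic worm propagation behaviour, so a lingering infection is more likely than any legitimate Windows service.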

    Read the article

  • Server format & Reinstall while keeping Server & domain ID

    - by Chris
    Hi everyone, I want to reinstall my 2008 R2 server from scratch due to multiple Active Directory issues. I have only one server running AD and a spare machine to use if necessary. Is there a way to save just the user accounts and the domain SID, so that I can start with a clean server that uses the same name as before? I can reassign file security, but I do not want to have to rejoin all the users to a new domain. Also, all users are mapped to folders on the server. What I hope to do is a clean install of the server without having to mess with the users' machines. Can someone please tell me the procedure to accomplish this? Any help appreciated!

    Thanks guys, but I could be here all day telling you every error I am getting. Can we please keep this to the question of how to do a reinstall and keep the same SID? I just want to start over without having to rejoin all the clients to a new domain. Is there a tool that can back up the server SID and the AD domain name so that I could restore them without restoring any other data? I might not be using the correct terminology here, but hopefully you understand what I am asking.
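
    For what it's worth, the domain SID lives in Active Directory itself, so the usual way to keep it across a rebuild is a system state backup/restore rather than a fully clean install; a sketch using the built-in Windows Server Backup tooling (the target drive letter is a placeholder and the feature has to be installed first):

      wbadmin start systemstatebackup -backupTarget:E:
      rem ...reinstall, give the box the same name, boot into Directory Services Restore Mode, then:
      wbadmin get versions -backupTarget:E:
      wbadmin start systemstaterecovery -version:MM/DD/YYYY-HH:MM -backupTarget:E:

    The caveat is that this restores AD exactly as it was, so it carries the existing AD problems along with the SID; it only avoids re-joining the clients.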

    Read the article

  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large re-organization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure something like:

      /var/www/sites/vhost1, vhost2, vhost3
      .../wordpress/blogX
      .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

      /var/www/data/vhost1, vhost2, vhost3
      .../wordpress/blogX/uploads
      .../mediawiki/wikiX/images

    All Apache configs are in /etc/apache2/... er, /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup; once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs come straight from SVN and updates are done by switching branches or "svn up".

    So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, c) myself 6 months from now? Obviously I can make a wiki page or spreadsheet and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less technical people. Ideally it would be awesome if this visual representation could then be expanded to show more technical details.

    Read the article

  • Losing SQL connections

    - by john pavelka
    SQL Server 2005 Standard; one dedicated SQL Server (VM); Windows Server 2003; small databases. About once a week we lose all SQL connections. It seems to fix itself after about 5-10 minutes. The error is:

      System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    We don't have a fully qualified DBA; it's kind of a joint effort here. Can somebody give me some general ideas for troubleshooting the network side and the application side? We already ran a few tuning profiles and went through the Database Engine Tuning Advisor to apply indexing recommendations. It would sure be nice if there were a way to take a snapshot of what was running on SQL Server when these 100% CPU spikes occurred, but sometimes we're not around. Is it common to throttle CPU for certain processes? Can this be done with Windows Server 2003? For example, if security apps were making the CPU spike to 100%, is there a way to limit their CPU usage? Any advice is appreciated. Thanks
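
    On the "snapshot of what was running" part, a sketch of the kind of thing people schedule (via Task Scheduler or a SQL Agent job) so evidence is captured even when nobody is around; the output path is a placeholder and the DMVs shown do exist in SQL Server 2005:

      sqlcmd -S . -E -Q "SELECT r.session_id, r.status, r.cpu_time, r.wait_type, t.text FROM sys.dm_exec_requests r CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t" -o C:\logs\running_requests.txt

    Run on a schedule (or triggered by a CPU alert), it gives a rough picture of which queries were active around each spike.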

    Read the article

  • Confused with creating an ODBC connection, apparently I have two separate odbcad32.exe files?

    - by Hoser
    Alright, this is my first time working with this, so forgive me if I'm a little confusing or vague. I have a server with Windows Server 2008 Standard without Hyper-V (6.0, Build 6002). I'm running a small website off this server and using a Microsoft Access database to store some information coming in through the website. I'm sure the PHP I have written to open the ODBC connection is correct, as it worked when I created this website in a testing environment on a laptop.

    My current issue is that I seem to have two different odbcad32.exe files, and one doesn't have a driver for .accdb files, only for .mdb; the other has a driver for both. The first one has a driver titled 'Driver do Microsoft Access (.mdb)'; the second one has a driver titled 'Microsoft Access Driver (.mdb, .accdb)'. I access the first odbcad32.exe by going to C:\Windows\SysWOW64\odbcad32.exe, and for the one that seems to have the driver I need I go to Control Panel - Administrative Tools - Data Sources (ODBC) and simply create a new connection in the System DSN tab. Whenever I make changes to the one I access through the Control Panel, I see no changes, but if I use the odbcad32.exe file in SysWOW64 I do get some changes in the errors that come back to me. The main difference I noticed is that when I set up an ODBC connection with the Control Panel method it simply said it couldn't find the ODBC connection, but when I made an .mdb connection in the SysWOW64 one (and pointed it to a .accdb file) it says:

      Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt.

    This makes it seem like the odbcad32.exe in SysWOW64 is the one being recognized as 'correct'. Is there any way to fix this? I've tried to be as thorough as possible, but if I've been confusing or left anything out let me know.

    Read the article

  • What is my BaseDN supposed to be with the following configuration of OpenLDAP?

    - by fuzzy lollipop
    I have the following in my OpenLDAP configuration, using the latest version of OpenLDAP on CentOS 5.3, installed via yum.

    From my /etc/openldap/slapd.conf:

      database bdb
      suffix "dc=company,dc=com"
      rootdn "cn=Manager,dc=company,dc=com"

    From my /etc/openldap/ldap.conf:

      BASE dc=company,dc=com

    I have successfully added an entry with ldapadd and retrieved it with ldapsearch from a local bash shell on the box. Now I am trying to get a graphical editor to connect to this server remotely so I can enter people from my laptop, but I am having no luck. I tried JXplorer, and it connects with an anonymous bind without me having to specify a BaseDN, but I can't edit anything that way. If I try to give it a user name and password (Manager and my rootpw, which is in clear text just for testing), every GUI client on my remote laptop complains about my BaseDN not being in the correct format when I enter dc=company,dc=com, and I also tried cn=Manager,dc=company,dc=com:

      Error opening connection: [LDAP: error code 34 - invalid DN]

    I have tried multiple clients and all of them connect as anonymous; none let me connect authenticated so that I can actually create or edit anything. I am using Manager as my username and the password from rootpw; is that correct?
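
    A quick way to confirm the credentials independently of any GUI, sketched with the DNs from the config above (host name and password are placeholders). Most GUI clients then want the full rootdn, cn=Manager,dc=company,dc=com, in their "user DN"/"bind DN" field and dc=company,dc=com only in the Base DN field:

      ldapsearch -x -H ldap://ldap-host -D "cn=Manager,dc=company,dc=com" -w secret -b "dc=company,dc=com" "(objectClass=*)"

    If this works from the laptop but the GUI still fails, the problem is the field the DN is being typed into, not the directory itself.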

    Read the article

  • Transfer iptables rules to another server (almost) real time

    - by MrShunz
    I'm running 2 cPanel servers with the ConfigServer Security & Firewall plugin. One of the functions of the plugin is to block, via iptables (temporarily and/or permanently), IPs which fail various authentications (POP3/IMAP, SMTP, FTP, webmail, mod_security and such). Now I'd like to push those IP blocks to the border router to drop the packets as soon as possible (and in doing so protect the other machines on the network). Keep in mind that after N failed logins an IP is blocked for 5 minutes, then re-allowed; if multiple bans occur in an hour the IP is blocked permanently and has to be unblocked "by hand". So I need a near-realtime solution. What I'm looking for is a better way than firing cron jobs on both the cPanel boxes and the border router to:
      - dump the rules to a file
      - transfer the file to the border router (via scp/sftp)
      - load the rules from the file on the border router
    I'm aware that I will need some scripts to parse and modify the rules, as the cPanel boxes have one ethernet interface and some aliases while the border router has two ethernet interfaces and some loopbacks. All machines involved run Linux.

    EDIT, as per @pjmorse's comment: the plugin consists of a bunch of perl and config files. The part I'm interested in is a process (lfd) which scans log files and installs iptables rules (and sends an alert email). The thing is, it upgrades quite often (once or twice a week) and is itself 7000 lines of perl, so I'm not comfortable tampering with it.
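
    A sketch of the push script such setups tend to end up with; the chain name BLOCKLIST, the host alias "border", and the assumption that the bans appear as plain "-s <ip> ... -j DROP" entries are all mine (csf keeps its denies in its own chains, so the extraction step would need adjusting):

      #!/bin/sh
      # run from a one-minute cron job on each cPanel box, or from a csf/lfd post-ban hook if available
      iptables-save | awk '$1 == "-A" && /-j DROP/ { for (i = 1; i <= NF; i++) if ($i == "-s") print $(i+1) }' | sort -u > /tmp/banned-ips.txt
      scp -q /tmp/banned-ips.txt border:/tmp/banned-ips.txt
      ssh border 'iptables -F BLOCKLIST; while read ip; do iptables -A BLOCKLIST -s "$ip" -j DROP; done < /tmp/banned-ips.txt'

    The BLOCKLIST chain is assumed to already exist on the router and to be jumped to from INPUT/FORWARD, so flushing and refilling it never touches the router's own rules.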

    Read the article

  • How to set up a server without a hosting control panel

    - by A4J
    I have always used a control panel on my dedicated servers - from cPanel to Plesk to Virtualmin - and I am now considering ditching a CP altogether and manually editing config files. My requirements are fairly simple: I will host multiple sites on the server, some Apache with PHP & MySQL and some Passenger with Rails & Postgres. All will require email SMTP/POP. FTP and stats will not be required. Could someone please give me a quick run-down of what I would need to do, in terms of installing software and configuration? My server will come with a base install of CentOS 6.4 minimal. My thoughts so far:
      - Install/update the latest versions of MySQL & Postgres (are they 'safe' out of the box, or do I need to do anything else like set up root passwords?)
      - Install Apache & PHP (again, are the base installs good to go or do they require security tweaks?)
      - Set up nameservers/hostnames/reverse DNS etc. (any guides on how to do this, please?)
      - Install Rubygems
      - Install and configure Dovecot and Postfix (any tips on doing this, or links to how-tos that cover it?)
      - Set up each website (any links to guides on how to do this?)
      - Install/configure a firewall (or is the default install good to go?)
    Any other tips or advice would be greatly appreciated, as would links to guides or how-tos.
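
    A rough first pass at the package side on CentOS 6, sketched from the stock repositories (Passenger and a current Ruby usually come from third-party repos or gems, which is left out here; neither database is locked down out of the box, hence the last line):

      yum -y update
      yum -y install httpd php php-mysql mysql-server postgresql-server postfix dovecot
      service postgresql initdb && chkconfig postgresql on
      chkconfig httpd on && chkconfig mysqld on && chkconfig postfix on && chkconfig dovecot on
      service mysqld start && mysql_secure_installation

    From there it is largely per-service config files: Apache vhosts under /etc/httpd/conf.d/, Postfix in /etc/postfix/main.cf, Dovecot in /etc/dovecot/, and iptables rules for 22/25/80/110/143/443.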

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small userbase, there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable.

    The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say one on each US coast, one in Europe, one in east Asia). File system synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by the service provider, though we also do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for https, of course.

    Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?

    Read the article

  • Are my web server permissions for uploading correct?

    - by user1699176
    I'm on Debian and I have my website in the directory /srv/www/mysite.com/public_html. I set the owner to www-data:www-data on /srv/www. I have root disabled and created a sudo user with uid/gid 1000:1000. I would also like to use this user to upload to /srv/www, so I added my sudo user to the www-data group. I originally got a message saying that I didn't have permission to upload a file to that directory. After playing around with permissions for a while I was finally able to upload properly, but I'm not sure whether this setup is correct. I'm hesitant to change it for now since it actually works, so I thought I'd ask for advice. I think what I ended up doing was this:

      sudo chown -R www-data:www-data /srv/www
      sudo chmod g+s /srv/www
      sudo usermod -aG www-data myuser
      sudo chgrp -R www-data /srv/www
      sudo chmod -R g+w /srv/www

    When I was finally able to successfully upload a file (with FileZilla) it showed the owner as myuser myuser. Shouldn't it have been www-data myuser? My question is whether this is correct and whether there are any potential security issues. For example, I wasn't sure if I was actually supposed to use "myuser" to own the /srv/www directory instead:

      sudo chown -R myuser:myuser /srv/www

    or maybe

      sudo chown -R www-data:myuser /srv/www

    If you need more info, let me know, thanks.
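
    For reference, a quick way to check whether the setgid bit is doing its job; with the commands above, new files should keep the uploader as owner but inherit www-data as the group. Note that "chmod g+s /srv/www" only affects that one directory; subdirectories that already existed need it applied explicitly:

      ls -ld /srv/www                                      # expect an 's' in the group bits and group www-data
      sudo find /srv/www -type d -exec chmod g+s {} \;     # apply the setgid bit to existing subdirectories too
      touch /srv/www/mysite.com/public_html/test.txt
      ls -l /srv/www/mysite.com/public_html/test.txt       # owner should be myuser, group should be www-data

    The owner of a new file is always the creating user, so "myuser" as owner is expected; "myuser" as the group suggests the file was created in a directory that did not yet have the setgid bit.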

    Read the article

  • Enabling `mod_rewrite` apache, permissions issues

    - by rudolph9
    I am attempting to enable mod_rewrite on the Apache2 web server that ships with Mac OS X 10.7.4, following these instructions, ultimately to host CakePHP applications. I run into permissions issues accessing the site via a web browser when I change the directory block in /etc/apache2/users/username.conf from:

      <Directory "/Users/username/Sites/">
          Options Indexes FollowSymLinks MultiViews
          AllowOverride none
          Order allow,deny
          Allow from all
      </Directory>

    to:

      <Directory "/Users/username/Sites/">
          Options Indexes MultiViews
          AllowOverride none
          Order allow,deny
          Allow from all
      </Directory>
      <Directory "/Users/username/Sites/cakephp_app/">
          Options Indexes FollowSymLinks MultiViews
          AllowOverride all
          Order allow,deny
          Allow from all
      </Directory>

    The .htaccess files are the CakePHP 2.2.2 defaults, as follows.

    /Users/username/Sites/cakephp_app/.htaccess:

      <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteRule ^$ app/webroot/ [L]
          RewriteRule (.*) app/webroot/$1 [L]
      </IfModule>

    /Users/username/Sites/cakephp_app/app/.htaccess:

      <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteRule ^$ webroot/ [L]
          RewriteRule (.*) webroot/$1 [L]
      </IfModule>

    /Users/username/Sites/cakephp_app/app/webroot/.htaccess:

      <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteRule ^(.*)$ index.php [QSA,L]
      </IfModule>

    When I request http://0.0.0.0/~username/cakephp_app/index.php in a web browser, the response is:

      Not Found
      The requested URL /Users/username/Sites/cakephp_app/app/webroot/ was not found on this server.
      Apache/2.2.21 (Unix) DAV/2 PHP/5.3.10 with Suhosin-Patch Server at 0.0.0.0 Port 80

    Upon requests to http://0.0.0.0/~username/ and http://0.0.0.0/~username/cakephp_app/, the following is added to /var/log/apache2/error_log:

      [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/Users, referer: http://0.0.0.0/~username/
      [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/favicon.ico

    What is causing the issue? Also, is there a server program, ideally available via a Homebrew script, that would make hosting CakePHP applications for testing purposes more effective and efficient?
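
    For what it's worth, the error_log lines show the rewritten path being resolved against the server DocumentRoot instead of the user's Sites directory, which is the classic symptom of a missing RewriteBase in a ~userdir install. A hedged variant of the webroot .htaccess, with the base path assumed from the URLs above:

      <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteBase /~username/cakephp_app/app/webroot/
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteRule ^(.*)$ index.php [QSA,L]
      </IfModule>

    The other two .htaccess files would need matching RewriteBase lines (/~username/cakephp_app/ and /~username/cakephp_app/app/ respectively) for the chain of rewrites to stay inside the userdir.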

    Read the article

  • Long running php script hangs/terminates on IIS 7.5

    - by Rich
    I'm a bit of a newbie when it comes to configuring IIS 7.5 and PHP, so apologies if this is a silly question, but I've been wrestling with this for over half the day and need some fresh input. I have a PHP application running on IIS 7.5, with PHP 5.4 running as FastCGI. The application works absolutely fine, with the exception that long-running PHP scripts seem to hang: no 500 error, they simply never complete and return the results to the browser. I've written a simple test script to eliminate the possibility of a programming error in the main app:

      <?php
      /* test timeout */
      /*set_time_limit(110);*/
      echo "Testing time out in seconds\n";
      for ($i = 0; $i < 175; $i++) {
          echo $i." -- ";
          if(sleep(1)!=0) {
              echo "sleep failed script terminating";
              break;
          }
      }
      ?>

    If I run the script beyond 175 seconds it hangs; below that it returns the results to the browser. Here are the timeout parameters I've set for PHP and FastCGI. I've also tried setting these really low in order to trigger various timeout errors, and succeeded, which brings me to the conclusion that there's another setting I'm missing... perhaps.

      FastCGI: Activity Timeout = 800, Idle Timeout = 900, Request Timeout = 800
      PHP: max_execution_time = 700

    Any solutions or pointers in the right direction would be very... very welcome. Thanks
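
    One more place worth checking, sketched with appcmd; the connection timeout under webLimits defaults to two minutes and applies on top of the FastCGI and PHP limits, though whether it is the culprit at 175 seconds is only an assumption:

      %windir%\system32\inetsrv\appcmd list config -section:system.webServer/fastCgi
      %windir%\system32\inetsrv\appcmd list config -section:system.applicationHost/webLimits
      %windir%\system32\inetsrv\appcmd set config -section:system.applicationHost/webLimits /connectionTimeout:00:15:00 /commit:apphost

    Listing the effective values first at least confirms which of the three layers (IIS, FastCGI, PHP) owns the limit that is actually being hit.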

    Read the article

  • How can I generate filesystem images that are usable on many different virtualization systems?

    - by Mark Longair
    I have written a script that generates a root filesystem image (based on Debian lenny) suitable for User-Mode Linux. (Essentially this script creates a filesystem image, mounts it with a loop device, uses debootstrap to create a lenny install, sets up a static IP for TUN/TAP networking, adds public keys for login by SSH and installs a web application.) These filesystem images work pretty well with UML, but it would be nice to be able to generate similar images that people can use on alternative virtualization software, and I'm not familiar with these options at all. In particular, since the idea is to use this image as a standalone server for testing the web application, it's important that the networking works. I wonder if anyone can suggest what would be involved in customizing such root filesystem images so that they could be used with other virtualization software, such as VMware, Xen or as an Amazon EC2 instance? Two particular concerns are:
      - If such systems don't use a raw filesystem image (e.g. they need headers with metadata or are compressed in some particular way), do there exist tools to convert between the different formats?
      - I assume that in the filesystem, at least /etc/network/interfaces will have to be customized, but are more involved changes likely to be necessary?
    Many thanks for any suggestions...
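
    On the format-conversion concern, qemu-img covers most of the raw-to-whatever cases; a sketch with assumed file names:

      qemu-img convert -f raw -O vmdk  rootfs.img rootfs.vmdk    # VMware
      qemu-img convert -f raw -O qcow2 rootfs.img rootfs.qcow2   # Xen/KVM using qcow2
      qemu-img convert -f raw -O vpc   rootfs.img rootfs.vhd     # VHD-based platforms

    EC2 is the odd one out: classic S3-backed AMIs are bundled and registered from a raw image with the EC2 AMI tools (ec2-bundle-image, ec2-upload-bundle, ec2-register) rather than converted, and the image needs a DHCP network config and the right kernel/ramdisk arrangement rather than a static TUN/TAP address.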

    Read the article

  • Router(s) issue: DNS queries sporadically fail with multiple computers hooked in

    - by bob-the-destroyer
    Basically, after anywhere from 5-60 minutes, DNS queries fail for a few minutes, then slowly begin to resolve correctly. Then the cycle repeats. This occurs only when more than one computer is on the network, and all computers on the network experience the same sporadic DNS outage at the same time. Wireless or wired, Linux or Windows, fresh OS install or old, browser or ping: same symptoms. Duplicated on 3 routers (not chained together, mind you), 3 ISPs and 3 separate locations over the past several months. The only common theme is a single 5-year-old Windows XP laptop which has been in use on the network throughout all this. There may also be anywhere between 1-10 devices hooked up, wired or wirelessly, at a time. The only reprieve I have from this torture is using a VPN to an outside source - always smooth sailing. I typically set up any router to a) use WPA2/etc. security; b) MAC whitelist; c) UPnP off (if available); d) always update firmware when available; e) obtain DNS from the ISP automatically; f) act as DHCP server for the internal network. Adjusting channels has no effect. Any ideas?
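
    A quick way to narrow down whether the router's DNS proxy or the upstream resolvers are the part that fails, run on any affected machine while an outage is happening (8.8.8.8 is used only as a well-known public resolver; substitute the ISP resolver address for the last line):

      nslookup example.com              # uses the DHCP-assigned resolver, normally the router itself
      nslookup example.com 8.8.8.8      # bypasses the router's DNS proxy entirely
      nslookup example.com <ISP DNS IP> # talks straight to the resolver the router forwards to

    If the second and third keep working while the first fails, the router (or whatever is flooding it, the XP laptop being the obvious suspect) is the place to look.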

    Read the article

  • Repeated installation of malicious software to do outbound DDOS attack [duplicate]

    - by user224294
    This question already has an answer here: How do I deal with a compromised server? (12 answers)

    We have an Ubuntu virtual private server hosted by a Canadian company. Our VPS was compromised and used to carry out an outbound DDOS attack, as reported by the host's security team. There are 4 files in /boot that look like iptables binaries; note the capital letters "I" and "L":

      VPS:/boot# ls -lha
      total 1.8M
      drwx------  2 root root 4.0K Jun  3 09:25 .
      drwxr-xr-x 22 root root 4.0K Jun  3 09:25 ..
      -r----x--x  1 root root 1.1M Jun  3 09:25 .IptabLes
      -r----x--x  1 root root 706K Jun  3 09:23 .IptabLex
      -r----x--x  1 root root   33 Jun  3 09:25 IptabLes
      -r----x--x  1 root root   33 Jun  3 09:23 IptabLex

    We deleted them, but after a few hours they appeared again and the attack resumed. We deleted them again; they resurfaced again, and so on. So finally we had to disable our VPS. How can we find the malicious script somewhere on the VPS that automatically reinstalls this attack software? Thanks.
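
    The linked duplicate's advice (rebuild from a known-good image and rotate all credentials) still stands; if you want to locate the persistence mechanism first, a few places worth checking as root, sketched:

      crontab -l; ls -la /etc/cron* /var/spool/cron
      ls -la /etc/init.d /etc/rc*.d | grep -i iptable
      ps auxww | grep -i iptable      # the running process that keeps re-dropping the files
      lsof -nP -i                     # unexpected listeners or outbound floods

    With a rootkit of this family none of the above can be fully trusted from inside the compromised system, which is why the rebuild is still the real fix.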

    Read the article

  • Issues with returned mail sent to web-based email domains

    - by Beeder
    My company is having issues with mail we send to external domains being returned. A few weeks ago we replaced a firewall and changed ISPs, and subsequently began having issues RECEIVING emails from external sources because we hadn't updated our new IPs in the DNS records. After making the necessary configuration changes and setting up SMTP forwarding over port 25 to our mail server, everything was working fine up until a few days ago, when outgoing mail started being returned to us. We aren't having any trouble communicating internally (to recipients on our domain), but it seems we're having trouble with outbound messages to web-based email recipients (@hotmail, @live, @yahoo, @gmail, etc.).

    Currently we are running Server 2003 SP2 and Exchange 2003. I'm very unfamiliar with configuring Exchange and could really use some help in narrowing down the possibilities. I did some research and am becoming suspicious of Sender ID being the culprit, due to our recent IP address change and the likelihood that Sender ID is identifying us as a fake domain. Am I going in entirely the wrong direction? Any input or guidance would be infinitely appreciated. This is the message that is returned when an outbound message fails; this particular one was sent to my @live.com account for testing purposes:

      Your message did not reach some or all of the intended recipients.
      The following recipient(s) could not be reached:
      [email protected] on 5/17/2012 3:02 PM
      There was a SMTP communication problem with the recipient's email server. Please contact your system administrator.
      Unfortunately, messages from xx.x.xx.x weren't sent. Please contact your Internet service provider since part of their network is on our block list.

    I tried a reverse DNS lookup and found that we are set up as forward-confirmed reverse DNS. So do I just need to contact my ISP and have them correct their DNS records, or is this something I can solve on our end?
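
    The "part of their network is on our block list" wording points at an IP-reputation block rather than Sender ID. Two checks that are quick to run from any machine (reverse the octets of the new outbound IP for the DNSBL query; the domain and IP below are placeholders):

      nslookup 4.3.2.1.zen.spamhaus.org       # an answer of 127.0.0.x means the IP 1.2.3.4 is listed
      nslookup -type=TXT yourcompany.com      # confirm the SPF / Sender ID record includes the new IP

    If the new address range is listed, delisting goes through the blocklist operator and/or the ISP; updating the SPF record is still worth doing but will not clear an existing listing by itself.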

    Read the article

  • Exchange 2010: Send emails via SMTP with custom From address to outside the domain

    - by marsze
    The requirements: (1) connect to Exchange via SMTP with (2) basic authentication and send emails with a (3) custom From address to (4) recipients outside the domain. I was able to get (1) - (3) working. I created a dedicated receive connector for this task and configured it like this:

      Permissions (for authenticated users):
        ms-Exch-SMTP-Accept-Any-Recipient
        ms-Exch-SMTP-Accept-Authoritative-Domain-Sender
        ms-Exch-SMTP-Accept-Any-Sender
      Authentication:
        TLS
        Basic Authentication (without TLS)
        Exchange Server Authentication

    However, I'm still struggling with (4): I can send with "fake" From addresses to recipients inside the domain, and I can send with the original From address to recipients outside the domain. Can you tell me what I'm missing to configure Exchange to send emails with changed From addresses to recipients outside the domain? (Or is this even possible at all?) Thanks.

    UPDATE: I have to correct myself; it seems to be working after all. There must have been some issue with the mailbox I used for testing, because it turned out to work with other external mailboxes. However, I still have no idea what was different there... Anyway, you can take this as documentation on how to configure Exchange in such a way ;)
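
    For anyone reproducing this from the Exchange Management Shell rather than the GUI, the rough equivalent of the permission list above looks like the following (the connector and account names are placeholders):

      $rc = Get-ReceiveConnector "Relay Connector"
      $rc | Add-ADPermission -User "DOMAIN\relayuser" -ExtendedRights ms-Exch-SMTP-Accept-Any-Recipient
      $rc | Add-ADPermission -User "DOMAIN\relayuser" -ExtendedRights ms-Exch-SMTP-Accept-Any-Sender
      $rc | Add-ADPermission -User "DOMAIN\relayuser" -ExtendedRights ms-Exch-SMTP-Accept-Authoritative-Domain-Sender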

    Read the article

  • How do I get to the bottom of network latency and bandwidth issues

    - by three_cups_of_java
    I recently moved two blocks south. That move took me from Comcast to Broadstripe (both high-speed cable internet providers). Comcast was pretty good. Broadstripe sucks. I called them on the phone, and they basically brushed me off (politely). I want to come to them with some numbers, so I can say more than just "it's really slow". I still have access to my old Comcast service, so I can run the tests on both providers. Here's what I'm seeing with my new Broadstripe service:
      1) When I browse to most sites, there is a long delay (5-10 seconds) before the page starts loading in my browser
      2) The speed test tells me I have 12 megs down (bullshit)
      3) I have a server at my office. I just downloaded some files from it (using scp on the command line) and it reported 3.5 KB/s
    I'm an experienced programmer and spend most of my days on the command line and in vim. Networking, however, is not a strong point. I've played around with traceroute, but I'm not sure if it's the right tool to use. I have access to servers all over the country (I would just use Amazon EC2 to set up a test server), and I prefer to use Ubuntu for my testing. How can I come up with some hard numbers to show Broadstripe how crappy their service is?
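
    A sketch of numbers that are easy to collect identically on both connections (hostnames are placeholders; mtr and curl may need installing with apt-get):

      ping -c 20 8.8.8.8                                        # baseline latency and packet loss
      mtr --report --report-cycles 50 my-ec2-host.example.com   # per-hop latency/loss along the path
      curl -o /dev/null -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' http://www.example.com/
      scp me@my-ec2-host.example.com:testfile-10MB /dev/null    # note the KB/s figure scp prints

    Running the same set over the old Comcast line gives a side-by-side comparison, and the time_namelookup column in particular will show whether the 5-10 second page stalls are DNS rather than raw bandwidth.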

    Read the article

  • Basic connectivity issues between Win 7 and XP mixed wired/wireless network. [Solved]

    - by Pulse
    Setup:
      - Windows 7 x64 Ultimate desktop, hard-wired to an Asus WL500gp router (WL500gpv2-1.9.2.7-d-r1445 firmware)
      - Several bridged VirtualBox VMs running XP, 7, Ubuntu Server 10.04, Mint 9 and SuSE 11.2
      - Windows XP Pro SP3 notebook with a D-Link AirPlus wireless network card
      - No firewall or other security software currently running on either platform (at least for the duration of the test)

    Situation:
      - The router is acting as DHCP server; clients receive correct addresses and additional parameters, and internet connectivity is available from all clients
      - Windows 7 sharing is set to network type = work (not HomeGroup)
      - NetBT is disabled on all clients, using SMB over TCP

    What I can do:
      - Ping the router and internet addresses from the wireless XP notebook
      - Ping the Win 7 desktop and any VM from the XP wireless notebook
      - Ping all devices from the router
      - All VMs and the Win 7 desktop can ping each other, the router, and internet addresses

    What I can't do:
      - Ping the XP wireless notebook from either the Win 7 desktop or the VMs; it always returns a destination host unreachable error. Tracert resolves the name of the XP notebook but also returns destination host unreachable.

    From the above it would seem that something is blocking connectivity in a single direction only (from the Win 7 box to the Win XP notebook), yet the router can ping the XP notebook. Some fresh input would be most welcome, as this is beginning to drive me batty. Thanks
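
    One quick thing worth capturing from the Win 7 desktop while a ping is failing, since "destination host unreachable" reported by the sender usually means ARP never resolved (the address is a placeholder):

      ping 192.168.1.50
      arp -a | findstr "192.168.1.50"

    If the ARP table has no entry for the notebook (or an invalid one), the block is at layer 2, which points at wireless client/AP isolation on the router rather than anything in Windows itself.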

    Read the article
