Search Results

Search found 20785 results on 832 pages for 'idea'.

  • PSTN Trunk TDM400P Install on Asterisk / Trixbox

    - by Jona
    Hey All, I'm trying to get a TDM400P card with an FXO module to connect to our PSTN line. The card is correctly detected by Linux:

        [trixbox1.localdomain asterisk]# lspci
        00:09.0 Communication controller: Tiger Jet Network Inc. Tiger3XX Modem/ISDN interface

    And Asterisk can see the channel:

        trixbox1*CLI> dahdi show channel 1
        Channel: 1
        File Descriptor: 14
        Span: 1
        Extension:
        Dialing: no
        Context: from-pstn
        Caller ID:
        Calling TON: 0
        Caller ID name:
        Mailbox: none
        Destroy: 0
        InAlarm: 1
        Signalling Type: FXS Kewlstart
        Radio: 0
        Owner: <None>
        Real: <None>
        Callwait: <None>
        Threeway: <None>
        Confno: -1
        Propagated Conference: -1
        Real in conference: 0
        DSP: no
        Busy Detection: no
        TDD: no
        Relax DTMF: no
        Dialing/CallwaitCAS: 0/0
        Default law: ulaw
        Fax Handled: no
        Pulse phone: no
        DND: no
        Echo Cancellation: 128 taps (unless TDM bridged), currently OFF
        Actual Confinfo: Num/0, Mode/0x0000
        Actual Confmute: No
        Hookstate (FXS only): Onhook

    I have configured a "ZAP Trunk (DAHDI compatibility Mode)" with the ZAP identifier 1 and an outbound route, but whenever I try to make an external call through it I get the "All circuits are busy now, please try your call again later" message. The FXO module is directly connected to our phone line from BT via a BT-to-RJ11 cable. I'm guessing I've missed a configuration step somewhere, but I have no idea where; any help greatly appreciated.
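    One detail in the output above stands out: "InAlarm: 1" means DAHDI thinks there is no working line behind the span, which is precisely the condition that produces "all circuits are busy" on outbound calls. A hedged checklist for that case -- the file contents below are stock DAHDI/Asterisk defaults, so treat them as assumptions to adapt:

        # Does the card itself see the line? Look for alarms on span 1:
        dahdi_scan
        cat /proc/dahdi/1

        # /etc/dahdi/system.conf -- an FXO port uses FXS-style signalling:
        #   fxsks=1
        #   loadzone = uk
        #   defaultzone = uk

        # /etc/asterisk/chan_dahdi.conf (or dahdi-channels.conf):
        #   signalling = fxs_ks
        #   context = from-pstn
        #   channel => 1

        # Re-apply the config and restart the channel driver:
        dahdi_cfg -vv
        asterisk -rx "dahdi restart"

    If the span still shows an alarm with the BT lead plugged in, suspect the cable: UK BT-to-RJ11 leads are often wired for modems and may not carry the line pair on the pins the TDM400P expects.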

    Read the article

  • PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14 + SuExec => Connection reset by peer: mod_fcgid: error reading data from FastCGI server

    - by Zigzag
    I'm trying to use PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14, but I always get the error "Connection reset by peer: mod_fcgid: error reading data from FastCGI server", and Apache returns a 500 every time I try to execute a PHP page. I compiled Apache with these options:

        ./configure --with-mpm=worker --enable-userdir=shared --enable-actions=shared --enable-alias=shared --enable-auth=shared --enable-so --enable-deflate \
            --enable-cache=shared --enable-disk-cache=shared --enable-info=shared --enable-rewrite=shared \
            --enable-suexec=shared --with-suexec-caller=www-data --with-suexec-userdir=site --with-suexec-logfile=/usr/local/apache2/logs/suexec.log --with-suexec-docroot=/home

    Then PHP:

        ./configure --with-config-file-path=/usr/local/apache2/php --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-zlib --enable-exif --with-gd --enable-cgi

    Then Fcgid:

        APXS=/usr/local/apache2/bin/apxs ./configure.apxs

    The vhost is:

        <Directory /home/website_panel/site/>
            FCGIWrapper /home/website_panel/cgi/php .php
            ...
            ErrorLog /home/website_panel/logs/error.log
        </Directory>

    cat /home/website_panel/logs/error.log:

        [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:42 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:42 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:43 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:43 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php

    The suexec log:

        root:/usr/local/apache2# cat /var/log/apache2/suexec.log
        [2010-03-07 22:11:05]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:11:15]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:11:23]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:42]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:43]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php

        root:/usr/local/apache2# cat logs/error_log
        [Sun Mar 07 22:18:47 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/local/apache2/bin/suexec)
        [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Memory Allocated 0 bytes (each conf takes 32 bytes)
        [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Version 0.7 - Initialized [0 Confs]
        [Sun Mar 07 22:18:47 2010] [notice] Apache/2.2.14 (Unix) mod_fcgid/2.3.5 configured -- resuming normal operations

        root:/usr/local/apache2# /home/website_panel/cgi/php -v
        PHP 5.3.2 (cli) (built: Mar 7 2010 16:01:49)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    If someone has got an idea, I want to hear it ^^ Thanks!
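    The last command above is the big clue: the wrapper reports "PHP 5.3.2 (cli)", and mod_fcgid needs the CGI/FastCGI SAPI -- a CLI binary exits as soon as it starts, which produces exactly "premature end of script headers" plus a connection reset. A sketch of the usual fix, assuming the --enable-cgi build installed a php-cgi binary to /usr/local/bin/php-cgi (the path is an assumption):

        # Make the suexec wrapper a tiny script that execs php-cgi:
        cat > /home/website_panel/cgi/php <<'EOF'
        #!/bin/sh
        PHP_FCGI_MAX_REQUESTS=1000
        export PHP_FCGI_MAX_REQUESTS
        exec /usr/local/bin/php-cgi
        EOF

        # suexec refuses wrappers that aren't owned by the target user or that
        # are group/world-writable:
        chown website_panel:website_panel /home/website_panel/cgi/php
        chmod 755 /home/website_panel/cgi /home/website_panel/cgi/php

        # Verify -- this should now report "(cgi-fcgi)" instead of "(cli)":
        /home/website_panel/cgi/php -v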

    Read the article

  • Trac problem: AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule"

    - by Janosch
    This problem has already been described multiple times on different mailing lists, but no solution has been published yet. My original setup is as follows (in the meantime I have a simpler one on Windows 7):

        - Ubuntu server with Apache 2.2 and Python 2.7
        - virtual Python environment created with virtualenv
        - Babel, Genshi and Trac installed in this order using pip in the virtual environment

    Trac seems to run fine with tracd, but when visiting it through Apache I get the following error on an official Trac error page:

        AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule"

    The stack trace looks like this:

        Traceback (most recent call last):
          File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/web/main.py", line 473, in _dispatch_request
            dispatcher.dispatch(req)
          File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/web/main.py", line 154, in dispatch
            chosen_handler = self.default_handler
          File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/config.py", line 691, in __get__
            self.section, self.name))
        AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule". Please update the option trac.default_handler in trac.ini.

    I have already tried a lot to get to the root of the problem; to me it looks as if all native Trac components refuse to load. When one explicitly imports these components in the WSGI handler, some of them somehow start to work. Since I suspected the virtual environment, I dropped it, manually copied all dependencies (Babel, Genshi, Trac, ...) into one directory, and added this directory to sys.path in the WSGI handler. I get exactly the same error. Since this setup is now independent of the environment, it can easily be tried out on any other machine (Windows or Linux) running Apache 2 and Python 2.7. On my Windows 7 machine I got exactly the same problem. I zipped up the whole bundle; it can be downloaded from http://www.xterity.de/tmp/trac-installation.zip . In the Apache configuration (Windows 7 machine) I use the following settings:

        Alias /trac/chrome/common "D:/workspace/trac-installation/trac-resources/common"
        Alias /trac/chrome/site "D:/workspace/trac-installation/trac-resources/site"
        WSGIScriptAlias /trac "D:/workspace/trac-installation/apache/handler.wsgi"

        <Directory "D:/workspace/trac-installation/trac-resources">
            Order allow,deny
            Allow from all
        </Directory>
        <Directory "D:/workspace/trac-installation">
            Order allow,deny
            Allow from all
        </Directory>
        <Location "/trac">
            Order allow,deny
            Allow from all
        </Location>

    And my handler.wsgi looks like this:

        import os
        import sys

        sys.path.append('D:/workspace/trac-installation/dependencies/')
        os.environ['TRAC_ENV'] = 'D:/workspace/trac-installation/trac-environments/Esp004'
        os.environ['PYTHON_EGG_CACHE'] = 'D:/workspace/trac-installation/eggs'

        import trac.web.main
        application = trac.web.main.dispatch_request

    Has anybody got an idea what the problem could be, or how to find out where it comes from?
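    Components "refusing to load" under Apache but not under tracd usually comes down to the web server's user being unable to load the eggs -- most often an egg cache it cannot write to, or site-packages it cannot read. A hedged set of checks for the Ubuntu setup (the www-data user name and the paths are assumptions; adapt to your layout):

        # Can the Apache user import the wiki component at all?
        sudo -u www-data python -c "import trac.wiki.web_ui"

        # Make the egg cache (the dir set in PYTHON_EGG_CACHE) writable by
        # the Apache user:
        chown -R www-data /srv/trac/eggs
        chmod -R u+rwX /srv/trac/eggs

        # And confirm trac.ini isn't asking for a differently spelled handler:
        grep default_handler /srv/trac/trac-environments/*/conf/trac.ini
        # expected: default_handler = WikiModule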

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing, and I am searching for opinions and starting points on a good network/storage layout that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me a while. I am a bit of a tech nerd and have a moderate tolerance for setting the solution up, but I would prefer maintenance to be fairly low once it is set up; I am willing to accept some tradeoffs.

    Existing equipment:

        - Router - Netgear WNDR3700 (gigabit)
        - Router - D-Link Gamerlounge DGL-4300 (gigabit)
        - Switch - 16-port Trendnet green switch (gigabit)
        - Switch - 5-port Trendnet green (gigabit)
        - Computer - i7-950 office computer (gigabit ethernet)
        - Computer - Q6600 quad-core media center, hooked up to the TV, records shows (gigabit ethernet)
        - Computer - Acer 1810T ultraportable laptop (gigabit ethernet and wireless-N)
        - NAS - Intel SS4200-E (gigabit)
        - External hard drive - 2TB WD Green drive (eSATA)
        - All kinds of miscellaneous network-connected TV, Blu-ray, Verizon network extender, HDHomeRun TV tuners, etc.

    Requirements:

        - Robust backup solution, including offsite backup, for a growing collection of huge family picture files and personal files, around 1.5TB
        - Central location for all users' files, while also keeping them secure from each other
        - Storage for terabytes of movie backups and recorded TV, with access from all computers (maybe around 4TB eventually)
        - Possibility to host files to friends and family easily

    Nice to have:

        - Backup of the terabytes of movie backups

    Intriguing possibilities:

        - Capability to have users' Windows desktops and files look the same from all network computers

    I am not sure whether the new Windows Home Server 2011 would fit into this well, whether I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I simply back up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility I am considering now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, and important files would also be backed up offsite by carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risk with the irreplaceable files; I can handle some degree of risk with the movies and other files. I'm looking for critiques of this idea as well as other possibilities. To summarize, my goal is high functionality, media capability, and robust backup of irreplaceable files.
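    One point worth pinning down before picking hardware: RAID mirrors corruption right along with the data -- it protects against a dead disk, not against bit rot or accidental overwrites, so a corrupted file will be faithfully duplicated into the offsite copy. A low-tech guard is a checksum manifest that is verified before any backup refresh; a minimal sketch, with illustrative paths:

        # Build a manifest of the irreplaceable files once:
        find /data/photos -type f -print0 | xargs -0 md5sum > /data/photos.md5

        # Later, before trusting or refreshing the offsite copy:
        md5sum --quiet -c /data/photos.md5   # prints only files that no longer match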

    Read the article

  • Passive FTP on Windows Server 2008 R2 using the IIS7 FTP-Server

    - by ntor
    Hello ServerFault community! During the last few days I have been setting up a Windows Server 2008 R2 in VMware. I installed the standard FTP server on it via the Web Server (IIS) role. Everything works fine when accessing my FTP site with ftp://localhost in Firefox, and I can also reach it via the local IP of my server -- everything works in my LAN. But here's my problem: I want access "from outside", using the external IP or a DynDNS URL. I have a Linksys router in front of my server and I am forwarding all the important ports. If you are now thinking "this idiot has probably forgotten some ports", I must disappoint you: I can even reach my server's website and work in some web interfaces. The problem is passive FTP (active works for me). I always get a timeout when, e.g., FileZilla waits for a response to the LIST command.

    The one big thing I don't get is why my server answers the PASV command with a port like 40918, even though I have restricted the data port range for passive FTP (in the IIS Manager) to e.g. [5000-5009]. I simply don't want to open and forward all possible data ports! Another thing is that I can't specify a static external IP address for my server, since I don't own one.

    I hope I have explained my problem in a comprehensible way. If not, simply ask by posting a comment!

    Regards, ntor

    PS: I have already worked through these articles:

        - Out Of Band FTP 7 shows "Operation timed out"
        - How to Configure Windows Firewall for a Passive Mode FTP Server
        - ServerFault: Passive ftp on Server 2008

    EDIT: There is one idea rising up in my mind: when I use FileZilla to connect in passive mode, I always get something like this:

        227 Entering Passive Mode (192,168,1,102,160,86)

    According to a Rhinosoft article, FileZilla then tries to connect on port 160*256+86 = 41046, although I have restricted the data ports (as mentioned above). Could this be caused by the router not forwarding out-ports directly, but using different ones? (The IP address given is the local one, since I'm not able to define a static external one in the IIS Manager.)
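    The 227 reply is the whole story for a passive-mode client: it connects to the advertised address on port p1*256 + p2, so if the server announces 41046 while only 5000-5009 are forwarded, every data connection times out. In IIS 7 the data-channel range takes effect at the server level ("FTP Firewall Support" on the server node, followed by an FTP service restart), so a per-site restriction alone may be why 40918/41046 is still being offered. A quick way to watch the negotiation from any outside Linux box -- the hostname and credentials are placeholders:

        curl -v ftp://myserver.dyndns.org/ --user username:password
        # In the verbose output, find the line
        #   227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)
        # and compute p1*256 + p2. Once the server-level range really says
        # 5000-5009, the value should land inside it, and forwarding only
        # port 21 plus 5000-5009 on the Linksys is enough.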

    Read the article

  • getfacl command and Linux file permissions - getting 403 error when accessing Wordpress

    - by tommytwoeyes
    I'm configuring WordPress for a friend, and I suspect I just screwed up the WordPress directory permissions using setfacl. Webfaction doesn't allow sudo or let me change the directory group ownership using chown. Now it appears that something I did is causing the entire application to give me 403 errors when I try to access it. The current directory listing looks like this (I temporarily set the whole thing to 777 to try to recover access to it):

        drwxrwsr-x+  6 myusername myusername  4096 Mar  2 07:07 ./
        drwxr-xr-x   3 root       root        4096 Feb 25 19:48 ../
        -rwxrwxr-x+  1 myusername myusername   286 Mar  2 06:33 gzip.php
        -rwxrwxr-x+  1 myusername myusername  4831 Mar  4 20:02 .htaccess
        -rwxrwxr-x+  1 myusername myusername   397 Feb 25 19:49 index.php
        -rw-rw-r--+  1 myusername myusername 15606 Feb 25 19:49 license.txt
        -rw-rw-r--+  1 myusername myusername  9200 Feb 25 19:49 readme.html
        drwxrwsr-x+  6 myusername myusername  4096 Feb 25 19:49 .svn/
        -rwxrwxr-x+  1 myusername myusername  4337 Feb 25 19:49 wp-activate.php
        drwxr-xr-x+ 10 myusername myusername  4096 Mar  4 20:03 wp-admin/
        -rwxrwxr-x+  1 myusername myusername 40283 Feb 25 19:49 wp-app.php
        -rwxrwxr-x+  1 myusername myusername   226 Feb 25 19:49 wp-atom.php
        -rwxrwxr-x+  1 myusername myusername   274 Feb 25 19:49 wp-blog-header.php
        -rwxrwxr-x+  1 myusername myusername  3931 Feb 25 19:49 wp-comments-post.php
        -rwxrwxr-x+  1 myusername myusername   244 Feb 25 19:49 wp-commentsrss2.php
        -rwxrwxr-x+  1 myusername myusername  3485 Feb 25 20:15 wp-config.php
        drwxr-xr-x+  6 myusername myusername  4096 Feb 26 08:52 wp-content/
        -rwxrwxr-x+  1 myusername myusername  1255 Feb 25 19:49 wp-cron.php
        -rwxrwxr-x+  1 myusername myusername   246 Feb 25 19:49 wp-feed.php
        drwxrwxr-x+  9 myusername myusername  4096 Feb 25 19:49 wp-includes/
        -rwxrwxr-x+  1 myusername myusername  1997 Feb 25 19:49 wp-links-opml.php
        -rwxrwxr-x+  1 myusername myusername  2453 Feb 25 19:49 wp-load.php
        -rwxrwxr-x+  1 myusername myusername 27787 Feb 25 19:49 wp-login.php
        -rwxrwxr-x+  1 myusername myusername  7774 Feb 25 19:49 wp-mail.php
        -rwxrwxr-x+  1 myusername myusername   494 Feb 25 19:49 wp-pass.php
        -rwxrwxr-x+  1 myusername myusername   224 Feb 25 19:49 wp-rdf.php
        -rwxrwxr-x+  1 myusername myusername   334 Feb 25 19:49 wp-register.php
        -rwxrwxr-x+  1 myusername myusername   226 Feb 25 19:49 wp-rss2.php
        -rwxrwxr-x+  1 myusername myusername   224 Feb 25 19:49 wp-rss.php
        -rwxrwxr-x+  1 myusername myusername  9655 Feb 25 19:49 wp-settings.php
        -rwxrwxr-x+  1 myusername myusername 18644 Feb 25 19:49 wp-signup.php
        -rwxrwxr-x+  1 myusername myusername  3702 Feb 25 19:49 wp-trackback.php
        -rwxrwxr-x+  1 myusername myusername  3210 Feb 25 19:49 xmlrpc.php

    The getfacl output looks like this:

        # file: .
        # owner: myusername
        # group: myusername
        user::rwx
        group::r-x
        group:apache:rw-
        mask::rwx
        other::r-x

    I simply wanted to change the ownership to myusername:apache and the file permissions to 755. I have no idea how to fix the permissions now. Any help would be really appreciated! Thanks, Tom
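    In the ACL above, the apache group entry is rw- with no execute bit, and a directory you cannot execute is a directory you cannot traverse -- that alone will 403 every request that has to pass through it. A hedged repair using only setfacl (no sudo or chown required, since you own the files):

        # rX grants read, plus execute only on directories and files that are
        # already executable:
        setfacl -R -m g:apache:rX .

        # Default ACLs (directories only) so new files inherit the entry:
        find . -type d -exec setfacl -m d:g:apache:rx {} +

        # Verify -- the entry should now read group:apache:r-x on directories:
        getfacl .

        # Then walk the blanket 777 back to something sane:
        find . -type d -exec chmod 755 {} +
        find . -type f -exec chmod 644 {} +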

    Read the article

  • Iptables config breaks Java + Elastic Search communication

    - by Agustin Lopez
    I am trying to set up a firewall for a server hosting a Java app and Elasticsearch. Both are on the same server and communicate with each other. The problem is that my firewall configuration prevents Java from connecting to ES, and I am not sure why. I have tried a lot of things, like opening the port range 9200:9400 to the server IP, without any luck -- although, from what I know, all communication inside the server should be allowed with this configuration. The idea is that ES should not be accessible from outside, but it should be accessible from this Java app; ES uses the port range 9200:9400. This is my iptables script:

        echo -e Deleting rules for INPUT chain
        iptables -F INPUT
        echo -e Deleting rules for OUTPUT chain
        iptables -F OUTPUT
        echo -e Deleting rules for FORWARD chain
        iptables -F FORWARD

        echo -e Setting by default the drop policy on each chain
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT
        iptables -P FORWARD DROP

        echo -e Open all ports from/to localhost
        iptables -A INPUT -i lo -j ACCEPT

        echo -e Open SSH port 22 with brute force security
        iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource
        iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
        iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force "
        iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
        iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT

        echo -e Open NGINX port 80
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        echo -e Open NGINX SSL port 443
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        echo -e Enable DNS
        iptables -A INPUT -p tcp -m tcp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -p udp -m udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

    And I get this in the Java app when this config is in place:

        org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];
            at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:292)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1185)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:537)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
            at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:304)
            at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
            at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:300)
            at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195)
            at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:700)
            at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:760)
            at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
            at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:403)

    Do any of you see a problem with this configuration and ES? Thanks in advance
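    The "no master" part of the exception is the hint: the node never finishes cluster discovery. With this ruleset, the likely casualties are ES discovery traffic and any app-to-ES connection addressed to the machine's eth0 IP rather than 127.0.0.1 (the -i lo rule only covers the loopback interface). A hedged pair of additions -- substitute the server's real address for 203.0.113.20:

        SERVER_IP=203.0.113.20
        # Let the box talk to itself on the ES ports via its own address:
        iptables -A INPUT -p tcp -s $SERVER_IP -d $SERVER_IP --dport 9200:9400 -j ACCEPT
        # Zen discovery defaults to multicast; let those pings in:
        iptables -A INPUT -d 224.0.0.0/4 -j ACCEPT

    Alternatively, sidestep it in elasticsearch.yml so everything stays on the loopback the firewall already trusts (settings per the 0.x/1.x naming):

        # network.host: 127.0.0.1
        # discovery.zen.ping.multicast.enabled: false

    and point the Java client at 127.0.0.1:9300.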

    Read the article

  • Need help making an ODBC MySQL Connection

    - by Andy Moore
    Short version: how do I connect from PowerShell to a MySQL ODBC 5.1 driver? I can't seem to find any connection string that has a correct "Provider" field for this particular setup. (See the bottom of this question for examples/errors.)

    Long version: I'm not a server guy, and I've been handed the task of setting up PowerGadgets on our network. I have a MySQL server running on a Linux box that is configured for remote access, with a user defined for remote access as well. On my Windows desktop PC I have PowerGadgets installed. I installed the MySQL ODBC 5.1 connector and went to Control Panel > Data Sources and set up a User DSN connection to the database. The connection, user and password seem to be correct, because it lists the tables of the database in my Windows control panel. Where I'm running into trouble is in three places in PowerGadgets:

        1. When selecting a data source, I can select "SQL Server". Inputting the server's IP address does not work, and I can't get this option to work at all.
        2. When selecting a data source, I can select "OleDB". This screen has a wizard that appears to populate all the correct information (including database table names!) for me. "Test Connection" runs great, but if I try to complete the wizard I get the error: "The .NET Framework data provider for OLEDB does not support the MS Ole DB provider for ODBC Drivers."
        3. When selecting a data source, I can select "ODBC". This screen does not have a wizard, and I cannot figure out a connection string that works. Typically it responds with the error "The field 'Provider' is missing". Googling ODBC connection strings doesn't reveal any examples with a "Provider" field, and I have no idea what to put in there. The connection string for #2 above contains "SQLOLEDB" as a provider, but upon inputting that value into this connection string I get the same connection error as in #2.

    I believe I can solve my problem by figuring out a connection string for #3, but I don't know where to get started. (PowerGadgets also has PowerShell support, but I believe I will run into the same problem there.)

    Here's my current PowerShell connection attempt, which doesn't work:

        invoke-sql -connection "Driver={MySQL ODBC 5.1 Driver};Initial Catalog=hq_live;Data Source=HQDB" -sql "Select * FROM accounts"

    It spits back the error: "Invoke-Sql : An OLE DB Provider was not specified in the ConnectionString. An example would be, 'Provider=SQLOLEDB;'."

    Another string that doesn't work:

        invoke-sql -connection "Provider=MSDASQL.1;Persist Security Info=False;Data Source=HQDB;Initial Catalog=hq_live" -sql "select * from accounts"

    And the error: "The .Net Framework Data Provider for OLEDB (System.Data.OleDb) does not support the Microsoft OLE DB Provider for ODBC Drivers (MSDASQL). Use the .Net Framework Data Provider for ODBC (System.Data.Odbc)."

    Read the article

  • Forwarding udp ports iptables packets "lost"?

    - by Dindihi
    I have a Linux router (Debian 6.x) where I forward some ports to internal services. Some TCP ports (like 80, 22, ...) are OK. I have one application listening on port 54277/udp; no return traffic comes from this app, I only receive data on this port. Router settings:

        cat /proc/sys/net/ipv4/conf/all/rp_filter     -> 1
        cat /proc/sys/net/ipv4/conf/eth0/forwarding   -> 1
        cat /proc/sys/net/ipv4/conf/ppp0/forwarding   -> 1

        $IPTABLES -t nat -I PREROUTING -p udp -i ppp0 --dport 54277 -j DNAT --to-destination $SRV_IP:54277
        $IPTABLES -I FORWARD -p udp -d $SRV_IP --dport 54277 -j ACCEPT

    MASQUERADING of internal traffic out via ppp0 (internet) is also active and working. The default policy on INPUT, OUTPUT and FORWARD is DROP. What is strange: when I run

        tcpdump -p -vvvv -i ppp0 port 54277

    I get a lot of traffic:

        18:35:43.646133 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip.54277: [udp sum ok] UDP, length 29
        18:35:43.652301 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip.54277: [udp sum ok] UDP, length 29
        18:35:43.653324 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip.54277: [udp sum ok] UDP, length 29
        18:35:43.655795 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip.54277: [udp sum ok] UDP, length 29
        18:35:43.656727 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip.54277: [udp sum ok] UDP, length 29
        18:35:43.659719 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip.54277: [udp sum ok] UDP, length 29

    With tcpdump -p -i eth0 port 54277 (on the same machine, the router) I see much less traffic. On the destination $SRV_IP only a few packets come in as well, but not all of them. Internal server:

        19:15:30.039663 IP source.ip.52394 > 192.168.215.4.54277: UDP, length 16
        19:15:30.276112 IP source.ip.52394 > 192.168.215.4.54277: UDP, length 16
        19:15:30.726048 IP source.ip.52394 > 192.168.215.4.54277: UDP, length 16

    So some UDP packets are being "ignored/dropped"? Any idea what could be wrong?

    Edit: This is strange: the FORWARD rule counts packets, but the PREROUTING rule shows 0 packets...

        iptables -nvL -t filter | grep 54277
        Chain FORWARD (policy DROP 0 packets, 0 bytes)
          168  8401 ACCEPT  udp -- * * 0.0.0.0/0 192.168.215.4  state NEW,RELATED,ESTABLISHED udp dpt:54277

        iptables -nvL -t nat | grep 54277
        Chain PREROUTING (policy ACCEPT 405 packets, 24360 bytes)
            0     0 DNAT    udp -- ppp0 * 0.0.0.0/0 my.external.ip  udp dpt:54277 state NEW,RELATED,ESTABLISHED to:192.168.215.4
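    A zero counter on the nat PREROUTING rule next to a counting FORWARD rule is actually normal behaviour: the nat table only ever sees the first packet of each flow, and every later packet just follows the conntrack entry that first packet created. The flip side is that a stale entry -- e.g. one created before the DNAT rule was loaded -- keeps steering "new" packets wherever it originally pointed. A hedged check with conntrack-tools (apt-get install conntrack on Debian 6):

        # Where does the kernel think this flow goes? The reply tuple should
        # show dst=192.168.215.4:
        conntrack -L -p udp --orig-port-dst 54277

        # If it points at the router itself, delete it; the next packet then
        # re-traverses PREROUTING and gets DNATed properly:
        conntrack -D -p udp --orig-port-dst 54277

    If the entries look right, the next suspect is rp_filter=1 on a multi-homed router, which silently drops packets whose reverse path doesn't match the arrival interface.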

    Read the article

  • Why is vCenter 5.1u1 exiting hosts from maintenance mode?

    - by Shane Madden
    This vCenter server was just upgraded to 5.1 update 1. I'm going through hosts and bringing firmware up to date, then upgrading them from various versions of 5.0 to 5.1u1. vCenter 5.1u1 seems to have an interesting new behavior: it's removing hosts from maintenance mode when they reconnect after being disconnected -- but very inconsistently; I've seen it maybe 4 or 5 times on ~25-30 host reboots. I've only seen it happen on 5.0 hosts that have not yet been upgraded to 5.1.

    In the image, I placed the host in maintenance mode and rebooted it into the HP SPP DVD's automatic update mode. After its usual ~40 minute update process, the host came back online... and 7 seconds before even logging that the host had reconnected, vCenter had sent the host a task to exit maintenance mode. In my understanding, the only time vCenter should drop a host out of maintenance mode is when vCenter put it into maintenance mode itself (such as a VUM upgrade task). Why would this vCenter be unilaterally exiting a host from user-initiated maintenance mode?

    Edit, additional info: I ran the firmware upgrades on 5 more hosts, all at the same time. Two of them exited maintenance mode after reconnecting, three did not. The common factor of those exiting maintenance mode seems to be how long they were offline; the two that took a few tries to boot to the virtual media are the two that got knocked out of maintenance mode:

        esx31 (image above):     45 minutes unresponsive
        esx19 (exited maint):    87 minutes unresponsive
        esx24 (stayed in maint): 32 minutes unresponsive
        esx29 (stayed in maint): 39 minutes unresponsive
        esx32 (stayed in maint): 30 minutes unresponsive
        esx34 (exited maint):    70 minutes unresponsive

    Edit: The disconnect-time idea seems to have been a red herring, as it's not happening consistently. Additionally, in the vpxd.log the exit-maintenance-mode task initiation seems to always immediately follow this vim.EnvironmentBrowser.queryProvisioningPolicy SOAP call. Here are the lines, slightly trimmed for clarity:

        15:27:49.535 [info 'vpxdvpxdVmomi'] [ClientAdapterBase::InvokeOnSoap] Invoke done (esx31, vim.EnvironmentBrowser.queryProvisioningPolicy)
        15:27:49.560 [info 'commonvpxLro'] [VpxLRO] -- BEGIN task -- esx31 -- HostSystem.exitMaintenanceMode --

    Note that on the nodes that don't get the exit task, the vim.EnvironmentBrowser.queryProvisioningPolicy event still occurs. I'm not seeing any other differences in events before or after this in the reconnect process, aside from the extra events caused by exiting maintenance mode. Given the log's mention of provisioning policies, looking for autodeploy-related maintenance mode issues turns up complaints about similar behavior (though I'm not using autodeploy at all).

    Read the article

  • How to Block a HTTP Website along with Its All Subdomain using IPTABLE

    - by netnovice
    I run a small HTTP web proxy site. We cannot modify anything in the proxy program itself. A few users mainly use Yahoo webmail for spamming, and we need to block Yahoo webmail access only (blocking the complete Yahoo website would also be OK) through our proxy -- specifically *.mail.yahoo.com. We need to block URLs like:

        http://uk-mg61.mail.yahoo.com
        http://in-mg61.mail.yahoo.com

    etc. Note: we generally open http://mail.yahoo.com in the browser, but after logging in it forwards to URLs like the above, all of which are subdomains of mail.yahoo.com. My plan: if we can get the full IP list for all available subdomains of mail.yahoo.com, I can block it totally. We can only use iptables. (I know that in the proxy itself we could check the HTTP Host header for .mail.yahoo.com and block it, but we can't touch the proxy.)

    What I did so far with iptables: I collected IP CIDR blocks for Yahoo, mainly for Yahoo webmail (mail.yahoo.com), as far as possible (using a Linux host and the whois command) -- like 66.163.160.0/19 and 98.136.0.0/14, etc. -- and applied rules like:

        iptables -A OUTPUT -p tcp -d 66.163.160.0/19 -m state --state NEW -j DROP

    Things are mostly working: users cannot access Yahoo mail. BUT the problem is that I need to keep the list of Yahoo CIDR blocks up to date, and I am ready to do that every week. I collected many from the net, but there are countless subdomains of mail.yahoo.com, and it seems Yahoo adds new IPs every week. What I observed is that users can sometimes bypass our rule, and the reason is obviously that not all the available IPs have been entered into iptables yet. What we need is to enter all the IPs of mail.yahoo.com -- but where do I find all its subdomains? I know we could get them from DNS, but I am not allowed to make DNS AXFR queries, and reverse DNS lookups would have performance issues. I want to know all subdomains of mail.yahoo.com -- can I get them from the Yahoo site? I have the list of all Yahoo SMTP IPs (http://public.yahoo.com/carloc/ymail.html), but I need the webmail IPs. Can you please share your ideas? Thank you
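    Since the constraint is "iptables only", chasing IP lists can be avoided altogether: the xt_string match can inspect packet payloads, and for plain HTTP (which is what this proxy handles) the Host header names the site regardless of which IP it resolves to. A hedged sketch -- whether the rules belong in OUTPUT or FORWARD depends on whether the proxy process runs on the firewall box itself:

        # Drop any outbound HTTP request whose payload carries a
        # mail.yahoo.com host, including every uk-mg61/in-mg61-style subdomain:
        iptables -A OUTPUT -p tcp --dport 80 -m string --algo bm \
            --string "Host: mail.yahoo.com" -j DROP
        iptables -A OUTPUT -p tcp --dport 80 -m string --algo bm \
            --string ".mail.yahoo.com" -j DROP

    The match is a plain substring search across the packet, so it cannot distinguish headers from body text (a small false-positive risk to weigh), and it does nothing for HTTPS, where the header is encrypted.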

    Read the article

  • Double MySQL Running on Mountain Lion

    - by Norris
    After a full restart, my Apache/PHP server doesn't connect to the local MySQL (I connect via 127.0.0.1, because localhost for some reason always fails). So I did this today:

        ➜  ~  mysqladmin shutdown -u root -p
        Enter password:
        ➜  ~  mysqladmin shutdown -u root -p
        Enter password:
        mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)'
        Check that mysqld is running and that the socket: '/tmp/mysql.sock' exists!

    which basically means I succeeded in shutting down MySQL. But as soon as I did, Apache/PHP successfully connects to MySQL and my local sites work without a hiccup -- until the next restart. Here are a few other details (as you can tell, I installed MySQL via brew):

        ➜  ~  sudo ps aux | grep mysql
        N  4774  0.0  0.0 2432768    620 s000 S+ 9:53AM 0:00.00 grep mysql
        N  4772  0.0  2.6 3030168 440688   ?? S  9:51AM 0:00.29 /usr/local/Cellar/mysql/5.6.13/bin/mysqld --basedir=/usr/local/Cellar/mysql/5.6.13 --datadir=/usr/local/var/mysql --plugin-dir=/usr/local/Cellar/mysql/5.6.13/lib/plugin --bind-address=127.0.0.1 --log-error=/usr/local/var/mysql/N.local.err --pid-file=/usr/local/var/mysql/N.local.pid
        N  4686  0.0  0.0 2433432   1000   ?? S  9:51AM 0:00.01 /bin/sh /usr/local/opt/mysql/bin/mysqld_safe --bind-address=127.0.0.1
        N  4362  0.0  2.7 3120276 458728   ?? S  9:47AM 0:00.45 mysqld

        ➜  ~  lsof -i | grep mysql
        mysqld  4362  N  16u  IPv6 0x76959e40691f9f93  0t0  TCP *:mysql (LISTEN)

    This is the weird thing:

        ➜  ~  killall -9 mysqld

    MySQL is dead! Apache doesn't connect. Then, when I run:

        ➜  ~  sudo mysqladmin shutdown -u root -p
        Enter password:

    Apache is (again) able to successfully connect to MySQL.

        ➜  ~  which mysqladmin
        /usr/local/bin/mysqladmin
        ➜  ~  whence -p mysql
        /usr/local/bin/mysql

    As far as I understand, this means I have two MySQL servers set up and both are trying to start at the same time, but I don't have the slightest idea how to fix it. I've tried brew reinstall, but that didn't help.
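    The ps output itself shows the race: a bare mysqld started at 9:47 and a second copy via brew's mysqld_safe at 9:51 -- two launch mechanisms fighting over the same port and socket. A hedged way to find and disable the extra one (the plist names are assumptions; yours may differ):

        # What does launchd know about?
        launchctl list | grep -i -e mysql -e mxcl
        ls /Library/LaunchDaemons ~/Library/LaunchAgents | grep -i -e mysql -e mxcl

        # Keep exactly one. E.g. drop a system-wide plist left by an older
        # MySQL package installer:
        sudo launchctl unload -w /Library/LaunchDaemons/com.mysql.mysql.plist
        # ...or drop the brew one:
        launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist

    The "localhost fails but 127.0.0.1 works" symptom fits the same picture: via localhost the client uses the Unix socket, and whichever server won the race may not own /tmp/mysql.sock. Compare php -i | grep default_socket against where the surviving mysqld actually puts its socket.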

    Read the article

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this; I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and on to port 7000 on x.x.x.218. Here is some ASCII art:

        |browser|----|Squid on x.x.x.224|----|ssh tunnel|----<satellite link>----|Squid on x.x.x.218|----|www|
                3128                    6999             7000                                         80

    When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save CPU) and turn on maximum compression (to save bandwidth), because it's a satellite link. Here's the ssh tunnel I've been using:

        ssh -C -f -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N

    The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel. Am I going about this the wrong way? Should I create the ssh tunnel on x.x.x.218? I use iptables to stop Squid on x.x.x.224 from going out on port 80 directly, feeding it through port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance.

    Regarding Eduardo Ivanec's question, here is a tcpdump -i any port 7000 -nn dump from x.x.x.218:

        14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0
        14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0

    Update 2: When I run the second command, I get the following error in my browser:

        ERROR: The requested URL could not be retrieved

        The following error was encountered while trying to retrieve the URL: http://109.123.109.205/index.php

        Zero Sized Reply -- Squid did not receive any data for this request.

        Your cache administrator is webmaster.
        Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9)

    (remote-site is 172.16.1.224.) When I do tcpdump -i any port 7000 -nn, I get the following:

        root@remote-site:~# tcpdump -i any port 7000 -nn
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
    Read the article

  • What do you use to store all of your personal data?

    - by codeflunky
    I have been on a quest for years to find the perfect tool to store all "my stuff". You know... personal information, code snippets, software keys, people's birthdays, whatever. There are lots of tools out there for this sort of thing, but I've never found any of them quite what I need. Ideally, I would just be able to type some notes, tag them (I don't like the idea of folder organization... too cumbersome) and then easily search and retrieve what I need later. It seems so simple, but for some reason I just can't find it.

    I currently use Backpack (sometimes), which is OK, but I hate the fact that you always have to create "pages" to store things. I don't want to have to do that. I want to just type some notes, tag it and save. That's it. And Backpack didn't even have search for a long time. What I do like about Backpack is that it's fast and it's web based. I've tried some desktop apps, which probably came closer to the functionality I want, but I just hate being tied to a single machine. I want to be able to get to my stuff anywhere, so the web based thing is a definite requirement.

    Anyway, I'm thinking about writing my own thing for this if I can't find anything, but before I make the attempt, I was wondering if anyone has any suggestions? I've used Backpack, Zoho Planner, Stikkit and Google Notes so far, and they are not quite to my liking. Anyone? (Sorry if this is off-topic, but I figured you guys might be legitimately into this kind of thing... you know, storing code snippets and such.)

    UPDATE: I've been using Evernote for a few days, and it is exactly what I've been looking for. It is totally tag based and allows both online and offline usage. The desktop app sits in your system tray and allows you to add whatever you want on the fly, either as text notes or clippings from the browser. It also syncs to the web (if you want), where you can get to it from anywhere using their web client. They even have a mobile client which I haven't used, but I will try it soon. Thanks again 18hrs. I wish I could give you 10 upvotes.

    Read the article

  • D-Link DIR-300 slows down / loses network

    - by basic6
    Hi there, there are two buildings (A and B). In building A there is an open WLAN (which I'm allowed to use, btw). In building B is a computer that I want to connect to that network. So I flashed an old D-Link DIR-300 AP with DD-WRT, mounted it on the wall of building B near a window, attached a 13 dBi directional antenna (pointing at building A) and configured it as a client of that wireless network. A second AP, connected to the D-Link, acts as a standard access point that the computer connects to. That's basically working so far, but:

    Every now and then the connection is lost. Not the connection between the computer and the D-Link (I can access the DD-WRT admin page normally) or the connection between the D-Link and the WLAN (Status > Wireless says it's still connected to the network) -- but when I want to access a web page (which only works if I'm connected to the wireless network from building A), my Firefox keeps "Looking for..." (name resolution) without finding anything. When I reset the D-Link (power off, power on) in this situation, after some moments everything works fine again (internet access). I have no idea why this happens, but usually it's at most every few weeks (mostly when nobody was using the computer, so no traffic).

    Compared to the connection speed when I connect directly to the WLAN in building A (laptop), the speed in building B is rather slow, and I have the impression the difference has got worse over the last few days. A few minutes ago I measured 582 KB/s down and 911 KB/s up in building A (directly, on the laptop) versus 84 KB/s down and 9 KB/s up in building B. The speed in building B used to be way higher (I remember 200 KB/s up) while the actual network speed in building A was lower than it is now (close to those 200). I'm aware that the wireless hop between the buildings should slow things down, but I'm wondering why the difference has become this extreme. Thanks for any tips...

    Update: I currently want to upload a large file (1.5 GB) via FTP (FileZilla). Since that caused the D-Link to disconnect (as described above), I took my laptop to building A, connected directly to the original WLAN (bypassing my D-Link) and tried the same upload. Guess what -- same issue: at some point the connection is dead (at this point I would have reset my D-Link if I were connected to it). Just like the D-Link, my laptop is still associated, but not even name resolution works ("Looking for..." in Firefox). After reconnecting, it works again. Maybe my D-Link isn't the problem at all...
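    Since the symptom is "still associated, but names never resolve", it's worth separating DNS failure from a genuinely dead path the next time it wedges -- the two point at different culprits (the WLAN's resolver or gateway vs. the radio link). A quick hedged check from the affected machine (8.8.8.8 is just a convenient public address):

        ping -c 3 8.8.8.8            # raw connectivity, no DNS involved
        dig @8.8.8.8 example.com     # name resolution via an explicit resolver

    If the pings get through while normal browsing doesn't, the building-A network's DNS (or its NAT table filling up under load, a common cheap-router failure during big uploads) is the problem, not the D-Link bridge.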

    Read the article

  • How to run Django 1.3/1.4 on uWSGI on nginx on EC2 (Apache2 works)

    - by Tadeck
    I am posting a question on behalf of my administrator. Basically he wants to set up a Django app (made on Django 1.3, but it will be moving to Django 1.4, so I hope it does not matter much which of the two works) on uWSGI on nginx, installed on Amazon EC2. The app runs correctly under Django's development server (with ./manage.py runserver 0.0.0.0:8080, for example), and Apache works correctly too. The only problem is with nginx; it looks like something is wrong with the nginx/uWSGI/Django configuration. His description is as follows.

    ProjectName.py:

        import os, sys, wsgi

        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ProjectName.settings")

        from django.core.wsgi import get_wsgi_application
        application = get_wsgi_application()

    I run uWSGI with the command:

        uwsgi -x /etc/uwsgi/apps-enabled/projectname.xml

    XML file:

        <uwsgi>
            <chdir>/home/projectname</chdir>
            <pythonpath>/usr/local/lib/python2.7</pythonpath>
            <socket>127.0.0.1:8001</socket>
            <daemonize>/var/log/uwsgi/projectname.log</daemonize>
            <processes>1</processes>
            <uid>33</uid>
            <gid>33</gid>
            <enable-threads/>
            <master/>
            <vacuum/>
            <harakiri>120</harakiri>
            <max-requests>5000</max-requests>
            <vhost/>
        </uwsgi>

    In the uWSGI logs:

        *** no app loaded. going in full dynamic mode ***

    In the nginx logs:

        XXX.com [pid: XXX|app: -1|req: -1/1] XXX.XXX.XXX.XXX () {48 vars in 989 bytes} [Date] GET / => generated 46 bytes in 77 msecs (HTTP/1.1 500) 2 headers in 63 bytes (0 switches on core 0)
        added /usr/lib/python2.7/ to pythonpath.
        Traceback (most recent call last):
          File "./ProjectName.py", line 26, in <module>
            from django.core.wsgi import get_wsgi_application
        ImportError: No module named wsgi
        unable to load app SCRIPT_NAME=XXX.com|

    Example tutorials that were used:

        http://projects.unbit.it/uwsgi/wiki/RunOnNginx
        https://docs.djangoproject.com/en/1.4/howto/deployment/wsgi/

    Do you have any idea what has been done incorrectly, or what should be done to make Django work on uWSGI on nginx on EC2?
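    Two things in those logs line up: "no app loaded. going in full dynamic mode" means the XML never tells uWSGI which module holds the WSGI callable, and "ImportError: No module named wsgi" is what Django 1.3 raises for django.core.wsgi, which only appeared in 1.4 (the stray "import wsgi" at the top of ProjectName.py would fail the same way). A hedged sketch of the adjustments:

        # In the XML, replace <vhost/> with an explicit module:
        #   <module>ProjectName</module>   <!-- loads ProjectName.py's "application" -->

        # In ProjectName.py, use the 1.3-compatible handler (still works on 1.4):
        #   import django.core.handlers.wsgi
        #   application = django.core.handlers.wsgi.WSGIHandler()

        # Then test uWSGI alone, with nginx out of the picture:
        uwsgi --http :8080 --chdir /home/projectname --module ProjectName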

    Read the article

  • Nginx with Passenger setup problems

    - by Kreeki
    I'm trying to set up the nginx web server with Passenger support for a Ruby on Rails application on Ubuntu 10.04 (on a sub-URI). All went fine until I tried to access the server/application from the browser. My installation of nginx is in /opt/nginx.

        # my nginx.conf
        server {
            listen 80;
            server_name www.mydomain.com;
            root /websites/site/public;
            passenger_enabled on;
            passenger_base_uri /site;

            location / {
                # added by default, I don't know if it's supposed to be here or not
                root html;
                index index.html index.htm;
            }
        }

    Then I started the server. But when I put www.mydomain.com/site in the browser, I get a 404 Not Found error. error.log shows this:

        2011/03/04 10:07:07 [error] 21387#0: *2 open() "/opt/nginx/html/favicon.ico" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71", referrer: "http://80.79.23.71/"
        2011/03/04 10:07:07 [error] 21387#0: *2 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71", referrer: "http://80.79.23.71/"
        2011/03/04 10:07:11 [error] 21387#0: *4 open() "/opt/nginx/html/favicon.ico" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71:80", referrer: "http://80.79.23.71:80/"
        2011/03/04 10:07:11 [error] 21387#0: *4 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71:80", referrer: "http://80.79.23.71:80/"
        2011/03/04 10:07:13 [error] 21387#0: *5 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:13 [error] 21387#0: *5 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:15 [error] 21387#0: *6 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:15 [error] 21387#0: *6 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:19 [error] 21387#0: *7 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:19 [error] 21387#0: *7 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"

    Why does nginx look for site in /opt/nginx/html/site, as the log shows, when a different root is set in nginx.conf? Any idea what's wrong with my setup?
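    The default location / block is the likely culprit: its own "root html" resolves relative to the install prefix as /opt/nginx/html and overrides the server-level Passenger setup -- exactly the path in the error log. A hedged minimal server block, with the sub-URI app symlinked into the document root the way the Passenger docs describe (the Rails app path is an assumption):

        server {
            listen 80;
            server_name www.mydomain.com;
            root /websites/site/public;
            passenger_enabled on;
            passenger_base_uri /site;
        }

    plus, on the shell:

        # Passenger finds a sub-URI app through a symlink named after the URI,
        # pointing at the app's public/ directory:
        ln -s /path/to/railsapp/public /websites/site/public/site
        /opt/nginx/sbin/nginx -s reload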

    Read the article

  • SMTP authentication error using PHPMailer

    - by Javier
    I am using PHPMailer to send a basic form to an email address, but I get the following error:

        SMTP Error: Could not authenticate.
        Message could not be sent.
        Mailer Error: SMTP Error: Could not authenticate.
        SMTP server error: VXNlcm5hbWU6

    (VXNlcm5hbWU6 is Base64 for "Username:" -- the prompt from the AUTH LOGIN exchange.) The weird thing is that if I try to send it again, IT WORKS! Every time I submit the form after that first error it works, but if I leave it for a few minutes and then try again, I get the same error again. The username and password have to be right, since it sometimes works fine. I even created the following (very basic) script just to test it, and got the same result:

        <?php
        require("phpmailer/class.phpmailer.php");

        $mail = new PHPMailer();
        $mail->IsSMTP();
        $mail->Host = "smtp.host.com";
        $mail->SMTPAuth = true;
        $mail->Username = "[email protected]";
        $mail->Password = "password";

        $mail->From = "[email protected]";
        $mail->FromName = "From Name";
        $mail->AddAddress("[email protected]");
        $mail->AddReplyTo("[email protected]");

        $mail->IsHTML(true);
        $mail->Subject = "Here is the subject";
        $mail->Body = "This is the HTML message body <b>in bold!</b>";
        $mail->AltBody = "This is the body in plain text for non-HTML mail clients";

        if (!$mail->Send()) {
            echo "Message could not be sent. <p>";
            echo "Mailer Error: " . $mail->ErrorInfo;
            exit;
        }
        echo "Message has been sent";
        ?>

    I don't think this is relevant, but I just changed my hosting to a Linux shared server. Any idea why this is happening? Thanks!

    UPDATE 02/06/2012: I've been doing some tests. The results: I tested the script on an IIS server and it worked fine; the error seems to happen only on the Linux server. Also, if I use the Gmail mail server it works fine on both IIS and Linux. Could it be a problem with the configuration of my Linux server?
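    Since the failure is intermittent and host-specific, reproducing the AUTH LOGIN exchange by hand from the Linux box takes PHPMailer out of the equation. A hedged manual session (the port and STARTTLS are assumptions about the host's setup; plain telnet to port 25 works the same way without TLS):

        # Pre-compute the Base64 the server expects after each prompt:
        printf '%s' '[email protected]' | base64
        printf '%s' 'password' | base64

        openssl s_client -starttls smtp -crlf -connect smtp.host.com:587
        # then type, one line at a time:
        #   EHLO test
        #   AUTH LOGIN
        #   <base64 username>    (the server prompts 334 VXNlcm5hbWU6)
        #   <base64 password>    (the server prompts 334 UGFzc3dvcmQ6)

    A 535 reply here would confirm the server, not PHPMailer, is rejecting the first attempt -- some servers rate-limit authentication from a new source, which would match "fails once, then works".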

    Read the article

  • USB ports causing Wireless and Mobile Phone tethering drop out

    - by chrolli
    I have a problem with my USB ports and I can't seem to pinpoint it. I'm hoping to get some ideas on how to troubleshoot it.

    Problem description: I recently bought a USB wireless adapter in order to connect my desktop PC to the WLAN. The dongle keeps dropping the connection regularly, on average once a minute, and the connection is slow.

    What I've tried:

        - I installed a network scanner to determine signal strength. The PC/adapter combo gets about 70% signal strength.
        - I have a laptop with an internal wireless adapter. I moved the laptop near the location of the PC: signal strength was above 90%.
        - I installed the USB adapter on the laptop and placed it near the location of the PC: signal strength was above 80%, and there was no dropout issue. So I've isolated the problem to the PC.
        - I used internet tethering on my mobile phone to test the USB ports. Same problem: the mobile phone keeps dropping in and out. I plugged the adapter and the phone into all USB ports, and the connection drops on all of them. So I've isolated the problem to the motherboard's USB hub.
        - I updated the BIOS and the USB driver. Still dropping out.

    I noticed that when I use the mobile phone tethering method, the phone (an iPhone 5) is guaranteed to drop out and stops charging if I jiggle the 8-pin connector at the base of the phone. If I push the connector tightly into the phone, it starts charging; as soon as I let go, it stops and drops off the PC. This is strange, because it doesn't happen when the phone is plugged into the power board -- which suggests the 8-pin connector itself is fine.

    I'm not sure what the problem is. My assumption is that the USB ports are not supplying enough power, which in turn causes the devices to intermittently drop out. I say this because the USB cable works fine when plugged into a power board with enough power supplied to the phone; only when plugged into a USB port on the PC does it drop out.

    My motherboard is an Asus P8Z68-V LE. My wireless adapter is the D-Link DWA-131. I can't find any BIOS setting to increase the south bridge voltage so as to supply more power to the USB ports. This problem only occurs when I'm trying to hook up a device supplying an internet connection -- my USB HDD, flash drives, mouse and keyboard all work fine! Any suggestions please?

    Read the article

  • Windows Server 2008 - one MAC Address, assign multiple external IP's to VirtualBoxes running as guests on host

    - by Sise
    Couldn't find any help on Google or here. The scenario: Windows Server 2008 Std x64 on an i7-975 with 12 GB RAM. The server is running in a data centre. One hardware NIC -- Realtek PCIe GBE -- so one MAC address. The data centre provides us 4 static external IPs. The first is assigned to the host by default, of course. I have ordered all 4 IPs; the data centre can only assign the available IPs to the physical MAC address of the given NIC. This means one NIC, one MAC address, 4 IPs. Everything works fine so far.

    Now, what I would like to have: VirtualBox installed with 1-3 guests running, each with its own external IP assigned. Each of them should be a standalone Windows Server 2008. It looks like the easiest way would be to put the guests into a virtual subnet and route all data coming in on the 2nd through 4th external IPs to the guests' subnet IPs. I have been through the VirtualBox user manual regarding networking. What's not working:

        - I can't use bridged networking by itself, because the IPs are assigned to the one MAC address only.
        - I can't use NAT networking, because it does not allow access from outside or from the host to the guest, and I do not want to use port forwarding.
        - Host-only networking by itself would not allow internet access; by sharing the host's default internet connection, internet is granted from the guest to the outside, but not from outside or the host to the guest.
        - Internal networking is not really an option here.

    What I have tried is to create an additional MS Loopback adapter for a routed subnet that the VirtualBox guests sit in; the idea was to NAT the internet connection to the loopback "subnet". But I can't ping the gateway from the guests. Using the route command in the command shell, or RRAS (static route, NAT), I didn't get there either. Solutions like the following work one way, but not the way back:

        "For your situation, it might be best to use the Host-Only adapter for ICS. Go to the preferences of VB itself and select network. There you can change the configuration for the interface. Set the IP address to 192.168.0.1, netmask 255.255.255.0. Disable the DHCP server if it isn't already, and that's it. Now the guest should get an IP from Windows itself and be able to get onto the internet, while you can also access the host."

    Slowly I'm getting pretty stuck with this topic. There is a possibility I've just overlooked something, especially when using RRAS, but it's hard to find useful howtos on this. Thanks in advance! Best regards, Simon
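    One pattern that fits these constraints: let the host own all four public IPs (they are bound to its MAC anyway), give each guest a host-only adapter, and let RRAS NAT each public address to one guest -- full inbound access without bridging. The VirtualBox half from the command line; adapter and VM names are illustrative:

        # Create and address the host-only network:
        VBoxManage hostonlyif create
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
        # (on Windows the interface shows up as "VirtualBox Host-Only Network")

        # Attach each guest to it, with a static IP inside, e.g. 192.168.56.11:
        VBoxManage modifyvm "guest1" --nic1 hostonly --hostonlyadapter1 vboxnet0

    The RRAS half is point-and-click: add the 2nd-4th public IPs to the physical NIC, enable NAT on it, and create an address-pool reservation mapping each public IP to one guest's 192.168.56.x address -- the reservations are what make traffic flow inbound, not just out.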

    Read the article

  • SMPS stops when I plug in a SATA drive?

    - by claws
Hello. Part 1: my first question is whether all the 4-wire power connectors (intended for hard disks/DVD drives, not the motherboard) are the same. I've been using them interchangeably and had no problem for years. Yesterday I borrowed a SATA disk from my friend, connected it to my computer using a SATA power adapter (4-wire), and when I switched the computer on, fumes came out of the connector. I immediately turned it off (within a second). I tested the voltages on the 4-wire power connector of my SMPS: they were 5.3V and 12.2V. I couldn't measure the current, but my SMPS label reads: DC Output: +3.3V (25A), +5V (32A), -5V (0.3A), +12V (17A), -12V (0.8A). And the SATA hard disk label reads: Input: +5V (0.72A), +12V (0.52A). I'm shocked! I never noticed this. Does the SATA power adapter scale the current down to what's required? If it doesn't, then I've been connecting things the same way for years and never had a problem; this is the first time I'm encountering it.

Part 2: I wanted to return the drive to my friend. He has two hard disks, SATA and PATA; it's the SATA one that I borrowed. When he usually switches on, the CPU fan starts, stops for a second, then starts again and keeps running. That was the earlier situation; I don't know why it stops and starts. Now, when I connect this SATA disk and switch on the computer, the CPU fan starts (just for an instant, not even 0.5 s) and stops. It doesn't start again; I mean the power from the SMPS has stopped. But if I disconnect this SATA disk, it works fine. What seems to be the problem? I've no idea why there were fumes, why his SMPS starts and stops supplying power, or what its relation is to the SATA disk connection.

    Read the article

  • Windows 7 x64 "upgrade" repair fails

    - by Polynomial
I've been running into issues with Windows Update which I can't seem to fix. The hotfixes don't work, nor does the Windows Update Readiness Tool or the manual SP1 upgrade. I get various esoteric errors which nobody seems to have a fix for. It looks like some of the update cache is corrupt and digital signatures seem to be broken on some packages / Windows Update components. Long story short, I have discovered the only option is a repair operation on the OS; it's so corrupt that only a complete replacement will fix it. According to various sources (including the MSKB), one can perform a repair by running an in-place upgrade. I've got the Windows 7 Ultimate retail disc, which I've inserted into my machine. I ran setup.exe and went through in the following order:

1. Click "Install now".
2. Choose "Go online to get latest updates" (I've also tried not getting updates).
3. Wait for the updates to be downloaded.
4. Select Windows 7 Ultimate (x64 architecture) and click Next.
5. Accept the T&Cs and click Next.
6. Click Upgrade.

At this point it spends a minute on the "checking compatibility" screen, after which I get the following error:

The following issues are preventing Windows from upgrading. Cancel the upgrade, complete each task, and then restart the upgrade to continue. You can't upgrade 64-bit Windows to a 32-bit version of Windows. To upgrade, obtain a 64-bit version of the installation disc, or go online to see how to install Windows 7 and keep your files and settings. 32-bit Windows cannot be upgraded to a 64-bit version of Windows. To upgrade, obtain a 32-bit version of the Windows installation disc.

It also mentions a warning about potential conflicts with a storage driver and VS2010, but that doesn't seem to be the blocking issue. My currently installed version of Windows is Ultimate 64-bit (absolutely sure of this) and the disc is definitely a combined x86/x64 Ultimate retail disc. A few people seem to have run into this (e.g. this question), but I've not seen any answers. I've checked the Event Viewer but can't spot anything related in there. Any idea how I can get this working? P.S: Just to pre-empt the inevitable "are you suuuuuuuuuuuuure it's x64 Ultimate?" questions:
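P.P.S. For reference, these are the integrity checks I ran before falling back to the in-place upgrade. A sketch, from an elevated command prompt; the CheckSUR log path is the standard location the readiness tool writes to:

:: System File Checker pass over protected system files
sfc /scannow

:: After running the System Update Readiness Tool, inspect its log
notepad %windir%\Logs\CBS\CheckSUR.log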

    Read the article

  • Windows 7 playback of dvr-ms files stutters

    - by Jim Lynn
I've just had to install Windows 7 on my Media Center machine because my Vista installation had a faulty drive. I've got the latest drivers that I can find: Intel 945GM integrated graphics and Realtek audio drivers. Things are working OK with one exception: playback of old recordings, from dvr-ms format files, is choppy. The picture freezes for a fraction of a second, then quickly catches up. The sound is uninterrupted and doesn't pause. These freezes happen about once every 5 seconds, very regularly. Playback of live TV from the digital tuner is perfectly smooth, as is DVD playback.

As an experiment, I used the MPEG editing package VideoReDo to create a small test file in three different formats. This program takes the raw MPEG streams and repackages them into the desired container. I took the same clip and created three files: dvr-ms (Microsoft's old recorded TV format), mpg (standard MPEG) and ts (a raw MPEG transport stream of the kind often produced by PVRs). When these three files are played back under Windows 7, the mpg and ts files play smoothly, but the dvr-ms file stutters. The last piece of data I have is that two other Windows 7 machines can play back dvr-ms files smoothly with no stuttering; one is a netbook, with less grunt than the Media Center machine. So there must be something specific to my Media Center machine that's causing the problem. Does anyone have any idea where I can look now? I don't know much about AV software, codecs, filter graphs etc., but I suspect that's where the problem lies. Rendering the video isn't the problem; extracting the streams is. How would I go about diagnosing this?

Edited to add: I just used the GraphStudio tool to look at the filter graph on the offending PC. The filter graph it uses by default for dvr-ms looks identical to the other machines', and, interestingly, when I play the files using GraphStudio they run smoothly. Under Windows Media Player and Windows Media Center they stutter. I'd like to see the filter graph Windows Media Player builds, but GraphStudio won't show it. It looks like Windows Media Player and WMC are using a different decoding path from GraphStudio.

Edited again to add: Today I purchased a new HDTV. The same Media Center driving the TV at 1080p now plays back the old Recorded TV files smoothly, without stuttering. So whatever the cause of the original problem, using a different resolution seems to have removed it. That might also explain why nobody else has had this problem; I doubt many people use Media Center with a 14in portable TV.

    Read the article

  • How to set up nginx and a subdomain

    - by Evolutio
I have GitLab installed on my server and it currently answers on all domains, e.g. git.lars-dev.de, lars-dev.de and *.lars-dev.de. How can I run GitLab only on git.lars-dev.de and another site on files.lars-dev.de? My lars-dev conf:

server {
    listen *:80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

    root /var/www/webdata/lars-dev.de/htdocs;
    index index.html index.htm;

    server_name lars-dev.de;

    location / {
        try_files $uri $uri/ /index.html;
    }

    #error_page 500 502 503 504 /50x.html;
    #location = /50x.html {
    #    root /usr/share/nginx/www;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

and the GitLab configuration:

upstream gitlab {
    server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen *:80; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
    server_name git.lars-dev.de; # e.g., server_name source.example.com;
    server_tokens off; # don't show the version number, a security best practice
    root /home/git/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from the defined root folder;
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a requested file is not found in the root folder,
    # proxy the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://gitlab;
    }
}
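For what it's worth, this is the kind of separate server block I assume files.lars-dev.de would need; the root path is just a placeholder for wherever that site actually lives:

server {
    listen *:80;
    server_name files.lars-dev.de;
    root /var/www/webdata/lars-dev.de/files; # placeholder path
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}

As far as I understand, when no server_name matches, nginx falls back to the default server for that listen socket (the first one defined, unless a block is marked with listen *:80 default_server), so I'm guessing GitLab currently answers everywhere because it is winning that fallback.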

    Read the article

< Previous Page | 738 739 740 741 742 743 744 745 746 747 748 749  | Next Page >