Search Results

Search found 58272 results on 2331 pages for 'apache log files'.

Page 205 of 2331

  • C# KeyEvent doesn't log the enter/return key

    - by Pieter888
    Hey all, I've been making a login form in C# and I want to submit the data as soon as the user either clicks Submit or presses the Enter/Return key. I've been testing a bit with key events but nothing has worked so far.

        void tbPassword_KeyPress(object sender, KeyPressEventArgs e) { MessageBox.Show(e.KeyChar.ToString()); }

    The above code was just to test whether the event works at all. It works perfectly: when I press 'd' it shows me 'd', when I press '8' it shows me '8', but pressing Enter doesn't do anything. I thought this was because Enter isn't really bound to a character, but Backspace shows up just fine, so I'm confused about why it doesn't register my Enter key. So the question is: how do I log the Enter/Return key, and why doesn't it log that key press right now like it should? Note: I've attached the event to a textbox,

        tbPassword.KeyPress += new KeyPressEventHandler(tbPassword_KeyPress);

    so it fires when Enter is pressed WHILE the textbox is selected (which it was the whole time, of course); maybe that has something to do with it.


  • Web Applications under Apache Tomcat with multiple directory contexts

    - by goran
    I have two webapps, prod-1.2.1.war and test-2.0.0.war. If I put these straight into the tomcat/webapps folder, they get deployed as:

        http://localhost/prod-1.2.1/
        http://localhost/test-2.0.0/

    This works, but I would really like them to show up as:

        http://localhost/vegshop/prod/
        http://localhost/vegshop/test/

    As you can see, I would like "vegshop" to be included in the context path, and I would also like the version numbering to disappear without having to rename the WAR files. Thank you. This is Apache Tomcat 6.0 under Linux 2.6, running Sun JDK 1.6.
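
    One possible approach, sketched under assumptions (the Tomcat and WAR paths below are examples, not taken from the question): Tomcat 6 treats a '#' in a context descriptor's filename as a '/' in the context path, and the descriptor's docBase may point at a WAR outside webapps/, so the original files never need renaming.

        # Assumes CATALINA_HOME=/opt/tomcat and the WARs kept outside webapps/, e.g. in /opt/wars
        printf '<Context docBase="/opt/wars/prod-1.2.1.war" />\n' | sudo tee '/opt/tomcat/conf/Catalina/localhost/vegshop#prod.xml' >/dev/null
        printf '<Context docBase="/opt/wars/test-2.0.0.war" />\n' | sudo tee '/opt/tomcat/conf/Catalina/localhost/vegshop#test.xml' >/dev/null
        # The descriptor filename, not the WAR name, now sets the context path,
        # so the apps deploy as /vegshop/prod and /vegshop/test after a restart.

    Keeping the WARs outside webapps/ avoids Tomcat also auto-deploying them under their original names.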


  • Webmin apache on CentOS 6.3 results in 403 forbidden, permissions are OK

    - by Mario De Schaepmeester
    First of all, I will mention that the permissions are fine for the document root directory, which is /webapps/nimbus/www/public_html. The www directory contains a PHP application; PHP is a problem for later if it doesn't work, as I've tested with a plain HTML file and that does not work either. I just get 403 Forbidden responses. The permissions are 755 on webapps and all subdirectories. I've checked other questions here and on the internet, but they were all about those permissions. Whatever info you still need, just ask; I don't know what's relevant, as this is the first time I've ever used Webmin or configured Apache.
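
    A diagnostic sketch, assuming a stock CentOS layout (the log path below is the default, not taken from the question): with correct directory modes, a 403 is usually a missing execute bit higher up the path, a restrictive Directory block, or SELinux objecting to a document root outside /var/www.

        # Check every component of the path for the execute (search) bit and its owner
        namei -mo /webapps/nimbus/www/public_html

        # The error log states the exact reason for the 403
        tail -n 20 /var/log/httpd/error_log

        # If SELinux is enforcing, a docroot outside /var/www needs the httpd content label
        getenforce
        ls -Zd /webapps/nimbus/www/public_html
        semanage fcontext -a -t httpd_sys_content_t "/webapps(/.*)?"   # needs policycoreutils-python
        restorecon -Rv /webapps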


  • Apache + Bind Problems

    - by Gabriel
    Hello, I am using VirtualMin on Debian-50-lenny-64-LAMP (Debian Linux 5.0). I've upgraded some packages, including bind. Since the upgrade, both Apache and BIND have stopped working. Here are the errors I get:

        Starting web server: apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 78.46.92.11 for ServerName
        (98)Address already in use: make_sock: could not bind to address [::]:80
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs
        failed!

    and

        Failed to start BIND : Unknown error

    I am sure that some files were changed by the upgrade and that this is a simple problem to solve, but it's the first time I've been in this situation and I just couldn't find a solution. I've Googled the errors but still couldn't make it work. Now I'm sorry I did the update; I usually update to keep the latest versions of the packages installed on the server. Any ideas?
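
    A hedged first step with standard Debian tools (not from the original post): "Address already in use" on port 80 normally means a stale apache2 instance, or some other process, is still bound to it, and the upgraded BIND is best checked against its own configuration before restarting.

        # What is already listening on the web and DNS ports?
        netstat -tlnp | grep -E ':(80|53) '
        fuser -v 80/tcp

        # Stop any stale instance cleanly, then start fresh
        /etc/init.d/apache2 stop; sleep 2; /etc/init.d/apache2 start

        # Validate the upgraded BIND configuration and look at its own log lines
        named-checkconf /etc/bind/named.conf
        grep named /var/log/daemon.log | tail -n 20
        /etc/init.d/bind9 restart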


  • How to log out with a command in a gnome-less environment?

    - by octosquidopus
    I installed various window managers (Awesome, dwm, etc.) from which I am not able to log out back to the login screen in order to switch to another window manager. I have to reboot to do that, which is a waste of time. Question: how can you log out via the terminal? This didn't work:

        dbus-send --session --type=method_call --print-reply --dest=org.gnome.SessionManager /org/gnome/SessionManager org.gnome.SessionManager.Logout uint32:1

    Neither did this:

        gnome-session-save --force-logout

    Nor that:

        gnome-session-quit --force-logout

    They all returned:

        Failed to call logout: The name org.gnome.SessionManager was not provided by any .service files

    Is there a quick way to log out back to GNOME's session manager from a non-GNOME window manager using a terminal emulator? I know that Ctrl+Alt+Backspace can be configured to restart X, but I'm looking for the easiest way to log out.
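
    A blunt but workable fallback, assuming Ubuntu 12.04's default LightDM display manager (use gdm instead if that is what is installed): the gnome-session commands only work inside a running GNOME session, so from another window manager the display manager itself can be restarted, which ends the current session and returns to the greeter.

        # Closes the running X session (and any open programs) and shows the login screen
        sudo service lightdm restart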


  • Ubuntu 12.04 graphics crashing when GVIM opens TEX files

    - by Pdp Molniya
    I am having a little problem every time I open GVIM to edit *.tex files: the menus die, windows jiggle (maximize and minimize quickly), and I get an 'internal error' crash report from Ubuntu 12.04. It says the problem is at /usr/lib/unity/unity-panel-service. Any tips on how to solve this? It might be related to vim's LaTeX package. Also, when I launch gvim from a terminal (with or without a .tex file) I get these messages:

        (gvim:5915): Gnome-WARNING **: Accessibility: failed to find module 'libgail-gnome' which is needed to make this application accessible
        (gvim:5915): GLib-GObject-WARNING **: cannot retrieve class for invalid (unclassed) type `'

    The issue is independent of the theme, I just checked. Thanks a lot for your help! Cheers, Pedro


  • Intermittent FTP login issues (Microsoft IIS FTP Service)

    - by JaggenSWE
    I've got a somewhat weird problem which I'm not sure how to troubleshoot. We have an FTP server running on a Windows Server 2003 machine using the IIS FTP Service; it is for our clients and is configured with IP restrictions. Now ONE of the clients has started complaining that they can't log in to the server from time to time. Only one of 10+ clients has this issue, which makes me think it's a problem on their side. Just to be on the safe side I had a peek into the FTP logs and found something strange. Whenever they succeed in logging in, this is what I find in the logs:

        nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:22:32, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 331, 0, [191747]USER, userxxx, -,
        nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:22:32, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 230, 0, [191747]PASS, -, -,

    However, if the login fails I see the following events:

        nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:16:33, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 331, 0, [191739]USER, userxxx, -,
        nnn.nnn.nnn.70, -, 2012-06-11, 09:16:33, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 530, 1326, [191739]PASS, -, -,

    In the successful login, the event where the client sends PASS still knows that it is in fact "userxxx" that the PASS belongs to, but when it fails the user in the PASS event is just "-". Anyone have any ideas? Any help would be appreciated. :) //JaggenSWE


  • Server Crash Diagnosis... Are there any 'black box recorder' style programs available?

    - by columbo
    My Red Hat server is crashing every three weeks or so at around 4:15am on Sunday mornings (well, it was Sundays; the last two have been Thursday mornings at about 4:15). Looking at the logs (mysql, httpd, messages) there are no clues as to why; they just seem to stop. I ran a little script to take memory readings every 15 minutes and it too stops (with normal readings) at this time. The server is remote at a provider, so I can only access it via the web, and I use Plesk. It appears to be a scheduled job or something like that causing the issue, but I can see nothing in crontab. So my question is: has anyone else had this and can offer advice? Failing that, does anyone know of a way to get more detailed logging than the messages file offers? I was thinking of a black-box-style recording program, or maybe something as simple as an option somewhere to increase the level of reporting in the messages log. Thanks
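
    One hedged option using standard Red Hat packages (log paths are the sysstat defaults): the sysstat package's sar is essentially the "black box recorder" asked about here; it samples CPU, memory, load and I/O every few minutes and keeps the history on disk, so the minutes leading up to 4:15 can be replayed after a crash.

        # Install and enable periodic collection (runs from /etc/cron.d/sysstat)
        yum install sysstat
        chkconfig sysstat on && service sysstat start

        # After the next crash, replay the figures around the incident,
        # e.g. for the 7th of the month between 04:00 and 04:30
        sar -r -f /var/log/sa/sa07 -s 04:00:00 -e 04:30:00   # memory
        sar -q -f /var/log/sa/sa07 -s 04:00:00 -e 04:30:00   # load average / run queue

    It may also be worth noting that a stock Red Hat /etc/crontab runs /etc/cron.daily at 04:02 and /etc/cron.weekly at 04:22 on Sundays, suspiciously close to the crash time, so the scripts in those directories and Plesk's own scheduled tasks deserve a look even though crontab itself shows nothing.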


  • Apache: Assign SSL server / client certs to directories

    - by Daniel Amaya
    I have multiple directories on my system, e.g.:

        /var/www/dir1
        /var/www/dir2
        /var/www/dir3

    What I'd like to do is generate a server/client SSL certificate pair for each directory, and then set up each directory so that the client cert must match that directory's server cert in order to access it. So if someone has the client cert for /var/www/dir2 and they try to access /var/www/dir1, they will be unable to do so, since those directories use different certs. Each of these directories is hosted on the same domain (i.e., domain.com/dir1, domain.com/dir2). The problem I am having is that I am not exactly sure how to accomplish this in Apache. (Also, I don't need domain.com itself to require SSL, but I do want the directories to require it.)


  • Three apps going through apache. How to configure apache httpd?

    - by Chris F.
    I have a quick question but I've been struggling to find the best solution. I have two Java webapps and WordPress (PHP) that I need to serve through my production website:

    App #1 should be accessed when pointing to www.example.com/ (this would have other URLs too, such as www.example.com/book).
    App #2 should be accessed when pointing to www.example.com/manage.
    Finally, WordPress would be accessed at www.example.com/info.

    How can I configure Apache to serve all three instances at the same time? So far I have the following, and it's not quite working right. Any suggestions would be much appreciated!

        Listen 8081
        <VirtualHost *:8081>
            DocumentRoot /var/www/html
        </VirtualHost>

        ProxyPass /manage http://127.0.0.1:8080/manage
        ProxyPassReverse /manage http://127.0.0.1:8080/manage
        ProxyPass /info http://127.0.0.1:8081/info
        ProxyPassReverse /info http://127.0.0.1:8081/info
        ProxyPass / http://127.0.0.1:9000/
        ProxyPassReverse / http://127.0.0.1:9000/


  • Apache - mod_pagespeed freezes my website

    - by Jonathan Rioux
    I have installed the mod_pagespeed module for Apache. I am using Debian, so I downloaded the .deb file and installed it successfully. I then configured some filters and it worked like a charm for a few minutes. After something like 10 minutes, my website no longer responded to requests: the browser just said "Waiting for www.blablabla.com" and I never got the page back from the server. I checked the processes running on my Debian box with top -d 0.5 and nothing is eating up the CPU. To make my website respond to requests again, I must do /etc/init.d/apache2 restart. Then it works again, with mod_pagespeed applying its filters for a couple of minutes, and then no more responses again. How can I diagnose this issue? Are there other settings in the mod_pagespeed.conf file that I must change?
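
    A hedged way to gather evidence while the hang is happening (default Debian log paths assumed; "pagespeed" is the module name as installed by the mod-pagespeed .deb):

        # Confirm the module is loaded and watch its messages while the site hangs
        apachectl -M | grep -i pagespeed
        tail -f /var/log/apache2/error.log

        # Rule the module in or out quickly by disabling it and restarting Apache
        a2dismod pagespeed && /etc/init.d/apache2 restart
        # (re-enable later with: a2enmod pagespeed && /etc/init.d/apache2 restart)

    If the hangs stop with the module disabled, the cache and fetch settings in mod_pagespeed.conf are the place to look; if they continue, the module is probably not the culprit.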


  • Apache will not start after IP change

    - by Doron
    I'm running CentOS 5.8 and I had to change my server's IP address. Afterwards, I'm unable to start Apache. I am also running Virtualmin. The error I'm receiving is:

        Failed to start service
        Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 184.106.146.125 for ServerName
        (99) Cannot assign requested address: make_sock: could not bind to address 50.56.33.100:8080
        no listening sockets available, shutting down
        Unable to open logs

    I edited my httpd.conf to point to the new IP address, like so:

        #Listen 12.34.56.78:80
        Listen 184.106.146.125:80

    Yet looking at the error, it still seems to be referring to the old IP address (50.*).
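
    A hedged way to find where the old address is still configured (standard CentOS/Virtualmin locations assumed): the failing bind is on port 8080, which points to an extra Listen or VirtualHost line, often in an included Virtualmin file, that still carries the old IP.

        # Dump the parsed virtual host setup and search every Apache config file for the old IP
        httpd -S
        grep -rn "50.56.33.100" /etc/httpd/

        # Virtualmin keeps settings of its own; search those too
        grep -rn "50.56.33.100" /etc/webmin/ 2>/dev/null | head

        # After fixing the leftover entries, re-test the config and start the service
        httpd -t && service httpd start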


  • VisualSVN Server, Windows 7 and Apache Problem

    - by Ash
    I am running VisualSVN Server (with Apache) on a Windows 7 computer and network. About 15-20 minutes after my first commit/update, I am unable to access the repository via TortoiseSVN. The error message I get is:

        OPTIONS of "https://jason/svn/repository1": could not connect to server (https://jason)

    Restarting the VisualSVN Server service sometimes helps, but quite often it doesn't. The only sure way to get it working again is to restart the computer. The server (https://jason) is also not accessible via the browser when I get this error. 1) I tried reinstalling Windows 7, VisualSVN Server and TortoiseSVN, but I still keep getting this error. 2) I searched several forums but I don't seem to be able to find an answer. Please help.


  • Retrieve malicious IP addresses from Apache logs and block them with iptables

    - by Gabriel Talavera
    I'm trying to keep away some attackers who try to exploit XSS vulnerabilities on my website. I have found that most of the malicious attempts start with a classic "alert(document.cookie);" test. The site is not vulnerable to XSS, but I want to block the offending IP addresses before they find a real vulnerability, and also to keep the logs clean. My first thought is to have a script constantly checking the Apache logs for all IP addresses that send that probe, and feed those addresses to an iptables DROP rule, with something like this:

        cat /var/log/httpd/-access_log | grep "alert(document.cookie);" | awk '{print $1}' | uniq

    What would be an effective way to send the output of that command to iptables? Thanks in advance for any input!
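
    A minimal sketch of one way to wire that pipeline into iptables (the log path is the one quoted above; the INPUT-chain layout is an assumption). sort -u catches repeat offenders that are not adjacent in the log, and the existence check avoids stacking duplicate rules on every run:

        #!/bin/bash
        # Run periodically from cron as root
        LOG=/var/log/httpd/-access_log

        grep "alert(document.cookie);" "$LOG" | awk '{print $1}' | sort -u |
        while read -r ip; do
            # Skip addresses that already have a rule, then drop the rest
            if ! iptables -n -L INPUT | grep -qw "$ip"; then
                iptables -I INPUT -s "$ip" -j DROP
                logger "iptables: dropped $ip after XSS probe"
            fi
        done

    For a permanent setup, fail2ban with a custom filter does essentially the same thing and also expires the bans after a while.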


  • W3C Markup Validator on Windows 2003 with Apache

    - by rihatum
    Hi all,

        OS: Windows 2003 (latest SP / hotfixes etc.)
        Perl: ActivePerl 5.8.9 Build 825
        Apache 2.2.11

    I followed this how-to: http://validator.w3.org/docs/install_win.html and am facing the following error (I had an HTML error too, but I used Perl Package Manager to upgrade the required package; now the package manager isn't showing any update for the following package, and some others too):

        SGML::Parser::OpenSP version 0.991 required--this is only version 0.99 at C:/www/validator/httpd/cgi-bin/check line 61.

    Q: How can I download the latest package for OpenSP?
    Q: Would it just be a matter of clicking and installing the package? If someone can provide a step-by-step, that would be very helpful; I am not fluent with building Perl packages. Thanks and regards


  • Improve Log Exceptions

    - by Jaider
    I am planning to use log4net in a new web project. In my experience I have seen how big the log table can get, and I notice that errors and exceptions are repeated. For instance, I just queried a log table that has more than 132,000 records; using DISTINCT I found that only about 2,500 records are unique (~2%), and the other ~98% are just duplicates. So I came up with this idea to improve logging: add a couple of new columns, counter and updated_dt, that are updated every time an attempt is made to insert the same record. If I want to track the users that caused the exception, I would need a user_log or log_user table to map the N-N relationship. This model could make the system slow and inefficient if it has to compare all that long text, so here is the trick: also add a binary hash column of 16 or 32 bytes that hashes the message plus the exception, and put an index on it; HASHBYTES can help with that. I am not a DB expert, but I think that is the fastest way to locate a similar record. Because hashing doesn't guarantee uniqueness, the hash only narrows the search down, and the message or exception is then compared directly to make sure the match really is the same record. This is a theoretical/practical solution, but will it work, or will it just add complexity? What aspects am I leaving out, and what other considerations should I have? A trigger would do the job of insert-or-update, but is a trigger the best way to do it?


  • Apache only logs PHP errors if LogLevel is set to debug

    - by Sudowned
    I'm developing a CodeIgniter application, and for reasons I do not fully understand, errors have stopped being logged to the file specified in the Apache site conf. The page I'm testing definitely generates a 500 error, but it is not reflected in the logs unless I set LogLevel debug; setting LogLevel to error or warn results in no errors being logged at all. I don't think this is a CI issue, because I've been developing this site for close to a week and errors were logged as expected until I picked the project up again this morning. For what it's worth, I've got error_reporting(E_ALL); set in my index.php.
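
    A hedged checklist for narrowing this down (the log file names below are placeholders, not taken from the post): the text behind a PHP-generated 500 is normally written by PHP's own error logging rather than governed by Apache's LogLevel, so it is worth confirming which ErrorLog the site really uses and where PHP currently sends its errors.

        # Which virtual host and ErrorLog actually serve this site?
        apache2ctl -S
        grep -Rn "ErrorLog\|LogLevel" /etc/apache2/sites-enabled/

        # Where is PHP logging, and is logging enabled at all?
        # (the CLI php.ini can differ from Apache's; phpinfo() in the browser is authoritative)
        php -i | grep -E 'error_log|log_errors|display_errors'

        # Watch the candidate logs while reproducing the 500
        # (mysite-error.log stands in for whatever ErrorLog the site conf names)
        tail -f /var/log/apache2/error.log /var/log/apache2/mysite-error.log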


  • Limit the size of a directory by deleting old files

    - by Sulliwane
    I have an IP cam which saves its recordings to a directory named Camera1 on my Ubuntu Server 12.04. I would like to limit the size of this folder to 5 GB by deleting, say once a day, the oldest files. I first looked at the quota program, but it doesn't seem to allow creating new files while deleting the old ones. So I think the best workaround would be to run a bash script? But I have no idea how to write it... Thank you guys!
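
    A minimal sketch of such a script (the recordings path is an assumption; adjust the path and limit as needed). It removes the oldest files one at a time until the directory is back under 5 GB, and can be run nightly from cron:

        #!/bin/bash
        # prune_camera1.sh - keep the recordings directory under 5 GB
        DIR=/srv/Camera1                      # assumed location of the Camera1 folder
        LIMIT=$((5 * 1024 * 1024 * 1024))     # 5 GB in bytes

        while [ "$(du -sb "$DIR" | awk '{print $1}')" -gt "$LIMIT" ]; do
            # Oldest file by modification time
            oldest=$(find "$DIR" -type f -printf '%T@ %p\n' | sort -n | head -n 1 | cut -d' ' -f2-)
            [ -n "$oldest" ] || break
            rm -f -- "$oldest"
        done

    A crontab entry such as 30 3 * * * /usr/local/bin/prune_camera1.sh would run it every night at 03:30.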


  • How to nest a Location directive inside a virtual host config?

    - by Josh
    I am trying to nest a Location directive inside a virtual host config like this:

        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot /home/deployer/apps/mysite/current/public
            ErrorLog /var/log/prod.log
            <Location "/shop">
                DocumentRoot /home/deployer/apps/mysite_shop/current/public
                ErrorLog /var/log/prod.log
            </Location>
        </VirtualHost>

    What I want to do is have mysite.com/shop point to another application. Is this possible? Is there another way of doing this? I get an error because apparently Location directives do not accept DocumentRoot. Thanks.


  • Keeping files private on the internet (.htaccess password or software/php/wordpress password)

    - by jiewmeng
    I was asked a while ago to set up a server such that only authenticated users can access files. It was a test server for clients to view WIP sites. More recently, I want to do something similar for some of my own files. Though they are not very confidential, I would like to be the only one viewing them. I thought of doing the same as before: create a robots.txt,

        User-agent: *
        Disallow: /

    and set up some password protection. .htpasswd seems like a very ugly way to do it, and it prompts me even when I log in over FTP. I wonder whether a software method like password-protected posts in WordPress would do the trick of locking out the public and hiding content from search engines? Or would some self-made PHP script do the trick?
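
    For comparison, a hedged sketch of the .htaccess route (paths and the user name are made up for the example): the credentials only apply to HTTP requests served by Apache, so an FTP login is not normally affected by them, and robots.txt merely asks polite crawlers to stay away while the authentication is what actually keeps content away from the public.

        # Create the password file outside the web root and add one user
        htpasswd -c /home/me/.htpasswd me

        # Protect a directory (requires AllowOverride AuthConfig for that directory)
        printf '%s\n' 'AuthType Basic' 'AuthName "Private files"' \
            'AuthUserFile /home/me/.htpasswd' 'Require valid-user' \
            > /var/www/private/.htaccess

    A WordPress password-protected post or a small PHP login script achieves the same effect at the application level; either way, the protection has to come from the server or the application, not from robots.txt.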


  • supervise/daemontools conflicts with apache -D FOREGROUND

    - by Kevin G.
    Hoping that somebody can help us understand this behavior. We've got a bunch of daemontools services under /etc/service/. One of the services controls Apache, and its run script contains this:

        exec envdir /var/lib/supervise/wwwproxy/env setuidgid root bash <<-BASH
            ulimit -n 8192 # also increase the running user's file descriptor limit
            exec apache2 -f /path/to/demo_apache2.conf -D FOREGROUND
        BASH

    We were having the problem that svc -d /etc/service/* actually had the effect of restarting all the services rather than taking them down. We finally tracked it down to that one service, and found that svc -d /etc/service/apache2 would bring up any other service that was down, including itself. Changing FOREGROUND to NO_DAEMONIZE fixes the behavior, but we'd really like to understand what's going on. Can anybody explain why svc -d on one service would bring another service up? Thanks for any clue you can offer.


  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help analyzing a log from Apache Bench:

        Benchmarking texteli.com (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:
        Server Hostname:        texteli.com
        Server Port:            80

        Document Path:          /4f84b59c557eb79321000dfa
        Document Length:        13400 bytes

        Concurrency Level:      200
        Time taken for tests:   37.030 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      13524000 bytes
        HTML transferred:       13400000 bytes
        Requests per second:    27.01 [#/sec] (mean)
        Time per request:       7406.024 [ms] (mean)
        Time per request:       37.030 [ms] (mean, across all concurrent requests)
        Transfer rate:          356.66 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       27   37   19.5     34     319
        Processing:    80 6273 1673.7   6907    8987
        Waiting:       47 3436 2085.2   3345    8856
        Total:        115 6310 1675.8   6940    9022

        Percentage of the requests served within a certain time (ms)
          50%   6940
          66%   6968
          75%   6988
          80%   7007
          90%   7025
          95%   7078
          98%   8410
          99%   8876
         100%   9022 (longest request)

    What can these results tell me? Isn't 27 rps too slow?
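
    To put the numbers in context: at a concurrency of 200, 27 requests per second works out to roughly 7.4 s per request, which matches the mean "Time per request" above, so the server is saturated at that load. A hedged way to see where it saturates (standard ab options only; the URL is the one from the run above) is to repeat the benchmark at increasing concurrency and compare:

        # Compare throughput and the 95th percentile at several concurrency levels
        for c in 10 50 100 200; do
            ab -n 1000 -c "$c" -g "ab_c${c}.tsv" http://texteli.com/4f84b59c557eb79321000dfa > "ab_c${c}.txt"
            grep -E 'Requests per second|^ *95%' "ab_c${c}.txt"
        done

    If throughput stays near 27 req/s even at low concurrency, the page itself is the bottleneck; if it climbs, the earlier run was limited by how the server copes with 200 simultaneous connections.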


  • Recovering user files with a Live CD

    - by user33617
    For some reason my machine isn't booting; I get an error akin to "Operating System Not Found". I tried Boot-Repair and that didn't work, so I decided I would just save my personal files, wipe everything, and reinstall. Except that when I go to the /home directory, my username's folder isn't there; instead I see the live CD's own desktop and file folders. Is there some other error occurring? Is there a way to recover the files?
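
    A hedged sketch of how to reach the installed system's files from the live session (device names are assumptions): a live CD's /home belongs to the live environment itself, so the installed partition has to be found and mounted explicitly before the personal files become visible.

        # List the partitions the live session can see
        sudo fdisk -l
        sudo blkid

        # Mount the likely Linux partition (replace sda1 with the one found above)
        sudo mkdir -p /mnt/oldroot
        sudo mount /dev/sda1 /mnt/oldroot

        # The personal files should then be under the mounted partition, not the live /home
        ls /mnt/oldroot/home

    From there the files can be copied to an external drive before reinstalling. If the mount itself fails, the partition may be damaged, which would also explain the "Operating System Not Found" error.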


  • How to shrink Windows partition with unmovable files in dual boot installation

    - by Tim
    To install Ubuntu alongside Windows 7, I have to shrink the Windows 7 C: partition. But due to some unmovable files, I cannot shrink it as much as I planned using Windows' own shrinking tool. I guess many of you who have both OSes on the same hard drive must have had a similar experience. How can I solve this problem? Any reference that can help is also appreciated! Thanks and regards!

    UPDATE: I have identified which unmovable file currently stops further shrinking:

        \ProgramData\Microsoft\Search\Data\Applications\Windows\Projects\SystemIndex\Indexer\CiFiles\00010015.wid::$DATA

    If I understand correctly, the file belongs to Windows Search. Is there somewhere in the Windows system settings where I can temporarily eliminate this file and similar ones? (There are many similar files under the same directory which I guess will also stand in the way of shrinking and cannot be moved by defrag.)

