Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.

  • Mod_Proxy_AJP set up issues

    - by TripWired
    I'm trying to set up Tomcat behind Apache using mod_proxy_ajp. After tons of messing around with the configs I am stuck at a 403 page when trying to access Tomcat. I had a 404 before, but apparently something I changed along the way fixed that. I'm not sure which setting to change at this point. Could anyone look over the configs I have and see if anything is missing?

    httpd.conf:

        <IfModule mod_proxy.c>
        ProxyRequests Off
        <Proxy *>
        Order deny,allow
        Deny from all
        Allow from localhost
        </Proxy>

    proxy_ajp.conf:

        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        #
        # When loaded, the mod_proxy_ajp module adds support for
        # proxying to an AJP/1.3 backend server (such as Tomcat).
        # To proxy to an AJP backend, use the "ajp://" URI scheme;
        # Tomcat is configured to listen on port 8009 for AJP requests
        # by default.
        #
        # Uncomment the following lines to serve the ROOT webapp
        # under the /tomcat/ location, and the jsp-examples webapp
        # under the /examples/ location.
        #
        ProxyPass /tomcat ajp://127.0.0.1:8009/
        ProxyPassReverse /tomcat ajp://127.0.0.1:8009/
        ProxyPass /examples/ ajp://localhost:8009/jsp-examples/
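
    One thing worth checking, as a sketch only: the <Proxy *> block above permits proxied requests only from localhost, so in Apache 2.2 any remote browser would get a 403 on the proxied paths. A minimal variant that opens the proxy to all clients (assuming that is acceptable for this server):

        <IfModule mod_proxy.c>
        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        </IfModule>

    Note also that mod_proxy matching is sensitive to trailing slashes; keeping source and target consistent (e.g. ProxyPass /tomcat/ ajp://127.0.0.1:8009/) avoids surprises.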

  • MySQL per-database replication?

    - by LucasBr
    So, my problem is interesting: we want to migrate from one server to another. We set up master-slave replication, but my boss came up with the idea of migrating one database at a time. He asked me to set up another MySQL instance on the new server, leave the slave almost as-is, and make the new instance the new master incrementally, one database at a time. Is this possible? That is, can I transfer database 'x' from the old master to the new master and just tell the slave to synchronize 'x' from the new master from now on? I've read in this old thread ( Mysql Replication - are per-database threads possible? ) that this was not possible at that time. Can it be done now? Thanks! Lucas Bracher.
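
    For context: MySQL of this era allows a slave exactly one master (multi-source replication arrived much later), so a slave cannot follow 'x' on the new master while following everything else on the old one. What replication does offer is per-database filtering on the slave, which is how one-database-at-a-time cutovers are usually approximated. A sketch, with the database name as a placeholder:

        # my.cnf on the old slave: stop applying changes for the database
        # that has been cut over to the new master
        [mysqld]
        replicate-ignore-db = x

    Caveat: with statement-based replication the replicate-*-db filters match the *default* database of each statement, so cross-database statements need care.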

  • rpm build from src file

    - by danielrutledge
    Hi all, I'm trying to build from a *.src.rpm file on FC 12 in such a way that the files are distributed across my system as they would be with a normal binary install (in this case, *.h files end up in /usr/include). When I ran the build, the headers weren't present. Here's the command I ran:

        [root@localhost sphirewalld]# rpm -ivv /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm
        ============== /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm
        Expected size: 489395 = lead(96)+sigs(180)+pad(4)+data(489115)
        Actual size: 489395
        loading keyring from pubkeys in /var/lib/rpm/pubkeys/*.key
        couldn't find any keys in /var/lib/rpm/pubkeys/*.key
        loading keyring from rpmdb
        opening db environment /var/lib/rpm/Packages cdb:mpool:joinenv
        opening db index /var/lib/rpm/Packages rdonly mode=0x0
        locked db index /var/lib/rpm/Packages
        opening db index /var/lib/rpm/Name rdonly mode=0x0
        read h# 931 Header sanity check: OK
        added key gpg-pubkey-57bbccba-4a6f97af to keyring
        read h# 1327 Header sanity check: OK
        added key gpg-pubkey-7fac5991-4615767f to keyring
        read h# 1420 Header sanity check: OK
        added key gpg-pubkey-16ca1a56-4a100959 to keyring
        read h# 1896 Header sanity check: OK
        added key gpg-pubkey-a3a882c1-4a1009ef to keyring
        Using legacy gpg-pubkey(s) from rpmdb
        /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm: Header SHA1 digest: OK (3e98ed9b1631395d417e00f35c83ebe588ea9d3b)
        added source package [0]
        found 1 source and 0 binary packages
        Expected size: 489395 = lead(96)+sigs(180)+pad(4)+data(489115)
        Actual size: 489395
        InstallSourcePackage at: psm.c:232: Header SHA1 digest: OK (3e98ed9b1631395d417e00f35c83ebe588ea9d3b)
        gtest-1.3.0-2.20090601svn257.fc12
        ========== Directories not explicitly included in package:
        0 /root/rpmbuild/SOURCES/
        1 /root/rpmbuild/SPECS/
        ==========
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100664 1 ( 0, 0) 478034 /root/rpmbuild/SOURCES/gtest-1.3.0.tar.bz2;4ba93ce1 unknown
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100644 1 ( 0, 0) 30505 /root/rpmbuild/SOURCES/gtest-svnr257.patch;4ba93ce1 unknown
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100644 1 ( 0, 0) 2732 /root/rpmbuild/SPECS/gtest.spec;4ba93ce1 unknown
        GZDIO: 63 reads, 511788 total bytes in 0.005930 secs
        closed db index /var/lib/rpm/Name
        closed db index /var/lib/rpm/Packages
        closed db environment /var/lib/rpm/Packages

    Thanks for your help.
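
    Worth noting: the transcript shows rpm -i, not rpmbuild, and installing a source RPM only unpacks the sources and spec into ~/rpmbuild; nothing is compiled or placed in /usr/include. The usual three-step flow, sketched with the paths from the transcript (the resulting binary package names are an assumption; check rpmbuild's output, and headers normally land in a -devel subpackage):

        # 1. unpack sources and spec from the src.rpm (already done above)
        rpm -i /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm
        # 2. build binary RPMs from the spec; -bb = build binary only
        rpmbuild -bb /root/rpmbuild/SPECS/gtest.spec
        # 3. install the packages that were produced
        rpm -ivh /root/rpmbuild/RPMS/*/gtest-*.rpm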

  • How to create a Linux user without a password but being able to set it?

    - by Leonid Shevtsov
    I have a username and an SSH key for a (hypothetical) guy, and I need to give him admin access to a Linux (Ubuntu) server. I want him to be able to log in via SSH and then set his password himself over a secure connection, instead of passing the password around. I know how to make the password expire and force him to reset it on first login, but this doesn't work unless he already has some password, which I then have to tell him. I thought about making the password blank - SSH wouldn't allow login, but then anyone could su into the user. My question is: is there some best practice for creating accounts in such a way, or is setting a default password unavoidable?
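
    A common compromise, sketched below with illustrative names: give him a random one-time password that is already expired, so the only thing it can be used for is choosing a real password over the SSH session. The throwaway secret still has to be communicated once, but it is useless after his first login.

        # create the account and install the key
        useradd -m -s /bin/bash newguy
        install -d -m 700 -o newguy -g newguy /home/newguy/.ssh
        install -m 600 -o newguy -g newguy newguy_key.pub /home/newguy/.ssh/authorized_keys

        # set a throwaway password and expire it immediately
        TMP=$(openssl rand -base64 12)
        echo "newguy:$TMP" | chpasswd
        chage -d 0 newguy   # first login forces an immediate password change
        echo "$TMP"         # tell him this once; it cannot be reused after the reset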

  • Windows could not update the computer's boot configuration.

    - by Luke Puplett
    Hello, I am trying to install Windows Web Server 2008 x64 (have tried 32-bit, too) onto an IBM eServer 326m and am getting the following error message some time after the unpacking-files section:

        Windows could not update the computer's boot configuration. Installation cannot proceed.

    I can repair the boot information using the Repair option in the WinPE part of the setup, and it reboots into the Windows installation, so there is good boot data on the drive. Just to complicate things, the server has no DVD-ROM, so I'm installing from HDD to HDD, both SATA, one an SSD. I've tried each drive in isolation, e.g. removing the SSD and installing from the spindle onto itself - same error every time. I've flashed the IBM BIOS, too. Thanks for your help. Luke

  • Installing Munin on Centos 6

    - by justinhj
    I've hit problems installing Munin on CentOS 6. This seems to be a conflict between parts of Perl; I think the version of Perl is newer on CentOS 6 (v5.10.1). When installing Munin via yum I get the Perl dependency errors below. I'm not a big enough whiz at yum or rpm to figure out the issue, and the Munin documentation does not yet cover installing on CentOS 6.0.

        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(Net::SNMP)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)
               Requires: bitstream-vera-fonts
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(HTML::Template)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)
               Requires: perl-Net-SNMP
        Error: Package: munin-common-1.4.2-0.rpl1.el5.noarch (/munin-common-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(DBI)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(Log::Log4perl)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(LWP::Simple)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(RRDs)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)
               Requires: perl-Net-Server
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(Date::Manip)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)
               Requires: perl-Net-Server
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(CGI::Fast)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)
               Requires: perl(Time::HiRes)
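
    The giveaway is perl(:MODULE_COMPAT_5.8.8): these are .el5 packages built against EL5's Perl 5.8.8, and they can never resolve on EL6's Perl 5.10.1. A sketch of the usual way out, assuming the EPEL 6 repository (which carries Munin builds made for EL6; the release-RPM URL changes over time, so check the EPEL wiki if it doesn't resolve):

        # enable EPEL for EL6, then install the EL6 builds of munin
        rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
        yum install munin munin-node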

  • MaxStartups and MaxSessions configuration parameters for SSH connections?

    - by Webby
    I am copying files from machineB and machineC onto machineA, where my shell script runs. If a file isn't on machineB, it should be on machineC, so I try machineB first and fall back to machineC. I copy the files in parallel using GNU Parallel, currently 10 at a time, and that works fine. Below is my shell script:

        #!/bin/bash
        export PRIMARY=/test01/primary
        export SECONDARY=/test02/secondary
        readonly FILERS_LOCATION=(machineB machineC)
        export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
        export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
        PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
        SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers
        export dir3=/testing/snapshot/20140103

        find "$PRIMARY" -mindepth 1 -delete
        find "$SECONDARY" -mindepth 1 -delete

        do_Copy() {
            el=$1
            PRIMSEC=$2
            scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. \
                || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/.
        }
        export -f do_Copy

        parallel --retries 10 -j 10 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
        parallel --retries 10 -j 10 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
        wait
        echo "All files copied."

    Problem statement: at some point the script starts failing with -

        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host

    I guess the error is typically caused by too many ssh/scp sessions starting at the same time, which leads me to believe MaxStartups and MaxSessions in /etc/ssh/sshd_config are set too low. But on which server are they too low - machineB and machineC, or machineA - and on which machines do I need to increase them? On machineA this is what I can find:

        root@machineA:/home/david# grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10:30:60
        root@machineA:/home/david# grep MaxSessions /etc/ssh/sshd_config

    And on machineB and machineC:

        [root@machineB ~]$ grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10
        [root@machineB ~]$ grep MaxSessions /etc/ssh/sshd_config
        #MaxSessions 10
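
    For orientation: MaxStartups limits *concurrent unauthenticated* connections on the sshd that accepts them, so the relevant files are on machineB and machineC - machineA is only the client here. With two parallel jobs of 10, plus retries, bursts can easily exceed the default of 10. A sketch (the values are starting points, not recommendations):

        # /etc/ssh/sshd_config on machineB and machineC
        MaxStartups 50:30:100   # allow up to 50 pending handshakes before throttling
        MaxSessions 50

        # then reload sshd, e.g.
        service sshd reload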

  • Configuring RealVNC to accept copy/paste on a remote server

    - by AP257
    So, I'm using RealVNC Viewer locally (Mac OS X 10.6) and connecting to a VNC Server on a remote machine (Debian - I'm no Unix expert). I want to be able to copy and paste at the command line, but Ctrl+V, Shift+V and Command+V all do nothing on the remote command line. First question: should I be trying a different combination of keys? Second, if it's not that I'm using the wrong combination of keys, how can I configure VNC Server to accept copy and paste? I have 'Share clipboard with VNC Server' checked locally, so I figure it must be a problem on the remote machine. I only have command-line access on the remote machine (though I am root), so I need to configure the option via the command line.
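
    Two things worth trying, offered as guesses rather than a definitive fix: pasting into an X terminal is usually middle-click or Shift+Insert rather than Ctrl+V; and on Unix VNC servers the clipboard is ferried by the vncconfig helper, which has to be running inside the session for copy/paste to work at all:

        # inside the remote VNC session (e.g. launched from ~/.vnc/xstartup):
        vncconfig -nowin &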

  • Process runs slower as a scheduled task than it does interactively

    - by Charlie
    I have a scheduled task which is very CPU- and IO-intensive, and takes about four hours to run (building source code, if you're curious). The task is a Powershell script which spawns various sub-processes to do its work. When I run the same process interactively from a Powershell prompt, as the same user account, it runs in about two and a half hours. The task is running on Windows Server 2008 R2. What I want to know is why it takes so much longer to run as a scheduled task - more than an hour longer. One thing I noticed is that the task scheduler runs at Below-Normal priority, so when my task starts, it inherits the same lowered priority. However, I've updated the script to set the Powershell process priority back to Normal, and it still takes just as long. Anybody have an idea what could be different between the two scenarios? I've ruled out differences in processor and IO load - this task is the only thing the system is used for, so there's nothing else running that could be competing for resources.
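
    One hedged lead: on Server 2008 R2 the Task Scheduler's default task priority (7 in the task XML) lowers both CPU *and* I/O priority, and raising the process priority class afterwards does not restore the I/O priority - exporting the task, setting <Priority>4</Priority> in the XML, and re-importing is the commonly cited fix. To confirm what the script actually runs at, a quick PowerShell check inside the task itself:

        # inside the scheduled script: inspect and reset this process's priority
        $p = Get-Process -Id $PID
        $p.PriorityClass              # often 'BelowNormal' under Task Scheduler
        $p.PriorityClass = 'Normal'   # children spawned afterwards inherit this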

  • IIS reverse proxy to windows authenticated internal site

    - by keithwarren7
    I have an internal Windows-authenticated website that I need to expose anonymously to external users.

        extern: http://foo.com/      (public)
        intern: http://privatefoo/   (requires Windows auth)

    I want people hitting foo.com to see no security prompt, just get access to privatefoo. I know this is possible in a simple reverse proxy setup, but does anyone know how to make the proxy provide Windows credentials?

  • Delay before download starts when serving files using nginx

    - by glumbo
    I am currently using nginx to serve downloads from my website. Users sometimes have to wait about 5 seconds before their download starts after clicking a download link. I'm not sure if I need to start using RAID 10 (I'm currently using RAID 50) or if this is a problem with my nginx configuration. I am also on a 1 Gbit line, but downloads sometimes go as low as 10 kB/s. My server: dual Xeon 5620 CPUs, 12x2TB drives, 8 GB RAM. This is my nginx.conf (truncated as posted):

        #user nobody;
        worker_processes 12;
        worker_rlimit_nofile 10240;
        worker_rlimit_sigpending 32768;
        error_log logs/error.log crit;
        #pid logs/nginx.pid;

        events {
            worker_connections 2048;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            access_log off;
            limit_conn_log_level info;
            log_format xfs '$arg_id|$arg_usr|$remote_addr|$body_bytes_sent|$status';
            #sendfile on;
            #tcp_nopush on;
            reset_timedout_connection on;
            server_tokens off;
            autoindex off;
            keepalive_timeout 0;
            #keepalive_timeout 65;
            limit_zone one $binary_remote_addr 10m;
            perl_modules perl;
            perl_require download.pm;
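
    Two details stand out in that config: sendfile is commented out, so every download is copied through user space, and requests apparently pass through the embedded-Perl handler. A tuning sketch to try, assuming these are large static files (the values are starting points, not measurements):

        # inside the http block
        sendfile on;
        tcp_nopush on;
        output_buffers 2 512k;
        # for very large files, bypassing the page cache can reduce seek churn:
        # directio 4m;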

  • How does SELinux affect the /home directory?

    - by Matt Solnit
    Hi everyone. I'm migrating a CentOS 5.3 system from MySQL to PostgreSQL. The way our machine is set up, the biggest disk partition is mounted on /home. This is out of my control and is managed by the hosting provider. We obviously want the database files to be on /home for this reason. With MySQL, we did the following:

    - Edited my.cnf and changed the datadir setting to /home/mysql
    - Added a new "file type" policy record (I hope I'm using the right terminology) to set /home/mysql(/.*)? to mysqld_db_t
    - Ran restorecon -R /home/mysql to assign the labels

    and everything was good. With PostgreSQL, however, I did the following:

    - Edited /etc/init.d/postgresql and changed the PGDATA and PGLOG variables to /home/pgsql/data and /home/pgsql/pgstartup.log, respectively
    - Added a new policy record to set /home/pgsql/pgstartup.log to postgresql_log_t
    - Added a new policy record to set /home/pgsql/data(/.*)? to postgresql_db_t
    - Ran restorecon -R /home/pgsql to assign the labels

    At this point, I still cannot start PostgreSQL. pgstartup.log says:

        # cat pgstartup.log
        postmaster cannot access the server configuration file "/home/pgsql/data/postgresql.conf": Permission denied

    The weird thing is that I don't see any messages related to this in /var/log/messages or /var/log/secure, but if I turn off SELinux, everything works. I made sure all the permissions are correct (600 for files, 700 for directories), as well as the ownership (postgres:postgres). Can anyone tell me what I am doing wrong? I'm using the Yum repository from commandprompt.com, version 8.3.7. EDIT: The reason my question specifically mentions the /home directory is that if I go through all these steps for any other directory, e.g. /var/lib/pgsql2 or /usr/local/pgsql, it works as expected.
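
    For reference, the command-line equivalent of those policy records, as a sketch with the same paths (semanage is in policycoreutils). One hedged guess at the /home-only failure: the postgresql domain may not be allowed to traverse /home itself (labeled home_root_t), which a restorecon on /home/pgsql never changes - checking the labels on the whole path usually reveals the mismatch:

        semanage fcontext -a -t postgresql_db_t "/home/pgsql/data(/.*)?"
        semanage fcontext -a -t postgresql_log_t "/home/pgsql/pgstartup.log"
        restorecon -R -v /home/pgsql
        ls -ldZ /home /home/pgsql /home/pgsql/data   # verify what actually got applied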

  • Failed to connect from slave to master with error "error connecting to master (1045)"

    - by Victor Lin
    I'm trying to set up replication from the slave to the master:

        CHANGE MASTER TO
            MASTER_HOST = 'master',
            MASTER_PORT = 3306,
            MASTER_USER = 'repl',
            MASTER_PASSWORD = 'xxx';

    I did grant privileges to the user on the master, and I can connect with the mysql command from the slave machine to the master:

        mysql -h master -u repl -p

        mysql> show grants;
        GRANT RELOAD, SUPER, REPLICATION SLAVE, CREATE USER ON *.* TO 'repl'@'xxx' IDENTIFIED BY PASSWORD 'xxx'

        mysql> select 1;
        +---+
        | 1 |
        +---+
        | 1 |
        +---+
        1 row in set (0.04 sec)

    As you can see, the privileges are correct and the connection works fine. However, the replication connection to the master always fails:

        mysql> show slave status\G
        *************************** 1. row ***************************
                 Slave_IO_State: Connecting to master
                    Master_Host: master
                    Master_User: repl
                    Master_Port: 3306
                  Connect_Retry: 60
                Master_Log_File:
            Read_Master_Log_Pos: 4
                 Relay_Log_File: slave-replay-bin.000002
                  Relay_Log_Pos: 4
          Relay_Master_Log_File:
               Slave_IO_Running: Connecting
              Slave_SQL_Running: Yes
                Replicate_Do_DB:
            Replicate_Ignore_DB:
             Replicate_Do_Table:
         Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
                     Last_Errno: 0
                     Last_Error:
                   Skip_Counter: 0
            Exec_Master_Log_Pos: 0
                Relay_Log_Space: 107
                Until_Condition: None
                 Until_Log_File:
                  Until_Log_Pos: 0
             Master_SSL_Allowed: No
             Master_SSL_CA_File:
             Master_SSL_CA_Path:
                Master_SSL_Cert:
              Master_SSL_Cipher:
                 Master_SSL_Key:
          Seconds_Behind_Master: NULL
        Master_SSL_Verify_Server_Cert: No
                  Last_IO_Errno: 1045
                  Last_IO_Error: error connecting to master 'repl@master:3306' - retry-time: 60  retries: 86400
                 Last_SQL_Errno: 0
                 Last_SQL_Error:
        Replicate_Ignore_Server_Ids:
               Master_Server_Id: 0
        1 row in set (0.00 sec)

    Is this caused by different versions of MySQL server? The master is 5.0.77 and the slave is 5.5.13, but all the articles I could find tell me it's okay for a newer slave to replicate from an older master. How do I solve this problem?

    -- Update --

    I even tried upgrading the old MySQL, but the problem is still not solved:

        mysql> show slave status\G
        *************************** 1. row ***************************
                 Slave_IO_State: Connecting to master
                    Master_Host: master
                    Master_User: repl
                    Master_Port: 3306
                  Connect_Retry: 60
                Master_Log_File: master-bin.000007
            Read_Master_Log_Pos: 107
                 Relay_Log_File: slave-replay-bin.000001
                  Relay_Log_Pos: 4
          Relay_Master_Log_File: master-bin.000007
               Slave_IO_Running: Connecting
              Slave_SQL_Running: Yes
                Replicate_Do_DB:
            Replicate_Ignore_DB:
             Replicate_Do_Table:
         Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
                     Last_Errno: 0
                     Last_Error:
                   Skip_Counter: 0
            Exec_Master_Log_Pos: 107
                Relay_Log_Space: 107
                Until_Condition: None
                 Until_Log_File:
                  Until_Log_Pos: 0
             Master_SSL_Allowed: No
             Master_SSL_CA_File:
             Master_SSL_CA_Path:
                Master_SSL_Cert:
              Master_SSL_Cipher:
                 Master_SSL_Key:
          Seconds_Behind_Master: NULL
        Master_SSL_Verify_Server_Cert: No
                  Last_IO_Errno: 1045
                  Last_IO_Error: error connecting to master 'repl@master' - retry-time: 60  retries: 86400
                 Last_SQL_Errno: 0
                 Last_SQL_Error:
        Replicate_Ignore_Server_Ids:
               Master_Server_Id: 0
        1 row in set (0.00 sec)
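
    A hedged diagnosis: error 1045 from the IO thread is an access-denied from the master, and MySQL grants are host-specific - 'repl'@'xxx' has to match the host the replication thread appears to connect from, which can differ from the interactive session if DNS or reverse DNS resolves differently. One more classic gotcha on old masters: replication passwords longer than 32 characters fail with 1045 even though the mysql client accepts them. A sketch to run on the master (the host value is illustrative; an IP sidesteps unreliable reverse DNS):

        GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.0.2' IDENTIFIED BY 'xxx';
        FLUSH PRIVILEGES;
        -- then, on the slave:
        -- STOP SLAVE; START SLAVE;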

  • aliasing "git" ssh login to "gitolite"

    - by Randal Schwartz
    I'm installing gitolite from CentOS packages for my client. The package creates a gitolite user, which will be visible during "git clone" operations. The client wants to use "git" rather than "gitolite", in case we change to something more fancy later. I'm not very familiar with CentOS, so I don't want to try to build the package myself from source. I'm wondering if there's a way to do one of the following:

    1. Trick sshd into treating "git" as "gitolite".
    2. Somehow "alias" a new git username to be the same in all ways as the existing gitolite username (perhaps through some complex combination of useradd).
    3. Rename the "gitolite" username to "git" without upsetting later yum update operations.
    4. Something else that I hadn't thought of.

    I'd appreciate detailed instructions or pointers.
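
    Option 3 is usually the least invasive. A sketch (it assumes the package's account and group names; verify with getent first): renaming keeps the uid, so file ownership is untouched, though a later yum update may recreate the gitolite user and is worth re-checking afterwards.

        getent passwd gitolite         # note the uid, home dir and shell first
        usermod -l git gitolite        # rename the login, keep uid/gid and home
        groupmod -n git gitolite       # rename the matching group, if one exists
        getent passwd git              # confirm everything survived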

  • Cannot exclude a path from basic auth when using a front controller script

    - by Adam Monsen
    I have a small PHP/Apache2 web application in which I'd like to do two seemingly incompatible things:

    1. Route all requests through a single PHP script (a "front controller", if you will)
    2. Secure everything except API calls with HTTP basic authentication

    I can satisfy either requirement just fine in isolation; it's when I try to do both at once that I am blocked. For no good reason I'm trying to accomplish this solely with Apache configuration. Here are the requirements stated as an example. A GET request for this URL:

        http://basic/api/listcars?max=10

    should be sent through front.php without requiring basic auth. front.php will get /api/listcars?max=10 and do whatever it needs to with that. Here's what I think should work. In my /etc/hosts I added 127.0.0.1 basic, and I am using this Apache config:

        <Location />
            AuthType Basic
            AuthName "Home Secure"
            AuthUserFile /etc/apache2/passwords
            require valid-user
        </Location>

        <VirtualHost *:80>
            ServerName basic
            DocumentRoot /var/www/basic
            <Directory /var/www/basic>
                <IfModule mod_rewrite.c>
                    RewriteEngine On
                    RewriteCond %{SCRIPT_FILENAME} !-f
                    RewriteCond %{SCRIPT_FILENAME} !-d
                    RewriteRule ^(.*)$ /front.php/$1 [QSA,L]
                </IfModule>
            </Directory>
            <Location /api>
                Order deny,allow
                Allow from all
                Satisfy any
            </Location>
        </VirtualHost>

    But I still always get an HTTP 401: Authorization Required response. I can make it work by changing <Location /api> into <Location ~ /api>, but this allows more than I want past basic auth. I also tried changing the <Directory /var/www/basic> section into <Location />, but this doesn't work either (and it results in some strange values for PATH_TRANSLATED being passed to the script). I searched around and found many examples of selective exclusion of basic auth, but none that also incorporated a front controller. I could certainly handle basic auth in the front controller itself, but if I can have Apache do it instead, I'll be able to keep all authentication logic out of my PHP code. A friend suggested splitting this into two vhosts, which I know also works; this used to be two separate vhosts, actually. I'm using Apache 2.2.22 / PHP 5.3.10 on Ubuntu 12.04.
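
    A hedged reading of why <Location /api> never matches: after mod_rewrite runs, the request is internally /front.php/api/listcars, so a prefix match on /api fails, while the regex form matches /api anywhere in the path. Anchoring the regex to the rewritten form keeps the exemption tighter - a sketch under that assumption:

        <Location ~ "^(/front\.php)?/api">
            Order deny,allow
            Allow from all
            Satisfy any
        </Location>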

  • Stress test speed on a gateway?

    - by TheLQ
    I'm interested in stress testing my gateway server but am lost on how. Most of the stress-testing applications I've seen only measure how much load an app like Apache can handle, not this. Essentially I want to send as many packets as I can into this box from one computer on one card, and see how many come out the other side to another computer, just to get an idea of what kind of load it can handle. I'm also interested in how Snort will perform. I'm not really sure how to do this, though. What tools could you recommend?
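
    For raw throughput through the middlebox, a simple starting point is iperf with one endpoint on each side of the gateway (addresses and durations below are illustrative); for packets-per-second stress rather than bandwidth, tools like hping3 or tcpreplay are the usual next step:

        # on the computer behind the gateway:
        iperf -s
        # on the computer in front of it, sending through the gateway:
        iperf -c 192.168.1.10 -t 60 -P 8   # 8 parallel streams for 60 seconds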

  • Ionics Rewrite Filter setup on IIS 5.1

    - by Neil Aitken
    I'm trying to configure IIRF 2 on IIS 5.1 running on XP Pro so that I can run the Zend Framework. I've managed to get the filter running on a second website that I set up using one of the IIS admin scripts. When I go to /iirfStatus, the status page shows that the .ini path for the site is pointing to c:\windows\system32\Irif.ini rather than the site root. If I try creating an IIS application under IIS - Website Properties - Home Directory, then /iirfStatus stops working entirely. Any ideas how I can set the ini path correctly, or will I only be able to get away with this on a proper server edition of IIS?

  • Apache logs 200 instead of 404

    - by elle
    I've been getting the following in my Apache access log:

        "GET /work//?module=www&section=working=../../../../../../../../../../../../../../../../../../../../../../../..//proc/self/environ%0000 HTTP/1.1" 200 5187 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; pl-PL; rv:1.8.1.24pre) Gecko/20100228 K-Meleon/1.5.4\",\"Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/540.0 (KHTML,like Gecko) Chrome/9.1.0.0 Safari/540.0\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Comodo_Dragon/4.1.1.11 Chrome/4.1.249.1042 Safari/532.5\",\"Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.0.16) Gecko/2009122206 Firefox/3.0.16 Flock/2.5.6\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.1 (KHTML, like Gecko) Maxthon/3.0.8.2 Safari/533.1\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.8pre) Gecko/20070928 Firefox/2.0.0.7 Navigator/9.0RC1\",\"Opera/9.99 (Windows NT 5.1; U; pl) Presto/9.9.9\",\"Mozilla/5.0 (Windows; U; Windows NT 6.1; zh-HK) AppleWebKit/533.18.1 (KHTML, like Gecko) Version/5.0.2 Safari/533.18.5\",\"Seamonkey-1.1.13-1(X11; U; GNU Fedora fc 10) Gecko/20081112\",\"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Zune 4.0; Tablet PC 2.0; InfoPath.3; .NET4.0C; .NET4.0E)\",\"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MS-RTC LM 8; .NET4.0C; .NET4.0E; InfoPath.3)"

    If I try the URL myself, I get a 404 instead of the 200 that the above request received. Is there a way I can confirm that the 200 was real and not spoofed? And where is all that client information coming from?
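
    The status code in an access log is what Apache actually returned, so the 200 was almost certainly real; the difference is that a browser address bar will not send the %00 byte the way it was originally encoded. Replaying the raw request preserves the encoding (the hostname below is a placeholder). As for the "long info": that is simply the request's User-Agent header, which a client can fill with anything - here, a whole list of concatenated UA strings, typical of scanner traffic.

        curl -s -o /dev/null -w '%{http_code}\n' \
          'http://example.com/work//?module=www&section=working=../../../../../../../../../../../../../../../../../../../../../../../..//proc/self/environ%0000'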

  • Ubuntu 12.04 transmission-daemon and zfsonlinux: bad file descriptor and corrupt pieces

    - by Ivailo Karamanolev
    I'm running Ubuntu 12.04 with zfsonlinux and transmission-daemon. The issue: sporadic "Bad File Descriptor" and "Piece #xxx is corrupt" errors. After I recheck the torrent, everything seems fine. This happens only while downloading, not once a torrent is in seeding mode, and only after the client has been running for some time. I installed zfsonlinux from the official stable PPA (https://launchpad.net/~zfs-native/+archive/stable). I previously tried running transmission-daemon from the Ubuntu repository, but I've since switched to building the latest transmission from source with the latest libevent (all stable) - same thing. I've seen bug reports (https://trac.transmissionbt.com/ticket/4147) for this issue, but none of them seem to have a solution. How can I fix these errors, or at least understand where they come from and what I can do to rectify the issue?
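
    Before digging further into transmission, it's worth ruling the storage layer in or out: ZFS checksums every block, so a scrub will report whether the pool itself ever sees corruption (the pool name below is a placeholder):

        zpool scrub tank
        zpool status -v tank   # check the READ/WRITE/CKSUM columns and the errors line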

  • Shibboleth SP, IIS

    - by OrangeGrover
    I have a Shibboleth SP instance on Server 2008 R2, and everything is authenticating fine with the IdP. I was testing protecting a single page, and that works fine with the following in the shibboleth2.xml file:

        <Host name="MyUrl.com">
            <Path name="page.jsp" authType="shibboleth" requireSession="true"/>
        </Host>

    When I go to https://MyUrl.com/page.jsp I get redirected to enter credentials, and then end up back on page.jsp. Now I've found out that I should be protecting the document root, but not the entire site. Basically, I need to be authenticated by Shibboleth; once I am, I'll get redirected back to the document root, where a session is set by separate software, I get redirected to a different page, and the document root will never be used again. Any help is appreciated.
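
    If "document root" here means the root URL itself (just "/"), a plain <Path> cannot express that, but recent SP 2.x releases support regex matching in the RequestMap. A sketch, assuming your SP version has PathRegex and that only the exact root needs a session (regex semantics vary by version, so test against the SP log):

        <Host name="MyUrl.com">
            <PathRegex regexp="^/?$" authType="shibboleth" requireSession="true"/>
        </Host>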

  • How to install mod_wsgi 3.1 on Ubuntu 9.10

    - by pthulin
    I have a Python 3 web app, so mod_wsgi < 3.1 doesn't cut it for me. However, on my Ubuntu 9.10 installation there doesn't seem to be a package for mod_wsgi 3.1. Is there an alternative repository that has a package for mod_wsgi 3.1? There's a new Ubuntu release not long from now; will it contain mod_wsgi 3.1? Is there some other distro ready with mod_wsgi 3.1 to recommend? Maybe my best bet is to compile it myself? From a quick google it looks like I only need the Python and Apache dev packages installed. Thanks!
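
    The source build is indeed short. A sketch for 9.10 - the package names are from memory and may differ on that release (apt-cache search for the apache2 dev package if they don't resolve):

        sudo apt-get install apache2-threaded-dev python3.1-dev
        tar xzf mod_wsgi-3.1.tar.gz && cd mod_wsgi-3.1
        ./configure --with-python=/usr/bin/python3.1
        make && sudo make install
        # enable the module:
        echo "LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so" \
            | sudo tee /etc/apache2/mods-available/wsgi.load
        sudo a2enmod wsgi && sudo /etc/init.d/apache2 restart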

  • Supplementary Developer Laptop

    - by David Smith
    I'm looking to buy a laptop with the following specs for a developer. The goal is a development machine supplementing the dev's desktop. During work hours the dev will be on a beefy desktop; for working on the go - trains, client sites, code camps - it would be nice to have a machine which can run Visual Studio 2008 without needing to remote-desktop into the primary machine. What do you think is the lowest-cost laptop meeting this need? Here are the specs I have in mind:

    - SSD drive: 64 GB - doesn't need to be huge, most data is stored on servers. Will need to fit Windows 7, IIS, SQL Server, and Visual Studio 2010.
    - RAM: 3 GB
    - Processor: >= Pentium Core 2 Duo
    - Screen size: 14 inches
    - OS: doesn't matter; it will be paved with Windows 7 Ultimate
    - An omitted optical drive would be a plus
    - Weight and battery life aren't so important because the machine will be plugged in almost all the time

  • Exchange 2010: Replication Service Still Trying to Replicate Deleted Mailbox Store

    - by ThaKidd
    In advance, thank you for your opinions! I just migrated from Server/Exchange 2003 to Server 2008 R2 running Exchange 2010. I had an extra mailbox store that appeared with some system mailboxes in it. I used the EMS to move those mailboxes over and then deleted the store in the EMC. Since then, every so often I get an error in Event Viewer:

        Source: MSExchangeRepl
        ID: 4098
        Error: The Microsoft Exchange Replication service couldn't find a valid configuration for database '5f012f40-3bad-4003-a373-dbc0ffb6736f' on server 'EXCHSERVER'. Error: (nothing after this)

    I can confirm that the above GUID is the mailbox store that I deleted. No other Exchange errors occur. How can I tell Exchange Replication to ignore this store? Setup: one Exchange 2003 server transitioned over to 2010, no other Exchange servers. Is there a way to fix this? Do I need to change a setting to stop replication? I plan to add a second Exchange server in the next few days, so stopping replication altogether would be a bad thing. Thanks again in advance. Jason
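
    A first diagnostic step, as a sketch: list the database objects Exchange still knows about and check whether the orphaned GUID is among them - if a half-deleted database object remains in Active Directory, it would explain why the Replication service keeps probing it:

        Get-MailboxDatabase | Format-List Name,Guid,Server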

  • How do I elevate privileges when running appcmd from a nant task?

    - by Rune
    We are using a Windows 7 box as a build server. As part of our continuous integration process I would like to stop and start an IIS 7 website. I have tried doing this from the command line using appcmd:

        appcmd start site "my website"

    However, this only works if I start the console window by choosing "Run as Administrator", so it won't work out of the box from NAnt etc. How do I script appcmd to be run with elevated privileges (or am I going about this the wrong way)? Thank you.
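
    Two common workarounds: run the build-agent service itself under an elevated account, or pre-register an elevated scheduled task once and have the NAnt <exec> task merely trigger it. A sketch of the latter (the task name is illustrative):

        rem register once, elevated (running as SYSTEM bypasses the UAC prompt)
        schtasks /Create /TN "StartMySite" /RU SYSTEM /RL HIGHEST /SC ONCE /ST 00:00 ^
            /TR "%windir%\system32\inetsrv\appcmd.exe start site \"my website\""

        rem then, from the build, just:
        schtasks /Run /TN "StartMySite"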

  • Errors when attempting to update source files, Server 2012R2 (errors 80073701 and 14081)

    - by jeremy
    I have a Windows Server 2012 R2 machine that I installed as Server Core and then decided to switch to a GUI install. To make a long story short: I ran Windows Update, and now the source files are older than and out of sync with the operating system, so I need to update the source files. Here are a couple of articles that outline how this is supposed to work:

        http://blog.coretech.dk/kaj/why-i-cant-convert-my-windows-server-2012-r2-core-to-gui/
        http://blogs.technet.com/b/joscon/archive/2012/11/14/how-to-update-local-source-media-to-add-roles-and-features.aspx

    I have followed these instructions, but the updates are not successfully applying to the source. I get errors like:

        An error occurred - Package_for_KB29671203
        Error: 0x80073701, Error: 14081, The referenced assembly could not be found.

    or:

        add-windowspackage failed. error code = 0x80073701
        add-windowspackage: the referenced assembly could not be found

    I've extensively searched for help on those error codes related to Server 2012 and Windows Update, but my google-fu is failing me. I am using the update packages found in C:\Windows\SoftwareDistribution\Download. How can I get these updates to bring my source files up to current? Thanks!
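
    For what it's worth, 0x80073701 is ERROR_SXS_ASSEMBLY_MISSING: the image lacks a component the update package expects. A hedged repair sketch before retrying the packages - point DISM at known-good installation media (the drive letter and image index below are illustrative):

        Dism /Online /Cleanup-Image /ScanHealth
        Dism /Online /Cleanup-Image /RestoreHealth /Source:wim:D:\sources\install.wim:2 /LimitAccess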
