Daily Archives

Articles indexed Tuesday, October 22, 2013


  • IIS URL Rewrite: capturing the query string and escaping characters

    - by LiamB
    We are just adding some redirects from an old site to a new one in IIS7 using the URL Rewrite 'plugin'. The old site's URLs are all based on the query string; we'd usually do explicit rewrites like the one below, but this won't work in the case of the query string. <rule name="Redirect-1" patternSyntax="Wildcard" stopProcessing="true"> <match url="index.php?option=m_content&view=article&id=15&Itemid=16" /> <action type="Redirect" url="http://newurl/some-page" /> </rule> So, using the two URLs above, how can we do a 301 redirect?
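
    A minimal sketch of one possible approach, assuming the query-string values shown above are the only thing that distinguishes the old URL: URL Rewrite cannot match the query string inside <match url="...">, but it can test it in a <conditions> block.

      <rule name="Redirect-1" stopProcessing="true">
        <!-- match only the path; the query string is tested separately -->
        <match url="^index\.php$" />
        <conditions>
          <add input="{QUERY_STRING}" pattern="option=m_content&amp;view=article&amp;id=15&amp;Itemid=16" />
        </conditions>
        <!-- appendQueryString="false" stops the old query string being re-added -->
        <action type="Redirect" url="http://newurl/some-page" appendQueryString="false" redirectType="Permanent" />
      </rule>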

    Read the article

  • How to find the cause of locked user account in Windows AD domain

    - by Stephane
    After a recent incident with Outlook, I was wondering how I would most efficiently resolve the following problem: Assume a fairly typical small to medium sized AD infrastructure: several DCs, a number of internal servers and Windows clients, several services using AD and LDAP for user authentication from within the DMZ (SMTP relay, VPN, Citrix, etc.) and several internal services all relying on AD for authentication (Exchange, SQL Server, file and print servers, terminal services servers). You have full access to all systems, but they are a bit too numerous (counting the clients) to check individually. Now assume that, for some unknown reason, one (or more) user account gets locked out by the password lockout policy every few minutes. What would be the best way to find the service/machine responsible for this? Assuming the infrastructure is pure, standard Windows with no additional management tool and few changes from default, is there any way the process of finding the cause of such a lockout could be accelerated or improved? What could be done to improve the resilience of the system against such an account lockout DoS? Disabling account lockout is an obvious answer, but then you run into the issue of users having far too easily exploitable passwords, even with complexity enforced.
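
    One commonly suggested starting point, sketched below under the assumption that the DCs run Server 2008 or later (so lockouts are logged as event 4740 on the PDC emulator) and that the ActiveDirectory PowerShell module is available; the event records which computer the bad passwords came from.

      # Pull lockout events from the PDC emulator and show the calling computer
      $pdc = (Get-ADDomain).PDCEmulator
      Get-WinEvent -ComputerName $pdc -FilterHashtable @{ LogName = 'Security'; Id = 4740 } |
          Select-Object TimeCreated,
              @{ n = 'Account'; e = { $_.Properties[0].Value } },   # locked-out user
              @{ n = 'Source';  e = { $_.Properties[1].Value } }    # caller computer name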

    Read the article

  • Mirror DFS configuration data between two servers/sites

    - by Retro69
    I have one Windows 2008 R2 server in Site A running domain-integrated DFS in 2008 mode, with a single namespace and a large number of DFS targets all configured to point to a share on our NetApp SAN. Step 1: I want to initially copy this configuration data across to a 2012 server in Site A, preserving all the configuration data. Step 2: I need to mirror this configuration to a second server in Site B so we don't have a single point of failure for the DFS namespace. For example, a user in Site B would "connect" to the DFS server in Site B, but if that site was down, it would attempt to connect to the server in Site A, and vice versa. Note I'm not interested in replicating actual data here, just the configuration; our NetApp SANs have mirroring, which takes care of that. Is this possible? Many thanks.
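
    A sketch of how this is usually done for a domain-based namespace, assuming the DFSN PowerShell module (Server 2012) and that the new machines already have the DFS Namespace role installed: each additional namespace server is simply added as a root target for the existing namespace, and clients fail over between them by AD site. The domain and server names below are placeholders.

      # On the Server 2012 machine: share a local folder to host the namespace root,
      # then add this server as an additional root target for the existing namespace.
      New-SmbShare -Name "Namespace" -Path "C:\DFSRoots\Namespace"
      New-DfsnRootTarget -Path "\\corp.example.com\Namespace" -TargetPath "\\SRV2012-B\Namespace"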

    Read the article

  • Postfix - Gmail - Mountain Lion // can't send mail

    - by miako
    I have read most of the tutorials found on google but still can't make it work. I run the command : date | mail -s "Test" [email protected] . The log is this : Oct 22 11:38:00 XXX.local postfix/master[288]: daemon started -- version 2.9.2, configuration /etc/postfix Oct 22 11:38:00 XXX.local postfix/pickup[289]: 9D85418A031: uid=501 from=<me> Oct 22 11:38:00 XXX.local postfix/cleanup[291]: 9D85418A031: message-id=<[email protected]> Oct 22 11:38:00 XXX.local postfix/qmgr[290]: 9D85418A031: from=<[email protected]>, size=327, nrcpt=1 (queue active) Oct 22 11:38:00 XXX.local postfix/smtp[293]: initializing the client-side TLS engine Oct 22 11:38:02 XXX.local postfix/smtp[293]: setting up TLS connection to smtp.gmail.com[173.194.70.109]:587 Oct 22 11:38:02 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: TLS cipher list "ALL:!EXPORT:!LOW:+RC4:@STRENGTH:!eNULL" Oct 22 11:38:02 XXX.local postfix/smtp[293]: SSL_connect:before/connect initialization Oct 22 11:38:02 XXX.local postfix/smtp[293]: SSL_connect:SSLv2/v3 write client hello A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server hello A Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: certificate verification depth=2 verify=0 subject=/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA Oct 22 11:38:03 --- last message repeated 1 time --- Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: certificate verification depth=1 verify=1 subject=/C=US/O=Google Inc/CN=Google Internet Authority G2 Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: certificate verification depth=0 verify=1 subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server certificate A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server done A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 write client key exchange A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 write change cipher spec A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 write finished A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 flush data Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server session ticket A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read finished A Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: subject_CN=smtp.gmail.com, issuer_CN=Google Internet Authority G2, fingerprint E4:CA:10:85:C3:53:00:E6:A1:D2:AC:C4:35:E4:A2:10, pkey_fingerprint=D6:06:2E:15:AF:DF:E9:50:A5:B4:E2:E4:C5:2E:F9:BA Oct 22 11:38:03 XXX.local postfix/smtp[293]: Untrusted TLS connection established to smtp.gmail.com[173.194.70.109]:587: TLSv1 with cipher RC4-SHA (128/128 bits) Oct 22 11:38:03 XXX.local postfix/smtp[293]: 9D85418A031: to=<[email protected]>, relay=smtp.gmail.com[173.194.70.109]:587, delay=3.4, delays=0.26/0.13/2.8/0.26, dsn=5.5.1, status=bounced (host smtp.gmail.com[173.194.70.109] said: 530-5.5.1 Authentication Required. 
Learn more at 530 5.5.1 http://support.google.com/mail/bin/answer.py?answer=14257 s3sm54097220eeo.3 - gsmtp (in reply to MAIL FROM command)) Oct 22 11:38:04 XXX.local postfix/cleanup[291]: D4D2F18A03C: message-id=<[email protected]> Oct 22 11:38:04 XXX.local postfix/qmgr[290]: D4D2F18A03C: from=<>, size=2382, nrcpt=1 (queue active) Oct 22 11:38:04 XXX.local postfix/bounce[297]: 9D85418A031: sender non-delivery notification: D4D2F18A03C Oct 22 11:38:04 XXX.local postfix/qmgr[290]: 9D85418A031: removed Oct 22 11:38:04 XXX.local postfix/local[298]: D4D2F18A03C: to=<[email protected]>, relay=local, delay=0.11, delays=0/0.08/0/0.02, dsn=2.0.0, status=sent (delivered to mailbox) Oct 22 11:38:04 XXX.local postfix/qmgr[290]: D4D2F18A03C: removed Oct 22 11:39:00 XXX.local postfix/master[288]: master exit time has arrived. I am really confused, as I have never set up an MTA before, and I need it for local web development. I don't use XAMPP; I use the built-in servers. Can anyone guide me?
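
    The bounce ("530 5.5.1 Authentication Required") means Postfix reached Gmail but never authenticated. A minimal sketch of the relay settings usually added to /etc/postfix/main.cf for this, assuming a sasl_passwd map with valid Gmail credentials is created and run through postmap:

      relayhost = [smtp.gmail.com]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_use_tls = yes
      # /etc/postfix/sasl_passwd would contain one line such as:
      #   [smtp.gmail.com]:587 yourname@gmail.com:app-password
      # followed by: postmap /etc/postfix/sasl_passwd && postfix reload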

    Read the article

  • CentOS: existing host to new host with all data/files

    - by ganesh
    Good afternoon. Our small startup's management decided to move our production server from the existing provider to Azure. We have CentOS on both. It is a classifieds-related site with a considerable amount of data and thousands of users, each with a disk space quota. This is our first time moving our servers, so I need your guidance and suggestions on these points: 1) How to migrate the MySQL DB (dump, slave, or copy the filesystem)? 2) How to manage the emails during the downtime? 3) How to manage the files? 4) A security/firewall checklist for the new system. 5) An IP/DNS-related checklist. 6) Anything that I missed! Since it is our first time, we are planning to be more cautious. Any reference documents are highly appreciated. Thank you all!
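
    For point 1, a minimal sketch of the plain dump-and-restore route, assuming mysqldump/mysql on both hosts, SSH access between them, and that a short write freeze during the dump is acceptable:

      # On the old host: consistent dump of everything (InnoDB) without locking reads
      mysqldump -u root -p --all-databases --single-transaction --routines --triggers > all.sql
      # Copy it across and load it on the new host
      scp all.sql newhost:/tmp/
      ssh newhost 'mysql -u root -p < /tmp/all.sql'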

    Read the article

  • Apache suEXEC: execute scripts on a per-user-directory basis?

    - by Blame
    I am looking for a way to run a script as different users. I don't want to hardcode the users in the config... and I found some information suggesting it should be possible for the user to go to, let's say, http://localhost/~user1/myscript.cgi and have the script execute as user 'user1'. Does anybody know if that is possible? If not, do I have to make a new vhost config for every user? Thanks a lot! Greets, Kodak
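
    This is roughly what suEXEC's userdir support does out of the box: when suEXEC is enabled and a CGI request arrives through mod_userdir (~user URLs), the wrapper runs the script as that user. A sketch of the Apache side, assuming Apache 2.4 with mod_userdir, mod_cgi and suexec enabled (on 2.2 the access lines would use Order/Allow instead):

      UserDir public_html
      <Directory "/home/*/public_html">
          Options +ExecCGI
          AddHandler cgi-script .cgi
          Require all granted
      </Directory>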

    Read the article

  • Chef Knife-Windows

    - by Nick Zagoreos
    I'm trying to bootstrap a Windows 2008 R2 server with Chef and I'm receiving this error: "CScript Error: Execution of the Windows Script Host failed. (0x800A0007)". After some research I found out that I must install the "specific_install" gem and use the knife-windows gem from git, but when I try to install the gem with the command "gem specific_install -l https://github.com/opscode/knife-windows.git" I receive the following error: "ERROR: While executing gem ... (NoMethodError) undefined method `build' for Gem::Package:Module". What am I doing wrong? Thank you in advance.
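
    That NoMethodError may point at a RubyGems/specific_install version mismatch rather than at knife-windows itself. A sketch of one thing to try, assuming a standalone Ruby (not an embedded Chef one) and that updating RubyGems on this workstation is acceptable:

      gem update --system          # bring RubyGems itself up to date
      gem install specific_install # reinstall the plugin against the new RubyGems
      gem specific_install -l https://github.com/opscode/knife-windows.git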

    Read the article

  • PHP on Ubuntu 13.10 won't get parsed

    - by fefe
    I'm facing a strange issue with a freshly installed Ubuntu 13.10 running Apache2 with MySQL and PHP. My PHP won't get parsed, and I have already tried every change I found during my research. PHP 5.5.3-1ubuntu2 (cli) (built: Oct 9 2013 14:49:12) sudo a2enmod php5 apache2.conf <FilesMatch \.php$> SetHandler application/x-httpd-php </FilesMatch> /etc/apache2/mods-enabled/php5.conf </FilesMatch> # Deny access to files without filename (e.g. '.php') <FilesMatch "^\.ph(p[345]?|t|tml|ps)$"> Order Deny,Allow Deny from all </FilesMatch> # Running PHP scripts in user directories is disabled by default # # To re-enable PHP in user directories comment the following lines # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it # prevents .htaccess files from disabling it. #<IfModule mod_userdir.c> # <Directory /home/*/public_html> # php_admin_value engine Off # </Directory> #</IfModule>
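
    A sketch of the usual checklist on 13.10 (Apache 2.4), assuming the goal is mod_php rather than php-fpm; a common cause is that libapache2-mod-php5 itself was never installed, so a2enmod has nothing real to enable, and mod_php also needs the prefork MPM:

      sudo apt-get install libapache2-mod-php5   # the Apache PHP module package
      sudo a2dismod mpm_event                    # mod_php cannot run under the event MPM
      sudo a2enmod mpm_prefork php5
      sudo service apache2 restart
      # then test via http://localhost/info.php, not by opening the file directly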

    Read the article

  • Running multiple sites with multiple domains in Apache

    - by PsychoData
    I am having a rough time running Apache with multiple domain names; here is a snippet of my config file. I keep getting an error saying that NameVirtualHost has no VirtualHosts. I want them both running on the same IP and I'm not sure why this doesn't work. I've been digging through the documentation for VirtualHost, NameVirtualHost, and Apache's page about name-based virtual hosting. The example on the name-based page is almost exactly my config! What am I doing wrong? Listen *:80 NameVirtualHost *:80 <VirtualHost *:80> ServerName www.sample1.net DocumentRoot /var/www/sample1-net </VirtualHost> <VirtualHost *:80> ServerName www.example2.net DocumentRoot /var/www/example2-net </VirtualHost>

    Read the article

  • Gearman too many processes issue

    - by Roman Newaza
    I use Net_Gearman from PECL, Gearmand 1.1.11 and Gearman Manager. Every time I add background job, I can see new worker listed with no Function, nor Id in Ggearman-Monitor: If I add many messages in the bash loop, after some time it becomes very slow. for i in $(seq 0 9999); do php Client.php && echo $i; done Yesterday, the situation was even worse - I had many error messages in Gearmand log regarding Too many open files and once I added --file-descriptors=49152 as an option and swithched to 1.1.11 from 1.0.6, these errors gone. Here is lsof -p $(cat /var/run/gearman/gearmand.pid) output: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME gearmand 2020 gearman cwd DIR 8,2 4096 2 / gearmand 2020 gearman rtd DIR 8,2 4096 2 / gearmand 2020 gearman txt REG 8,2 3852472 3672962 /opt/sbin/gearmand gearmand 2020 gearman mem REG 8,2 52120 9961752 /lib/x86_64-linux-gnu/libnss_files-2.15.so gearmand 2020 gearman mem REG 8,2 47680 9961756 /lib/x86_64-linux-gnu/libnss_nis-2.15.so gearmand 2020 gearman mem REG 8,2 97248 9961768 /lib/x86_64-linux-gnu/libnsl-2.15.so gearmand 2020 gearman mem REG 8,2 35680 9961750 /lib/x86_64-linux-gnu/libnss_compat-2.15.so gearmand 2020 gearman mem REG 8,2 92720 9964871 /lib/x86_64-linux-gnu/libz.so.1.2.3.4 gearmand 2020 gearman mem REG 8,2 109288 11014600 /usr/lib/x86_64-linux-gnu/libsasl2.so.2.0.25 gearmand 2020 gearman mem REG 8,2 1030512 9961759 /lib/x86_64-linux-gnu/libm-2.15.so gearmand 2020 gearman mem REG 8,2 1930616 9964982 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 gearmand 2020 gearman mem REG 8,2 382896 9964977 /lib/x86_64-linux-gnu/libssl.so.1.0.0 gearmand 2020 gearman mem REG 8,2 1815224 9961748 /lib/x86_64-linux-gnu/libc-2.15.so gearmand 2020 gearman mem REG 8,2 88384 9964865 /lib/x86_64-linux-gnu/libgcc_s.so.1 gearmand 2020 gearman mem REG 8,2 962656 11014043 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16 gearmand 2020 gearman mem REG 8,2 199600 11016157 /usr/lib/x86_64-linux-gnu/libmemcached.so.11.0.0 gearmand 2020 gearman mem REG 8,2 31752 9961755 /lib/x86_64-linux-gnu/librt-2.15.so gearmand 2020 gearman mem REG 8,2 14768 9961763 /lib/x86_64-linux-gnu/libdl-2.15.so gearmand 2020 gearman mem REG 8,2 414280 9183971 /usr/lib/libboost_program_options.so.1.46.1 gearmand 2020 gearman mem REG 8,2 283832 9183656 /usr/lib/libevent-2.0.so.5.1.4 gearmand 2020 gearman mem REG 8,2 664504 11014432 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6 gearmand 2020 gearman mem REG 8,2 135366 9961757 /lib/x86_64-linux-gnu/libpthread-2.15.so gearmand 2020 gearman mem REG 8,2 3534240 9175810 /usr/lib/libmysqlclient.so.18.1.0 gearmand 2020 gearman mem REG 8,2 149280 9961760 /lib/x86_64-linux-gnu/ld-2.15.so gearmand 2020 gearman 0u CHR 1,3 0t0 1029 /dev/null gearmand 2020 gearman 1u CHR 1,3 0t0 1029 /dev/null gearmand 2020 gearman 2u CHR 1,3 0t0 1029 /dev/null gearmand 2020 gearman 3w REG 8,2 9381897 3409366 /var/log/gearman-job-server/gearman.log gearmand 2020 gearman 4r FIFO 0,8 0t0 38869143 pipe gearmand 2020 gearman 5w FIFO 0,8 0t0 38869143 pipe gearmand 2020 gearman 6u 0000 0,9 0 6826 anon_inode gearmand 2020 gearman 7u unix 0xffff880230fdf500 0t0 38869144 socket gearmand 2020 gearman 8u unix 0xffff880230fdde40 0t0 38869145 socket gearmand 2020 gearman 9u IPv4 38869146 0t0 TCP localhost:4730 (LISTEN) gearmand 2020 gearman 10r FIFO 0,8 0t0 38869147 pipe gearmand 2020 gearman 11w FIFO 0,8 0t0 38869147 pipe gearmand 2020 gearman 12u 0000 0,9 0 6826 anon_inode gearmand 2020 gearman 13u unix 0xffff880230fde4c0 0t0 38869148 socket gearmand 2020 gearman 14u unix 0xffff880230fdeb40 
0t0 38869149 socket gearmand 2020 gearman 15r FIFO 0,8 0t0 38869150 pipe gearmand 2020 gearman 16w FIFO 0,8 0t0 38869150 pipe gearmand 2020 gearman 17u 0000 0,9 0 6826 anon_inode gearmand 2020 gearman 18u 0000 0,9 0 6826 anon_inode gearmand 2020 gearman 19u unix 0xffff880230fdb400 0t0 38869151 socket gearmand 2020 gearman 20u unix 0xffff880230fdaa40 0t0 38869152 socket gearmand 2020 gearman 21r FIFO 0,8 0t0 38869153 pipe gearmand 2020 gearman 22w FIFO 0,8 0t0 38869153 pipe gearmand 2020 gearman 23u unix 0xffff880203cfce00 0t0 38868290 socket gearmand 2020 gearman 24u unix 0xffff880203cfdb00 0t0 38868291 socket gearmand 2020 gearman 25r FIFO 0,8 0t0 38868292 pipe gearmand 2020 gearman 26w FIFO 0,8 0t0 38868292 pipe gearmand 2020 gearman 27u 0000 0,9 0 6826 anon_inode gearmand 2020 gearman 28u unix 0xffff880203cf9040 0t0 38868293 socket gearmand 2020 gearman 29u unix 0xffff880203cfaa40 0t0 38868294 socket gearmand 2020 gearman 30r FIFO 0,8 0t0 38868295 pipe gearmand 2020 gearman 31w FIFO 0,8 0t0 38868295 pipe gearmand 2020 gearman 32u IPv4 38868324 0t0 TCP localhost:4730->localhost:57954 (ESTABLISHED) gearmand 2020 gearman 33u IPv4 38868325 0t0 TCP localhost:4730->localhost:57955 (ESTABLISHED) gearmand 2020 gearman 34u IPv4 38901247 0t0 TCP localhost:4730->localhost:38594 (ESTABLISHED) gearmand 2020 gearman 35u IPv4 38868327 0t0 TCP localhost:4730->localhost:57957 (ESTABLISHED) gearmand 2020 gearman 36u IPv4 38867483 0t0 TCP localhost:4730->localhost:57959 (ESTABLISHED) gearmand 2020 gearman 37u IPv4 38867484 0t0 TCP localhost:4730->localhost:57958 (ESTABLISHED) gearmand 2020 gearman 38u IPv4 38901248 0t0 TCP localhost:4730->localhost:38595 (CLOSE_WAIT) gearmand 2020 gearman 39u IPv4 38901249 0t0 TCP localhost:4730->localhost:38597 (ESTABLISHED) gearmand 2020 gearman 40u IPv4 38869201 0t0 TCP localhost:4730->localhost:57979 (ESTABLISHED) gearmand 2020 gearman 41u IPv4 38900437 0t0 TCP localhost:4730->localhost:38599 (ESTABLISHED) gearmand 2020 gearman 42u IPv4 38900438 0t0 TCP localhost:4730->localhost:38602 (ESTABLISHED) gearmand 2020 gearman 43u IPv4 38868375 0t0 TCP localhost:4730->localhost:57987 (ESTABLISHED) gearmand 2020 gearman 44u IPv4 38900468 0t0 TCP localhost:4730->localhost:38606 (CLOSE_WAIT) gearmand 2020 gearman 45u IPv4 38868381 0t0 TCP localhost:4730->localhost:57999 (ESTABLISHED) gearmand 2020 gearman 46u IPv4 38868388 0t0 TCP localhost:4730->localhost:58007 (ESTABLISHED) gearmand 2020 gearman 47u IPv4 38868393 0t0 TCP localhost:4730->localhost:58011 (ESTABLISHED) gearmand 2020 gearman 48u IPv4 38903950 0t0 TCP localhost:4730->localhost:38609 (ESTABLISHED) gearmand 2020 gearman 49u IPv4 38870276 0t0 TCP localhost:4730->localhost:58019 (ESTABLISHED) gearmand 2020 gearman 50u IPv4 38903955 0t0 TCP localhost:4730->localhost:38613 (ESTABLISHED) gearmand 2020 gearman 51u IPv4 38900477 0t0 TCP localhost:4730->localhost:38617 (CLOSE_WAIT) gearmand 2020 gearman 52u IPv4 38867630 0t0 TCP localhost:4730->localhost:58031 (ESTABLISHED) gearmand 2020 gearman 53u IPv4 38867633 0t0 TCP localhost:4730->localhost:58035 (ESTABLISHED) gearmand 2020 gearman 54u IPv4 38867636 0t0 TCP localhost:4730->localhost:58039 (ESTABLISHED) gearmand 2020 gearman 55u IPv4 38900536 0t0 TCP localhost:4730->localhost:38619 (ESTABLISHED) gearmand 2020 gearman 56u IPv4 38868419 0t0 TCP localhost:4730->localhost:58047 (ESTABLISHED) gearmand 2020 gearman 57u IPv4 38869263 0t0 TCP localhost:4730->localhost:58051 (ESTABLISHED) gearmand 2020 gearman 58u IPv4 38900537 0t0 TCP localhost:4730->localhost:38621 
(ESTABLISHED) gearmand 2020 gearman 59u IPv4 38869271 0t0 TCP localhost:4730->localhost:58059 (ESTABLISHED) gearmand 2020 gearman 60u IPv4 38900538 0t0 TCP localhost:4730->localhost:38623 (ESTABLISHED) gearmand 2020 gearman 61u IPv4 38870319 0t0 TCP localhost:4730->localhost:58067 (ESTABLISHED) gearmand 2020 gearman 62u IPv4 38900540 0t0 TCP localhost:4730->localhost:38628 (ESTABLISHED) gearmand 2020 gearman 63u IPv4 38869289 0t0 TCP localhost:4730->localhost:58075 (ESTABLISHED) ... gearmand 2020 gearman 2229u IPv4 38903885 0t0 TCP localhost:4730->localhost:38572 (ESTABLISHED) gearmand 2020 gearman 2230u IPv4 38901211 0t0 TCP localhost:4730->localhost:38576 (ESTABLISHED) gearmand 2020 gearman 2234u IPv4 38901237 0t0 TCP localhost:4730->localhost:38588 (ESTABLISHED)

    Read the article

  • Global Address List, Multiple All Address Lists in CN=Address Lists Container

    - by Jonathan
    When my colleagues (that was way before my time here) upgraded Exchange 2000 to 2003, an English 'All Address Lists' container appeared in addition to the German variant. The English 'All Address Lists' has the German-titled GAL below it. This has been just a cosmetic problem for the last few years, but now that we are in the process of rolling out Exchange 2010 it causes some issues: Exchange 2010 picked the wrong, i.e. English, Address Lists container to use. In ADSI Editor we see CN=All Address Lists,CN=Address Lists Container,CN=exchange org,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain and CN=Alle Adresslisten,CN=Address Lists Container,CN=exchange org,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain. In the addressBookRoots attribute of CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain both address lists were stored as values. We removed the English variant from addressBookRoots and restarted all (old and new) Exchange servers. Users with mailboxes on Exchange 2003 now only see the German variant. Exchange 2010 is still stuck with the English/mixed variant, as are users on Exchange 2010. Our goal would be to have Outlook display the German title of 'All Address Lists' and get rid of the wrong Address Lists container.

    Read the article

  • How to find the cause of the main file system going into read-only mode

    - by user606521
    Ubuntu 12.04. The file system goes into read-only mode frequently. First of all, I have already read the question "file system is going into read only mode frequently". But I need to know whether it is caused by something other than a dying hard drive. This is a server provided by my client; I am just running some Node.js workers plus one Node.js server there, and I am using MongoDB. From time to time (every 20-50 h) the system suddenly remounts the filesystem read-only, the mongodb process fails (due to the read-only fs) and my Node workers/server (which are started by forever) are just killed. Here is the log from dmesg - I can see some errors there and messages that the FS is going read-only, and there is also some JOURNAL error, but I would like to find the cause of those errors: http://speedy.sh/Ux2VV/dmesg.log.txt Edit: smartctl -t long /dev/sda smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.5.0-23-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net SMART support is: Unavailable - device lacks SMART capability. A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options. What am I doing wrong? The same happens for sda2. Moreover, now when I type any command that does not exist in the shell I get this: Sorry, command-not-found has crashed! Please file a bug report at: https://bugs.launchpad.net/command-not-found/+filebug Please include the following information with the report:
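
    A short sketch of ways to get more signal, assuming the disk sits behind a RAID or virtualisation layer (a common reason for smartctl reporting no SMART capability) and that the root filesystem is ext4:

      # smartctl often needs an explicit device type behind a controller;
      # the right -d value depends on the hardware, these are just examples
      smartctl -a -d sat /dev/sda
      smartctl -a -d megaraid,0 /dev/sda
      # the superblock records why ext4 last flipped to read-only
      tune2fs -l /dev/sda2 | grep -i -E 'state|error'
      # and the kernel messages around the remount are the primary evidence
      dmesg | grep -i -E 'ext4|journal|remount|i/o error'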

    Read the article

  • How to create a chroot jail with the ability to change some system settings

    - by Tadeck
    How do I properly create a chroot jail (on Ubuntu, or some other Linux if not applicable) so that a user is able to edit system settings (e.g. with ifconfig) and to communicate with external scripts? The use case would be to let a user authenticate over SSH and then perform a very limited set of actions from the command line. Unfortunately, the tricky part is the access to system settings. I have considered multiple options; the alternatives are to set up a fake SSH server (e.g. with Twisted), to try a restricted shell (however, I seem to still need chroot), or to write a script on top of the shell (?).
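
    One alternative that avoids chroot entirely, sketched here with hypothetical names (a user called "netops" and a wrapper script /usr/local/bin/apply-settings): give the account a restricted shell and grant root rights only for the exact commands it needs via sudoers.

      # /etc/sudoers.d/netops  (edit with visudo -f)
      netops ALL=(root) NOPASSWD: /sbin/ifconfig, /usr/local/bin/apply-settings
      # and set the login shell to a restricted one, e.g.:
      #   usermod -s /bin/rbash netops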

    Read the article

  • My URL has been identified as a phishing site

    - by user2118559
    Some months ago I ordered a VPS at RamNode and, following the tutorial (ZPanelCP on CentOS 6.4) http://www.zvps.co.uk/zpanelcp/centos-6, installed CentOS and ZPanel. Today I received this email: "We are requesting that you secure and investigate the phishing website identified below. This URL has been identified as a phishing site and is currently involved in identity theft activities. URL: hxxp://111.11.111.111/www.connet-itunes.fr/iTunesConnect.woasp/ //IP is modified (not real) This site is being used to display false or spoofed content in an apparent effort to steal personal and financial information. This matter is URGENT. We believe that individuals are being falsely directed to this page and may be persuaded into divulging personal information to a criminal, if the content is not immediately disabled." I am trying to understand: did some hacker break into the VPS and place some file (?) whose content redirects to www.connet-itunes.fr/iTunesConnect.woasp? Then my questions: 1) How can I find the file? Where might it be located? The URL is hxxp://111.11.111.111/ - an IP address, not a domain name. 2) What should I do to protect the VPS (with CentOS)? Any tutorial? Where might the security problem be? I mean, maybe someone has faced something similar....
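
    For question 1, a sketch of how to hunt for the planted content, assuming the vhost content lives under /var/zpanel/hostdata (ZPanel's usual location; adjust the paths if this install differs):

      # search every web root for the phishing string from the email
      grep -ril "connet-itunes" /var/zpanel/hostdata /var/www 2>/dev/null
      # list recently changed files - planted content is usually newer than the site
      find /var/zpanel/hostdata /var/www -type f -mtime -30 -ls 2>/dev/null
      # check what Apache actually serves for the bare IP (the default vhost)
      httpd -S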

    Read the article

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that instruct how to setup nginx to redirect http requests to https. Many are outdated, or just flat out wrong. # MANAGED BY PUPPET upstream gitlab { server unix:/home/git/gitlab/tmp/sockets/gitlab.socket; } # setup server with or without https depending on gitlab::gitlab_ssl variable server { listen *:80; server_name gitlab.localdomain; server_tokens off; root /nowhere; rewrite ^ https://$server_name$request_uri permanent; } server { listen *:443 ssl default_server; server_name gitlab.localdomain; server_tokens off; root /home/git/gitlab/public; ssl on; ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem; ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers AES:HIGH:!ADH:!MDF; ssl_prefer_server_ciphers on; # individual nginx logs for this gitlab vhost access_log /var/log/nginx/gitlab_access.log; error_log /var/log/nginx/gitlab_error.log; location / { # serve static files from defined root folder;. # @gitlab is a named location for the upstream fallback, see below try_files $uri $uri/index.html $uri.html @gitlab; } # if a file, which is not found in the root folder is requested, # then the proxy pass the request to the upsteam (gitlab puma) location @gitlab { proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_redirect off; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-Ssl on; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://gitlab; } } I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10. whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https? I've also tried the following configurations listen *:80; server_name <%= @fqdn %>; #root /nowhere; #rewrite ^ https://$server_name$request_uri? permanent; #rewrite ^ https://$server_name$request_uri permanent; #return 301 https://$server_name$request_uri; #return 301 http://$server_name$request_uri; #return 301 http://192.168.33.10$request_uri; return 301 http://$host$request_uri; The logs tailf /var/log/nginx/access.log 192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" 192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" tailf /var/log/nginx/gitlab_error.lob 2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10" Resources http://wiki.nginx.org/Pitfalls How to make nginx redirect How to force or redirect to SSL in nginx? nginx ssl redirect Nginx & Https Redirection https://www.tinywp.in/301-redirect-wordpress/ How to force or redirect to SSL in nginx?
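
    One hedged observation, sketched below: seeing the stock "Welcome to nginx" page on port 80 usually means a different server block (typically the distribution's default site) is catching the request, because the request arrives by IP (192.168.33.10), no Host header matches gitlab.localdomain, and nginx then picks the default server for *:80 rather than the redirect block above. Assuming that is the case here, removing/disabling the default site or making the redirect block the default is the usual fix:

      # make this block the catch-all for port 80 (and/or remove
      # /etc/nginx/sites-enabled/default), then: nginx -t && service nginx reload
      server {
          listen 80 default_server;
          server_name gitlab.localdomain;
          return 301 https://$server_name$request_uri;
      }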

    Read the article

  • Cannot establish XMPP server-to-server connection to gmail

    - by v_2e
    My Jabber server fails to connect to gmail.com, giving the error: outgoing s2s stream myserver.com.ua->bot.talk.google.com closed: undefined-condition (myserver.com.ua is a Google Apps Domain with Talk service enabled.) I am using the Prosody XMPP server. It works just fine with other Jabber servers I have tested so far (e.g. jabber.ru). However, when one of my clients tries to add a Gmail contact to his contact list, the subscription request lasts forever, and Prosody gives the following sequence of messages in its log: Oct 21 22:57:16 s2sout95897f8 info Beginning new connection attempt to gmail.com ([173.194.70.125]:5269) Oct 21 22:57:16 s2sout95897f8 info sent dialback key on outgoing s2s stream Oct 21 22:57:16 s2sout95897f8 info Session closed by remote with error: undefined-condition (myserver.com.ua is a Google Apps Domain with Talk service enabled.) Oct 21 22:57:16 s2sout95897f8 info outgoing s2s stream myserver.com.ua->gmail.com closed: undefined-condition (myserver.com.ua is a Google Apps Domain with Talk service enabled.) Oct 21 22:57:16 s2sout95897f8 info sending error replies for 2 queued stanzas because of failed outgoing connection to gmail.com (Here I use myserver.com.ua as the domain name of my server.) I found a similar problem described in this thread, but there is no detailed description of the solution there. As for the Google services, I did have a Google account where I added the domain name in question to the Webmaster Tools page. However, I deleted my account long ago, so now it is unclear how any of the Google services can relate to my domain name. So my question is: What is the real cause of this problem (my Jabber server configuration, the imaginary Google account, or something else), and how can I make my Prosody server connect to the gmail.com Jabber service?
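
    A couple of checks that are easy to run from the server, sketched under the assumption that the Google-side rejection relates to how the domain's XMPP DNS records are set up (Google's error text refers to Google Apps domain configuration):

      # what Google will look up when it dials back to your domain
      dig +short SRV _xmpp-server._tcp.myserver.com.ua
      # where your server should be connecting for gmail.com s2s traffic
      dig +short SRV _xmpp-server._tcp.gmail.com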

    Read the article

  • Debian - running unattended-upgrades on a particular day of the week

    - by dastra
    We're running unattended-upgrades on Debian Squeeze, and would like it to run once a week, only on a Wednesday morning. To attempt this, we have set: APT::Periodic::Unattended-Upgrade "7" in /etc/apt/apt.conf.d/50unattended-upgrades And then touched /var/lib/apt/periodic/update-stamp to set the timestamp to a Wednesday, for instance: touch -t 201211280000 /var/lib/apt/periodic/update-stamp Running: stamp=$(date --date=$(date -r /var/lib/apt/periodic/update-stamp --iso-8601) +%s 2>/dev/null) date -u --date="1970-01-01 $stamp sec GMT" Gives the correct timestamp: Wed Nov 28 00:00:00 UTC 2012 However, unattended-upgrades then seems to ignore this, and runs the updates on a Saturday morning. Could anyone enlighten me as to how this parameter works, and how to set up upgrades to run on a Wednesday?
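
    A sketch of a workaround that sidesteps APT::Periodic's day arithmetic entirely, assuming it is acceptable to drive unattended-upgrade from cron instead (and that APT::Periodic::Unattended-Upgrade is set back to "0" so the daily apt job does not also run it):

      # /etc/cron.d/unattended-upgrades-weekly
      # run at 06:30 every Wednesday (day-of-week 3)
      30 6 * * 3 root /usr/bin/unattended-upgrade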

    Read the article

  • How to run Hyper-V Virtual Machine Management Service using a domain account

    - by Selhoum
    I have a domain account with admin privileges and I need to use that account to run the Hyper-V Virtual Machine Management Service. My goal is to use that domain account to create VMs using ISO files that are on a different server within the same domain. I was told that if I use the local account to do this, things may not work. How do I run the Hyper-V Virtual Machine Management Service under a domain account?
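
    A sketch of changing the service's logon account with sc.exe, using a hypothetical account name; note as a caveat that vmms normally runs as Local System, and for the remote-ISO scenario the more usual fix is to grant the Hyper-V host's computer account read access to the ISO share (or configure constrained delegation) rather than re-homing the service:

      REM "CONTOSO\svc-hyperv" is a placeholder account; run from an elevated prompt
      sc config vmms obj= "CONTOSO\svc-hyperv" password= "ItsPassword"
      sc stop vmms
      sc start vmms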

    Read the article

  • MSSQL 2008 is claiming the firewall is blocking ports even from local machine

    - by Mercurybullet
    I was just hoping to step through a couple of queries to see how the temp tables interact, and I'm getting this message: "The windows firewall on this machine is currently blocking remote debugging. Remote debugging requires that the debugging be allowed to receive information from the network. Remote debugging also requires DCOM (TCP port 135) and IPSEC (UDP 4500/UDP 500) be unblocked." Even when I walked over to the actual machine and tried running the debugger, I'm still getting the same message. Am I missing something, or does the debugger try to run remotely even from the local machine? Since this was meant to be just a quick check, I don't need instructions on how to open up the firewall; I'm just hoping there is a way to run the debugger locally instead.

    Read the article

  • iotop for Linux kernel 2.6.18

    - by Lightsauce
    So it has come to my attention that iotop isn't available for 2.6.18, since that's less than 2.6.20 and iotop requires Python 2.6+. I've done some research and came across this article: http://lserinol.blogspot.com/2009/09/io-usage-per-process-on-linux.html According to this, if processes have I/O stats in /proc/pid#/io (where pid# is the process number), it's doable regardless of the kernel version. So, in reality, I could upgrade Python to 2.6 and test out iotop. However, my flavor of Linux, CentOS release 5.5 (Final), currently only supports Python 2.4.3-44.el5. If I were to uninstall that with yum, it doesn't look so pretty: it ends up wanting to uninstall 235 packages, most of which are very important! I read in one place online (I forget the URL from yesterday) that you can install Python 2.6+ in parallel with this one and have the rpm install for iotop use that. Well, I didn't choose that route. I figured, what the heck, let's write iotop (not copying it, but reverse engineering it without actually looking at its code or seeing it in use) in bash. I thought it would just grab the /proc/pid#/io file and parse the stats. So I wrote a script to grab the top 10 rchar, wchar, read_bytes, and write_bytes by collecting all these stats from all the /proc/pid#/io files, sorting them by each metric, then grabbing the top 10 highest values. The conclusion: the data seems completely useless. Does anybody know any resources for advanced Linux where I can figure out how to take these /proc/pid#/ directories and work out what they are doing with I/O on the disk? My main goal is to figure out what exactly is causing the high load on my disk. I just know it's on the / partition (/dev/sda2 in this case), and I'm not really sure how to narrow it down without the help of iotop. If I run iostat to grab metrics for one minute, every second, the first result it gives me shows a high 'kB_read/s', which makes me think it's mostly reading. However, if I watch the update it gives me every second, it's actually just showing values for kB_wrtn/s. This makes me think the initial value iostat gives me is misleading.
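
    A sketch of the delta-based approach that makes those counters meaningful, assuming CONFIG_TASK_IO_ACCOUNTING is enabled (it is in CentOS 5's 2.6.18 kernel): the values in /proc/pid#/io are cumulative since each process started, so a single snapshot mostly reflects process age; sampling twice and ranking by growth is what iotop effectively does.

      #!/bin/sh
      # rank processes by bytes written during a short interval
      interval=${1:-5}
      snap() {
          for f in /proc/[0-9]*/io; do
              pid=$(basename "$(dirname "$f")")
              awk -v p="$pid" '/^write_bytes/ {print p, $2}' "$f" 2>/dev/null
          done
      }
      snap > /tmp/io.before
      sleep "$interval"
      snap > /tmp/io.after
      # columns: bytes written during the interval, pid
      join /tmp/io.before /tmp/io.after | awk '$3 > $2 {print $3 - $2, $1}' | sort -rn | head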

    Read the article

  • Ubuntu's 'drag window to side of screen expand' in a hotkey

    - by Bentley4
    In Ubuntu (using version 12.04), when you drag a window to the left or right side of your screen, the window gets auto-maximized in height and takes up the left or right half of the screen respectively. What would be super useful, in my opinion, is a hotkey that autodetects which side of the screen your window is mostly on and then expands it to a state identical to the one described above. Has anyone done this programmatically before? If not, how would you go about achieving it?
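
    Worth noting that Unity in 12.04 already binds Ctrl+Super+Left/Right to half-screen snapping; for a custom keybinding, a sketch with wmctrl/xdotool, assuming both are installed and a 1920x1080 screen (a real script would detect the resolution instead of hardcoding it):

      # snap the active window to the left half of a 1920x1080 screen
      WIN=$(xdotool getactivewindow)
      wmctrl -i -r "$WIN" -b remove,maximized_horz,maximized_vert
      wmctrl -i -r "$WIN" -e 0,0,0,960,1080   # gravity,x,y,width,height
      wmctrl -i -r "$WIN" -b add,maximized_vert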

    Read the article

  • What will happen if my DB server runs out of space?

    - by Noam
    I'm seeing a huge difference in free disk space between df -h and du -sxh /. I understood from my question "Resolving unix server disk space not adding up" that du -sxh / is a better estimate of when I will run out of disk space. Having said that, assuming in my case that turns out to be wrong and I do soon run out of disk space, what will happen? I assume MySQL will fail INSERT queries, but other than that, will I just need to delete some files, or will it be a problematic situation?

    Read the article

  • How to limit my CPU power programmatically on Windows 7?

    - by Ivan
    Whenever I run a CPU-heavy activity (like compressing a big set of files into an archive, for example), my CPU switches to full throttle (maximum frequency) and the machine shuts down from overheating in less than a minute. Instead, I would like it to stay slightly slowed down, doing the task a bit more slowly but being able to reach the finish. At the same time, I don't want to dim my screen brightness or adjust anything else that the standard Windows power-saving system does. So how do I actually set a cap to limit my CPU power? The CPU is a Core 2 Duo T7250, the OS is Windows 7 32-bit, and there seem to be no BIOS settings or jumpers available to configure the frequencies.
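
    A sketch of doing this with the built-in power plan setting (the same knob as Control Panel -> Power Options -> Processor power management -> Maximum processor state), capping the processor at 80% for both AC and battery on the active plan; run from an elevated command prompt:

      powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 80
      powercfg -setdcvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 80
      powercfg -setactive SCHEME_CURRENT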

    Read the article

  • Drupal 7: One-time user account

    - by Noob
    I'm going to create a survey in Drupal 7 with the Webform module, installed on a Debian system that may be adapted in any way. The users (personally known, approx. 120) doing the survey will walk into a room and complete it in browsers on different computers. After that, they'll leave the room, other persons will enter and complete the survey on the same computers, and so on. Each user may enter only one submission. The process needs to be anonymous, i.e. I mustn't have any idea of who made which submission. My current solution is to generate random one-time passwords and hand out one password per user (without noting who got which password). Within the survey there will be a password field where the one-time password is entered. The value is checked by Webform for uniqueness. I'll get the data via CSV or Excel and verify the passwords manually in Excel by comparing them to the list of valid passwords. The problem is: I don't like the idea of manually generating the password list, copying it to Excel and doing a manual check. That's fine for one-time use, but we're going to repeat the survey every once in a while. I'd rather generate one-time logins (like user0001/fdlkjewf, user0002/dfrefnnr, ...) for each survey, hand them out to the users and let Drupal/Debian/whatever check whether a submission is valid or not. Do you have any idea how to batch-generate about 120 users with one-time passwords in Drupal 7 and ensure that each user may submit the form only once? Or do you have a better idea of how to accomplish the task within the intranet? Thank you for your help.
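
    A sketch of the batch-creation half with Drush, assuming Drush is available on the Debian host and that the printed username/password list is the only record kept (the "one submission per user" half could then be handled by Webform's per-user submission limit setting):

      # create 120 throwaway accounts with random passwords and print the list
      for i in $(seq -w 1 120); do
          pass=$(openssl rand -hex 4)
          drush user-create "user$i" --mail="user$i@survey.invalid" --password="$pass"
          echo "user$i $pass"
      done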

    Read the article

  • Add a USM user to a specified group in net-snmp

    - by tdi
    How do I add a USM-created user to a group? Is it even possible in net-snmp? This is my initial config; I would like user tdi to be in group snmp3grp: com2sec snmp3test localhost dummy group snmp3grp usm snmp3test view widok included system access snmp3grp "" usm priv exact widok all all createUser tdi MD5 "blablablabalab" DES rwuser tdi -V widok
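
    A sketch of what this usually looks like in snmpd.conf: the group directive simply maps a (security model, security name) pair to a group, so a USM user is added to a group with one more group line (assuming tdi has already been created with createUser as above):

      # map the USM user "tdi" into the existing group
      group snmp3grp usm tdi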

    Read the article
