Search Results

Search found 26969 results on 1079 pages for 'prevent default'.

Page 581/1079

  • Excel 2007 - "The macro may not be available in this workbook" Error

    - by Psycho Bob
    We use an Excel workbook that has been protected to prevent modification by end users. All in all, they are only able to edit certain tabs to add information that will then be used to generate information on other tabs using equations and such. On the tab with the equations, a button is present called "Prep for Internal Hard Copy Print." This button runs a macro that selects the information on the tab, unprotects it, then sends a print job to the user's default printer that contains the unprotected content. Normally this works like a champ. This time around, however, the macro is throwing the following error: Cannot run the macro "'FILENAME.xlsx'!MacroName". The macro may not be available in this workbook or all macros may be disabled. As far as I can tell, the macros are still present within the workbook. This sheet is normally a .xlsm, though the user saved it under a different filename as a .xlsx. Also, the macros appear only as MacroName in the .xlsm file and not as 'FILENAME.xlsx'!MacroName as they do in the .xlsx. Finally, when I open the .xlsm it asks if I want to enable the macro content, while the .xlsx does not prompt for this. Can anyone tell me what's going on with this sheet, or know of a way that I can get the macros working in the .xlsx without having to start over with a different sheet?

    Read the article

  • Apache, ISPConfig & Roundcube alias

    - by Jay Zus
    I'm using ISPConfig to set up all the websites on my server, but I also like to fiddle with the files myself to see how it works. Like you guessed: yes, I've broken something. I can't access the webmail set up by default on the server under the alias /webmail (I access it via http://xxx.xxx.xx.xx/webmail). Firefox tells me that "The page isn't redirecting properly -- Firefox has detected that the server is redirecting the request for this address in a way that will never complete." So I cleaned up my vhost files, and one of my websites works as intended; I think the problem comes from my default.vhost. Here's its content:

        <Directory /var/www/>
            AllowOverride None
            Order Deny,Allow
            Deny from all
        </Directory>
        <VirtualHost *:80>
            DocumentRoot /var/www/
            ServerAdmin [email protected]
            <Directory /var/www/>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    This isn't a lot, and I can't really see what's wrong with it; all I know is that it isn't the one that came with ISPConfig, and I can't find an original one. Here's the roundcube.conf that loads with Apache:

        # Those aliases do not work properly with several hosts on your apache server
        # Uncomment them to use it or adapt them to your configuration
        # Alias /roundcube/program/js/tiny_mce/ /usr/share/tinymce/www/
        # Alias /roundcube /var/lib/roundcube
        Alias /webmail /var/lib/roundcube/

        # Access to tinymce files
        <Directory "/usr/share/tinymce/www/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

        <Directory /var/lib/roundcube/>
            Options +FollowSymLinks
            # This is needed to parse /var/lib/roundcube/.htaccess. See its
            # content before setting AllowOverride to None.
            AllowOverride All
            order allow,deny
            allow from all
        </Directory>

        # Protecting basic directories:
        <Directory /var/lib/roundcube/config>
            Options -FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/lib/roundcube/temp>
            Options -FollowSymLinks
            AllowOverride None
            Order allow,deny
            Deny from all
        </Directory>
        <Directory /var/lib/roundcube/logs>
            Options -FollowSymLinks
            AllowOverride None
            Order allow,deny
            Deny from all
        </Directory>

    I didn't touch that file, but I guess it has something to do with the problem. I just can't find why it doesn't work. EDIT: These are the errors in my Apache log:

        [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 540 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
        [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
        [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
        [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
        [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
        [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
        [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
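
    Editor's note: a redirect loop on an aliased path usually means some other directive keeps rewriting the request before the Alias is ever served. A quick, hedged way to hunt for the culprit across ISPConfig's generated configs (paths assume a Debian-style Apache layout):

        # List every vhost Apache actually loads, then search them all for redirect rules
        apachectl -S
        grep -rn "Redirect\|RewriteRule" /etc/apache2/sites-enabled/ /etc/apache2/conf.d/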

    Read the article

  • Postfix qmgr process causes heavy overload on mailservers

    - by Mattias
    We are using Postfix as the MTA for our e-mail marketing software, and once in a while we see the load on one of the mailservers rise above 5. The load is caused by the qmgr process, which is the heart of Postfix, and I can see that it is consuming a lot of CPU resources. The process seems to be stuck, because after 15 minutes it is still doing the same thing and still increasing the load. Once I restart the postfix service, the load rapidly decreases to below 1 and Postfix continues to send e-mails without any problems. I'm wondering if anyone else has encountered this problem and if people have suggestions on how to prevent it. The problem shows up on all our mailservers, but almost never on more than one at a time. It seems to be triggered only when we are sending a mailing, but the size (10 or 100,000 e-mails) doesn't seem to make a difference. It happens maybe once a week or even less often, and the time and day are different every time. We tried to solve the problem by decreasing the amount of messages qmgr is allowed to process, but this didn't solve it. We are using Postfix 2.5.5 on Debian Lenny 5.0.8 (Postfix is installed through the default Debian repository). No special messages can be found in the logs (syslog, messages, mail.*). Thank you for your time.
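
    Editor's note: it can help to capture what the stuck qmgr is actually doing before restarting it. A minimal diagnostic sketch (assumes strace is installed; the main.cf limit shown is an illustrative value, not a recommendation):

        # Summarize the system calls the spinning qmgr is making (Ctrl-C to stop)
        strace -c -p "$(pgrep -f qmgr | head -1)"
        # The shape of the queue often explains qmgr load
        postqueue -p | tail -1
        # main.cf knob that caps how much of the queue qmgr keeps in memory:
        #   qmgr_message_active_limit = 10000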

    Read the article

  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc. There is not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak.

    The first question is: what's the equivalent of RFC 1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC 4193), which states that all ULAs should be assigned the prefix fc00, followed by a 40-bit random number in the routing prefix. This random number is to "prevent collisions when two IPv6 networks are interconnected" -- again, another reference to an enterprise-level function. If I have a small local LAN at home, numbered using 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 address into the real internet (a router will NAT & firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that any address in fc00::/7 is to be globally routable. Does this mean I'll need extra protections so my router does not automatically start advertising these private IPv6 addresses to the world?

    Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range, with the last 64 bits of the address composed of the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks.

    Third question: what is the scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, but there is almost no explanation at the smaller home network level. Can I see a scope ID AND CIDR notation used together? I.e., fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
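
    Editor's note: RFC 4193 derives the 40-bit Global ID pseudo-randomly, and locally assigned ULAs all fall under fd00::/8 (the fc00::/8 half of fc00::/7 is reserved). A minimal sketch for generating a compliant /48 prefix, assuming a Linux host with /dev/urandom:

        # 5 random bytes -> 40-bit Global ID; "fd" + Global ID = a /48 ULA prefix
        gid=$(od -An -N5 -tx1 /dev/urandom | tr -d ' ')
        printf 'fd%s:%s:%s::/48\n' "${gid:0:2}" "${gid:2:4}" "${gid:6:4}"

    A home LAN would then live in one /64 of that /48, e.g. fdxx:xxxx:xxxx:4::/64 as the rough analogue of 192.168.4.0/24.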

    Read the article

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, COM, or otherwise (even Win32 if it comes to that)? I need to set up a local startup script on a large number of computers, and unfortunately, Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup script installation, in order to save myself a lot of error-prone point-and-click manual labor. I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here:

        HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup

    However, if you create such a script and then delete its registry key, the script will remain listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance)... Another option that has occurred to me is to create an item in the Task Scheduler configured to run at system startup (see the sketch below). However, my concerns there are twofold:

    1. Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing).
    2. Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts?

    Thanks in advance for any suggestions, I've been banging my head against this one for a bit...
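
    Editor's note: the Task Scheduler half of this can at least be scripted. A hedged sketch using schtasks (the task name and script path are placeholders); note that, unlike a real Group Policy startup script, this does not hold the logon screen until the task finishes:

        REM Create a task that runs as SYSTEM at every boot
        schtasks /Create /TN "LocalStartupScript" /TR "cmd /c C:\scripts\startup.cmd" /SC ONSTART /RU SYSTEM /F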

    Read the article

  • What's the best way to update Ubuntu 9.04?

    - by Fu86
    I have an Ubuntu 9.04 server which no longer has package support. If I want to update my package lists, I get the following errors:

        Err http://de.archive.ubuntu.com jaunty-security/multiverse Packages
          404 Not Found [IP: 141.30.13.10 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/jaunty/main/binary-amd64/Packages
          404 Not Found [IP: 141.30.13.10 80]
        ....

    I read on the official Ubuntu support page that there is an update-manager-core package for upgrading to a new release. Unfortunately I don't have this package installed, and I am unable to install it because of the lack of package sources. EDIT: Installing the update-manager-core package from another release doesn't work, because it depends on a higher version of python-apt. (Tried with 10.04.)

        $ dpkg -i update-manager-core_0.134.7_amd64.deb
        Selecting previously deselected package update-manager-core.
        (Reading database ... 28743 files and directories currently installed.)
        Unpacking update-manager-core (from update-manager-core_0.134.7_amd64.deb) ...
        dpkg: dependency problems prevent configuration of update-manager-core:
         update-manager-core depends on python-apt (>= 0.7.13.4ubuntu3); however:
          Version of python-apt on system is 0.7.9~exp2ubuntu10.
         update-manager-core depends on python-gnupginterface; however:
          Package python-gnupginterface is not installed.
        dpkg: error processing update-manager-core (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         update-manager-core

    So, what's the best way to upgrade to the current release without reinstalling the complete (virtual) server?
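
    Editor's note: when a release reaches end of life its packages move to old-releases.ubuntu.com, which usually restores apt enough to install the upgrade tooling. A minimal sketch (back up sources.list first; the mirror names are taken from the errors above):

        # Point sources at the EOL archive, then refresh and install the upgrader
        sudo sed -i -e 's/de\.archive\.ubuntu\.com/old-releases.ubuntu.com/g' \
                    -e 's/security\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install update-manager-core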

    Read the article

  • Problem restoring from tar backup: why are there /dev/disk/by-id/ symlinks and how can I avoid them?

    - by SK.
    Hello, I'm trying to make a bare-bones backup system with the most basic tools available on openSUSE 11.3 (in this case: bash, fdisk, tar & grub legacy). Here's the workflow for my scripts:

    backup.sh (run from an external system, e.g. a LiveCD):
    1. make an fdisk script ($fscript) from the output of fdisk -l [works]
    2. mount the partitions from the system's fstab [works]
    3. tar the crucial stuff into file.tgz [works]

    restore.sh (run from an external system, e.g. a LiveCD):
    1. run fdisk $dest < $fscript to restore partitioning [works]
    2. format and mount partitions from the system's fstab [fails]
    3. extract from file.tgz [works when mounting manually]
    4. restore grub [fails]

    I have recently noticed that openSUSE (though I'm sure it has nothing to do with the distro) has different output in /etc/fstab and /boot/grub/menu.lst; more precisely, the partition name is for example "/dev/disk/by-id/numbers-brandname-morenumbers-part2" instead of "/dev/sda2" -- but it basically is a simple symlink. My questions about this:

    1. what is the point of such symlinks, especially if we're restoring on a different disk?
    2. is there a way to cleanly prevent the creation of those symlinks and use the "true" /dev/sdx everywhere instead?
    3. if the previous is no, do you know a way to replace those symlinks on the fly in a text file?

    I tried this script, but it only works if the line starts with the symlink path (the case for fstab, not menu.lst):

        ### search and replace /dev/disk/by-id/... with /dev/sdx
        while read oldVolume rest; do              # get first element, ignore rest of line
            if [[ "$oldVolume" =~ ^/dev/disk/by-id/.*(-part[0-9]*$)? ]]; then
                newVolume=$(readlink $oldVolume)        # replace pointer by pointee, returns "../../sdx"
                echo /dev/${newVolume##*/} $rest >> TMP # format to "/dev/sdx", write line
            else
                echo $oldVolume $rest >> TMP            # nothing to do
            fi
        done < $file
        mv -f TMP $file # save changes

    I've had trouble finding a solution to this on Google, so I was hoping some of the members here could help me. Thank you.
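
    Editor's note: the by-id symlink can appear anywhere on a line (menu.lst puts it after the kernel options), so resolving every link up front and substituting with sed sidesteps the first-token limitation. A hedged sketch:

        # Build a sed script mapping each /dev/disk/by-id/* symlink to its real device
        for link in /dev/disk/by-id/*; do
            dev=$(readlink -f "$link")              # e.g. /dev/sda2
            printf 's|%s|%s|g\n' "$link" "$dev"
        done > /tmp/byid.sed
        sed -i -f /tmp/byid.sed "$file"             # rewrite fstab or menu.lst in place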

    Read the article

  • tar Cannot stat: No such file or directory

    - by VVP
    Hi all, I got this error during my mail server backup:

        2010-09-16 06:24:20 ERROR backup of /var/mail/vhosts failed:
        tar: Removing leading `/' from member names
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284588471.Vfd00I16e0223M187263.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284587441.Vfd00I16e0220M85965.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284588863.Vfd00I16e0225M370937.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284602404.Vfd00I16e022aM416444.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284594551.Vfd00I16e0228M678444.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284588944.Vfd00I16e0226M622591.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284587271.Vfd00I16e021fM96119.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284599458.Vfd00I16e0229M181400.server.host-name\:2,: Cannot stat: No such file or directory
        tar: Error exit delayed from previous errors

    Did this happen because a user deleted his messages? Is there any way to prevent this? Well, I am assuming it can happen not only with e-mail backups. Can I rely on tar & gzip as a mail backup system?
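
    Editor's note: Maildir delivery renames and deletes files constantly, so some will vanish between tar's directory scan and its read, which is exactly what this error reports. A hedged sketch of two common workarounds (GNU tar assumed; the backup paths are placeholders):

        # Option 1: don't abort on files that disappear mid-backup
        tar --ignore-failed-read -czf /backup/mail-$(date +%F).tgz /var/mail/vhosts
        # Option 2: snapshot with rsync first, then tar the quiesced copy
        rsync -a --delete /var/mail/vhosts/ /backup/vhosts-snapshot/
        tar -czf /backup/mail-$(date +%F).tgz -C /backup vhosts-snapshot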

    Read the article

  • CentOS 6.3 + Varnish 3.0.3 gets the wrong backend

    - by Sola.Shawn
    I installed Varnish 3.0.3 with yum! I've got a problem with it. My Varnish config is below:

        backend weibo {
            .host = "192.168.1.178";
            .port = "8080";
            .connect_timeout = 20s;
            .first_byte_timeout = 20s;
            .between_bytes_timeout = 20s;
        }
        backend smth {
            .host = "192.168.1.115";
            .port = "8080";
            .connect_timeout = 20s;
            .first_byte_timeout = 20s;
            .between_bytes_timeout = 20s;
        }
        sub vcl_recv {
            if (req.restarts == 0) {
                if (req.http.x-forwarded-for) {
                    set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
                } else {
                    set req.http.X-Forwarded-For = client.ip;
                }
            }
            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                /* Non-RFC2616 or CONNECT which is weird. */
                return(pipe);
            }
            if (req.request != "GET" && req.request != "HEAD") {
                /* We only deal with GET and HEAD by default */
                return(pass);
            }
            if (req.http.Authorization || req.http.Cookie) {
                /* Not cacheable by default */
                return(pass);
            }
            if (req.http.host ~ "^(hk.)?weibo.com") {
                set req.http.host = "hk.weibo.com";
                set req.backend = weibo;
            } elseif (req.http.host ~ "^(www.)?newsmth.net") {
                set req.http.host = "www.newsmth.net";
                set req.backend = smth;
            } else {
                error 404 "Unknown virtual host";
            }
            return(lookup);
        }
        sub vcl_pipe { return(pipe); }
        sub vcl_pass { return(pass); }
        sub vcl_hash {
            hash_data(req.url);
            if (req.http.host) {
                hash_data(req.http.host);
            } else {
                hash_data(server.ip);
            }
            return(hash);
        }
        sub vcl_hit {
            if (req.http.Cache-Control ~ "no-cache" || req.http.Cache-Control ~ "max-age=0" || req.http.Pragma ~ "no-cache") {
                set obj.ttl = 0s;
                return (restart);
            }
            return(deliver);
        }
        sub vcl_miss { return(fetch); }
        sub vcl_fetch {
            if (beresp.ttl <= 120s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
                /* Mark as "Hit-For-Pass" for the next 2 minutes */
                set beresp.ttl = 10s;
                return (hit_for_pass);
            }
            return(deliver);
        }
        sub vcl_deliver { return(deliver); }
        sub vcl_init { return(ok); }
        sub vcl_fini { return(ok); }

    And my Win7 hosts file has these lines added:

        192.168.1.178 www.newsmth.net
        192.168.1.178 hk.weibo.com

    I start Varnish with:

        varnishd -f /etc/varnish/dd.vcl -s malloc,100M -a 0.0.0.0:8000 -T 0.0.0.0:3500

    When I access hk.weibo.com:8000 it works fine, and I get: "Hello,I am hk.weibo.com!" But when I access http://www.newsmth.net:8000/, I also get: "Hello,I am hk.weibo.com!" My question is: why isn't it "Hello,I am www.newsmth.net!"? Varnish fetched the content from the wrong backend. Does anyone know how to fix this?
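
    Editor's note: Varnish 3 can report which backend each transaction was handed to, which narrows a wrong-backend problem down quickly; note too that the Host header will arrive with the :8000 port suffix, so it is worth confirming exactly what Varnish sees. A hedged debugging sketch:

        # Show the backend chosen for each client transaction
        varnishlog -c -i Backend
        # Confirm the exact Host header arriving from the browser
        varnishlog -c -i RxHeader | grep -i host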

    Read the article

  • Setting up a network where packets are traced

    - by Marcus
    My situation is the following: I have an internet connection which is shared between people. More or less obviously, people are using it to download illegal stuff. Since I'm the owner of the connection, I want to avoid being sued. I don't want to prevent the people from doing the things they want, but I want to be legally safe. Now, I have relatively little competence in network administration, so I was wondering:

    1. Is it possible to set up a network where the source and destination of the packets are logged? I would use this to prove, in case of a lawsuit, that the traffic was coming from a given machine.
    2. If the idea is feasible, is there any wireless router on which I can install Linux, where I can install the packet sniffer?
    3. How much space could the logs take (containing only the timestamp/source/destination), per GB of traffic? A very rough estimate would be very helpful.
    4. If a machine on my network is sending bittorrent packets to a certain IP, would this log be able to reflect the time, source IP and destination IP? I assume that obviously the torrent data would be encrypted and un-decryptable.

    Am I missing something? Is there a better strategy (one sketch follows below)? Any pointer to documentation would be helpful as well - in that case, I would use it as a starting point.
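
    Editor's note: on any Linux-based router (OpenWrt and similar), connection metadata can be logged with netfilter alone; this is a technical sketch only, not legal advice, and the log prefix is arbitrary:

        # Log each NEW forwarded connection once (timestamp, src, dst and ports go to syslog)
        iptables -A FORWARD -m state --state NEW -j LOG --log-prefix "FWD-NEW: "
        # For higher volumes, conntrack event logging captures the same data more cheaply
        conntrack -E -o timestamp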

    Read the article

  • ASA 5505 stops local internet when connected to VPN

    - by g18c
    Hi, I have a Cisco ASA router running firmware 8.2(5), which hosts an internal LAN on 192.168.30.0/24. I have used the VPN Wizard to set up L2TP access, and I can connect in fine from a Windows box and can ping hosts behind the VPN router. However, when connected to the VPN I can no longer ping out to the internet or browse web pages. I would like to be able to access the VPN and also browse the internet at the same time - I understand this is called split tunneling (I have ticked the setting in the wizard, but to no effect); if so, how do I do this? Alternatively, if split tunneling is a pain to set up, then making the connected VPN client have internet access from the ASA WAN IP would be OK. Thanks, Chris

        names
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.30.1 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address 208.74.158.58 255.255.255.252
        !
        ftp mode passive
        access-list inside_nat0_outbound extended permit ip any 10.10.10.0 255.255.255.128
        access-list inside_nat0_outbound extended permit ip 192.168.30.0 255.255.255.0 192.168.30.192 255.255.255.192
        access-list DefaultRAGroup_splitTunnelAcl standard permit 192.168.30.0 255.255.255.0
        access-list DefaultRAGroup_splitTunnelAcl_1 standard permit 192.168.30.0 255.255.255.0
        pager lines 24
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        ip local pool LANVPNPOOL 192.168.30.220-192.168.30.249 mask 255.255.255.0
        icmp unreachable rate-limit 1 burst-size 1
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 192.168.30.0 255.255.255.0
        route outside 0.0.0.0 0.0.0.0 208.74.158.57 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        timeout floating-conn 0:00:00
        dynamic-access-policy-record DfltAccessPolicy
        http server enable
        http 192.168.30.0 255.255.255.0 inside
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
        crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
        crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
        crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 TRANS_ESP_3DES_SHA
        crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 10
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        dhcpd auto_config outside
        !
        threat-detection basic-threat
        threat-detection statistics access-list
        no threat-detection statistics tcp-intercept
        webvpn
        group-policy DefaultRAGroup internal
        group-policy DefaultRAGroup attributes
         dns-server value 192.168.30.3
         vpn-tunnel-protocol l2tp-ipsec
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value DefaultRAGroup_splitTunnelAcl_1
        username user password Cj7W5X7wERleAewO8ENYtg== nt-encrypted privilege 0
        tunnel-group DefaultRAGroup general-attributes
         address-pool LANVPNPOOL
         default-group-policy DefaultRAGroup
        tunnel-group DefaultRAGroup ipsec-attributes
         pre-shared-key *****
        tunnel-group DefaultRAGroup ppp-attributes
         no authentication chap
         authentication ms-chap-v2
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum client auto
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns preset_dns_map
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect rsh
          inspect rtsp
          inspect esmtp
          inspect sqlnet
          inspect skinny
          inspect sunrpc
          inspect xdmcp
          inspect sip
          inspect netbios
          inspect tftp
          inspect ip-options
        !
        service-policy global_policy global
        prompt hostname context
        : end
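
    Editor's note: the config above already points split tunneling at DefaultRAGroup_splitTunnelAcl_1, but the built-in Windows L2TP client ignores that unless "Use default gateway on remote network" is unticked in the adapter's advanced TCP/IP settings. For the stated alternative (sending client internet traffic back out via the ASA's WAN IP), a heavily hedged 8.2-style sketch, to be verified against your setup before use:

        ! Allow VPN traffic to re-enter and leave the same (outside) interface
        same-security-traffic permit intra-interface
        ! PAT the VPN pool's subnet on its way back out
        ! (some 8.2 setups need a trailing "outside" keyword on this nat line)
        nat (outside) 1 192.168.30.192 255.255.255.192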

    Read the article

  • Scripting an 'empty' password in /etc/shadow

    - by paddy
    I've written a script to add CVS and SVN users on a Linux server (Slackware 14.0). This script creates the user if necessary, and either copies the user's SSH key from an existing shell account or generates a new SSH key. Just to be clear, the accounts are specifically for SVN or CVS. So the entry in /home/${username}/.ssh/authorized_keys begins with (using CVS as an example):

        command="/usr/bin/cvs server",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa ....etc...etc...etc...

    Actual shell access will never be allowed for these users - they are purely there to provide access to our source repositories via SSH. My problem is that when I add a new user, they get an empty password in /etc/shadow by default. It looks like:

        paddycvs:!:15679:0:99999:7:::

    If I leave the shadow file as is (with the !), SSH authentication fails. To enable SSH, I must first run passwd for the new user and enter something. I have two issues with doing that. First, it requires user input, which I can't allow in this script. Second, it potentially allows the user to log in at the physical terminal (if they have physical access, which they might, and know the secret password -- okay, so that's unlikely). The way I normally prevent users from logging in is to set their shell to /bin/false, but if I do that then SSH doesn't work either! Does anyone have a suggestion for scripting this? Should I simply use sed or something and replace the relevant line in the shadow file with a preset encrypted secret password string? Or is there a better way? Cheers =)
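
    Editor's note: this can be done non-interactively without editing /etc/shadow by hand. A hedged sketch: a password field of '*' never matches any typed password (so password logins are impossible), but unlike '!' it does not mark the account locked, so key-based SSH keeps working:

        # Permit key-based SSH while making password login impossible
        usermod -p '*' "$username"
        # Or install a real pre-hashed secret non-interactively:
        usermod -p "$(openssl passwd -1 "$secret")" "$username"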

    Read the article

  • Autologin 2 Windows users OR Login another user from the desktop

    - by fpdragon
    I'm using two Windows users on my HTPC at the same time. One is just for watching videos and one is for administration via remote. This setup is quite ideal for me, since Windows can handle multiple concurrent logins thanks to the "RDP concurrent hack" (Google it). The problem is, I want both users to be logged in automatically when the PC is started. It shall be possible to watch TV, and the admin user shall also be logged in automatically to start my scripts and other tasks, even if I haven't logged in via remote desktop manually. Later, when I want to administer my HTPC, I can just connect to the admin user via RDP without interrupting the video playback on the actual HTPC's screen, and check on my cleanup tasks, downloads, ... which have already been executed for this admin user. But right now I have found no solution to automatically log in user A from user B's desktop, and I have also found no solution to autologin both users immediately at startup. As a workaround I have to fire up my other notebook and log in once with the remote user via RDP; from then on the remote admin user runs concurrently with the main user in the background of the machine. The other workaround would be... after startup, switch user from the main user to the admin user and then back again. But that also requires manual steps. I'm on a Windows 8 system right now, but any info for Win7 or XP would also be interesting. Thanks a lot for any ideas. PS: just to prevent useless posts... don't tell me that only one user can be logged in to Windows. ;)
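
    Editor's note: one script-only idea is to let the main user auto-logon normally, then have a startup task open a loopback RDP session for the admin account. A heavily hedged sketch (it assumes the concurrent-sessions patch is in place; the account name and password are placeholders, and the 127.0.0.2 loopback trick may not work on every build):

        REM Store credentials for a loopback RDP hop, then connect the admin session
        cmdkey /generic:TERMSRV/127.0.0.2 /user:HTPC\admin /pass:secret
        mstsc /v:127.0.0.2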

    Read the article

  • /usr/bin/sshd isn't linked against PAM on one of my systems. What is wrong and how can I fix it?

    - by marc.riera
    Hi, I'm using AD as my user account server with LDAP. Most of the servers run with UsePAM yes, except this one; its sshd lacks PAM support:

        root@linserv9:~# ldd /usr/sbin/sshd
            linux-vdso.so.1 => (0x00007fff621fe000)
            libutil.so.1 => /lib/libutil.so.1 (0x00007fd759d0b000)
            libz.so.1 => /usr/lib/libz.so.1 (0x00007fd759af4000)
            libnsl.so.1 => /lib/libnsl.so.1 (0x00007fd7598db000)
            libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007fd75955b000)
            libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fd759323000)
            libc.so.6 => /lib/libc.so.6 (0x00007fd758fc1000)
            libdl.so.2 => /lib/libdl.so.2 (0x00007fd758dbd000)
            /lib64/ld-linux-x86-64.so.2 (0x00007fd759f0e000)

    I have these packages installed:

        root@linserv9:~# dpkg -l|grep -E 'pam|ssh'
        ii denyhosts         2.6-2.1             an utility to help sys admins thwart ssh hac
        ii libpam-modules    0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules for PAM
        ii libpam-runtime    0.99.7.1-5ubuntu6.1 Runtime support for the PAM library
        ii libpam-ssh        1.91.0-9.2          enable SSO behavior for ssh and pam
        ii libpam0g          0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules library
        ii libpam0g-dev      0.99.7.1-5ubuntu6.1 Development files for PAM
        ii openssh-blacklist 0.1-1ubuntu0.8.04.1 list of blacklisted OpenSSH RSA and DSA keys
        ii openssh-client    1:4.7p1-8ubuntu1.2  secure shell client, an rlogin/rsh/rcp repla
        ii openssh-server    1:4.7p1-8ubuntu1.2  secure shell server, an rshd replacement
        ii quest-openssh     5.2p1_q13-1         Secure shell

    What am I doing wrong? Thanks.

    Edit:

        root@linserv9:~# cat /etc/pam.d/sshd
        # PAM configuration for the Secure Shell service

        # Read environment variables from /etc/environment and
        # /etc/security/pam_env.conf.
        auth       required     pam_env.so # [1]
        # In Debian 4.0 (etch), locale-related environment variables were moved to
        # /etc/default/locale, so read that as well.
        auth       required     pam_env.so envfile=/etc/default/locale

        # Standard Un*x authentication.
        @include common-auth

        # Disallow non-root logins when /etc/nologin exists.
        account    required     pam_nologin.so

        # Uncomment and edit /etc/security/access.conf if you need to set complex
        # access limits that are hard to express in sshd_config.
        # account  required     pam_access.so

        # Standard Un*x authorization.
        @include common-account

        # Standard Un*x session setup and teardown.
        @include common-session

        # Print the message of the day upon successful login.
        session    optional     pam_motd.so # [1]

        # Print the status of the user's mailbox upon successful login.
        session    optional     pam_mail.so standard noenv # [1]

        # Set up user limits from /etc/security/limits.conf.
        session    required     pam_limits.so

        # Set up SELinux capabilities (need modified pam)
        # session  required     pam_selinux.so multiple

        # Standard Un*x password updating.
        @include common-password

    Edit2: UsePAM yes fails. With this configuration, ssh fails to start:

        root@linserv9:/home/admmarc# cat /etc/ssh/sshd_config |grep -vE "^[ \t]*$|^#"
        Port 22
        Protocol 2
        ListenAddress 0.0.0.0
        RSAAuthentication yes
        PubkeyAuthentication yes
        AuthorizedKeysFile .ssh/authorized_keys
        ChallengeResponseAuthentication yes
        UsePAM yes
        Subsystem sftp /usr/lib/sftp-server
        root@linserv9:/home/admmarc#

    The error it gives is as follows:

        root@linserv9:/home/admmarc# /etc/init.d/ssh start
         * Starting OpenBSD Secure Shell server sshd
        /etc/ssh/sshd_config: line 75: Bad configuration option: UsePAM
        /etc/ssh/sshd_config: terminating, 1 bad configuration options
        ...fail!
        root@linserv9:/home/admmarc#
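
    Editor's note: the package list shows quest-openssh installed alongside Ubuntu's openssh-server; a vendor build compiled without PAM would explain both the missing libpam in the ldd output and sshd rejecting the UsePAM option. A hedged way to check which build is actually answering:

        # Which package owns the sshd binary in use?
        dpkg -S /usr/sbin/sshd
        # Does the distro's own build link PAM? (it should)
        ldd "$(dpkg -L openssh-server | grep 'sbin/sshd$')" | grep libpam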

    Read the article

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up, and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page? Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now I realize the script is not likely as savvy as it could be, but it is doing what I need at the moment, except that you can see from the rm command that I would just like to prevent wget from downloading those files in the first place if possible. I almost forgot to mention: there are two wget commands, because the first one downloads the page as index.html, which for some reason does not open in my browser. However, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. If I just issue the second wget command as it is, then that page (really the same file, with an alternate name) opens up fine. That is something that, if I could fix it, would also help to streamline the process.
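
    Editor's note: wget's reject list does this at download time. A hedged sketch (the -R patterns match file names, so the known boilerplate images can be listed verbatim; the list is abbreviated here):

        # Skip the site's standard images instead of deleting them afterwards
        wget -p -nd -k -A jpg,html \
             -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg" \
             http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/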

    Read the article

  • Disable CTRL+mouse wheel zooming in Chrome?

    - by Peter Nore
    I'm a normal-sighted person and I would like to view pages at 100% all the time. I use keyboard shortcuts that involve CTRL a lot, so about twenty times a day I accidentally hit CTRL at the same time that I'm scrolling, which results in the page being reflowed and repainted. This is annoying because it can take up to 30 seconds to fix, depending on how complex the site layout is. On sites with dynamic layout such as Google Docs the problem is more serious; accidentally hitting CTRL+mouse wheel corrupts the display and forces me to refresh the page entirely, sometimes causing me to lose information in the process. I would like to either decouple CTRL+mouse wheel from zoom, or disable zoom functionality altogether. This is possible in Firefox by using about:config; is there a similar way to edit detailed settings in Chrome? Would I have access to the detailed settings if I used Chromium instead of Chrome? I'll probably jump ship back to Firefox if I can't solve this problem. There is a superuser question that asks basically the same thing I'm asking, but for Firefox and Internet Explorer exclusively. Other people on the Chrome forum have had related issues, but none have the same problem. "I would really like it if I could deactivate the auto zoom in/out." had "something with laptops and Windows 7", not the feature built into Chrome. Other people have had PDF-specific issues, which don't concern me. I've also tried searching for extensions that allow you to disable the scroll zoom; I had hoped that "Zoom Lock" would have the ability to lock the zoom at 100% and prevent CTRL+scroll wheel from distorting the display, but it doesn't work for my use case. Google Chrome version 9.0.597.84 (Official Build 72991). Operating System: Ubuntu 10.10.

    Read the article

  • Nginx: Proper use of limit_req_zone and limit_req

    - by xperator
    I have 2 websites running on a VPS. Their purpose is sharing music files and publishing news. Both of them use Wordpress. What I am trying to do is prevent little hackers from flooding the webserver, putting stress on the server and making it crash. The problem is that after using limit_req_zone and limit_req, my website became very slow. Browsing the Wordpress control panel takes a long, long time. I tried changing values, but it didn't improve much. I guess the problem is Wordpress, because it's the only script I am using on both the front and back end. Here is the last setting, which seems to be more responsive than the others:

        limit_req_zone $binary_remote_addr zone=flood:5m rate=10r/m;
        location ~ \.php$ {
            limit_req zone=flood burst=100 nodelay;
        }

    What are the optimal values that should be used in my case (wp)? I want the website to keep its normal behavior while, on the other hand, stopping lifeless people from flooding it. Another question: is it safe and enough to use limit_req only on php files?
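
    Editor's note: rate=10r/m admits one request every six seconds per IP, which will throttle any single person clicking around wp-admin; a per-second rate with a small queued burst is the usual compromise. A hedged sketch (illustrative starting values, not tuned recommendations):

        # ~5 requests/second per client; short bursts are queued instead of rejected
        limit_req_zone $binary_remote_addr zone=flood:10m rate=5r/s;
        location ~ \.php$ {
            limit_req zone=flood burst=10;
        }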

    Read the article

  • 9000+ different subdomains 301 to main domain, .htaccess apache

    - by Karim
    I bought a domain that had various subdomains such as:

        Kim.domain.com/whatever
        john.domain.com/whatever1
        Lizo.domain.com/whatever2
        Simon.domain.com/whatever1

    This was in the thousands, and there were also links to these pages. I'd like to do a 301 redirect for all these URLs to http://domain.com. Any idea how this could be done? This is for an Apache web server and needs to be done via .htaccess. I have implemented the solution from reading the answer below:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www. domain.com$
        RewriteCond %{HTTP_HOST} !^$
        RewriteRule ^/(.*)$ http:/ / www. domain.com/$1 [L,R=301]

    However I have a slight problem: I would like to redirect all subdomains + subfolders to http://www. domain.com/, with the exception of http: //domain. com/subfolder/, which should instead redirect to http: // www.domain. com/subfolder/ [i.e. an exception for the no-subdomain case]. I'm guessing I need to add an exception; what can I do to implement this? Note: the example URLs above have had spaces added to them to prevent spam blocks from blocking the post.
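
    Editor's note: the exception can be expressed as a separate rule that fires only for the bare domain and keeps the requested path, placed before the catch-all. A hedged sketch (domain.com is the question's placeholder; spaces removed; in .htaccess the rule pattern carries no leading slash):

        RewriteEngine On
        # Bare domain: redirect and preserve the path
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [L,R=301]
        # Every other host (the 9000+ subdomains): redirect to the root
        RewriteCond %{HTTP_HOST} !^www\.domain\.com$ [NC]
        RewriteCond %{HTTP_HOST} !^$
        RewriteRule ^ http://www.domain.com/ [L,R=301]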

    Read the article

  • SMTP message rate control on Ubuntu 8.04, preferably with postfix

    - by TimDaMan
    Maybe I am chasing a bug, but I am trying to set up an SMTP proxy of sorts. I have a Postfix server which receives all the email for a collection of servers/clients. It then uses a smarthost (relayhost=...) to forward its mail to our corporate MTA. I would like to limit the number of messages an individual server can relay, to prevent swamping the corporate MTA. Postfix has a program called "anvil" that is capable of tracking stats about mail to be used for such things, but it doesn't seem to be executed. I ran "inotifywait -m /usr/lib/postfix/anvil" while I started Postfix and sent a number of messages through it from a remote server; inotifywait indicated anvil was never run. Has anyone gotten postfix/anvil rate controls to work?

    main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        myhostname = site-server-q9
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost
        relayhost = (our outgoing mail relay)
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 10.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = 10.X.X.X
        smtpd_client_message_rate_limit = 1
        anvil_rate_time_unit = 1h

    master.cf extract:

        anvil unix - - - - 1 anvil
        smtp inet n - - - - smtpd
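
    Editor's note: the smtpd_client_* limits are enforced by anvil, but clients listed in smtpd_client_event_limit_exceptions are exempt, and that parameter defaults to $mynetworks; with mynetworks covering 10.0.0.0/8 above, the sending servers may never be counted at all, which would explain anvil never starting. A hedged main.cf sketch (values illustrative):

        # Stop exempting the LAN from rate limits (empty = no exceptions)
        smtpd_client_event_limit_exceptions =
        smtpd_client_message_rate_limit = 60
        anvil_rate_time_unit = 60s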

    Read the article

  • SSL timeout on some sites, across all browsers, on Mac OS X Snow Leopard

    - by dansays
    For the past several weeks, I've been receiving "Error 7 (net::ERR_TIMED_OUT): The operation timed out" when I attempt to connect to either Twitter or Paypal via SSL. I get this specific error in Google Chrome, but the same problem occurs in both Safari and Firefox. Other sites work fine, and other computers on my network can access these two sites. I have no firewall settings that would prevent me from accessing these sites over port 443. I notice that Twitter and Paypal both have "Verisign Class 3 Extended Validation SSL CA" certificates. It is unclear whether this is related to the problem. In an effort to troubleshoot, I attempted to open the test sites referenced on Verisign's root certificate support page, which worked fine. Just to be sure, I downloaded and installed the root package file and installed all included Verisign certificates. No joy. I feel like I've hit a dead end. Any ideas? Update the first: I also cannot connect to FedEx.com, which also has a Verisign Class 3 Extended Validation cert. Update the second: Aaaaaaand it fixed itself. I did nothing. Or, I did something that worked, but in a delayed fashion. Frustrating, but a win is a win. I'll take it.

    Read the article

  • Why does my laptop send ARP requests to itself?

    - by user58859
    I have just started to learn about protocols. While studying the packets in Wireshark, I came across an ARP request sent by my machine to my own IP. Here are the details of the packet:

        No. Time     Source            Destination Protocol Info
        15  1.463563 IntelCor_aa:aa:aa Broadcast   ARP      Who has 192.168.1.34? Tell 0.0.0.0

        Frame 15: 42 bytes on wire (336 bits), 42 bytes captured (336 bits)
            Arrival Time: Jan 7, 2011 18:51:43.886089000 India Standard Time
            Epoch Time: 1294406503.886089000 seconds
            [Time delta from previous captured frame: 0.123389000 seconds]
            [Time delta from previous displayed frame: 0.123389000 seconds]
            [Time since reference or first frame: 1.463563000 seconds]
            Frame Number: 15
            Frame Length: 42 bytes (336 bits)
            Capture Length: 42 bytes (336 bits)
            [Frame is marked: False]
            [Frame is ignored: False]
            [Protocols in frame: eth:arp]
            [Coloring Rule Name: ARP]
            [Coloring Rule String: arp]
        Ethernet II, Src: IntelCor_aa:aa:aa (aa:aa:aa:aa:aa:aa), Dst: Broadcast (ff:ff:ff:ff:ff:ff)
            Destination: Broadcast (ff:ff:ff:ff:ff:ff)
                Address: Broadcast (ff:ff:ff:ff:ff:ff)
                .... ...1 .... .... .... .... = IG bit: Group address (multicast/broadcast)
                .... ..1. .... .... .... .... = LG bit: Locally administered address (this is NOT the factory default)
            Source: IntelCor_aa:aa:aa (aa:aa:aa:aa:aa:aa)
                Address: IntelCor_aa:aa:aa (aa:aa:aa:aa:aa:aa)
                .... ...0 .... .... .... .... = IG bit: Individual address (unicast)
                .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
            Type: ARP (0x0806)
        Address Resolution Protocol (request)
            Hardware type: Ethernet (0x0001)
            Protocol type: IP (0x0800)
            Hardware size: 6
            Protocol size: 4
            Opcode: request (0x0001)
            [Is gratuitous: False]
            Sender MAC address: IntelCor_aa:aa:aa (aa:aa:aa:aa:aa:aa)
            Sender IP address: 0.0.0.0 (0.0.0.0)
            Target MAC address: 00:00:00_00:00:00 (00:00:00:00:00:00)
            Target IP address: 192.168.1.34 (192.168.1.34)

    Here the sender's MAC address is mine (I have hidden my MAC address), and the target IP is mine. Why is my machine sending an ARP request to itself? I found 3 packets of this type, and there was no ARP reply to any of them. Can anybody explain this? (My operating system is Windows 7. I am directly connected to a wifi modem. I got these packets as soon as I started my connection.) I also want one suggestion: in many places I have read that RFCs are enough to study protocols. I studied RFC 826 on ARP, and I personally feel that it is not enough at all. Any suggestion regarding this? Is there more than one RFC for a protocol? I want to study protocols in great detail. Can anybody guide me on this? Thanks in advance.

    Read the article

  • All application passwords lost on Windows 7

    - by Rynardt
    A couple of days ago I changed my Windows 7 login password. My laptop is on my company's domain, so password changes are done over the internal network. Since changing the password I have noticed that all my saved Chrome passwords are missing. Skype, Windows Live, Internet Explorer and Outlook have also lost their saved passwords. I guess there could be more applications with lost passwords, but I have not opened them yet. This makes me think that most applications save their passwords to a general password vault on the Windows system, and that this vault somehow got corrupted when I changed my domain login password for Windows. Does anyone have any idea how to fix this and how to prevent it from happening again? EDIT (more info): I do development work at the office, so most of the time I bypass the firewall and connect directly to the internet gateway. Now and then I connect to the company wifi network to do printing and access files on a NAS. So by default my laptop does not connect to the wifi hotspot. On this occasion, to update the password, I had to connect to the wifi. So, referring to the comment by OmnipotentEntity below: could this have happened when the system rebooted without a connection to the network, given that the laptop does not auto-connect to the wifi hotspot?

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    Over the past week, after doing more research on the subject, I have been wondering about something I neglected to understand all these years: write-caching policy, which I always left on its default setting. Write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand it all, but correct me if I have erred somewhere:

    Write-through caching itself is not part of the write-caching policy per se; it is when data is written to both the cache and the storage device, so that if Windows needs that data again later, it is retrieved from the cache and not from the storage device. This means only read performance is improved, as there is no need to wait for the storage device to read the required data again. Since data is still written to the storage device, write performance isn't improved, and there is no risk of data loss or corruption in case of power failure or a system crash; only the data in the cache gets lost. This option seems to be enabled by default, and is recommended for removable devices so that there is no need for the user to invoke "Safely Remove Hardware".

    Write-back caching is similar to the above, but without immediately writing data to the storage device: data is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but presents a risk if a power failure or system crash occurs: not only is the data that was still to be written to the storage device lost, but file inconsistencies or a corrupted file system may result. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.

    Write-cache buffer flushing, I reckon, is similar to write-back caching, but enables immediate release and writing of data from the cache to the storage device right before a power outage occurs (though I don't know whether it also applies to the occasional system crash). This option seems complementary to write-back caching, reducing or potentially eliminating the risk of data loss and file system corruption.

    I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with less wear on the SSD: I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM, and worth taking the risk of utilizing by enabling write-back caching? I read somewhere that generally a storage device's cache is faster than RAM, but I want to be sure. Additionally, I read that write caching should be enabled, since data that is to be written to NAND flash is kept for a while in the cache; provided the data gets modified a lot before finally being written, holding it and releasing it periodically reduces writes to the SSD, thereby reducing wear. Now regarding write-cache buffer flushing: I have heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing. However, once again, I don't know whether SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.

    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do this deeper research: I suspect the write-caching policy is the culprit behind the SSD's freezing issue, on the assumption that the release of cached data is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing are conflicting with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switch to AHCI later on, and finally disable DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman.

    Read the article

  • JVM disappeared on Mac OS X Snow Leopard 10.6.8

    - by weisjohn
    I'm working in Eclipse one night (also using Android's DDMS from the command line). The next morning, I open the lid... attempt to run Eclipse, and get an error:

        me$ sudo /Applications/eclipse/eclipse
        JavaVM: requested Java version ((null)) not available. Using Java at "" instead.
        JavaVM: Failed to load JVM: /bundle/Libraries/libserver.dylib

    So I then attempt to find out where my JDKs are pointed:

        me$ ls -la /System/Library/Frameworks/JavaVM.framework/Versions/
        total 64
        drwxr-xr-x 12 root wheel 408 Nov 16 10:44 .
        drwxr-xr-x 12 root wheel 408 Sep  7 09:39 ..
        lrwxr-xr-x  1 root wheel   5 Sep  7 17:07 1.3 -> 1.3.1
        drwxr-xr-x  3 root wheel 102 Dec  2  2009 1.3.1
        lrwxr-xr-x  1 root wheel  10 Sep  7 17:07 1.4 -> CurrentJDK
        lrwxr-xr-x  1 root wheel  10 Sep  7 17:07 1.4.2 -> CurrentJDK
        lrwxr-xr-x  1 root wheel  10 Sep  7 17:07 1.5 -> CurrentJDK
        lrwxr-xr-x  1 root wheel  10 Sep  7 17:07 1.5.0 -> CurrentJDK
        lrwxr-xr-x  1 root wheel  10 Sep  7 17:07 1.6 -> CurrentJDK
        drwxr-xr-x  9 root wheel 306 Nov 16 10:44 A
        lrwxr-xr-x  1 root wheel   1 Sep  7 17:07 Current -> A
        lrwxr-xr-x  1 root wheel  59 Sep  7 17:07 CurrentJDK -> /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents

    Everything looks normal so far...

        me$ ls -la /System/Library/Java/JavaVirtualMachines/
        total 0
        drwxr-xr-x 2 root wheel  68 Nov 16 10:44 .
        drwxr-xr-x 5 root wheel 170 Nov 16 10:44 ..

    Apparently, my virtual machines have been deleted or moved? I'll probably be able to just re-install Java, but does anyone have any insight into why this may have happened, or how to prevent it in the future?

    Read the article

  • Recommendations for handling Directory Harvesting spam on Exchange 2003

    - by Aaron Alton
    Our Exchange server is getting slammed with anywhere between 450,000 and 700,000 spam messages per day. We receive about 1,700 legitimate messages in the same time frame. Roughly 75% of the spam is directory harvesting. We currently have GFI MailEssentials installed. To its credit, it's doing a very good job, but the sheer volume of spam that we're receiving, and the number of connections that our Exchange server is making, are preventing legitimate email from being delivered in a timely manner. GFI is set up to check for directory harvesting at the SMTP level, which I presume intercepts the mail before it hits the Exchange services, or goes through SMSE. This "module" is ordered at the top of the list, so (hopefully) dealing with the harvesting consumes a minimal amount of server resources and bandwidth. My question is: is there anything I can do to prevent our Exchange server's connection pool from being eaten up by these spam hosts? We had to limit the number of concurrent connections being made by Exchange, because it was consuming all of our bandwidth. Thanks, in advance.
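
    Editor's note: Exchange 2003 on Windows Server 2003 has an SMTP tarpit aimed at exactly this pattern; delaying the 5xx response on invalid recipients makes harvesting runs expensive and sheds those connections sooner. A hedged sketch of the documented registry value (seconds; it requires recipient filtering to be enabled, plus an SMTP service restart -- see Microsoft KB842851):

        REM Delay SMTP address-verification responses by 5 seconds
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\SMTPSVC\Parameters" /v TarpitTime /t REG_DWORD /d 5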

    Read the article
