Search Results

Search found 7577 results on 304 pages for 'admin generator'.


  • Antivirus service does not run - Windows XP SP3 32-bit Home

    - by Stefan Fassel
    I have a somewhat strange problem here. I am trying to run antivirus software on my Windows XP Home 32-bit system. After a serious crash I had to fall back to an outdated copy of my initial installation and let Windows install five years of updates. So far so good. After installing new antivirus software (BitDefender 2012) everything seemed to be fine: the initial scan went fine and configuration was working. But after restarting the system, the virus scanner was unable to start up again; even the configuration console of the AV software did not start. I tried scanning the system for malware, but nothing was found. Then I tried a different AV product (MS Security Essentials), but in the end it failed to start too. I have tried to start the service manually, but I seem to be missing the privilege to do so. I am logged in as a non-"Administrator" user with admin privileges (not many choices there on an XP Home system). I cannot switch to the Administrator account outside of protected mode, and when running Windows in protected mode I am unable to start the AV software because it does not run in protected mode. I am a bit at a loss now...
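
    For checking and starting a Windows service by hand from a command prompt, a minimal sequence looks like this (the service name below is hypothetical; the real one can be read from the service's properties in services.msc):

        REM show current state, then configuration (including the account it runs under),
        REM then attempt a start; "Access is denied" here confirms a privilege problem
        sc query VSSERV
        sc qc VSSERV
        sc start VSSERV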


  • Any non-custom way to manage iptables with fail2ban and libvirt+kvm?

    - by Peter Hansen
    I have an Ubuntu 9.04 server running libvirt/KVM and fail2ban (for SSH attacks). Both libvirt and fail2ban integrate with iptables in different ways: libvirt uses (I think) some XML config and, during startup (?), configures forwarding to the VM subnet; fail2ban installs a custom chain (probably at init) and periodically modifies it to ban/unban probable attackers. I also need to install my own rules to forward various ports to servers running in VMs and on other machines, and to set up rudimentary security (e.g. drop all INPUT traffic except the few ports I want open), and of course I'd like the ability to add/remove rules safely without restarting.

    It seems to me iptables is a powerful tool that's sorely lacking some sort of standardized way of juggling all this stuff. Every project, and every sysadmin, seems to do it differently! (And I think there's lots of "cargo cult" admin going on here, with people cloning crude approaches like "use iptables-save like so".) Short of figuring out the gory details of exactly how both of these (and potentially other) tools manipulate the netfilter tables, and developing my own scripts or just manually executing iptables commands, is there any way to safely work with iptables while not breaking the functionality of these other tools? Any nascent standards or projects defined to bring sanity to this area? Even a helpful web page I missed that might cover at least these two packages together?
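
    One pattern that coexists reasonably well with tools that manage their own chains is to keep all hand-written rules in a dedicated chain and only ever flush/reload that chain; a minimal sketch (chain name and ports are illustrative):

        # create a private chain once and hook it into INPUT
        iptables -N my-rules
        iptables -A INPUT -j my-rules
        # reloading later only touches the private chain, leaving
        # fail2ban's and libvirt's chains untouched
        iptables -F my-rules
        iptables -A my-rules -p tcp --dport 22 -j ACCEPT
        iptables -A my-rules -p tcp --dport 80 -j ACCEPT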


  • Linux And NTFS Permissions

    - by VGE IT
    Trying to restrict a folder within a directory created on a Linux filesystem. I have changed the permissions to: root rwx, a special Active Directory group rwx, and all others r. Even so, people who are not in the special AD group can access the directory and modify files. When a user modifies documents within the directory, the file's group changes to "Domain Users", and I have to manually change the document's group back to my AD group. I have tried to create another AD group and modify permissions to deny write access. When doing so through Windows Explorer, the settings seem to take effect until I go back in and look at the permissions for the restricted group: no permissions show when I view them the second time. Please assist.

    Samba share properties:

        [MyShare]
        comment = "blah blah blah"
        browseable = yes
        guest ok = no
        read only = no
        path = /xxx/xxxxx/
        create mask = 0640
        directory mask = 0750
        admin users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
        valid users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
        nt acl support = Yes
        inherit acls = yes
        inherit owner = yes
        inherit permissions = yes
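
    A common way to keep modified files owned by the intended group is to force it at the share level and set the setgid bit on the directory; a sketch with the group name swapped in as a placeholder (not from the original configuration):

        # in smb.conf, force the owning group for everything written to the share
        #   force group = "domain\group A"
        # and on the filesystem, make new files inherit the directory's group:
        chgrp "domain\group A" /xxx/xxxxx/
        chmod g+s /xxx/xxxxx/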


  • fail2ban regex working but no action being taken

    - by fpghost
    I have the following snippet of fail2ban configuration on an Ubuntu 13.10 server:

        # jail.conf
        [apache-getphp]
        enabled  = true
        port     = http,https
        filter   = apache-getphp
        action   = iptables-multiport[name=apache-getphp, port="http,https", protocol=tcp]
                   mail-whois[name=apache-getphp, dest=root]
        logpath  = /srv/apache/log/access.log
        maxretry = 1

        # filter.d/apache-getphp.conf
        [Definition]
        failregex = ^<HOST> - - (?:\[[^]]*\] )+\"(GET|POST) /(?i)(PMA|phptest|phpmyadmin|myadmin|mysql|mysqladmin|sqladmin|mypma|admin|xampp|mysqldb|mydb|db|pmadb|phpmyadmin1|phpmyadmin2|cgi-bin)
        ignoreregex =

    I know the regex is good, because if I run the test command on my access.log:

        fail2ban-regex /srv/apache/log/access.log /etc/fail2ban/filter.d/apache-getphp.conf

    I get a SUCCESS result with multiple hits, and in my log I see entries like:

        187.192.89.147 - - [13/Apr/2014:11:36:03 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 301 585 "-" "-"
        187.192.89.147 - - [13/Apr/2014:11:36:03 +0100] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 301 593 "-" "-"

    Secondly, I know email is configured correctly, as each time I run service fail2ban restart I get an email for each of the filters stopping/starting. However, despite all this, no action seems to be taken when one of these requests comes in: no email with whois, and no entries in iptables. What could possibly be preventing fail2ban from taking action? (Everything looks in order in fail2ban-client -d, and I can see the chains have loaded with iptables -L.)
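
    When a filter matches in fail2ban-regex but never fires live, a quick way to see whether the jail is actually watching the log and counting failures is (jail name taken from the configuration above):

        fail2ban-client status apache-getphp    # currently failed/banned counts and the monitored file
        tail -f /var/log/fail2ban.log           # watch for "Found <ip>" lines as requests arrive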


  • How to set up a .ssh directory inside an encrypted volume on Mac OS X and still have public-key logins?

    - by Vitaly Kushner
    I have my .ssh directory inside an encrypted sparse image, i.e. ~/.ssh is a symlink to /Volumes/VolumeName/.ssh. The problem is that when I try to ssh into that machine using a public key, I see the following error message in /var/log/secure.log:

        Authentication refused: bad ownership or modes for directory /Volumes

    Any way to solve this in a clean way?

    Update: the permissions on ~/.ssh and authorized_keys are right:

        > ls -ld ~
        drwxr-xr-x+ 77 vitaly staff 2618 Mar 16 08:22 /Users/vitaly/
        > ls -l ~/.ssh
        lrwxr-xr-x 1 vitaly staff 22 Mar 15 23:48 /Users/vitaly/.ssh@ -> /Volumes/Astrails/.ssh
        > ls -ld /Volumes/Astrails/.ssh
        drwx------ 3 vitaly staff 646 Mar 15 23:46 /Volumes/Astrails/.ssh/
        > ls -ld /Volumes/Astrails/
        drwx--x--x@ 18 vitaly staff 1360 Jan 12 22:05 /Volumes/Astrails//
        > ls -ld /Volumes/
        drwxrwxrwt@ 5 root admin 170 Mar 15 20:38 /Volumes//

    The error message says the problem is with /Volumes, but I don't see the problem. Yes, it is o+w, but it is also +t, which should be OK but apparently isn't. The problem is I can't (or rather shouldn't) change the permissions of /Volumes, but I do want public-key login to work. First I thought of mounting the image somewhere other than /Volumes, but it is automounted at login by the standard OS X mounting. I asked about it here: How to change disk image's default mount directory on osx. The only answer I got is "you can't" ;) I could hack my way around it by writing a shell script that manually mounts the volume at a non-standard location, but that would be a gross hack. I'm still looking for a cleaner way to do what I need.
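
    One knob worth knowing about here is sshd's StrictModes, which is exactly the check producing that log line; relaxing it (at some security cost, since it stops sshd from rejecting world-accessible key directories) is a one-line change:

        # /etc/sshd_config (the path on Mac OS X of that era; /etc/ssh/sshd_config elsewhere)
        StrictModes no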


  • Solr DataImportHandler problem importing Latin data

    - by Alvin
    I'm using Solr 1.4 and Tomcat 6. The database is MySQL 5.1, storing data in Latin encoding. When I run the DataImportHandler and view the data in the Solr admin, the characters come out wrong:

        <doc>
          <str name="id">295</str>
          <str name="subject">Tuấn Tú</str>
          ...
          <arr name="title">
            <str>tunt721</str>
          </arr>
        </doc>

    The true data view:

        <doc>
          <str name="id">295</str>
          <str name="subject">Tu?n Tú</str>
          ...
          <arr name="title">
            <str>tunt721</str>
          </arr>
        </doc>

    Help me fix this problem. Many thanks.
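
    Character mangling between MySQL and Solr's DataImportHandler is very often an encoding mismatch on the JDBC connection. One commonly suggested check, sketched here with illustrative connection details (the URL parameters are standard MySQL Connector/J options), is to declare the encoding explicitly in data-config.xml:

        <dataSource type="JdbcDataSource"
                    driver="com.mysql.jdbc.Driver"
                    url="jdbc:mysql://localhost:3306/mydb?useUnicode=true&amp;characterEncoding=UTF-8"
                    user="user" password="pass"/>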


  • How to make a Linux software RAID1 detect disc corruption?

    - by Paul
    This is one of those nightmare days: a virtualized server running on a Linux software RAID1 runs a VM that exhibits random segfaults in seemingly random code chunks. While debugging, I find that a file gives different md5sums on each and every run. Digging deeper, I find this: the raw disc partitions that make up the RAID1 mirror contain 2 bit-differences, and ca. 9 sectors are completely empty on one disc and filled with data on the other disc. Obviously Linux gives back a sector from a nondeterministically chosen disc of the mirror set, so sometimes the same sector is returned OK, sometimes the corrupted one is.

    The docs say:

        RAID cannot and is not supposed to guard against data corruption on the
        media. Therefore, it doesn't make any sense either, to purposely corrupt
        data (using dd for example) on a disk to see how the RAID system will
        handle that. It is most likely (unless you corrupt the RAID superblock)
        that the RAID layer will never find out about the corruption, but your
        filesystem on the RAID device will be corrupted.

    Thanks. That will help me sleep. :-/

    Is there a way to have Linux at least detect this corruption by using sector checksumming or something like that? Would this be detected in a RAID5 setup? Is this the moment I wish I used ZFS or btrfs (once it becomes usable without uber-admin capabilities)?
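
    For the detection part, Linux md can at least be asked to compare the two halves of a mirror on demand; a check pass counts mismatched sectors without touching data (md0 is a placeholder for the actual array):

        echo check > /sys/block/md0/md/sync_action   # scrub: read both mirrors, count differences
        cat /sys/block/md0/md/mismatch_cnt           # non-zero means the mirrors disagree
        # "repair" instead of "check" rewrites mismatches, but md picks a side
        # arbitrarily -- on RAID1 it cannot tell which copy is the good one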


  • APC fragmentation on EC2 Micro for Wordpress + W3TC

    - by Maarten Provo
    I'm trying to optimize APC for my Amazon EC2 Micro server running one WordPress site with W3TC. I've started with the settings advised by TechZilla in another topic, but I keep getting high fragmentation with 50% of the space free. I've uploaded an image to http://www.maartenprovo.be/downloads/apc.jpg but I can't post it here since I need at least 10 reputation. What values can I optimize to prevent fragmentation?

        [apc]
        apc.enabled=1
        apc.shm_segments=1
        ;32M per WordPress install
        apc.shm_size=164M
        ;Leave at 2M or lower. WordPress doesn't have any file sizes close to 2M
        apc.max_file_size=2M
        ;Relative to the number of cached files
        apc.num_files_hint=1000
        ;Relative to the size of WordPress
        apc.user_entries_hint=4096
        ;The number of seconds a cache entry is allowed to idle in a slot before APC dumps the cache
        apc.ttl=7200
        apc.user_ttl=7200
        apc.gc_ttl=3600
        ;Auto update cache files on change in WP-ADMIN or W3TC
        apc.stat=1
        ;This MUST be 0, WP can have errors otherwise!
        apc.include_once_override=0
        ;Only set to 1 while debugging
        apc.enable_cli=0
        ;Allow 2 seconds after a file is created before it is cached to prevent users from seeing half-written/weird pages
        apc.file_update_protection=2
        ;Ignore files
        apc.filters
        apc.slam_defense = 0
        apc.write_lock = 1
        apc.cache_by_default=1
        apc.use_request_time=1
        apc.mmap_file_mask=/var/tmp/apc.XXXXXX
        apc.stat_ctime=0
        apc.canonicalize=1
        apc.write_lock=1
        apc.report_autofilter=0
        apc.rfc1867=0
        apc.rfc1867_prefix =upload_
        apc.rfc1867_name=APC_UPLOAD_PROGRESS
        apc.rfc1867_freq=0
        apc.rfc1867_ttl=3600
        apc.lazy_classes=0
        apc.lazy_functions=0


  • Proper DNS records for handling subdomains and missing subdomains

    - by Cerin
    I'm trying to craft DNS records to support:

    1. Explicitly defined subdomains, e.g. ftp.mydomain.com
    2. A missing subdomain that redirects to www.
    3. Implicitly defined subdomains, e.g. <some user entered value>.mydomain.com

    For 1, I'm using CNAME records. All seems to be working well. For 2, I'm using an A record, @ -> 123.456.789.012. Worked well. For 3, I ran into some trouble. I tried adding another A record, * -> 123.456.789.012. This appeared to work initially, but it broke #2, i.e. now browsing to mydomain.com doesn't redirect to www.mydomain.com. I tried adding the CNAME record @ -> 123.456.789.012, but my DNS admin tool won't accept it because it says the @ is already in use, even though I deleted the A record using it. Am I configuring this incorrectly? What am I doing wrong?
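
    For reference, a zone-file sketch of the intended layout (the IP is the asker's placeholder; note that a CNAME cannot point at an IP address, and cannot coexist with other records at the zone apex, which is consistent with the tool's complaint):

        @     IN  A      123.456.789.012   ; apex, so mydomain.com itself resolves
        www   IN  A      123.456.789.012
        ftp   IN  CNAME  www.mydomain.com.
        *     IN  A      123.456.789.012   ; wildcard for user-entered subdomains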


  • Windows: map a network drive to a remote shared folder (on a QNAP NAS) using OpenVPN

    - by spelltox
    Given my lack of networking knowledge, I've been struggling with this issue for quite a few days now. I have a QNAP TS-212 NAS on which I've created a shared folder (mostly Excel files). All the (Windows) computers in the local network are able to access it without any problem. Now I want to access that shared folder remotely (Windows client), so:

    1. I enabled OpenVPN (and PPTP) in the QNAP admin.
    2. Installed OpenVPN on the remote client.
    3. Applied the configuration file that the QNAP generated (openvpn.ovpn):

        client
        dev tun
        script-security 3
        proto udp
        remote ***MY_WAN_IP*** 1194
        resolv-retry infinite
        nobind
        ca ca.crt
        auth-user-pass
        reneg-sec 0
        cipher AES-128-CBC
        comp-lzo

    4. OpenVPN connects successfully from the remote client.

    Now, here's my problem: I can ping the NAS (at IP 10.8.0.1) from the remote client, but when I try to map a network drive, I don't see the shared folder, the NAS, or any of the other computers in the network... I checked: all computers are in the "WORKGROUP" workgroup. I'm probably missing some basic knowledge, so any help would be greatly appreciated! Many thanks.
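
    Browsing/discovery (the network-neighbourhood list) generally doesn't cross a routed VPN tunnel, but mapping by explicit IP usually still works; a quick test from the remote client (the share and user names are hypothetical):

        ping 10.8.0.1
        net use Z: \\10.8.0.1\share /user:qnapuser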


  • Parallels Plesk returning strange numbers

    - by Jack W-H
    Hi everyone, as a relatively new server admin I've become a bit confused by some statistics Parallels Plesk Panel 10.0.1 is returning to me. I have a domain ('subscription') set up, mysite.com:

    - mysite.com only hosts files, mostly images
    - its file contents use up about 390 MB of disk space

    Plesk, however, reports mysite.com as using far more than that (screenshots omitted). Now this is pretty confusing... I thought at first my site might have been hacked and had content written to disk, but I checked and all is in order; nothing has been hacked into as far as I can tell. So I had a look in the site's CP for some more in-depth statistics. Now (sod's law) when I go to check my disk space statistics in more depth via the control panel, this morning it says "The data were not collected yet" (not too sure what that means), but last night when I checked, it was reporting something odd: it said files were using up 390 MB, but 1.80 GB or so was being used up by 'Mail Accounts'. This is really strange, as there are no mail accounts set up for the domain. The only hint of 'mail' there is, is the catchall set up to forward *@mysite.com to a separate, ISP-hosted email account. Any ideas, anybody? I can post more details if you need them. Sorry to be a bit vague, but I'm not sure what else I can post. Thanks, Jack


  • Enabling JMX for proxool with tomcat

    - by dialt0ne
    I am trying to get proxool's MBeans available so that I can see/manipulate them with jconsole. I have jconsole working, but I don't see anything related to proxool. The system is using Sun Java 1.5.0_17 (I know, I know... I'm working with the developers to upgrade). JMX is enabled by modifying $JAVA_OPTS in my Tomcat 5.5 startup script:

        SJO="$SJO -Dcom.sun.management.jmxremote"
        SJO="$SJO -Dcom.sun.management.jmxremote.port=4998"
        SJO="$SJO -Dcom.sun.management.jmxremote.authenticate=false"
        SJO="$SJO -Dcom.sun.management.jmxremote.ssl=false"
        JAVA_OPTS="$JAVA_OPTS $SJO"

    I have proxool configured with JNDI in server.xml:

        <GlobalNamingResources>
          <Resource name="jdbc/database"
                    auth="Container"
                    type="javax.sql.DataSource"
                    factory="org.logicalcobwebs.proxool.ProxoolDataSource"
                    user="username"
                    password="password"
                    proxool.driver-url="jdbc:oracle:thin:@fqdn.example.com:1521:MYSID"
                    proxool.driver-class="oracle.jdbc.driver.OracleDriver"
                    proxool.alias="mysid"
                    proxool.maximum-connection-count="20"
                    proxool.statistics="20s,5m,15m"
                    proxool.statistics-log-level="INFO"
                    proxool.jmx="true"
                    proxool.verbose="true" />
        </GlobalNamingResources>

    My test .jsp can run queries, and I can see it using the connections with the proxool admin servlet, but I'm unsure if there's more I need to configure in Tomcat or proxool to get JMX functioning. Advice?

    Edit (jmxproxy info): the jmxproxy servlet is working. When I go to the URL

        http://tomcatserver.example.com:4999/manager/jmxproxy/?qry=*:type%3DRequestProcessor,*

    the results are:

        OK - Number of results: 2

        Name: Catalina:type=RequestProcessor,worker=http-8080,name=HttpRequest0
        modelerType: org.apache.coyote.RequestInfo
        bytesSent: 0
        requestBytesSent: 0
        contentLength: -1
        bytesReceived: 0
        requestProcessingTime: 1297983483666
        globalProcessor: org.apache.coyote.RequestGroupInfo@32dc51c8
        requestBytesReceived: 0
        serverPort: -1
        stage: 0
        requestCount: 0
        maxTime: 0
        processingTime: 0
        errorCount: 0

        Name: Catalina:type=RequestProcessor,worker=jk-127.0.0.1-8009,name=JkRequest794
        modelerType: org.apache.coyote.RequestInfo
        virtualHost: tomcatserver.example.com
        bytesSent: 0
        method: GET
        remoteAddr: 172.30.3.51
        requestBytesSent: 0
        contentLength: -1
        workerThreadName: TP-Processor15
        bytesReceived: 0
        requestProcessingTime: 9
        globalProcessor: org.apache.coyote.RequestGroupInfo@1e7d3b8e
        protocol: HTTP/1.1
        currentQueryString: qry=*%3Atype%3DRequestProcessor%2C*
        requestBytesReceived: 0
        serverPort: 4999
        stage: 3
        requestCount: 0
        maxTime: 0
        processingTime: 0
        currentUri: /manager/jmxproxy/
        errorCount: 0

    And more to the point,

        http://tomcatserver.example.com:4999/manager/jmxproxy/?qry=Catalina:type%3DEnvironment,resourcetype%3DGlobal,name%3DProxool

    yields:

        OK - Number of results: 0
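
    For completeness, attaching jconsole to a remote JVM configured with the options above is just (hostname as used in the post):

        jconsole tomcatserver.example.com:4998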


  • Apache - virtual hosts - only one works

    - by user1811829
    I need a couple of virtual hosts on my local dev machine. Unfortunately it needs to be Windows.

    httpd-vhost.conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/htdocs"
            ServerName localhost
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/htdocs/manadom.local/public"
            ServerName manadom.local
            ErrorLog "logs/manadom.local-error.log"
            CustomLog "logs/manadom.local-access.log" combined
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/htdocs/galeriabiznesu"
            ServerName gb.loc
            ErrorLog "logs/gb.loc-error.log"
            CustomLog "logs/gb.loc-access.log" combined
        </VirtualHost>

    And the hosts file:

        127.0.0.1 localhost
        127.0.0.1 manadom.local
        127.0.0.1 gb.loc

    The problem is:

    - localhost points to C:/xampp/htdocs/manadom.local/public
    - manadom.local points to C:/xampp/htdocs/manadom.local/public too
    - gb.loc points to C:/xampp/htdocs/manadom.local/public

    I have no idea what's wrong. Please help me; I'm not an admin, but I've read a lot about this and I don't know what I could possibly be doing wrong.
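
    One classic cause of every hostname landing in the same vhost on Apache 2.2 (the version XAMPP shipped at the time) is a missing NameVirtualHost directive; a minimal sketch, plus the command that prints how Apache actually parsed the vhosts:

        # must appear once, before the <VirtualHost *:80> blocks
        NameVirtualHost *:80

        # then verify the parsed virtual-host configuration:
        httpd -S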


  • CLI package to replace Plesk

    - by dotancohen
    Myself and another programmer are tasked with maintaining a few webservers. I prefer CLI tools; she prefers Plesk. However, I am adamant about not installing Plesk, for quite a few reasons. I have written a small Python script for adding new domains, and now I am about to add the ability to configure email addresses while abstracting the details of Postfix from her. Before I go that route, I have googled to see if anything already exists, and am surprised that I have come up with nothing! Are there any mature, stable "control panels" or "server admin" tools like Plesk, but which are accessed via the CLI over SSH? I am looking for the following features:

    - Add / remove / configure domains served by Apache.
    - Add / remove / configure email boxes and mail groups.
    - Add / remove MySQL databases and users, and grant users access to databases.
    - Provide basic monitoring of "server health": memory usage, disk usage, CPU usage, bandwidth usage.
    - Possibly set up SFTP accounts so that only specific FTP users can access specific /var/www/someSite/ directories.

    Note that I was unsure if this question is off-topic for Server Fault. As per the Server Fault about page (there seems to be no more FAQ), this question meets two of the "ask about" criteria and zero of the "don't ask about" ones, with the possible exception of being opinion-based. Therefore, to keep on-topic, I would like to know about the available applications, but we should be objective and less opinionated. Thank you!


  • Removing/modifying LDAP objectclasses/attributes using olc

    - by Foezjie
    I'm having trouble using OpenLDAP's olc to modify a schema without shutting down the server. To test some things out, I made the following schema:

        objectIdentifier tests orgUlyssisOID:4
        objectIdentifier testAttribute tests:1
        objectIdentifier testObjectClass tests:2

        attributeType ( testAttribute:1 NAME 'attr1'
            DESC 'attribuut 1'
            SYNTAX '1.3.6.1.4.1.1466.115.121.1.40' )
        attributeType ( testAttribute:2 NAME 'attr2'
            DESC 'attribuut 2'
            SUP userPassword
            SINGLE-VALUE )
        objectclass ( testObjectClass:1 NAME 'class1'
            DESC 'objectclass 1'
            SUP top STRUCTURAL
            MUST ( attr1 $ attr2 ) )

    and added it to a new schema called test (cn={9}test.ldif in cn=schema). Now I can't seem to figure out how to delete class1 from that schema. I use the following LDIF (and tried lots of variations too, to no avail):

        dn : cn={9}test,cn=schema,cn=config
        changetype: modify
        delete: olcObjectClasses
        olcObjectClasses: ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) )

    Running ldapmodify -x -W -D cn=admin,cn=config -f test.ldif -d 0 gives no output. With -d 1 it gives this:

        ldap_create
        ldap_sasl_bind
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP localhost:389
        ldap_new_socket: 4
        ldap_prepare_socket: 4
        ldap_connect_to_host: Trying 127.0.0.1:389
        ldap_pvt_connect: fd: 4 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_scanf fmt ({i) ber:
        ber_flush2: 38 bytes to sd 4
        ldap_result ld 0x7f2a8ccf3430 msgid 1
        wait4msg ld 0x7f2a8ccf3430 msgid 1 (infinite timeout)
        wait4msg continue ld 0x7f2a8ccf3430 msgid 1 all 1
        ** ld 0x7f2a8ccf3430 Connections:
        * host: localhost port: 389 (default) refcnt: 2 status: Connected
        last used: Mon Sep 10 11:29:57 2012
        ** ld 0x7f2a8ccf3430 Outstanding Requests:
        * msgid 1, origid 1, status InProgress
        outstanding referrals 0, parent count 0
        ld 0x7f2a8ccf3430 request count 1 (abandoned 0)
        ** ld 0x7f2a8ccf3430 Response Queue: Empty
        ld 0x7f2a8ccf3430 response count 0
        ldap_chkResponseList ld 0x7f2a8ccf3430 msgid 1 all 1
        ldap_chkResponseList returns ld 0x7f2a8ccf3430 NULL
        ldap_int_select
        read1msg: ld 0x7f2a8ccf3430 msgid 1 all 1
        ber_get_next
        ber_get_next: tag 0x30 len 12 contents:
        read1msg: ld 0x7f2a8ccf3430 msgid 1 message type bind
        ber_scanf fmt ({eAA) ber:
        read1msg: ld 0x7f2a8ccf3430 0 new referrals
        read1msg: mark request completed, ld 0x7f2a8ccf3430 msgid 1
        request done: ld 0x7f2a8ccf3430 msgid 1
        res_errno: 0, res_error: <>, res_matched: <>
        ldap_free_request (origid 1, msgid 1)
        ldap_parse_result
        ber_scanf fmt ({iAA) ber:
        ber_scanf fmt (}) ber:
        ldap_msgfree
        ldap_free_connection 1 1
        ldap_send_unbind
        ber_flush2: 7 bytes to sd 4
        ldap_free_connection: actually freed

    So no real indication of an error. Where am I doing it wrong?

    Bonus question: if I have some entries of a certain objectClass, can I modify it (add/remove attributeTypes) without removing the entries? Thanks in advance for all help.
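
    A useful sanity check when olc modifications appear to do nothing is to read the live schema entry back and compare it against what the delete is supposed to match (SASL EXTERNAL over ldapi shown here; the right bind method depends on how cn=config access is set up):

        ldapsearch -Y EXTERNAL -H ldapi:/// -b 'cn={9}test,cn=schema,cn=config' olcObjectClasses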


  • Overriding HOMEDRIVE and HOMEPATH as a Windows 7 user

    - by MikeC
    My employer has an Active Directory group policy which sets my Windows 7 laptop's HOMEDRIVE to "M:" (a mapped network drive) and my HOMEPATH to "\". Since I have read-only permissions for the root of that shared drive, I cannot create files or directories in my Windows home directory. My attempts to work with the IT department have been unsuccessful. Is there a way for me to globally change these envvars at boot or login time? I need all applications to use alternate values (such as "C:" and "\Users\myname"). I have some installed utilities (like gvim and others) that store preference files in the user's home directory.

    IMPORTANT: changing these envvars under System Properties > Environment Variables does not work. I have tried setting them as both User and System variables (including a reboot). Typing SET HOME in a DOS window clearly shows that my settings are ignored. Also, using "Start in" in a Windows shortcut will not solve this either, as I need things like Explorer context menu items (like "Edit with Vim") to operate correctly. I do have admin rights on this company laptop, but I am not a Win7 guru. Back in the day, a boot script would have solved this in a minute. Is it even possible today? Thanks.
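
    HOMEDRIVE and HOMEPATH are written into the per-session volatile environment at logon, which is why the System Properties dialog loses. One approach sometimes suggested, sketched here as an experiment only (policy may rewrite the values, and already-running processes keep what they inherited), is a logon-triggered script that overwrites the volatile entries:

        REM logon script (run at user logon, e.g. via Task Scheduler)
        reg add "HKCU\Volatile Environment" /v HOMEDRIVE /d "C:" /f
        reg add "HKCU\Volatile Environment" /v HOMEPATH /d "\Users\myname" /f
        REM Explorer and anything it launches only pick the new values up
        REM after Explorer restarts, so results vary -- treat as experimental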


  • Squid: The request or reply is too large

    - by Ueli
    I have set up a reverse proxy with an Apache in the background (on the same server). All works great, but I can't open one page; I get the error "The request or reply is too large." My cache.log contains:

        2010/12/09 15:28:29| WARNING: http.c:971: HTTP header too large
        2010/12/09 15:29:03| ctx: enter level 0: 'http://server/admin/cms/nav'
        2010/12/09 15:29:03| httpProcessReplyHeader: Too large reply header
        2010/12/09 15:29:03| ctx: exit level 0

    In my squid.conf I disabled the limits on the request and reply sizes, without success:

        reply_body_max_size 0 allow all
        request_body_max_size 0

    Does someone know why that doesn't work? Thank you very much.

    Squid version:

        Squid Cache: Version 2.7.STABLE3
        configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin'
          '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid'
          '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid'
          '--enable-async-io' '--with-pthreads' '--enable-storeio=ufs,aufs,coss,diskd,null'
          '--enable-linux-netfilter' '--enable-arp-acl' '--enable-epoll'
          '--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools'
          '--enable-htcp' '--enable-cache-digests' '--enable-underscores'
          '--enable-referer-log' '--enable-useragent-log'
          '--enable-auth=basic,digest,ntlm,negotiate'
          '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp'
          '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536'
          'amd64-debian-linux' 'build_alias=amd64-debian-linux'
          'host_alias=amd64-debian-linux' 'target_alias=amd64-debian-linux'
          'CFLAGS=-Wall -g -O2' 'LDFLAGS=' 'CPPFLAGS='
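
    Worth noting: the two directives shown limit message bodies, while the cache.log warning is about headers. The Squid 2.x line documents separate header-size knobs; a hedged sketch (values illustrative):

        # squid.conf -- defaults are around 20 KB; raise them if a backend app
        # (here, a CMS admin page) emits very large headers or cookies
        request_header_max_size 64 KB
        reply_header_max_size 64 KB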


  • getent passwd fails, getent group works?

    - by slugman
    I've almost got my AD integration working completely on my OpenSUSE 12.1 server. I have an OpenSUSE 11.4 system successfully integrated into our AD environment. (Meaning: we use LDAP to authenticate against the AD directory via Kerberos, so we can log in to our *nix systems as AD users, using the name service caching daemon to cache our passwords and groups.) Also important to note: these systems are on our LAN, and SSL authentication is disabled. I am almost all the way there. nss_ldap is finally authenticating with the LDAP server (as /var/log/messages shows), but right now I have another problem: getent passwd and getent shadow fail (they show local accounts only), but getent group works! getent group shows all my AD groups! I copied over the relevant configuration files from my working OpenSUSE 11.4 box:

        /etc/krb5.conf
        /etc/nsswitch.conf
        /etc/nscd.conf
        /etc/samba/smb.conf
        /etc/sssd/sssd.conf
        /etc/pam.d/common-session-pc
        /etc/pam.d/common-account-pc
        /etc/pam.d/common-auth-pc
        /etc/pam.d/common-password-pc

    I didn't modify anything between the two. I really don't think I need to modify anything, because getent passwd, getent shadow, and getent group all work fine on the OpenSUSE 11.4 box. Attempting to restart the nscd service unfortunately didn't do much, and neither did running /usr/sbin/nscd -i passwd. Do any of you admin gurus have any suggestions? Honestly, I'm happy I made it this far. I'm almost there, guys!


  • Issues with VSFTPD / FTP on Linux Ubuntu server - Steps for Troubleshooting?

    - by jnolte
    I am dealing with an issue I am unclear on how to resolve, and I have been pulling my hair out for some time. I have been trying to configure an FTP user using the following steps (we use this same documentation on all servers).

    Install the FTP server:

    1. apt-get install vsftpd
    2. Set local_enable and write_enable to YES, and the anonymous user to NO, in /etc/vsftpd.conf
    3. Restart to allow the changes to take effect: service vsftpd restart

    Add a WordPress user for FTP access in WP Admin:

    1. Create a fake shell for the user: add /usr/sbin/nologin to the bottom of the /etc/shells file
    2. Add an FTP user account:
       useradd username -d /var/www/ -s /usr/sbin/nologin
       passwd username
    3. Add these lines to the bottom of /etc/vsftpd.conf:
       userlist_file=/etc/vsftpd.userlist
       userlist_enable=YES
       userlist_deny=NO
    4. Add username to the list at the top of /etc/vsftpd.userlist
    5. Restart vsftpd: service vsftpd restart
    6. Make sure the firewall is open for FTP: ufw allow ftp
    7. Allow username to modify the /var/www directory: chown -R username /var/www

    I have also gone through everything listed in this post, and no luck: I am getting "connection refused". Sorry for the poor text formatting above; I think you get the idea. This is something we do over and over, and for some reason it is not cooperating here. The setup is Ubuntu 12.04 LTS and VSFTPD v2.3.5. Thank you in advance.
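
    For "connection refused" specifically, the first thing worth establishing is whether vsftpd is running and listening at all; a short check sequence on Ubuntu 12.04:

        service vsftpd status            # upstart job state
        netstat -tlnp | grep :21         # is anything listening on the FTP port?
        tail /var/log/syslog             # vsftpd config errors often land here
        ufw status                       # confirm the firewall rule actually took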


  • What are some of the best answer file settings for a WDS Deployment?

    - by drpcken
    I've had my head buried in answer files for days now and have gotten quite comfortable setting them up, testing them, etc. I use a handful of components to help my migrations. For my unattend.xml I like:

    - Windows-International-Core-WinPE: good for setting locales in the preboot environment (en-US for us English-US speakers). Keeps me from having to set these on the initial image boot.
    - Windows-Setup_neutral: I like the WindowsDeploymentServices -> ImageSelection setting, especially if I'm only pushing a single image. This keeps me from having to select it each time.

    My OOBE_Unattend.xml is really useful, and I barely have to touch anything during this part of the installation:

    - Windows-Shell-Setup_neutral: lets me put in a ProductKey for my MAK volume license (very useful and time-saving). I can also set the TimeZone for the installation.
    - Windows-UnattendedJoin_neutral: I couldn't live without this component. It joins the machine to my domain before logging in as a domain administrator. I would hate to not have this ability.
    - Windows-International-Core: again, this component really speeds up the OOBE process. I configure my locales and time zone so I don't have to do it by hand when the machine enters OOBE.
    - Windows-Shell-Setup: allows you to configure an autologon when the new machine is finished (sketched below). I like to log on as a domain admin automatically for customizing and troubleshooting the new machine immediately after it is imaged. Also, the OOBE component under here lets me skip the EULA, hide wireless setup, and set my default NetworkLocation. All of this makes the entire OOBE totally automated.

    What are some other good components I am missing, as far as helping me get these images pushed and configured as quickly as possible?
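
    For reference, a minimal sketch of the autologon piece of Microsoft-Windows-Shell-Setup in an OOBE-pass answer file (domain, account, and logon count are placeholders):

        <settings pass="oobeSystem">
          <component name="Microsoft-Windows-Shell-Setup"
                     processorArchitecture="amd64"
                     publicKeyToken="31bf3856ad364e35"
                     language="neutral" versionScope="nonSxS">
            <AutoLogon>
              <Enabled>true</Enabled>
              <LogonCount>3</LogonCount>
              <Domain>EXAMPLE</Domain>
              <Username>deployadmin</Username>
              <Password>
                <Value>P@ssw0rd</Value>
                <PlainText>true</PlainText>
              </Password>
            </AutoLogon>
          </component>
        </settings>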


  • Setting the default permissions for files uploaded via FTP to a directory

    - by Kerri
    Disclaimer: I'm just a web designer/coder, and server admin stuff is my weakest point of all. So be easy on me (and very specific). I'm using a simple CMS (Unify) on a site, where part of the functionality is that the client can upload files to a specified directory (using FTP). The permissions for the upload directory are set to 755, but when files are uploaded through the interface, they get permissions of 640 (instead of 644), so site visitors cannot access the files. When I emailed the CMS's support about this, they told me that it was a server setting, and that I need to make sure files uploaded through FTP are set to 644. Makes perfect sense, but I have no idea how to do this. Any help would be greatly appreciated. This site is on a shared server hosted by Network Solutions (Unix), so my access options are limited. I can edit .htaccess files and php.ini, but that's about all I have access to. It appears I can't even log on via shell.

    ETA 11/11/2010: Thanks, all. I was able to work around this problem by setting up the CMS's settings in a different way. I'd be interested in following up on Nick O'Niel's suggestions, because I think he's on the right track, but unfortunately I can't access the necessary files on this particular server. So, anyway, I'm leaving this open, since the original question isn't exactly resolved. Unfortunately, I probably can't put a correct answer to the test, since the shared server in question has nearly all of its config files tightly locked down.
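
    The 640-versus-644 difference is exactly one umask bit: the uploads are being created with umask 027 instead of 022. On servers where the FTP daemon's config is editable (not the case on this locked-down host, so purely illustrative), the fix is one line:

        # vsftpd.conf, if the host runs vsftpd:
        local_umask=022        # 666 - 022 = 644 for newly created files
        # or in proftpd.conf:
        # Umask 022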


  • How to make Virtualbox, OpenVPN, and Win2008 Web R2 like one another?

    - by Aquitaine
    Back with web developer guy wearing the net admin hat. Hopefully this is an easy one. We have two servers on a public network at a hosted facility. Server A is our public-facing web server and Server B is our database server. Both are running Windows Server 2008 R2 Web Edition. We want Server B isolated from everything except Server A, such that anyone who has to connect to Server B goes through the VPN on Server A. It's not perfect, since we have no access to do this on the router side, but it's what we've got. We've set up VirtualBox and OpenVPN Access Server on Server A. It has one network interface set to NAT mode, such that OpenVPN gets its IP at 10.0.2.x, and to connect to the OpenVPN interface I go to the local IP of the VirtualBox network adapter, 192.168.56.x, which works, as I configured the appropriate ports using VBoxManage. My question is: do I need to be using bridged networking and give the VPN server its own IP, or is there some way to tell the server (either Windows or the VirtualBox OpenVPN) that 'any public connection on the real external IP on port X should be directed to this internal LAN address of 192.168.1.x on port Y'? OpenVPN itself doesn't seem to be aware of the server's real external IP unless we put it in bridged networking mode; is that necessary or advisable? We're without RRAS, since this is Web Edition, but I feel like what we're going for is pretty simple. Thanks! Aq
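
    In NAT mode, the forwarding described in the question is expressed as a VirtualBox port-forwarding rule on the host; a sketch with placeholder VM and rule names:

        REM forward UDP 1194 arriving on any host IP to the guest's OpenVPN port
        VBoxManage modifyvm "openvpn-as" --natpf1 "ovpn,udp,,1194,,1194"
        REM (use "VBoxManage controlvm ... natpf1 ..." instead if the VM is running)
        REM TCP 443 for the Access Server web UI can be forwarded the same way:
        VBoxManage modifyvm "openvpn-as" --natpf1 "admin,tcp,,443,,443"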


  • Create a wifi hotspot in a place where authentication is required

    - by SoftTimur
    I live in a residence where Internet is provided via cable. Once a computer is connected to the cable, launching a browser triggers an authentication: I have a username and password to enter, and then the Internet is connected. With a gateway (e.g. the Wireless Cable Voice Gateway model CBVG834G) and two cables, two PCs can connect to the Internet with my account at the same time. Now, the thing is, I don't like the cable and would like to create a wifi hotspot. It seems realizable with the same gateway. According to the instructions on page 2-4 of the manual:

        Enter http://192.168.0.1 in the address field of your Internet browser.
        Log in to the gateway with either of the default user names, MSO or admin...

    However, while the Internet connection works fine via cable and the gateway (e.g. Google works), opening 192.168.0.1 oddly gives me an error in the browser (screenshot omitted). Does anyone know what happened? Is it due to the authentication required by my residence? Is there any other way to build a wifi hotspot?

    PS: My system is Mac OS.


  • Understanding encryption keys

    - by claws
    Hello, I'm really embarrassed to ask this question, but the fact is that I don't know anything about encryption; I always avoided it. I don't understand the concept of encryption keys (public key, private key, RSA key, DSA key, PGP key, SSH key and what not). I encounter these on a regular basis, but as I said, I've always avoided them. Here are a few instances where I encountered them:

    - Creating an account: "A public RSA or DSA key will be needed for an account. Send the key along with your desired account name to [email protected]". I really don't know what RSA/DSA are or how to get their keys. Do I need to register somewhere for that?
    - Mailing: I'm unable to recall exactly, but I've seen some mails with attachments like a signature, or the mail footer has something called a PGP signature, etc. I really don't get the concept.
    - Git version control: I created an account on assembla.com (for a private Git repo) and it asked me to enter "SSH keys" in my profile. Where am I going to get these? Why do I need them? Isn't SSH related to remote login (like remote desktop or telnet)? How are these two SSHs related, and how do they differ?

    I don't know how many more situations I'm going to encounter these things in. I'm really confused and have no clue about where to start and how to proceed to learn these things. Kindly point me in the right direction.

    Note: I have absolutely zero interest in encryption-related topics, so there is no way I'm going to read a graduate-level book on this subject. I just want to clear up my concepts without going into much depth.
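
    As a concrete anchor for the SSH case: a keypair is generated locally, the private half stays on your machine, and the public half is what services like Assembla ask you to paste in. A minimal sketch:

        ssh-keygen -t rsa -C "you@example.com"   # writes ~/.ssh/id_rsa (private) and ~/.ssh/id_rsa.pub (public)
        cat ~/.ssh/id_rsa.pub                    # the public part is what you hand to the service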


  • Time-Machine backup over SSH tunnel to NFS mount

    - by BTZ
    I've recently started using a new NAS which runs CentOS 6.2. One of its purposes is to serve as a backup target. I have been using Apple's Time Machine for a while and am very satisfied with it, so I'd like to continue using it. Backing up directly to an address on my network is no hassle; all works fine. But for security reasons I'd like all my traffic to go through an SSH tunnel to the NAS. This way I can avoid needing a VPN server (for personal reasons). As of NFSv4, the NFS daemon is bound to port 2049, which makes it easy for me to direct all traffic through an SSH tunnel.

    Tunnel:

        ssh -f admin@ms -L 2000:localhost:2049 -N

    Mount:

        mount -t nfs -o nfsvers=4,rw,proto=tcp,sync,intr,hard,timeo=600,retrans=10,wsize=32768,rsize=32768,port=2000 localhost:/mac_backup /Volumes/backup

    This works fine for Finder/terminal, and throughput is almost equal to direct traffic (the CPU of the NAS does ride high when I reach max bandwidth, though). Now the problem: with Time Machine I can't use the NFS mount point mounted on localhost. TM seems to try to connect to it and then gives me an "OSStatus error 65". I also tried using NFSv3 (I correctly forwarded all ports) with no luck. Can anyone shed a light on this and/or give a solution?
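
    One setting that often comes up when pointing Time Machine at non-AFP network volumes (it may or may not address error 65, but it is cheap to try):

        # tell Time Machine to offer unsupported (e.g. NFS/SMB) network volumes
        defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1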

