Search Results

Search found 89612 results on 3585 pages for 'sof user'.


  • VPN Error 691 but server says authentication succeeded

    - by Andy
    Hello all, I have a problem with a VPN connection on Windows XP SP3 that appears to be related to an account (maybe privileges, or an option I have missed). When connecting with my account, which is a domain administrator account, the VPN connects fine. However, an account created for another person receives Error 691: Username or Password is not valid for this domain. On the domain controller (Windows 2003) I see a successful logon message:

        User DOMAIN\user was granted access.
        Fully-Qualified-User-Name = int.company.net.au/People/Management/User
        NAS-IP-Address = 10.30.0.3
        NAS-Identifier = not present
        Client-Friendly-Name = MelbourneCore
        Client-IP-Address = Router-ip
        Calling-Station-Identifier = not present
        NAS-Port-Type = Virtual
        NAS-Port = 77
        Proxy-Policy-Name = Use Windows authentication for all users
        Authentication-Provider = Windows
        Authentication-Server = undetermined
        Policy-Name = Remote VPN Access
        Authentication-Type = MS-CHAPv1
        EAP-Type =

    Does anyone have any ideas as to where else I should look for a solution? If I use the wrong password it logs a logon failure in the event viewer, and removing the user from the remote access group also logs a logon failure. Nothing appears in the event viewer on the local machine. In the past, all that was required was to add the user to our Remote Access Users group. Any help?
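
    One thing worth checking from the command line is the account's dial-in attribute in AD, since it can block remote access independently of group membership. A minimal sketch, assuming the Server 2003 dsquery tools are available (the DN below is illustrative):

        rem msNPAllowDialin of FALSE (or unset, with policy-controlled access)
        rem can fail a connection even when the password check succeeds
        dsquery * "CN=User,OU=Management,OU=People,DC=int,DC=company,DC=net,DC=au" -attr msNPAllowDialin

    Comparing this attribute between the working administrator account and the failing account is a quick way to rule the Dial-in tab in or out.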

  • Safe place to put an executable file on Windows 7 (and Windows XP)

    - by Ricket
    I'm working on a tweak to our logon script which will copy an executable file to the local hard drive and then, using the schtasks command, schedule a task to run that executable daily. It's a standalone executable file, and when run it creates a folder in the working directory (which would be the same directory as the executable in this case). In Windows XP, of course, it can be put anywhere - I'd probably just throw it in C:\SomeRandomFolder and let it be. But this logon script also runs on Windows 7 64-bit machines, and those are trickier with UAC and all that. The user is a local administrator but UAC is enabled, so I'm pretty sure the executable would be blocked from copying to a location like C:\ or C:\Program Files (since those seem to be at least mildly protected by UAC). The scheduled task needs to run under the user's profile, so I can't just run it as SYSTEM and ignore the UAC boundaries; I need to find a path the user can copy into. Where can I copy this standalone executable file so that:

        - the copy operation succeeds without a UAC prompt on Windows 7,
        - the path is either common to both WinXP and Win7 or uses environment variables, and
        - the scheduled task, running with user permissions, is able to launch the executable?
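
    One path that satisfies all three constraints is the user's own profile via %APPDATA%, which resolves on both XP and Win7 (unlike %LOCALAPPDATA%, which only exists from Vista on), is writable without elevation, and is readable by a task running as that user. A minimal cmd sketch - the share path, folder, and task names are made up:

        rem copy into the user profile; no UAC prompt since it's user-writable
        set DEST=%APPDATA%\CompanyTask
        if not exist "%DEST%" mkdir "%DEST%"
        copy /y "\\server\netlogon\task.exe" "%DEST%\"
        rem schedule it daily as the current user (schtasks option syntax
        rem differs slightly between XP and Win7, so test on both)
        schtasks /create /tn "CompanyTask" /tr "\"%DEST%\task.exe\"" /sc daily /st 09:00

    The folder the EXE creates in its working directory then also lands inside the profile, safely away from protected paths.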

  • How can I set up Redmine => Active Directory authentication?

    - by Chris R
    First, I'm not an AD admin on site, but my manager has asked me to get my personal Redmine installation to integrate with Active Directory, to test-drive it for a larger-scale rollout. Our AD server is at host:port ims.example.com:389 and I have a user IMS/me. Right now I also have a user me in Redmine using local authentication. I have created an Active Directory LDAP authentication method in Redmine with the following parameters:

        Host: ims.example.com
        Port: 389
        Base DN: cn=Users,dc=ims,dc=example,dc=com
        On-The-Fly User Creation: YES
        Login: sAMAccountName
        Firstname: givenName
        Lastname: sN
        Email: mail

    Testing this connection works just fine. I have, however, not successfully authenticated with it. I've created a backup admin user so that I can get back into the me account if I break things, and then I've tried changing me to use the Active Directory credentials. However, once I do, nothing works to log in. I have tried all of these login name options:

        me
        IMS/me
        IMS\me

    I've used my known domain password, but no joy. So, what setting do I have wrong, or what information do I need to acquire in order to make this work?
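
    It can help to replay Redmine's search outside the application with the OpenLDAP tools, to separate "wrong Base DN / attribute" from "wrong Redmine setting". A sketch, assuming UPN-style binds are allowed and using the names from above:

        ldapsearch -x -H ldap://ims.example.com:389 -D "me@ims.example.com" -W \
            -b "cn=Users,dc=ims,dc=example,dc=com" "(sAMAccountName=me)" sAMAccountName mail

    If this returns no entry, the account lives outside that Base DN (or the login attribute is wrong) and on-the-fly authentication can never find it, which matches the "test succeeds, login fails" symptom.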

  • Virtual PC duplication process

    - by Toddintr
    This is the process I use for duplicating a Virtual PC (on Windows 7):

        1 - Create a new VPC.
        2 - Install Windows 7 on the new VPC.
        3 - Configure the new Windows 7 installation (install Windows updates, install applications, etc.).
        4 - Run Sysprep on the new VPC.
        5 - Shut down the new VPC.
        6 - Make a copy of the new VPC's VHD file.
        7 - Create a new VPC, specify "use existing VHD file" in the wizard, and provide the name of the copied VHD file.

    The above works fine, but there is one point that threw me off: during the OOBE for the duplicated VPC, when asked for a user name, I had to specify a different user name than the one I had given the base VPC. This makes sense, because the copied VPC already has that user name. What I did not understand is why I was asked for a new user name at all. Is it because it is part of the OOBE process, and when Microsoft designed the OOBE they did not anticipate that base OS images would be copied?

    Thanks - -Todd
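
    For what it's worth, the account-creation prompt is simply a standard OOBE screen after a generalize pass; it can be suppressed with an answer file rather than worked around. A hedged sketch (the unattend.xml path is illustrative):

        sysprep /generalize /oobe /shutdown /unattend:C:\unattend.xml

    An answer file with the Microsoft-Windows-Shell-Setup component pre-populated (local account, computer name) skips the interactive screens entirely, so every clone comes up without asking for a new user.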

  • Win 2008 R2 terminal server and redirected printer queue security

    - by Ian
    I have a case where I need a non-privileged account to be able to make a modification to the redirected printer. I know it's not advisable, but we're not giving users access - changes will be made in code. So, following the docs (http://technet.microsoft.com/en-us/library/ee524015(WS.10).aspx), I modified the default security for new printer queues. This doesn't work, though, as Windows doesn't seem to assign the privileges you configure in the printer admin tool to redirected printer queues.

    As a test I added a non-privileged test user to the default security tab in the printer admin tool (Control Panel - Admin Tools - Printer Admin). I assigned it all privileges (it's a test) and logged the user into the terminal server. The redirected printers duly appeared as usual. However, if I open the printer properties - security tab, the user appears in the list of accounts/groups, but the options I selected (all privileges) are not set. Instead the user's special permissions box is marked, and when I click on 'advanced options' and view them, there is nothing marked. So, something is clearing these options... the question is, why, and how can I convince it not to?

    Ian

  • ColdFusion 9 server not restarting - "Permission denied" errors

    - by Xevi Pujol
    I had to restart my ColdFusion 9 server on CentOS because of a memory performance issue, but now the server won't start again. Looking at cfserver.log I can see "Permission denied" errors all along. The ColdFusion application folder (/opt/coldfusion9/) is owned by nobody:root, as that fixed a similar problem we had a few weeks ago. Also, the last time this CF server was running correctly, the JRE user being used was nobody. Maybe CF is trying to start using another user (presumably apache), and that creates permission issues? However, I'm not sure how to check this hypothesis. Where is the config file that tells CF which JRE user to use? If I can change that, I could try specifying nobody there. Any other ideas also welcome.

    UPDATE: The runtime user that ColdFusion will use is defined in /etc/init.d/coldfusion_9. I fixed the problem by being consistent with the users: I needed to revert the ownership of the folder /opt/coldfusion9/ back to apache:root, which matches the init file.
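
    In the same spirit as the update above, a quick way to confirm which account the init script launches the JVM under and to realign ownership with it - the grep pattern is a guess at the script's wording, and the paths are from this setup:

        # see what user the init script starts ColdFusion as
        grep -iE 'runtime.?user|su ' /etc/init.d/coldfusion_9
        # then make the install tree match that account
        chown -R apache:root /opt/coldfusion9

    Keeping the init script's user and the filesystem ownership in lockstep avoids the flip-flop between nobody and apache that caused both incidents.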

  • Ruby Passenger + Nginx or lighttpd + FastCGI for shared hosting

    - by devnull
    I have set up a Passenger + nginx setup and I plan to offer free non-commercial hosting (or in fact on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger: for each application you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with:

        server {
            ...
            passenger_base_uri /app1;
            passenger_base_uri /app2;
            passenger_base_uri /app3;
        }

    Now this is not inherently bad as, in theory, I could allow a user to run just one app on his webspace, but even in this case I need to create a new server entry in nginx (e.g. user.domain.com). As this will mainly be used to deploy apps, the behavior I am looking for is the ability to auto-map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or Apache file each time. This seems to be a limitation in Passenger. As such I am thinking about an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, e.g. I create a new directory with app2 and it will run immediately? What is your experience of the performance difference between Passenger + nginx vs. lighttpd + FastCGI? Thanks in advance.

    Scenario details: on nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without declaring the path in nginx.conf and restarting the server. The wished-for behavior with the new setup: a user can add a new folder with a new app and it runs on lighttpd + fcgi without any extra configuration of the web server.
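
    One middle ground worth noting: plain CGI (not FastCGI) needs no per-app server configuration, since the mapping is by file extension rather than by mount point. A hedged lighttpd sketch, with an illustrative path:

        server.modules += ( "mod_cgi" )
        # any .rb dropped anywhere under /apps/ runs immediately
        $HTTP["url"] =~ "^/apps/" {
            cgi.assign = ( ".rb" => "/usr/bin/ruby" )
        }

    The trade-offs: each .rb would have to dispatch itself through Rack's CGI handler, and every request pays process-spawn cost - so this buys zero-touch deployment at the price of the very performance the Passenger comparison is about.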

  • samba "username map" stopped to work after upgrade to 3.6

    - by Kris_R
    It was time to upgrade our group server (new HDs, problems with the old installation of DRBD, etc.). Going as usual for CentOS, I upgraded the whole system from 6.3 to 6.4. The latter came with Samba 3.6, where the old one had 3.5. I transferred most users by copying /etc/passwd, /etc/shadow and the Samba accounts with pdbedit. Homes were on an NFS drive. The translation of Unix accounts to Samba accounts is located in /etc/samba/smbusers.

    Strangely enough, on some Windows clients there was a problem connecting to Samba shares. In one case the only thing that worked was to use the Unix account name instead of the Windows name. In another, it was possible to mount the network drive and open it in Windows Explorer, but other applications like Total Commander, on attempting to open this drive, gave the message "Cannot connect to z:" (sometimes at this moment user/pass were requested). smb.conf has the following entries:

        [global]
        security = user
        passdb backend = tdbsam
        username map = /etc/samba/smbusers
        ...
        [Kris]
        comment = Kris's Private
        path = /SMB/Users/Kris
        writeable = yes
        read only = no
        browseable = yes
        users = krisr
        printable = no
        security mask = 0777
        force security mode = 0
        directory security mask = 0777
        force directory security mode = 0
        force create mode = 0775
        force directory mode = 6775

    And smbusers:

        # Unix_name = SMB_name1 SMB_name2 ...
        krisr = Kris

    Of course testparm runs without any errors. With Samba 3.5 I was used to log output of the form "Mapped user kris to krisr". Nothing like this happens now - just the message "check_sam_security: Couldn't find user Kris in passdb". I read on the web that some people had problems with 3.6 and security = ADS, but those were not helpful for me. I'm seriously considering downgrading back to Samba 3.5, but before that step I wanted to ask if somebody knows a solution to these problems.
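
    A quick way to watch the mapping without any Windows client in the loop is smbclient with debugging turned up (level 3 is arbitrary but usually enough to show the passdb lookup):

        smbclient //localhost/Kris -U Kris -d 3

    If this reproduces the same check_sam_security failure, the client side is ruled out entirely and the problem is narrowed to how 3.6 applies the username map during authentication.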

  • Strange Apache WebDAV situation (OS X will connect, Ubuntu will not)

    - by mewrei
    So basically my situation is that I have an Apache 2.2 web server running on Linux on another box, configured to serve WebDAV. Now here's the weird part: I can access the server just fine on my Mac using the "Connect to Server" dialog (I even moved about 5 GB of files over the connection). On my Ubuntu desktop, cadaver will connect as well and let me browse. However, when I try to use Xmarks (BYOS Edition) or the GNOME "Connect to Server" dialog, I get a 403 Forbidden error. My server does digest authentication, if that makes any difference. Here's the relevant part of my apache2.conf file:

        <VirtualHost *:80>
            DocumentRoot "/path"
            <Directory "/path">
                Dav on
                AuthType Digest
                AuthName iTools
                AuthDigestDomain "/"
                AuthUserFile /path/to/WebDavUsers
                Options None
                AllowOverride None
                <LimitExcept GET HEAD OPTIONS>
                    require valid-user
                </LimitExcept>
                Order allow,deny
                Allow from All
            </Directory>
            <Directory "/path/*/Public">
                Options +Indexes
            </Directory>
            <Directory "/path/user">
                <LimitExcept GET HEAD OPTIONS>
                    require user user
                </LimitExcept>
            </Directory>
        </VirtualHost>
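
    To see exactly which request the failing clients trip over, the WebDAV methods can be replayed by hand. A sketch with curl (credentials and path are placeholders):

        # PROPFIND is what directory listings use; GNOME's gvfs typically
        # sends OPTIONS and then PROPFIND before anything else
        curl -i --digest -u username:password -X PROPFIND -H "Depth: 1" http://server/path/

    Note that PROPFIND is not in the GET HEAD OPTIONS exception lists above, so it always requires authentication; a client that mishandles the digest challenge on PROPFIND would fail exactly the way described while cadaver and OS X succeed.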

  • Linux Tuning for High Traffic JBoss Server with LDAP Binds

    - by Levi Stanley
    I'm configuring a high-traffic Linux server (RedHat) and running into a limit I haven't been able to track down. I need to handle a sustained 300 requests per second throughput using Nginx and JBoss. The point of this server is to run checks on a user's account when that user signs in. Each request goes through Nginx to JBoss (specifically TorqueBox with JBoss AS7 with a Sinatra app) and then makes an LDAP request to bind that user and retrieve several attributes. It is during the bind that these errors occur. I'm able to reproduce this going directly to JBoss, so that rules out Nginx at least.

    I get a variety of error messages, though oddly JBoss stopped writing to the log file recently. It used to report errors about creating native threads. Now I just see "java.net.SocketException: Connection reset" and "org.apache.http.conn.HttpHostConnectException: Connection to http://my.awesome.server:8080 refused" as responses in JMeter. To the best of my knowledge I have plenty of available file handles, processes, sockets, and ports, yet the issue persists. Unfortunately, I have very little experience tuning servers. I've found a couple of useful documents - the Ipsysctl tutorial 1.0.4 and Linux Tuning - but those documents are a bit over my head (and just entering the configuration described in Linux Tuning doesn't fix my issue). Here are the configuration changes I've tried (webproxy is the user that runs Nginx and JBoss):

        /etc/security/limits.conf

        webproxy soft nofile 65536
        webproxy hard nofile 65536
        webproxy soft nproc 65536
        webproxy hard nproc 65536
        root soft nofile 65536
        root hard nofile 65536
        root soft nproc 65536
        root hard nofile 65536

        First attempt, /etc/sysctl.conf

        sysctl net.core.somaxconn = 8192
        sysctl net.ipv4.ip_local_port_range = 32768 65535
        sysctl net.ipv4.tcp_fin_timeout = 15
        sysctl net.ipv4.tcp_keepalive_time = 1800
        sysctl net.ipv4.tcp_keepalive_intvl = 35
        sysctl net.ipv4.tcp_tw_recycle = 1
        sysctl net.ipv4.tcp_tw_reuse = 1

        Second attempt, /etc/sysctl.conf

        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216
        net.core.netdev_max_backlog = 30000
        net.ipv4.tcp_congestion_control=htcp
        net.ipv4.tcp_mtu_probing=1

    Any ideas what might be happening here? Or better yet, are there some good documentation resources designed for beginners?
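
    Before tuning further, it's worth verifying the limits actually apply to the running processes, since limits.conf only affects new PAM sessions. A quick sketch:

        # limits as the service account actually sees them
        su - webproxy -c 'ulimit -n -u'
        # limits of the live JBoss process itself (pick the real PID)
        cat /proc/$(pgrep -f jboss | head -1)/limits
        # socket pressure at the moment errors appear
        ss -s

    Also note that lines in /etc/sysctl.conf must not carry a leading "sysctl " - written as in the first attempt above, those settings fail to parse and are never applied; either strip the prefix in the file or run them as sysctl commands.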

  • Openfiler crashing without cause or leaving any log messages

    - by user44725
    So my Linux machine keeps crashing, without so much as a by-your-leave. I've tried and tried, and failed again, to work out what's happening. Any help would be much appreciated.

        Linux chai 2.6.29.6-0.24.smp.gcc3.4.x86_64 #1 SMP Tue Mar 9 05:06:08 GMT 2010 x86_64 x86_64 x86_64 GNU/Linux

    It's running Openfiler. Here is what the /var/log/messages file says at the time of the latest crash. Nothing that unusual - just greg logging in and out via Samba. You'll notice there is a cron running for root every minute; ignore this, it isn't the issue either - it was some check I've been doing to discover the problem.

        Jun 2 10:32:01 chai crond(pam_unix)[16529]: session closed for user root
        Jun 2 10:32:49 chai samba(pam_unix)[15454]: session opened for user greg by (uid=0)
        Jun 2 10:33:01 chai crond(pam_unix)[16537]: session opened for user root by (uid=0)
        Jun 2 10:33:04 chai crond(pam_unix)[16537]: session closed for user root
        Jun 2 10:41:40 chai syslogd 1.4.1: restart.
        Jun 2 10:41:43 chai syslog: syslogd startup succeeded

    That restart was triggered manually, by clicking the restart button on the box. So basically messages isn't revealing many secrets, and dmesg only shows output from startup. If there is any output I should paste, just say when and where and it'll be done. Thanks for your help! Tim
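
    When a box dies without writing anything locally, the kernel's last words often never reach the disk. A hedged sketch for capturing them over the network with netconsole (IPs and interface are placeholders; netcat flag syntax varies between flavors):

        # on the crashing machine: stream kernel messages to 192.168.1.10:6666
        modprobe netconsole netconsole=@/eth0,6666@192.168.1.10/
        # on another machine: collect them
        nc -u -l 6666 | tee openfiler-crash.log

    A hard hang that logs nothing even this way points more toward hardware (RAM, PSU, overheating) than software, so a memtest pass would be the natural next step.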

  • Need Corrected htaccess File

    - by Vince Kronlein
    I'm attempting to use a WordPress plugin called WP Fast Cache, which creates static HTML files from all your posts, pages and categories. It creates the following directory structure inside wp-content:

        wp_fast_cache/
          example.com/
            pagename/
              index.html
            categoryname/
              postname/
                index.html

    Basically just a nested directory structure with a final index.html for each item. But the .htaccess edits it makes are crazy:

        #start_wp_fast_cache - do not remove this comment
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html [L]
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
        </IfModule>
        #end_wp_fast_cache

    No matter how I try to work this out, I get a 404 Not Found - and not the WordPress 404, but a janky Apache 404. I need to find the correct syntax to route all requests that don't exist (i.e. files or directories) to:

        wp-content/wp_fast_cache/hostname/request_uri/

    So for example:

        Page: example.com/about-us/ => wp-content/wp_fast_cache/example.com/about-us/index.html
        Post: example.com/my-category/my-awesome-post/ => wp-content/wp_fast_cache/example.com/my-category/my-awesome-post/index.html
        Category: example.com/news/ => wp-content/wp_fast_cache/example.com/news/index.html

    Any help is appreciated.
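
    One likely culprit: in per-directory (.htaccess) context, a RewriteRule substitution is treated as a URL path, so an absolute filesystem target like /home/user/public_html/... maps to a nonexistent URL and yields exactly this Apache-level 404 (RewriteCond, by contrast, happily tests filesystem paths with -f). A hedged sketch of the second rule pair with a URL-path target:

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^GET$
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{HTTP_COOKIE} !wordpress_logged_in [NC]
        RewriteCond %{DOCUMENT_ROOT}/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
        RewriteRule .* /wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]

    The user-agent exclusion from the original can be re-added as another RewriteCond; the key change is that the RewriteRule target starts at the site root, not the filesystem root.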

  • Securing SSH/SFTP and best practices on security

    - by MultiformeIngegno
    I'm on a fresh VPS with Ubuntu Server 12.04, and I wanted to ask about good practices for hardening a stock Ubuntu Server. This is what I've done so far: I added Google Authenticator to SSH, then created a new user (which I'll use instead of root for SSH and SFTP access) and added it to my /etc/sudoers list below root, so it's now:

        # User privilege specification
        root     ALL=(ALL:ALL) ALL
        new_user ALL=(ALL:ALL) ALL

    Then I edited sshd_config, set PermitRootLogin to no, and restarted the ssh service. Is this OK? There are a few things I'd like to ask, though:

    1) What's the point of adding a new (sudoer) user while the root user still exists (OK, it can't log in with root privileges, but it's still there)?

    2) System files are owned by root, and I want to use new_user for SFTP access - but then I can't edit those files! Should I mass-chmod them so that new_user has write permissions too? What's the good practice on this?

    Thanks in advance; I hope you'll tell me if I did something wrong and/or other ways to secure the system. :)
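
    On question 2, the usual answer is to leave system-file ownership alone and escalate per-edit instead of widening permissions. A small sketch (paths illustrative):

        # edit a root-owned file as new_user without touching its mode or owner
        sudoedit /etc/ssh/sshd_config
        # whole commands go through sudo the same way
        sudo service ssh restart

    Mass-chmodding system files undoes most of the hardening the rest of the setup buys - and some tools (sshd with its keys, sudo with sudoers) will outright refuse to run when their files have loose permissions.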

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain, and to have the individual servers return the files directly to the users. The following shows a simple example:

        1) The user's browser requests http://www.example.com/files/file1.zip
        2) The request goes to server A, based on the DNS A record for example.com.
        3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
        4) Server A forwards the request to server B.
        5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect, as that would violate the requirement of a single domain.

    From my research, what I want to achieve is called "Direct Server Return", and it is a common setup for load balancing; it is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network, and for servers outside server A's network, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load-balancing setup: server B should have server A's IP address on the loopback interface, and it should not answer any ARP requests for that IP address.

    My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems I will need to roll my own solution. Ideally, I would like to do the routing/forwarding at the web server level, i.e. in PHP or C#/ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything. Thanks.
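
    For reference, the MAC-rewriting half of step 4 is exactly what LVS "direct routing" mode does on Linux - though stock LVS balances at layer 4 and cannot pick a backend per URL on its own. A minimal sketch with ipvsadm (addresses illustrative; each backend also needs the loopback/ARP configuration described above):

        # the front server announces the service IP and forwards frames
        # to the chosen real server by rewriting the destination MAC
        ipvsadm -A -t 203.0.113.10:80 -s rr
        ipvsadm -a -t 203.0.113.10:80 -r 203.0.113.20:80 -g   # -g = direct routing / DSR

    Content-aware dispatch on top of this generally means either a custom scheduler or accepting one layer-7 hop (which breaks true DSR) - which is why most DSR deployments shard by hostname or IP rather than by path.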

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, we're looking to serve a range of content using Amazon S3 as the store and Tomcat to host the web application. The content is divided into free and paid-for content. We intend to authenticate users when they access the web application running in Tomcat; based on their authentication we can tell whether a user has access to the paid-for content or just the free stuff. So I envision the flow of a request being something like this:

        1. Authenticated request to Tomcat.
        2. If the user is a "paid" user, display links to premium content.
        3. Direct requests for paid content back through Tomcat, to prevent direct access by non-paying users.
        4. Tomcat makes the request to S3 through a web cache, to keep our costs down.
        5. Content is returned to the user.

    As we pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time, both to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache when we publish fresh content to S3. So to confirm, my proposal is:

        Client Request -> Tomcat -> Web Cache -> S3

    To invalidate the cache, I was thinking of using something like PubSubHubbub, with the cache waiting for updates to a feed listing the content it should invalidate. I'd appreciate some general feedback on this approach, as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.
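
    One concrete, low-effort way to realize the "Web Cache" box is an nginx proxy cache between Tomcat and S3. A hedged sketch - bucket name, port, and sizes are placeholders:

        proxy_cache_path /var/cache/s3 keys_zone=s3zone:10m max_size=10g inactive=7d;

        server {
            listen 127.0.0.1:8090;           # Tomcat points its S3 fetches here
            location / {
                proxy_pass http://my-bucket.s3.amazonaws.com;
                proxy_cache s3zone;
                proxy_cache_valid 200 7d;    # each object billed by S3 at most once per week
            }
        }

    Invalidation is the weak spot: open-source nginx has no purge API, so the PubSubHubbub listener would have to delete cache files directly - or the scheme can use versioned object keys (a new name per publish), which sidesteps invalidation entirely.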

  • Group policy not applying to security group

    - by ihavenoideawhatimdoing
    Preface: I have enough privileges to create GPOs in my OU, and have made a few of them for some simple tasks (like deploying a printer to certain users). Not actually a sysadmin - I'm a developer who is winging it.

    I wanted to create a GPO that would set a mapped folder for a certain security group (which I recently created and which contains only myself). I did the following:

        Created the GPO in MyOU - Users
        Removed the default Authenticated Users under Security Filtering
        Added the security group with my account to Security Filtering
        Set up the mapping via the User Configuration option
        Changed GPO Status to "Computer configuration settings disabled"
        Left WMI Filtering unset and closed the GPO at this point
        Logged in as the target user; ran gpupdate /force
        Logged out, logged in, ran gpresult /r - no mention of my GPO
        Rebooted
        Logged in, re-ran gpupdate /force
        Logged out, logged in, ran gpresult /r - still no mention of my GPO

    If I log in with a completely different user, their RSOP information shows that the new GPO is being ignored due to a security restriction, so it appears to be "working" for other users. I just can't get it to actually show up in RSOP for the user it should be working for. Is there anything else I can do, short of rebooting endlessly and crossing my fingers?
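
    One quick check worth doing before more reboots: security-filtered GPOs only apply once the logon token actually contains the group, and the token is built at logon from whichever DC answers. A sketch (the group name is a placeholder):

        rem does the current token include the filtering group?
        whoami /groups | findstr /i "MyMappingGroup"
        rem user-side RSOP only
        gpresult /r /scope:user

    If whoami doesn't list the group, the membership change hasn't reached the token yet (DC replication delay can cause this even across logoffs). If it does list it, verify on the GPO's Delegation tab that the group has both "Read" and "Apply group policy" - adding a group under Security Filtering should set both, but it's the pair that matters.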

  • Optimize Apache for 10K+ WordPress views a day on 2GB RAM E6500 CPU

    - by Broke artist
    I have a dedicated server with Apache/PHP on Ubuntu serving my WordPress blog with about 10K+ pageviews a day. I have the W3TC plugin installed with APC. But every now and then the server stops responding, or goes dead slow, and I have to restart Apache to get it back. Here's my config - what am I doing wrong?

        ServerRoot "/etc/apache2"
        LockFile /var/lock/apache2/accept.lock
        PidFile ${APACHE_PID_FILE}
        TimeOut 40
        KeepAlive on
        MaxKeepAliveRequests 200
        KeepAliveTimeout 2

        StartServers 5
        MinSpareServers 5
        MaxSpareServers 8
        ServerLimit 80
        MaxClients 80
        MaxRequestsPerChild 1000

        StartServers 3
        MinSpareServers 3
        MaxSpareServers 3
        ServerLimit 80
        MaxClients 80
        MaxRequestsPerChild 1000

        StartServers 3
        MinSpareServers 3
        MaxSpareServers 3
        ServerLimit 80
        MaxClients 80
        MaxRequestsPerChild 1000

        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}
        AccessFileName .htaccess
        Order allow,deny
        Deny from all
        Satisfy all
        DefaultType text/plain
        HostnameLookups Off
        ErrorLog /var/log/apache2/error.log
        LogLevel error
        Include /etc/apache2/mods-enabled/*.load
        Include /etc/apache2/mods-enabled/*.conf
        Include /etc/apache2/httpd.conf
        Include /etc/apache2/ports.conf
        LogFormat "%v:%p %h %l %u %t \"%r\" %s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %s %O" common
        LogFormat "%{Referer}i - %U" referer
        LogFormat "%{User-agent}i" agent
        CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined
        Include /etc/apache2/conf.d/
        Include /etc/apache2/sites-enabled/
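
    With 2 GB of RAM, the usual failure mode behind this symptom is MaxClients outrunning memory once the APC-backed PHP processes fatten up, pushing the box into swap until Apache is restarted. A quick sizing sketch (column 8 of ps -ly output is RSS in KB; process name may be apache2 or httpd):

        ps -ylC apache2 --sort=rss | awk 'NR>1 {sum += $8; n++} END {if (n) print sum/n/1024, "MB avg per process"}'

    Divide the RAM left after MySQL and OS overhead by that average; if the result comes out well under 80, the current ServerLimit/MaxClients of 80 lets Apache promise more clients than the machine can hold, and lowering it is the fix.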

  • How can I do a large file upload using Sinatra, haml, nginx, and passenger?

    - by mmr
    Hi all, I need to be able to allow a user to upload 30-60 MB files at a time. Right now, I'm solving the problem with a simple form post:

        %form{:action=>"/Upload",:method=>"post",:enctype=>"multipart/form-data"}
          - @theModelHash.each do |key,value|
            %br
            %input{:type=>"checkbox", :name=>"#{key}", :value=>1, :checked=>value}
            =key
          %br
          %input{:type=>"file",:name=>"file"}
          %input{:type=>"submit",:value=>"Upload"}

    This form allows the user to select processing options contained in theModelHash and upload a file for processing. The problem is that this method both freezes the user's UI and requires the entire form to be reposted when the user presses the 'back' button. I've looked at SWFUpload, but have no idea how to integrate it into my relatively simple app. There's a page here about integrating it with Rails, but I'm using Sinatra, and am new enough to this whole web programming thing that I don't know how to modify those files for what I need to do. Is there a how-to for adding large file uploads to my form - something relatively simple that just adds a progress bar and doesn't repost? I feel like I'm having to triple the size of my application just to make this feature play nice, and that's bothering me a bit.
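
    Separate from the UI question: with nginx in front (as in this stack), uploads of this size are rejected before Sinatra ever sees them unless the body-size cap is raised, since nginx defaults to 1 MB. A one-line sketch for the server block:

        client_max_body_size 64m;    # allow the 30-60 MB uploads through nginx

    Without it, the browser gets a 413 Request Entity Too Large no matter what the application or upload widget does.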

  • ProFTPD / PAM issues with new CentOS/Virtualmin install

    - by iamthewit
    Hi all, I just installed CentOS 5.4 on a Rackspace cloud server and installed Virtualmin, which all seemed to go fine. The only problem I have is that I cannot access the virtual servers' directories via FTP. I get the following from FileZilla:

        Status: Connecting to 1.1.1.1:21...
        Status: Connection established, waiting for welcome message...
        Response: 220 FTP Server ready.
        Command: USER username
        Response: 331 Password required for username.
        Command: PASS *******
        Response: 230 User username logged in.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is current directory.
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode (1,1,1,1,216,214)
        Command: LIST
        Error: Connection timed out
        Error: Failed to retrieve directory listing

    And I get this from my /var/log/secure file:

        Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0)
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful.
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username'
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available

    Any help would be greatly appreciated. I'm not totally new to Linux, but it's not my strongest subject, and I do like to know exactly why problems occur and exactly how to fix them, so the more detail the better! Cheers.
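
    The transcript has the classic passive-mode signature: the control connection and login are fine, then LIST times out right after PASV. That means the separate data connection is being dropped, usually by a firewall that doesn't allow the ephemeral passive port. A hedged sketch (the port range is arbitrary):

        # /etc/proftpd.conf - pin passive data connections to a known range
        PassivePorts 49152 50000

        # then let that range through iptables
        iptables -I INPUT -p tcp --dport 49152:50000 -j ACCEPT

    Loading the connection-tracking FTP helper (modprobe ip_conntrack_ftp on CentOS 5) is the alternative that avoids opening a static range.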

  • Apache showing 500 error during Active Directory LDAP authentication

    - by Tyllyn
    I have Apache (on Windows Server) set up to authenticate one directory through Active Directory. The config settings are as follows:

        <LocationMatch "/trac/[^/]+/login">
            Order deny,allow
            Allow from all
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative Off
            AuthLDAPURL ldap://<ip-redacted>:3268/cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local?sAMAccountName?sub?(objectClass=*)
            AuthLDAPBindDN trac@<dc-redacted>.local
            AuthLDAPBindPassword "<password-redacted>"
            AuthType Basic
            AuthName "Protected"
            require valid-user
        </LocationMatch>

    Watching Wireshark, I see the following get sent through when I visit the page. To the AD server:

        bindRequest(1) "trac@<dc-redacted>.local" simple

    And from the AD server:

        bindResponse(1) success

    I'm assuming this means the auth was successful... but Apache doesn't think so: it returns a 500 error to me. The Apache logs show the following:

        [Thu Nov 18 16:21:12 2010] [debug] mod_authnz_ldap.c(379): [client 192.168.x.x] [7352] auth_ldap authenticate: using URL ldap://<ip-redacted>:3268/cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local?sAMAccountName?sub?(objectClass=*), referer: http://192.168.x.x/trac/Trac/login
        [Thu Nov 18 16:21:12 2010] [info] [client 192.168.x.x] [7352] auth_ldap authenticate: user authentication failed; URI /trac/Trac/login [ldap_search_ext_s() for user failed][Filter Error], referer: http://192.168.x.x/trac/Trac/login

    Now, that log shows a failed auth for a blank user. I am confused. Any idea what I am doing wrong... and how I can get the Apache authentication working? :) Thanks!
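
    The successful bind is only phase one; the "[ldap_search_ext_s() for user failed][Filter Error]" comes from the search phase, where mod_authnz_ldap combines the URL's filter part with the login attribute. A way to take Apache out of the loop is to run the equivalent search by hand from any box with the OpenLDAP tools (the filter mirrors what the module builds):

        ldapsearch -x -H ldap://<ip-redacted>:3268 -D "trac@<dc-redacted>.local" -W \
            -b "cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local" \
            "(&(objectClass=*)(sAMAccountName=someuser))" sAMAccountName

    If this errors too, suspect the base DN or filter - in a default AD layout the Users container lives directly under the domain root (CN=Users,DC=...,DC=local), not inside an OU, which would make the base DN above invalid. If it succeeds, the suspicion shifts to how this Apache build parses the filter portion of AuthLDAPURL.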

  • Multiple LDAP servers with mod_authn_alias: failover not working when the first LDAP is down?

    - by quanta
    I've been trying to set up redundant LDAP servers with Apache 2.2.3.

    /etc/httpd/conf.d/authn_alias.conf:

        <AuthnProviderAlias ldap master>
            AuthLDAPURL ldap://192.168.5.148:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

        <AuthnProviderAlias ldap slave>
            AuthLDAPURL ldap://192.168.5.199:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

    /etc/httpd/conf.d/authz_ldap.conf:

        #
        # mod_authz_ldap can be used to implement access control and
        # authenticate users against an LDAP database.
        #
        LoadModule authz_ldap_module modules/mod_authz_ldap.so

        <IfModule mod_authz_ldap.c>
            <Location />
                AuthBasicProvider master slave
                AuthzLDAPAuthoritative Off
                AuthType Basic
                AuthName "Authorization required"
                AuthzLDAPMemberKey member
                AuthUserFile /home/setup/svn/auth-conf
                AuthzLDAPSetGroupAuth user
                require valid-user
                AuthzLDAPLogLevel error
            </Location>
        </IfModule>

    If I understand correctly, mod_authz_ldap will try to search for the user in the second LDAP if the first server is down or OpenLDAP on it is not running. In practice, though, this does not happen. Testing by stopping LDAP on the master, I get a "500 Internal Server Error" when accessing the Subversion repository, and the error_log shows:

        [11061] auth_ldap authenticate: user quanta authentication failed; URI / [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server]

    Did I misunderstand? Does AuthBasicProvider ldap1 ldap2 only mean that if mod_authz_ldap can't find the user in ldap1, it will continue with ldap2 - without any failover feature (i.e. ldap1 must be up and working)?
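
    That reading matches how provider chaining behaves: the next provider is consulted when a user is not found, while a connection failure is treated as a hard error. For server-level failover, the LDAP module itself accepts a space-separated host list inside a single URL and tries each host in turn at connect time. A hedged sketch replacing the two aliases with one:

        AuthLDAPURL "ldap://192.168.5.148 192.168.5.199/dc=domain,dc=vn?cn"

    Since both servers here share the same bind DN and password, a single provider with both hosts should give exactly the master-then-slave behavior the alias pair was aiming for.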

  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon EC2 c1.xlarge instances, over the Amazon AMI. The servers are duplicates of each other, running exactly the same hardware and software. Each server's spec is:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    A couple of weeks ago we ran a yum upgrade on one of the servers. Starting with this upgrade, the upgraded server began showing a high load average. Needless to say, we did not update the other servers, and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher. Do you have any idea what it could be, or where else we can check? Many thanks for the help! Oz.

        # proper server - w
        00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
        USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU  WHAT
              pts/1  82.80.137.29  00:28   14:05  0.01s  0.01s -bash
              pts/2  82.80.137.29  00:38   0.00s  0.02s  0.00s w

        # proper server - iostat
        Linux 3.2.12-3.2.4.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu: %user %nice %system %iowait %steal %idle
                  9.03  0.02    4.26    0.17   0.13 86.39
        Device:  tps   Blk_read/s Blk_wrtn/s Blk_read  Blk_wrtn
        xvdap1   1.63  1.50       55.00      367236    13444008
        xvdfp1   4.41  45.93      70.48      11227226  17228552
        xvdfp2   2.61  2.01       59.81      491890    14620104
        xvdfp3   8.16  14.47      94.23      3536522   23034376
        xvdfp4   0.98  0.79       45.86      192818    11209784

        # problematic server - w
        00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
        USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU  WHAT
              pts/0  82.80.137.29  00:28   15:04  0.02s  0.02s -bash
              pts/1  82.80.137.29  00:38   0.00s  0.05s  0.00s w

        # problematic server - iostat
        Linux 3.2.20-1.29.6.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu: %user %nice %system %iowait %steal %idle
                  7.97  0.04    3.43    0.19   0.07 88.30
        Device:  tps   Blk_read/s Blk_wrtn/s Blk_read  Blk_wrtn
        xvdap1   2.10  1.49       76.54      374660    19253592
        xvdfp1   5.64  40.98      85.92      10308946  21612112
        xvdfp2   3.97  4.32       93.18      1087090   23439488
        xvdfp3   10.87 30.30      115.14     7622474   28961720
        xvdfp4   1.12  0.28       65.54      71034     16487112
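
    Since CPU and I/O utilization look similar on both machines, one thing load average counts that top's CPU lines don't is processes stuck in uninterruptible sleep. A quick sketch to check for them:

        # any process in D state inflates load without using CPU
        ps -eo state,pid,wchan:30,cmd | awk '$1 == "D"'
        # watch the run (r) and blocked (b) queue columns over a few seconds
        vmstat 1 5

    A kernel difference introduced by the yum upgrade (3.2.12 vs 3.2.20 in the iostat headers above) that changes how some driver waits could show up exactly this way - a steady trickle of D-state processes with no visible CPU or disk cost.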

  • Is iptables capable of this or should I go with mod_proxy?

    - by Jesper
    I'm trying to configure my network to receive an incoming connection on one device and then redirect it to another device on a specific port. Right now I'm working with port 80 and a device running Apache. The problem I'm facing is that when the forwarding is done, it also sets the source IP to the first device instead of the source IP of the user connecting to the service. Let me illustrate it:

        [Internet User] = 7.7.7.7 connects to [Device 1] = 1.1.1.1:80
        [Device 1] forwards it to [Device 2] = 1.1.1.2:80
        [Device 2] outputs the response that [Internet User] sees

    So on [Device 2] I naturally see [Device 1]'s IP in the logs, but I want to know whether there is a way to connect the Internet user through [Device 1] to [Device 2] while seeing the real source IP in the logs on [Device 2]. Is that possible? My rule set looks like this at the moment (on Device 1):

        iptables -P FORWARD ACCEPT
        iptables -t nat -I PREROUTING -j DNAT -p tcp --dport 80 --to-destination 1.1.1.2:80
        iptables -t nat -I POSTROUTING -j SNAT -p tcp -d 1.1.1.2 --to-source 1.1.1.1

    [Device 2] accepts everything incoming on port 80 from [Device 1], as well as all related and established connections. So, would there be any way to get the real source onto [Device 2]? Let me know if you need more information!
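
    The SNAT rule is precisely what rewrites the client address away; DNAT alone preserves the source. The catch is that [Device 2]'s replies must then route back through [Device 1] so the reverse NAT can happen. A sketch:

        # Device 1: keep only the DNAT, drop the SNAT rule
        iptables -t nat -I PREROUTING -p tcp --dport 80 -j DNAT --to-destination 1.1.1.2:80

        # Device 2: send return traffic back via Device 1
        ip route add default via 1.1.1.1

    If [Device 2] cannot change its default route, a policy route for traffic sourced from port 80 achieves the same thing; otherwise replies bypass [Device 1], the reverse translation never happens, and connections stall.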

  • Windows Server 2008 R2 - VPN Folder Sharing Permissions

    - by daveywc
    I have set up VPN access to my Windows Server 2008 R2 server using RRAS. Clients can connect, run applications, view shares, etc. My problem is that one of the applications they use relies on some network shares, and the application cannot access the shares unless the user first goes into Windows Explorer and accesses the share, providing their user name and password (the same one they use to connect via the VPN). Previously, on a different Windows Server 2008 (not R2), this was not necessary: the application and user could access the share without providing another user name and password.

    I have tried giving the Everyone group full control over the shared folder - both on the Security tab and in the Permissions area under Advanced Sharing on the Sharing tab. This still did not resolve the issue. (I don't really want to give Everyone access anyway - I was hoping that granting access to a group the VPN users belong to would be enough.) I have also turned off password-protected sharing in the Advanced Sharing Settings area of the Network and Sharing Center (under both Home or Work and Public).

    So my question is: what is preventing my VPN users from accessing these folders without re-supplying the same login and password they use to access the VPN? And what is the best practice in this type of scenario?
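
    As a stop-gap while the underlying cause is tracked down, the share can be pre-mapped when the VPN comes up, so the application finds an already-authenticated session. A sketch - server, share, drive letter, and domain are placeholders:

        rem map with saved credentials; * prompts once for the password,
        rem /persistent keeps the mapping across logons
        net use Z: \\fileserver\share * /user:DOMAIN\%USERNAME% /persistent:yes

    The longer-term question is usually whether the VPN logon is a domain logon at all: if clients authenticate to RRAS with local rather than domain accounts, Windows cannot silently reuse those credentials against a domain file server, which matches the re-prompt behavior described here.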

  • Permission problem with Git (over SSH) on FreeBSD

    - by vpetersson
    We're having a permission problem with Git on FreeBSD. The setup is fairly straightforward: we have a few different repos on the same server. For simplicity, let's say they reside in /git/repo1 and /git/repo2. Each repo is owned by the user 'git' and a self-titled group (e.g. repo1), and is configured with g+rwX access. Every user who commits to a repository is also a member of that repo's group (e.g. repo1). The Git repositories all have 'sharedRepository = group' set.

    So far so good: all users can check out the code from the repositories, and the first user can commit without any problem. However, when the next user tries to commit to a repository, he receives a permission error. We've been banging our heads against this issue for some time now, and the only way we've managed to resolve it is by running the following script between commits (which is obviously very inconvenient):

        find /git/repo1 -type d -exec chmod g+s {} \;
        chmod -R g+rwX /git/repo1
        chown -R git:repo1 /git/repo1/
        cd /git/repo1
        git gc

    Anyone got a clue as to where the problem lies?
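
    Before re-running that repair loop, two things are worth confirming, since either one silently re-breaks group sharing: that the sharedRepository setting actually survives in each repo's config, and that committers' SSH sessions really carry the group plus a sane umask (non-interactive shells can load different rc files). A sketch:

        # confirm the setting is really present in this repo
        cd /git/repo1 && git config core.sharedRepository
        # confirm a committer's effective umask and groups over SSH
        ssh someuser@server 'umask; id -Gn'

    sharedRepository only governs files Git itself creates; anything written by other means (rsync, scp, a cron job running as one user) keeps that user's umask and needs the chmod/chown treatment once more.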
