Search Results

Search found 18347 results on 734 pages for 'generate password'.

  • Create account for service

    - by Andy
    I am configuring a new server. The server is running Hudson, which is going to copy some files from this server to another. The other server is a virtual machine. Both are running Windows Server 2012. Hudson is started on server A with the log on set to "Local System". When I come to the copy phase it says "Access denied". Changing the log on to "Administrator" works; however, I guess this is bad. I do not have much experience with user management. I tried to create a dedicated hudson account on both servers A and B. I tried to set the service to log on as the hudson account in the service manager, but then it doesn't start. How would you create an account for this particular service that has access to the shared folder on server B and can be used to start the service on server A? I guess I need two accounts with the same username and password on server A and server B? The folder on server B is shared with Everyone and the guest account is enabled.
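
    A minimal sketch of the mirrored-local-accounts approach described above (the account name, password, share path and the Hudson service name are placeholders, and may differ on an actual install):

        rem On both server A and server B, create a local account with the same name and password:
        net user hudson Str0ngP@ss123 /add

        rem On server A, run the Hudson service under that account
        rem (granting it the "Log on as a service" right if Windows does not do so automatically):
        sc config hudson obj= ".\hudson" password= Str0ngP@ss123

        rem On server B, give the account rights on the shared folder instead of relying on Everyone/Guest:
        icacls D:\Shared /grant hudson:(OI)(CI)M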

  • Configuring postfix with Gmail

    - by MultiformeIngegno
    This is what I did:

        sudo apt-get install postfix

    This is my /etc/postfix/main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=no
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        myhostname = tsXXX561.server.topcloud.it
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination =
        relayhost = [smtp.gmail.com]:587
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        default_transport = smtp
        relay_transport = smtp
        inet_protocols = all
        # SASL Settings
        smtp_use_tls=yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_tls_CAfile = /etc/postfix/cacert.pem

    Then I created the file /etc/mailname with my hostname as its content:

        tsXXX561.server.topcloud.it

    Then I created the file /etc/postfix/sasl_passwd:

        [smtp.gmail.com]:587 [email protected]:gmail_password

    Then:

        sudo postmap /etc/postfix/sasl/passwd
        sudo cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem
        service postfix restart

    Still sends nothing... I'm on Ubuntu Server 12.04.
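
    One detail stands out: main.cf reads the map from hash:/etc/postfix/sasl_passwd, but the postmap command above was run against /etc/postfix/sasl/passwd, a different path. A minimal verification sequence (a sketch; the mail.log location is the Ubuntu default):

        sudo postmap /etc/postfix/sasl_passwd      # build the map at the path main.cf references
        ls -l /etc/postfix/sasl_passwd.db          # postmap should have produced this file
        sudo chmod 600 /etc/postfix/sasl_passwd*   # the file holds a plaintext password
        tail -f /var/log/mail.log                  # watch for SASL/TLS errors while sending a test mail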

  • “NT AUTHORITY\ANONYMOUS LOGON” error in Windows 7 (ASP.NET & Web Service)

    - by Tony_Henrich
    I have an ASP.NET web app which works fine on a Windows XP machine in a domain. I am porting it to a standalone Windows 7 machine. The app uses a web service which makes a call to SQL Server. The web server (IIS 7.5) and SQL Server are on the same standalone machine. I enabled Windows authentication for the website and the web service. The web service uses a trusted-connection connection string, and its credentials use System.Net.CredentialCache.DefaultCredentials. I noticed the username, password and domain name are blank after the call! The web service and web site use the 'Classic .NET AppPool' with the NetworkService identity. I am getting an "NT AUTHORITY\ANONYMOUS LOGON" exception in the database call in the web service. I am assuming it's related to the blank credentials. I am expecting the ASPNET user to be the security token to the database. Why is this not happening? Did I miss a setting? (Usually this happens when SQL Server and the web server are on two different machines in a domain, i.e. delegation and double hopping, but in my case everything is on one dev box.)
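
    If the intent is for the caller's Windows identity (rather than the blank app pool credentials) to reach the local SQL Server, one sketch worth trying is classic ASP.NET impersonation in web.config; whether it fits depends on the rest of the auth setup:

        <system.web>
            <authentication mode="Windows" />
            <!-- Impersonate the authenticated caller instead of the app pool
                 identity, so the local SQL Server sees a real token -->
            <identity impersonate="true" />
        </system.web>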

  • Connection between Asp.Net and Oracle 10g Express Edition

    - by l3gion
    Hello, I'm struggling to find a way to connect my ASP.NET + C# application to my Oracle 10g Express Edition. Here's my scenario: I'm on Mac OS and I have two virtual machines, one for Win 7 (the VS 2010 app) and another with a Parallels Virtual Appliance with Oracle 10g Express Edition 1.1. Which provider (OleDb, ODP.NET, etc.) should I use? How do I make the connection to the server in C#? Right now I have this:

        <appSettings>
            <add key="conn" value="Data Source=10.211.55.11;Persist Security Info=True;User ID=l3gion;Password=l3gion;" />
        </appSettings>

    And in the .cs file:

        SqlCommand cmd = new SqlCommand("insert_thing", new SqlConnection(ConfigurationManager.AppSettings["conn"]));
        cmd.CommandType = CommandType.StoredProcedure;

    (insert_thing is a stored procedure.) Using this I got this error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    I've searched for some possible solutions and tried some, including disabling the firewall and allowing remote connections to Oracle Express Edition with this command line ("EXEC DBMS_XDB.SETLISTENERLOCALACCESS(FALSE);"). The error persists. Can anyone guide me in the right direction? I'm a newbie with this type of thing. Thank you for your patience. Regards
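
    The error text is itself a hint: SqlConnection and SqlCommand belong to the SQL Server client, so they try to reach a SQL Server instance no matter what the connection string says. A sketch of the same call through ODP.NET instead (the EZConnect-style data source, port 1521 and the XE service name are assumptions):

        using System;
        using System.Data;
        using Oracle.DataAccess.Client;   // ODP.NET (Oracle Data Provider for .NET)

        class OracleDemo
        {
            static void Main()
            {
                // host:port/service_name; XE is Express Edition's default service name
                string connStr = "Data Source=//10.211.55.11:1521/XE;User Id=l3gion;Password=l3gion;";
                using (OracleConnection conn = new OracleConnection(connStr))
                using (OracleCommand cmd = new OracleCommand("insert_thing", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }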

  • Apache config: Permissions, Directories and Locations

    - by James Murphy
    I'm trying to get my head around Apache configuration to fix a problem I'm having, but after a few hours I've decided to ask here. This is what I've got at the moment:

        DocumentRoot "/var/www/html"

        <Directory />
            Options None
            AllowOverride None
            Deny from all
        </Directory>

        <Directory /var/svn>
            Options FollowSymLinks
            AllowOverride None
            Allow from all
        </Directory>

        <Directory /opt/hg>
            Options FollowSymLinks
            AllowOverride None
            Allow from all
        </Directory>

        <Location /hg>
            AuthType Digest
            AuthName "Engage HG"
            AuthDigestProvider file
            AuthUserFile /opt/hg/hgweb.users
            Require valid-user
        </Location>

        WSGISocketPrefix /var/run/wsgi
        WSGIDaemonProcess hg processes=3 threads=15
        WSGIProcessGroup hg
        WSGIScriptAlias /hg "/opt/hg/hgweb.wsgi"

        <Location /svn>
            DAV svn
            SVNPath /var/svn/repos
            AuthType Basic
            AuthName "Subversion"
            AuthUserFile /etc/httpd/conf/users
            require valid-user
        </Location>

    I'm trying to get my head around how it's all laid out and how directories relate to locations, etc. For /hg I get asked for a password, but for /svn I get a 403 Forbidden. The error I get is:

        [client 10.80.10.169] client denied by server configuration: /var/www/html/svn

    When I remove the entry it works fine. I can't figure out how to get it linking to the /var/svn directory.

  • How to allow users to transfer files to other users on linux

    - by Jon Bringhurst
    We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1 PB), controlled by traditional unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem). We currently have a program called "give", a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. So, a user would type something like the following to give a file to another user:

        > give username-to-give-to filename-to-give

    The receiving user can then use a command called "take" (part of the give program) to receive the file:

        > take filename-to-receive

    The permissions of the file are then effectively transferred over to the receiving user. This program has been around for years and we'd like to revisit things from a security and functional point of view. Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production. Does anyone have another method they use to transfer extremely large files between users when only traditional unix permissions are available?

  • Installing Debian 7.1 on FakeRAID/Intel Z77 results in boot with no grub menu

    - by user198982
    I'm trying to install Debian 7.1 from DVD onto 2x500GB drives which are set up in a FakeRAID mirror using the on-board FakeRAID provided by the Z77 chipset. I have followed the guide at https://wiki.debian.org/DebianInstaller/SataRaid. Namely, I booted into the expert install with the 'dmraid=true' option added, installed onto the RAID mirror which the installer correctly detected, then installed grub2 onto the /dev/mapper/.. RAID volume. I chose to use LVM (so a boot partition + an LVM volume). As per the guide, I uncommented the "GRUB_DISABLE_LINUX_UUID=true" line in /etc/default/grub and ran "update-grub" and then "grub-install /dev/mapper/.." (with the right RAID device in the command). However, after I rebooted the system, all I got was a grub console. It did not load the menu. I checked, and it seems it never even generated a menu file.

    I have re-installed Debian a few times since, trying different options and also a few workarounds people posted online, but to no avail. The best I get is a grub console; no menu. Sometimes it will generate the grub.cfg, sometimes it won't, depending on the workaround I try. I was wondering if anyone else has experienced this issue. There is no need to preach about how I should not use FakeRAID; I have seen others trying to figure this out, so I think a resolution to this issue would be of interest to more than just me.

    Also, I first installed the system onto a small drive for testing something else. I made a backup with Acronis and was able to restore it onto the RAID mirror by using Universal Restore. When I installed the system onto a 500GB drive without RAID, backed it up using the same method, then restored it onto a RAID volume of the same size, it would not boot and I got grub errors. Weird. I can post more details, just let me know what you want to see.
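
    Until the menu problem is sorted out, a machine stuck at the grub console can usually be booted by hand. A sketch (the partition name, kernel version and LVM volume names are assumptions for a default Debian 7.1 install):

        grub> insmod lvm
        grub> ls                                    # locate the /boot partition, e.g. (hd0,msdos1)
        grub> set root=(hd0,msdos1)
        grub> linux /vmlinuz-3.2.0-4-amd64 root=/dev/mapper/debian-root
        grub> initrd /initrd.img-3.2.0-4-amd64
        grub> boot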

  • CentOS 6.5 new Kernel not active after reboot

    - by Kristofer
    Today I was running some yum updates and wanted to verify that everything went through fine by making sure I had a new kernel. To my surprise I noticed that CentOS was still running 2.6.32-431.5.1.el6.x86_64 even though it looked as though 2.6.32-431.23.3.el6 was installed. Indeed, 2.6.32-431.23.3.el6 shows up in /etc/grub.conf but not in the boot options at startup. Any ideas why? In the update log it says:

        ---> Package kernel-firmware.noarch 0:2.6.32-431.5.1.el6 will be updated
        ---> Package kernel-firmware.noarch 0:2.6.32-431.23.3.el6 will be an update

    Could this be the reason? What does "will be an update" mean? My /etc/grub.conf:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE:  You have a /boot partition.  This means that
        #          all kernel and initrd paths are relative to /boot/, eg.
        #          root (hd0,0)
        #          kernel /vmlinuz-version ro root=/dev/mapper/VolGroup00-root
        #          initrd /initrd-[generic-]version.img
        #boot=/dev/vda
        default=0
        timeout=5
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        password --encrypted $1$auui(i$sODM4ni/Zts9IlMWu.wWF/
        title CentOS (2.6.32-431.23.3.el6.x86_64)
                root (hd0,0)
                kernel /vmlinuz-2.6.32-431.23.3.el6.x86_64 ro root=/dev/mapper/VolGroup00-root rd_NO_LUKS LANG=en_US.UTF-8 KEYBOARDTYPE=pc KEYTABLE=sv-latin1 rd_NO_MD rd_LVM_LV=VolGroup00/swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup00/root rd_NO_DM rhgb quiet rhgb quiet audit=1
                initrd /initramfs-2.6.32-431.23.3.el6.x86_64.img
        title CentOS (2.6.32-431.5.1.el6.x86_64)
                root (hd0,0)
                kernel /vmlinuz-2.6.32-431.5.1.el6.x86_64 ro root=/dev/mapper/VolGroup00-root rd_NO_LUKS LANG=en_US.UTF-8 KEYBOARDTYPE=pc KEYTABLE=sv-latin1 rd_NO_MD rd_LVM_LV=VolGroup00/swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup00/root rd_NO_DM rhgb quiet rhgb quiet audit=1
                initrd /initramfs-2.6.32-431.5.1.el6.x86_64.img
        title CentOS (2.6.32-431.el6.x86_64)
                root (hd0,0)
                kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/VolGroup00-root rd_NO_LUKS LANG=en_US.UTF-8 KEYBOARDTYPE=pc KEYTABLE=sv-latin1 rd_NO_MD rd_LVM_LV=VolGroup00/swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup00/root rd_NO_DM rhgb quiet rhgb quiet audit=1
                initrd /initramfs-2.6.32-431.el6.x86_64.img
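
    A quick way to see what is actually booting versus what grub.conf advertises (a sketch; grubby ships with CentOS 6):

        uname -r                      # kernel the machine is actually running
        df -h /boot                   # confirm /boot is the partition grub.conf was written to
        sudo grubby --default-kernel  # the kernel entry grub currently treats as the default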

  • VPN - local and remote networks IP collision

    - by Guido García
    I have created a VPN connection in Windows using the New Network Connection wizard that comes with Windows. It works without problems in most places, but there is one particular place where, although the connection to the remote public IP works fine, it is not able to validate the login/password and establish the VPN connection. In this place, the local network is 10.0.0.x (the same range I use in other places where I am able to connect). The remote network is 192.168.x.x, so I suspect there is some kind of IP collision, because before connecting, a traceroute to e.g. 192.168.0.40 does not fail:

        1     4 ms     1 ms     1 ms  LINKSYS [10.0.0.1]
        2     5 ms     1 ms     1 ms  172.26.27.1
        3     4 ms     5 ms     3 ms  192.168.1.100
        ... (more)

    I can't modify the local network beyond the first router (10.0.0.1). That is the only difference I've found so far. Any idea how to solve it? Thank you.
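
    One way to see the suspected collision directly is to compare the Windows routing table before and after dialing the VPN (a diagnostic sketch):

        rem Which interface claims the 192.168.x.x range before the VPN is dialed?
        route print 192.168.*
        rem Trace without name resolution to see where those packets actually go:
        tracert -d 192.168.0.40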

  • User not in the sudoers file. This incident will be reported

    - by Sergiy Byelozyorov
    I need to install a package. For that I need root access. However, the system says that I am not in the sudoers file. When I try to edit that file, it complains in the same way! How am I supposed to add myself to the sudoers file if I don't have the right to edit it? I installed this system and am its only administrator. What can I do? Edit: I have tried visudo already; it requires me to be in sudoers in the first place:

        amarzaya@linux-debian-gnu:/$ sudo /usr/sbin/visudo

        We trust you have received the usual lecture from the local System
        Administrator. It usually boils down to these three things:

            #1) Respect the privacy of others.
            #2) Think before you type.
            #3) With great power comes great responsibility.

        [sudo] password for amarzaya:
        amarzaya is not in the sudoers file. This incident will be reported.
        amarzaya@linux-debian-gnu:/$
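
    Since visudo itself requires root, the usual way out is a real root shell: either su with the root password set during installation, or single-user/recovery mode from the boot menu. A sketch for Debian:

        su -                     # become root directly; sudo is not involved
        adduser amarzaya sudo    # Debian's default sudoers grants the 'sudo' group full rights
        # or add an explicit user entry instead, edited safely with:
        visudo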

  • SSLVerifyClient optional with location-based exceptions

    - by Ian Dunn
    I have a site that requires authentication in order to access certain directories, but not others. (The "directories" are really just rewrite rules that all pass through /index.php.) In order to authenticate, the user can either log in with a standard username/password or submit a client-side x509 certificate. So, Apache's vhost conf looks something like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient none
        SSLVerifyDepth 1

        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
        </LocationMatch>

    That works fine, but then large file uploads fail because of the behavior documented in bug 12355. The workaround for that is to set SSLVerifyClient require (or optional) as the default, so now the conf looks like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient optional
        SSLVerifyDepth 1

        <LocationMatch "/(bar-one|bar-two|bar-three)">
            SSLVerifyClient none
        </LocationMatch>

    That fixes the upload problem, but the SSLVerifyClient none doesn't work for bar-one, bar-two, etc.: requests for those directories are still prompted to present a certificate. Additionally, I need the root URL to be accessible without the user being prompted for a certificate. I'm afraid that would cancel out the workaround, though.
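
    For the upload failures specifically, mod_ssl has a per-directory knob for the renegotiation body buffer, which may let the original per-location layout stand. A sketch (the 10 MB figure is an arbitrary assumption):

        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
            SSLRenegBufferSize 10485760
        </LocationMatch>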

  • Document Map in MS Word 2007 going bonkers

    - by rzlines
    I'm working on a large project report in Microsoft Word 2007 and have been using the document map to generate the index. I have been carefully selecting the headers that need to be added to the document map, but when I saved the document and opened it up today to work on it, the document map had added whatever it pleased. There is a temporary fix, from a post that I found after extensive searching, that works, but when I save and close the document and open it up again I face the same dilemma:

        I have noticed that when Word stuffs up the document map after opening the file, I can undo this by using the UNDO button. Word calls it 'Autoformat'. I have also fixed a file that had the document map screwed permanently (i.e. saved with it) by selecting all (CTRL+A), opening the PARAGRAPH drop-down menu in the HOME tab, and in the OUTLINE drop-down box selecting 'Body Text'. This removed all the problems and did not seem to affect my outline-level paragraph headings. This is also only a temporary fix, and I have to be on my toes not to let Word autoformat at the start of the document. I also can't afford to turn off autoformat entirely, as I need it.

        I've solved this problem for me. When you open the file, a progress bar at the bottom first says "Opening (ESC to Cancel)" and then it says "Word is formatting the document (ESC to Cancel)". If I cancel the second process, the TOC is fine. No cancelling, TOC screwed.

    Can anyone work out how to switch off the autoformatting? The quote above is from the post in which I found the temporary fix.

  • ssh works fine when using public interface, but slow when using private interface

    - by Kevin M
    My Linux (Ubuntu EEE) to Linux (CentOS) ssh takes a long time to log in (~15 seconds) when using the private interface, but not when using the public one. I have a Linux box acting as my router. As such, it has multiple interfaces (75.xxx.xxx.xxx, 192.168.1.1). I can ssh in from the internal interface (192.168.1.65 to .1), but it takes a while. I can ssh into the public address, and it goes quickly (~1 second). I have another box from which I can ssh into the inside interface, and that goes quickly. iptables is set to accept packets coming into the interface immediately. sshd's UseDNS is normally on; I get the same problem if I turn it off and restart sshd. I normally use public-key authentication; I have done a mv ~/.ssh/ ~/ssh/ and it then asks me for a password, after the same slow wait. After logging in (using either interface), speed is quick.

        ssh client version (via ssh -v): OpenSSH_4.7p1 Debian-8ubuntu1.2, OpenSSL 0.9.8g 19 Oct 2007
        ssh server version (via rpm -qv openssh-server): openssh-server-4.3p2-29.el5
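
    Running the client verbosely usually shows exactly which step eats the 15 seconds; a diagnostic sketch:

        ssh -vvv 192.168.1.1                          # watch where the handshake stalls
        ssh -o GSSAPIAuthentication=no 192.168.1.1    # GSSAPI/Kerberos lookups are another classic cause of per-interface delays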

  • Plesk FTP not working but SFTP and Shell is working

    - by shamittomar
    I am facing a strange problem. FTP on my Plesk VPS is not working. Whenever I try to connect, the FileZilla FTP client says:

        Status: Resolving address of xxxxxxxxxxxxx.com
        Status: Connecting to xxx.xxx.xxx.xxx:21...
        Status: Connection established, waiting for welcome message...
        Error:  Could not connect to server

    So it's not even getting to the step of asking for a username/password; it's something else. SFTP on port 22 is working fine, and I can successfully get shell access and run commands. But I NEED FTP access too, on port 21. I have searched everywhere but cannot find any setting to enable it. This is the Plesk version info:

        Parallels Plesk Panel version 9.5.2
        Operating system: Linux 2.6.26.8-57.fc8
        CPU: GenuineIntel, Intel(R) Pentium(R) 4 CPU 3.00GHz

    Any help is appreciated. [EDIT]: The firewall is not blocking it. I have checked on the server and there are absolutely no blocking rules; the firewall states that all incoming/outgoing connections are accepted for FTP. And on the client side (my PC), I can connect to other FTP servers, so this is not an issue with my PC's firewall. Moreover, I cannot even connect to the FTP from online FTP clients like net2ftp.
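
    Since the TCP connection is never answered with a welcome banner, the first thing to establish is whether anything is listening on port 21 at all. A sketch (Plesk commonly runs ProFTPD out of xinetd, so the exact service and file names are assumptions):

        netstat -tlnp | grep ':21'   # is any daemon bound to port 21?
        /etc/init.d/xinetd status    # ProFTPD under Plesk is typically started by xinetd
        ls /etc/xinetd.d/            # look for an ftp_psa (or similar) entry and check it is not disabled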

  • Can't connect to Server Manager from Windows 7

    - by SAdmin317
    I have a Windows 7 Pro 64-bit desktop with SP1 that has the RSAT tools installed. I opened Server Manager and can't connect to the server (Server 2008 R2 Core). I followed the guide to enable everything on the server, and added a registry key to enable read-only access to the device manager as well. On the Windows 7 PC I turned on WinRM, did the quick config, and added the server IP and name as trusted hosts. I still get an error when connecting:

        Connecting to the remote server failed with the following error message: The WinRM client cannot process the request. If the authentication scheme is different from Kerberos, or if the client computer is not joined to a domain, then HTTPS transport must be used or the destination machine must be added to the TrustedHosts configuration setting....

    I also added the name of the server to the Windows 7 hosts file, and pinging the server name resolves to the IP of the server. I also opened up the firewall for "Remote Volume Management". Both machines are in the same workgroup, using the same Administrator account with the same password. Any help appreciated.
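
    For reference, the TrustedHosts piece can be set and verified from an elevated prompt on the Windows 7 box; a sketch (the server name is a placeholder):

        winrm quickconfig
        winrm set winrm/config/client @{TrustedHosts="core-server"}
        winrm get winrm/config/client    # confirm the setting took effect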

  • Explorer.exe hangs when moving large files to an external drive

    - by PiotrK
    While moving large files (700 MB+) to an NTFS-formatted external drive via USB 3.0, I've noticed strange behavior from explorer.exe (I am using Windows 7, up to date).

    Sometimes after moving a file, Explorer gets stuck (it can happen after a few files during a move of several large ones): the move window freezes and I am unable to kill explorer.exe (via Task Manager or TASKKILL on the command line). On the command line I get something like this (Task Manager shows that explorer.exe is still running; I get the same PID every time I try to kill it, and no diagnostic message):

        C:\Windows\system32> TASKKILL /F /IM explorer.exe
        SUCCESS: The process "explorer.exe" with PID 6296 has been terminated.
        C:\Windows\system32> TASKKILL /F /IM explorer.exe
        SUCCESS: The process "explorer.exe" with PID 6296 has been terminated.

    If I try to start another explorer.exe process at this point, I get the desktop icons and taskbar back, but I cannot open any Explorer window. After a few minutes explorer.exe finally dies and I am able to rerun it without rebooting. The file that I moved has two copies, one local and one on the external drive (the original wasn't deleted after the move); both copies seem to contain the same data (same length and CRC info). If this happens during a move of multiple files, only some of the files are moved, and one of them has two copies (both locally and on the external drive). What can I do to fix these Explorer hangs?

    Added: The same problem occurs when copying files; it hangs between large files. A similar problem occurred when I tried TotalCommander (x64): copying paused at 80% of one of the files. TC didn't hang (though clicking cancel in the copying dialog box had no effect), but during the pause I couldn't kill TotalCmd.exe, just like explorer.exe.

    Added (2): The problem seems to disappear when I use 32-bit applications (like TotalCommander (x86)), but I need to do more testing to be sure of this.

    Added (3): There are several errors in the event log (source: disk, id: 11, qualifiers: 49156, task: 0, level: 2, keywords: 0x80000000000000). This may be important, and I forgot to mention it: the main disk is encrypted via TrueCrypt (boot-time password).
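
    As a workaround while the Explorer hang is unexplained, moving the files with a command-line copier avoids the shell entirely. A sketch (paths and file name are placeholders; /J requests unbuffered I/O, which suits very large files, and /MOV deletes the source after copying):

        robocopy C:\Videos X:\Videos bigfile.mkv /J /MOV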

  • Password-protected traffic meter

    - by UncleBob
    Hi, I have a small problem that I haven't found a solution for yet. I live in Bosnia and share the internet connection with my landlady, and as is common in Bosnia we have no flat rate but a 15 GB traffic limit. That would actually be more than enough if the landlady's son didn't keep going over it, so the bills always end up quite expensive. I have already installed a metering program for him, but he apparently switches it off as soon as he gets near his limit and then claims not to have exceeded it. So I need, at minimum, a metering program that is password-protected and/or records in its log the periods during which it was not running. Even better would be a program that simply cuts off his network access when he exceeds his share, i.e. a mix of traffic meter and parental guard. Can anyone help me out?

  • Implementing emailing (bulk & event based) features for my website.

    - by Kabeer
    Hello. For my upcoming social networking website, I am looking for suggestions on the best way to implement emailing. Here are some of my requirements and constraints:

    Requirements:
    - Should be able to send emails based on events (new registrations, password changes, etc.), promotions (advertisements based on user consent), bulk mail (newsletters), and reminders (profile updates). I hope I got the point through.
    - Should be able to process faults (incorrect email addresses, full mailboxes, etc.)
    - User-initiated invites (inviting friends to connect)

    Constraints:
    - As of now I am looking at GoDaddy for hosting. Subsequently I shall move, maybe to Amazon's cloud. GoDaddy seems to be excruciatingly conservative (not always a bad thing) when it comes to the ability to send email.
    - My tests on GoDaddy so far have been discouraging. There is a limit to the number of emails I can send, and sometimes if an email carries special characters it throws strange exceptions, such as claiming there was a virus-infected attachment (even though I hadn't attached a thing). The replies from GoDaddy support have been equally funny.

    My intent is not to portray GoDaddy as wrong, but I am looking for a work-around that frees me from said constraints. I am looking for a mechanism / service that is either free or very cost effective. I wonder how other sites address this. Mine is a .NET / Windows-based application.

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the URL http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: now there are two possible URLs to the same resource:

        http://www.example.com/foo/bar/           (slash URL)
        http://www.example.com/foo/bar/index.html (index.html URL)

    By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in Nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this:

        location ~ /index\.html$ {
            return 404;
        }

    But it seems that Nginx does some internal rewrite of the slash URL to the index.html URL and then matches this location, so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just a redirect to the slash URL.
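
    nginx has a directive aimed at exactly this split: internal marks a location as reachable only through internal redirects (such as the index lookup for a slash URL), while external requests for it get a 404. A sketch:

        location ~ /index\.html$ {
            internal;    # still served for internal index rewrites; direct requests get 404
        }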

  • Connections to SSH and Samba suffer from heavy delay

    - by Till Helge Helwig
    There are a lot of questions about SSH connections being delayed, which can usually be fixed by disabling DNS lookups. Unfortunately that doesn't seem to be my problem. Our development server is accessed via SSH and Samba. When opening a connection to the server (either SSH or Samba) it takes a very long time. Accessing a Samba share from Windows is basically impossible because there is a timeout. Using smbclient works, but takes ages. When opening an SSH connection I am immediately prompted for the password, and after hitting Enter the terminal instantly shows the MOTD. After that it takes about a minute for the prompt to appear. I watched the load on the server while connecting via SSH and Samba and could not find anything out of order; there is nothing out of the ordinary running and hogging memory or CPU. I have no clue where this delay might come from. I have already tried UseDNS no in sshd_config and proxy_dns = no in smb.conf, but to no avail. Any idea about what might cause this would be greatly appreciated!

  • My home router randomly disconnects me and I'm unable to reconnect to it

    - by Roy Tang
    It's happened a few times. I'm not sure how to diagnose/debug, so any advice would be appreciated. Symptoms:

    - Sometimes the router will randomly disconnect; the connection icon on my desktop (wired to the router) gets that yellow "!" symbol that tells me my connection just went down. At this point I'm unable to ping the router.
    - Afterwards I try to reset the router by removing and then reconnecting the power jack on the router side. (This is the fastest way, as I can't reset the power strip it's connected to without rebooting my desktop. The router has a reset thingy, but it's one of those things where I have to find a pin to stick into the hole, and when I get disconnected I usually need to get reconnected immediately, so I just pull and put back the power jack.) Even after that, the connection stays in the same state.
    - After the router reboots, if I try to connect to it using a wifi device like my iPad, the iPad prompts me for the wifi password even though it had already "remembered" all the settings for this router.
    - After I finally decide to reboot the power strip, and my desktop and the router boot up again, the connection returns to its normal state somewhat and I'm able to connect as normal using the desktop and wifi devices.

    What do I need to check the next time this happens so I can figure out the problem? Is it possibly because we've been using the power jack on the router as an easier way to reboot it? Should I be shopping around for a new router? If it helps, the router is a D-Link DIR-300.

  • How to use a common library of environment variables among different languages?

    - by JDS
    We have three main languages with which we perform system tasks: Bash, Ruby, and PHP, and Perl. Four, four main languages. We use managed environment variables to provide authorization info that automated scripts need. For example, a mysql user account and password. We'd like to use one single managed file to maintain these variables. In some instances, for example, in cron, these environment variables are not available. They are made available in CLI scripts because we source the env file in everyone's profile. But something like cron doesn't do that. On the CLI, when the env file is sourced, any given script can access those variables. Bash has them directly, PHP in $_ENV, ruby in ENV, etc. We can't source the file into non-Bash scripts, because most languages implement shell commands by running them in a subshell. We considered parsing the Bash, converting to the script's lang, and running the equivalent of "exec(parsed_output)" on the resulting strings. What is a good solution to providing managed environment vars to scripts running in cron, or similar?
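
    One common arrangement is to keep the file to strict KEY=VALUE lines with no shell syntax, so Bash can source it while Ruby, PHP and Perl parse it with a one-line split, and cron entries pull it in explicitly. A sketch with assumed paths and names:

        # /etc/myapp.env -- strict KEY=VALUE, parseable by every language
        MYSQL_USER=app
        MYSQL_PASSWORD=secret

        # Bash: set -a exports everything the sourced file defines
        set -a; . /etc/myapp.env; set +a

        # crontab entry sourcing the same file before the job runs:
        * * * * * . /etc/myapp.env; /usr/local/bin/nightly-job.sh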

  • Issues connecting to HP ProCurve switches

    - by BriGuy
    We are having a very strange issue trying to connect to our infrastructure switches via SSH. When you first try connecting to them, the switches will prompt for the password and then just sit there after it is entered. If you create a second SSH session to the switch (while letting the first one remain open, just sitting there), it will let you log right in. The switches are doing the same thing with both RADIUS and local authentication. The other strange part to all of this is that about 10 switches started doing it at the same time. As far as actual configuration of the switches goes, nothing has changed. Occasionally one switch will start working normally, but then stop again. These are all HP ProCurve managed switches, but all different models/firmware; some switches that are not working are on the same firmware as others that are working.

    Update (2013-03-12): I am also seeing this same behavior when trying to use telnet. The first telnet session just hangs there, and a second telnet session will let me log in. Rebooting the switches seems to get them working, but I still have 5 production switches that cannot easily be rebooted because of their production roles. Is anyone aware of anything else that can be toggled that might reset the logon for remote management, or something like that?

  • PC dies when running at 100% CPU

    - by user155631
    I recently wrote some Java code to generate images of the Mandelbrot set (fractal). I made use of the new Fork/Join facility in Java 7 to run separate threads on all four cores (2 real, 2 virtual) simultaneously, using a large number of iterations for greater accuracy. The problem is, the process runs fine for about a minute, and then it's as if someone has pulled the plug and the PC just dies. I thought it must be the CPUs overheating, so I ran Real Temp to monitor the temperature. It's an Intel i3 processor. I can see the temperature creeping up to 70 degrees, and then it seems to level off there and run for about another 30 seconds before dying. According to Real Temp, there's still a gap of 35 degrees between the actual temperature and TJ Max. I also tried disabling the "CPU TM function" in the BIOS, but the problem still occurs. A colleague suggested that it might be a power supply problem, so I borrowed a more powerful PSU (can't remember what wattage it was, but it's higher than mine, which is 500W). The exact same thing still happens though. Is anyone able to suggest what the problem might be, or what I can try next?

  • Linux: CIFS/Samba mount hangs for several minutes

    - by Pistos
    I have a small local network which has a Gentoo box and a Windows box. I mount a share originating on the Windows box onto the Gentoo box with a command like:

        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos //192.168.0.103/Users /mnt/windowsbox

    Most of the time, everything Just Works, and I can read and write without problems. However, every few weeks or so, the connection or the mount point seems to go dead or hang, such that any process that tries to access the mount point gets stuck in D state (disk, or I/O wait). These processes become impervious to TERM and KILL signals. Disconnecting and reconnecting the Windows box from the network does not help. The frozen state lasts for 5+ minutes. It's really frustrating and gets in the way of normal work, because it freezes Save As dialogues, ls commands, etc. If I issue a umount on the mount point, it either hangs as well or reports that the mount point is in use. Eventually the dead state resolves itself, and the mount point gets unmounted, or it becomes possible to umount with no delay. My guess is that this happens when the connection/mount has gone idle, or when the Windows machine has been idle. I am not really sure. Why is this happening, and what can I do to prevent it? Or how can I successfully kill these D-state processes at will? Possibly related: CIFS mounts hang on read
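
    If the immediate goal is just to make the hangs killable, the cifs soft mount option is one hedged possibility: requests eventually return an error to the caller instead of blocking in D state indefinitely (whether that is acceptable depends on how well the workload tolerates I/O errors):

        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos,soft \
            //192.168.0.103/Users /mnt/windowsbox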
