Search Results

Search found 18728 results on 750 pages for 'setup deployment'.

Page 435 of 750

  • vsftpd: refusing to run with writable root inside chroot

    - by MrROY
    I want to set up an anonymous-only FTP server (able to upload files). Here is my config file:

        listen=YES
        anonymous_enable=YES
        anon_root=/var/www/ftp
        local_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chroot_local_user=YES
        dirmessage_enable=YES
        use_localtime=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
        pam_service_name=vsftpd

    But when I try to connect to it:

        kan@kan:~$ ftp yxxxng.bej
        Connected to yxxx.
        220 (vsFTPd 2.3.5)
        Name (yxxxg.bej:kan): anonymous
        331 Please specify the password.
        Password:
        500 OOPS: vsftpd: refusing to run with writable root inside chroot()
        Login failed

    Can anyone help?
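    A likely fix, sketched here and not taken from the original post: vsftpd 2.3.5 refuses to chroot into a directory the logged-in user can write to, so keep anon_root itself read-only and give anonymous users a writable sub-directory instead:

        chown root:root /var/www/ftp
        chmod 755 /var/www/ftp
        mkdir -p /var/www/ftp/uploads
        chown ftp: /var/www/ftp/uploads

    On vsftpd 3.x (or the patched vsftpd-ext build) the check can instead be relaxed with allow_writeable_chroot=YES in vsftpd.conf.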

    Read the article

  • Leveraging .Net 4.0 Framework Tools For Encrypting Web Configuration Sections

    - by Sam Abraham
    I would like to share a few points with regard to encrypting web configuration sections in .Net 4.0. This information is also applicable to .Net 3.5 and 2.0. Two methods work perfectly well for encrypting connection strings in a web project configuration file:

    1 - Do It All Yourself! In this approach, helper functions for encrypting/decrypting configuration file content are implemented. The program explicitly retrieves the appropriate content from the configuration file and then decrypts it. The disadvantages of this implementation are the added overhead of maintaining the encryption/decryption code, as well as the burden of always ensuring sections are decrypted before use and re-encrypted whenever edited.

    2 - Leverage the .Net 4.0 Framework (the way to go!) Fortunately, all the tools needed for protecting configuration files are built into the .Net 2.0/3.5/4.0 versions, with very little setup needed. To encrypt connection strings, one can use the ASP.Net IIS Registration Tool (Aspnet_regiis.exe). Note that a 64-bit version of the tool also exists under the Framework64 folder for 64-bit systems. The command we need to encrypt our web.config file connection strings is simply the following:

        Aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" -prov "RsaProtectedConfigurationProvider"

    To later decrypt this configuration section:

        Aspnet_regiis -pd "connectionStrings" -app "/SampleApplication"

    The following is a brief description of the command-line options used in the example above. Aspnet_regiis supports many more options, which you can read about in the links provided for reference below.

        -pe    Section name to encrypt
        -pd    Section name to decrypt
        -app   Web application name
        -prov  Encryption/decryption provider

    ASP.Net automatically decrypts the content of the web.config file at runtime, so no programming changes are needed. Another tool, aspnet_setreg.exe, is to be used if certain configuration file sections pertinent to the .Net runtime are to be encrypted. For more information on when and how to use aspnet_setreg, please refer to the references below.

    Hope this helps! Some great references concerning the topic:

        http://msdn.microsoft.com/en-us/library/ff650037.aspx
        http://msdn.microsoft.com/en-us/library/zhhddkxy.aspx
        http://msdn.microsoft.com/en-us/library/dtkwfdky.aspx
        http://msdn.microsoft.com/en-us/library/68ze1hb2.aspx
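    As a side note (not in the original article): when the site is not registered in IIS, Aspnet_regiis also offers the -pef/-pdf variants, which take a physical path rather than a virtual one, e.g. (the path here is hypothetical):

        Aspnet_regiis -pef "connectionStrings" "C:\inetpub\wwwroot\SampleApplication"
        Aspnet_regiis -pdf "connectionStrings" "C:\inetpub\wwwroot\SampleApplication"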

    Read the article

  • Desktop switcher appears to be broken for quick double-switches

    - by Jon Blackburn
    I'm wondering if anyone else has seen this. I have three virtual desktops aligned in a horizontal row. In the middle desktop I have only a single application window. I have keyboard shortcuts mapped to navigate between the desktops. Obviously, I never use the up/down arrows because I only have one row of workspaces. Here's the problem, which only started to happen after I installed 12.04.1: when I rapidly hit the shortcut twice to go from workspace 1 to workspace 3, the window on workspace 2 gets moved to workspace 1. I have checked using both Unity and Gnome3, and the behavior is the same under both. If I change back to the default workspace setup (a 2x2 grid of desktops) things seem to settle down (i.e., no wandering windows). Not every type of application window behaves the same way: I couldn't get a Chrome browser to jump from 2 to 1, but both Terminal and Terminator exhibit the behavior. Any thoughts? Better workarounds? Thanks in advance.

    Read the article

  • BIOS password and hardware clock problems

    - by Slartibartfast
    I have an HP 6730b laptop. I bought it used and installed (Gentoo) Linux on it. The BIOS is protected with a password, and the guy I bought it from said "I've tweaked the BIOS from a Windows program, it never asked me for a password". I've tried to erase the password by removing the battery, but it's still there. What did get erased, obviously, is the hardware clock. This is what happens: a) I can leave the laptop in January 1980 and it works; b) I can correct the system time, but boot will fail with "superblock mount time in future", from where I need to manually run fsck and continue the boot; c) I can correct the system time and sync it with hwclock -w, but then it behaves as in b) and resets the BIOS time to 1.1.1980 00:00. So I need either a way to bypass the BIOS password (which after a lot of googling seems impossible), a way to persist the clock, or a setup that keeps the hardware clock in the eighties, the system clock at the present time, and a normal boot.
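    One hedged workaround on Gentoo/OpenRC, assuming the CMOS clock simply cannot hold a sane date: stop setting the system clock from the broken hardware clock at boot and take the time from disk and the network instead, roughly:

        rc-update del hwclock boot
        rc-update add swclock boot        # swclock persists the last-known time on disk
        rc-update add ntp-client default  # then correct the clock over the network

    This sidesteps both the fsck complaint and the 1980 reset, at the cost of an accurate clock only becoming available once networking is up.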

    Read the article

  • shared web hosting architecture in a university setting

    - by gaspol
    We're in the process of creating a shared webhosting infrastructure for our university. Departments within the university can host their sites on this infrastructure. We're thinking of setting up multiple, load balanced web servers attached to shared storage (for web content and Apache config files). There will also be database servers behind these web servers. Does anyone have any other suggestions about this? Any recommendations for an alternative setup? Would having cPanel/WHM/Plesk be a good idea to automate account creation/maintenance?
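    One possible building block, sketched with hypothetical names: keep each department's content and logs on the shared storage and give it a name-based virtual host that is identical on every load-balanced web server, e.g.:

        <VirtualHost *:80>
            ServerName   history.example.edu
            DocumentRoot /shared/web/history/htdocs
            ErrorLog     /shared/web/history/logs/error.log
            CustomLog    /shared/web/history/logs/access.log combined
        </VirtualHost>

    A control panel (cPanel/WHM, Plesk) can generate much the same thing, so that choice mostly comes down to whether departments should self-manage their accounts.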

    Read the article

  • VPN only connects once and then fails

    - by Toby Allen
    I have a VPN connection set up from home to my office, and it works great except for one thing. Once I connect and then disconnect, the next time I try to connect to the VPN it hangs on "verifying your username and password", no matter how many times I try. Once I restart my router it works again. My router is a Belkin; the router on the other end is a Draytek. Any ideas? Is there a cache somewhere that needs to be reset, or a setting?

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking?

    Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure, and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit, and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases are, and always have been, taken automatically for disaster recovery purposes, but these are strictly on-cloud copies and as of now it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory.

    Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well documented, and it seems likely that many will also give BACPAC the bum's rush.

    Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, you are left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task. Cheers, Tony.

    Read the article

  • Connection to mysql server in SYN_SENT

    - by Sunil
    We have been facing a strange problem for the last few days between our application server and database server (MySQL): a connection to the database server from the application server hangs in the SYN_SENT state, and after that we are not able to make any connection to the database server on the MySQL port (3306). When we checked the netstat output on the database server, the connection is in the SYN_RECV state. What I can figure out is that the MySQL server is receiving the SYN request and responding, but the response is not reaching the client; hence SYN_RECV on the server side and SYN_SENT on the client side. I would expect the SYN_SENT state to go away after some time, and other DB connection attempts to the same server should not hang because of it. Does anybody have any idea how we can resolve this issue? Our setup details: Application server: RHEL 5.4, kernel-release = 2.6.18-164.el5, x86_64. Database server: MySQL version 5.1.49, RHEL 5.4, kernel-release = 2.6.18-164.el5, x86_64.
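    A few hedged diagnostics (interface and host names below are placeholders) that usually narrow this down to a firewall/conntrack device or packet loss between the two hosts:

        # on the application server: are connections stuck in SYN_SENT?
        netstat -ant | grep ':3306'
        # on both ends: does the SYN+ACK actually leave the DB server and arrive back?
        tcpdump -nn -i eth0 host appserver and port 3306
        # check for SYN backlog pressure on the DB server
        sysctl net.ipv4.tcp_max_syn_backlog net.ipv4.tcp_syncookies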

    Read the article

  • Unity isn't starting on 13.10 (with Cinnamon 2.0 installed)

    - by Sam Pearman
    Since upgrading to 13.10, I can't log in to the Unity desktop. LightDM works correctly, but attempting to log in tries to start the session and then drops back to LightDM. I've already dropped to a terminal (Ctrl+Alt+F2) and done this:

        sudo apt-get update
        sudo apt-get install --reinstall ubuntu-desktop
        sudo apt-get install unity

    Logging in as a guest session also fails. Logging in to other window managers works with varying degrees of success. Note: I have Cinnamon 2.0 installed from a PPA. I'm using a 2-monitor setup. Also of note is that in the session prior to my upgrade to 13.10 the background of Unity failed to display at all, instead showing whatever was left in the screen buffer from the previous frame. The entire OS worked correctly otherwise, so I just ignored it for the session. No other upgrades or even updates were done prior to this occurring. My upgrade path to 13.10 was basically this: install 13.04 alongside Windows 7, use Ubuntu as a glorified web browser for a while, get updates (in preparation for 13.10), install 13.10. I also used Unity Tweak Tool to change some aspects of Unity, particularly auto-hide. Any help or ideas would be appreciated, as I'm typing this on my phone :(
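    A hedged next step, assuming the session itself is crashing rather than LightDM: check ~/.xsession-errors for whichever component aborts, and try resetting the user's Unity/Compiz configuration (this also undoes Unity Tweak Tool changes) before logging in again:

        less ~/.xsession-errors
        dconf reset -f /org/compiz/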

    Read the article

  • SBS 2011 on different subnet than domain computers

    - by Ravi
    The setup is as follows: SBS 2011 in a datacentre on subnet A; domain PCs at another location on subnet B; there is a site-to-site VPN. The domain PCs have joined the domain and have the SBS as their primary DNS server. The domain PCs can ping the DC, but the problem is that the DC cannot ping any host on the remote subnet (subnet B). SBS -- Switch -- Router A ------------------- Router B -- Switch -- Domain PCs. What is strange is that router A can ping any host on subnet B. Another host on subnet A can also ping any host on subnet B. It's only the DC which cannot ping anything on that specific remote subnet B. I did a tracert from the SBS to router B: the packet reaches router A from the SBS, but then it fails. Am I missing some specific settings that need to be done when the SBS is on a different subnet than its member PCs?
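    Two hedged things worth checking on the SBS box itself (the addresses below are placeholders): whether it actually has a route back to subnet B via router A, and whether the DC's firewall profile is dropping traffic from the "foreign" subnet:

        :: does subnet B appear in the routing table, and via router A?
        route print
        :: example persistent route to subnet B via router A
        route add -p 10.0.2.0 mask 255.255.255.0 10.0.1.1
        :: allow inbound ICMP echo so ping replies aren't silently blocked
        netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow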

    Read the article

  • Printer server drivers for Samsung ML printer series

    - by drpcken
    I have a print server running Server 2008. I'm trying to set up some Samsung ML printers on it (ML-3050, 2150 and 2570), and all my clients are Windows 7 x86. I just never know which drivers to get for the server: on the site they have Universal, PCL5 and PCL6, PostScript, and SPL drivers. From what I've read, PostScript is the best, right? I just have problems getting them installed on the server and having clients connect. Which driver type is the best?

    Read the article

  • Is there any way to make cherokee server portable?

    - by Tom
    I develop on different machines. I use MAMP; I have it installed in my Dropbox folder and created symbolic links to the Applications folder. That way, if I work one day on my desktop and make changes to, let's say, a database schema, and the next day I work from my laptop, I won't have to do any DB migration. The same applies to all the Apache virtual hosts I have set up using MAMP: everything is portable. I recently started using the Cherokee server and I like it a lot. I would like to replace MAMP with Cherokee, but first I need to be able to make it portable. I don't want to have to configure multiple virtual hosts, settings, etc., on multiple machines. Is there any way I can set up Cherokee to be as portable as MAMP? What if I want to run Cherokee from a thumbdrive?
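    A minimal sketch of the same trick used for MAMP, assuming Cherokee keeps its configuration under /usr/local/etc/cherokee (the path varies by install): keep the real config in Dropbox and symlink it into place on each machine:

        mv /usr/local/etc/cherokee ~/Dropbox/cherokee-conf
        ln -s ~/Dropbox/cherokee-conf /usr/local/etc/cherokee

    Running from a thumbdrive is the same idea, with the symlink pointing at the mounted drive instead.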

    Read the article

  • Reset user passwd when you don't know it

    - by warren
    I have a small problem. I have shared keys set up across my domain, so I never type my password to log in anymore, and I have now forgotten it. This is a problem because only my user can sudo, and password authentication for root has been disabled, so without my password I cannot do maintenance on my web server. Is there a way to reset my password as my (now only) key-authenticated user? Specifically, can this be done on CentOS 4?
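    If console access to the box is available, the usual recovery route (a sketch, not specific to CentOS 4) is single-user mode: at the GRUB menu press 'a' to append to the kernel line, add the word "single", boot, and then at the root shell run:

        passwd warren    # user name is a placeholder

    If single-user mode prompts for the root password, booting with init=/bin/bash instead gives the same opportunity.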

    Read the article

  • Fedora vs Ubuntu to host Subversion and Bugzilla over Apache

    - by Tone
    I'm not interested in a flame war of Ubuntu vs Fedora vs whatever. What I am interested in is whether or not I should move my current Ubuntu server to Fedora. I have been able to get Subversion set up and hosted via Apache over https, and it works quite well (I'm a .NET guy, so this was all new to me). I'm having trouble, though, with installing Bugzilla; I have run into some issues getting all the Perl scripts to run successfully. So my questions are: 1) Will Bugzilla install more easily on Fedora? Can I just install a package instead of having to download the tar.gz file, untar it, run Perl scripts, etc.? 2) Is Fedora considered to be a better production server system? I have no desire for a GUI; I just need it to host Subversion and Bugzilla over Apache2, and act as a file and print server for my home network.
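    For what it's worth, Bugzilla is packaged in the Fedora repositories, so the install there can be as simple as the following (package names vary between releases, so treat this as a sketch):

        yum install bugzilla

    It is also worth checking apt-cache search bugzilla on the existing Ubuntu box first, since older Ubuntu releases shipped a packaged Bugzilla as well.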

    Read the article

  • Reverse proxy 502 bad gateway

    - by Brian Graham
    I have set up a subdomain to proxy my Plesk panel, but when saving pages I get a 502 Bad Gateway error instead of a completion message. I am running CentOS 6. Here is my vhost.conf configuration for http://plesk.domain.tld/:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} ^80$
        RewriteRule $ https://plesk.domain.tld/ [R,L]

    Here is my vhost_ssl.conf configuration for https://plesk.domain.tld/:

        SSLProxyEngine On
        <Location />
            ProxyPass https://localhost:8443/
            ProxyPassReverse https://localhost:8443/
        </Location>

    I have more than enough RAM, CPU and HDD (and I have checked); there are no spikes. As well, the posted information does save; it just errors instead of showing me the green/red "This information has been saved." block. Here is the relevant error from /var/log/nginx/error.log (IP/host filtered):

        2014/05/29 02:42:41 [error] 8046#0: *402 upstream prematurely closed connection while reading response header from upstream, client: 173.238.XX.XX, server: plesk.domain.tld, request: "POST /smb/web/edit HTTP/1.1", upstream: "https://198.100.XX.XX:7081/smb/web/edit", host: "plesk.domain.tld", referrer: "https://plesk.domain.tld/smb/web/edit"

    Read the article

  • SVN: Error validating server certificate for svn hook linux

    - by Dr Casper Black
    Hi, I managed to set up an SVN (over SSL) server and the TortoiseSVN client on Windows. I made a post-commit hook for a test project; the hook updates the web directory so the PHP app can run with the newest version. It all works when done over the shell. The only problem is that when I commit changes from the client on Windows, the change is committed but the hook fails with "post-commit hook failed (exit code 1)" and this output:

        Error validating server certificate for 'https://SERVER_IP:443':
        - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually!
        - The certificate hostname does not match.
        Certificate information:
        - Hostname: DEVSRVR
        - Valid: from Fri, 28 Jan 2011 09:22:45 GMT until Sat, 28 Jan 2012 09:22:45 GMT
        - Issuer: PHP, SS, SS, SRB
        - Fingerprint: 5f:d0:50:d6:dd:a6:d4:64:a5:ac:3a:4b:7c:7d:33:e3:75:dd:23:9f
        (R)eject, accept (t)emporarily or accept (p)ermanently?
        svn: OPTIONS of 'https://SERVER_IP/svn/myproject/trunk': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://SERVER_IP)
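    The underlying issue is that the hook runs as the web/daemon user, which has never accepted the self-signed certificate. Two hedged options: run svn update once interactively as that same user and accept the certificate (p)ermanently, or make the update in the hook non-interactive, e.g. (the path is a placeholder):

        svn update /var/www/myproject --non-interactive --trust-server-cert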

    Read the article

  • Setting jQuery after ASP.net AJAX partial post back

    - by Steve Clements
    OK, so for some reason you have a mega mashup solution with ASP.net AJAX, jQuery and Web Forms. Perhaps you are just migrating from the AjaxControlToolkit to the jQuery UI framework; who knows! Anyway, the problem is that when you post back with something like an UpdatePanel, you will find that your nicely set up jQuery stuff, like the datepicker for example, will no longer work. You may have something like this:

        $(document).ready(function () {
            $(".date-edit").datepicker({ dateFormat: "dd/mm/yy", firstDay: 1, showOtherMonths: true, selectOtherMonths: true });
        });

    When your ASP.net UpdatePanel posts back, you will find that your datepicker has gone. Bugger! Well, you need to add this little gem to set it back up again once the UpdatePanel comes back to the page:

        var prm = Sys.WebForms.PageRequestManager.getInstance();
        prm.add_endRequest(function () {
            $(".date-edit").datepicker({ dateFormat: "dd/mm/yy", firstDay: 1, showOtherMonths: true, selectOtherMonths: true });
        });

    Or, like me, you could have a JavaScript function, something like InitPage(), do all your work in there and call that on document.ready and endRequest. Your choice... you have the power.
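    A minimal sketch of that InitPage() pattern (the function name and selector are just the ones used above):

        function InitPage() {
            // all widget wiring lives in one place
            $(".date-edit").datepicker({ dateFormat: "dd/mm/yy", firstDay: 1, showOtherMonths: true, selectOtherMonths: true });
        }

        $(document).ready(InitPage);
        Sys.WebForms.PageRequestManager.getInstance().add_endRequest(InitPage);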

    Read the article

  • How do I target a specific driver for libata kernel parameter modding?

    - by DanielSmedegaardBuus
    Sorry for the cryptic title; not sure how to phrase it. This is it in a nutshell: I'm running a 22-disk setup, 19 of those in a ZFS array, 15 of those backed by three port multipliers attached to SATA controllers driven by the sata_sil24 module. When running at full speed (SATA2, i.e. 3 Gbps), operation is pretty quirky (simple read errors will throw an entire PMP into spasms for a long time, sometimes with pretty awful results). Booting with the kernel parameter libata.force=1.5G to force the SATA controllers into "legacy" speeds completely fixes all issues with the PMPs. Thing is, my ZFS pool is backed by a fast cache SSD on my ICH10R controller. Another SSD on this same controller holds the system. Doing libata.force=1.5G immediately shaves about 100 MB/s off the transfer rate of my SSDs. For the root drive that's not such a big deal, but for the ZFS cache SSD it is: it effectively makes the entire zpool slower for sustained transfers than it would have been without the cache drive. Random access and fs tree lookups, of course, still benefit. I'm hoping, though, that there's some way to pass the .force=1.5G parameter on to just the three SATA controllers backed by the sata_sil24 module. But listing the module options, no such option exists. Is this possible? And if so, how? Thanks :)
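    For what it's worth, libata.force accepts per-port (and per-device) prefixes, so assuming the sil24 ports show up as, say, ata3 to ata5 in dmesg, something along these lines should cap only those links (the port numbers here are placeholders):

        libata.force=3:1.5Gbps,4:1.5Gbps,5:1.5Gbps

    dmesg | grep -i sil24 helps map which ataN numbers belong to the sil24-driven controllers.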

    Read the article

  • Wrong Sound Blaster's PCI ID within Windows

    - by pavian
    I own a Sound Blaster X-Fi Titanium; it was working with no problems under Linux or Windows 7. Its original PCI ID is 1102:000b, but now I see a different one within MS Windows. BIOS setup: 1102:000b. GNU/Linux: 1102:000b. Windows 7: 1102:000d. Windows 8: 1102:000d. In the last few days I've been experimenting with IOMMU PCI passthrough in Xen and tried to pass this device to virtual Windows 7 and 8; that is where I found this problem. I don't know if this is just a coincidence or the reason for my problem, but the ID is wrong even on the physical system. Windows detects 1102:000d as a "High Definition Audio" sound device (I'm guessing at the name since my Windows is localized, but it is the generic name; it was the same with Realtek HDA before its drivers were installed). It plays, but it's unstable (the Windows speaker test can crash the application) and I can't install the Creative software. The driver used is hdaudio.sys. Booting in BIOS or UEFI mode doesn't change anything, nor does a CMOS clear. Has anyone met the same problem?

    Read the article

  • FCGI & recompiling Python code without restarting Apache

    - by Zayatzz
    Hello. At one hosting company, they used to run Python projects with FCGI. They had set it up so that whenever I changed the django.fcgi file, which put Django and my project on the Python path, my project code was instantly recompiled. Because of that, a friend set up hosting for our shared project on his server using FastCGI. It has been set up and the Python scripts execute as they should, but what we do not know is how to set it up so that my project is recompiled when my setup file has been changed. Alan
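    A hedged note: with the mod_fastcgi/mod_fcgid deployments of that era, the usual convention (and the one Django's FastCGI docs relied on) was that touching the dispatch script makes the process manager respawn a fresh Python process, so after changing code:

        touch django.fcgi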

    Read the article

  • Mount a share on a Mac using a login hook

    - by Arcath
    I have a script that mounts a Samba share to a folder on the desktop. It runs with no problem, but when it's set up as a LoginHook it doesn't mount the folder. Does anyone have a working login hook that mounts a share that they can post? Or know of any issues with mounting shares during login? This is my script:

        #!/usr/bin/env ruby
        @domain = "Lancaster"
        @user = ARGV[0]
        #@user = @user.gsub(/\n/,"")
        @userfolder = "/Users/" + @user.to_s
        @smbshare = "//#{@user}@hercules/everyone"
        system("mkdir #{@userfolder}/Desktop/everyone")
        system("mount_smbfs #{@smbshare} #{@userfolder}/Desktop/everyone | #{@userfolde$
        system(" /usr/bin/osascript <<-EOF
        tell application \"System Events\"
        activate
        display dialog \"Welcome to the #{@domain} domain #{@user}\n\nY$
        end tell
        EOF
        ")
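    One hedged gotcha with login hooks is that they run as root rather than as the user logging in (the short user name arrives as the first argument), so the mount can end up owned by root and invisible to the user. A minimal shell sketch of the same idea, reusing the host and share from the script above (the script path in the comment is hypothetical), with the hook registered via defaults write:

        #!/bin/bash
        # registered with: sudo defaults write com.apple.loginwindow LoginHook /Library/Scripts/mount_share.sh
        USER_NAME="$1"    # login hooks receive the logging-in user's short name as $1
        sudo -u "$USER_NAME" mkdir -p "/Users/$USER_NAME/Desktop/everyone"
        sudo -u "$USER_NAME" mount_smbfs "//$USER_NAME@hercules/everyone" "/Users/$USER_NAME/Desktop/everyone"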

    Read the article

  • Domain and TS migration

    - by Windex
    The migration steps outlined by Microsoft for a TS migration seem to deal with moving TS to a different server in the same domain: they call for adding the licensing service to another system, moving the licenses, and then putting TS on whatever server you want. However, since I am migrating the domain as well, I don't have anywhere to move the TS server to. So my thought was to simply re-activate my licenses on the new server using the same method as a new TS setup. My question is essentially: will this work the way I think it will, or will the MS activation clearinghouse deny the new server? Is there a procedure that "deactivates" the licenses on a server so that the clearinghouse knows there are some free? (FWIW, I can look up the license information through the eOpen website and have access to the original license document.)

    Read the article

  • PowerDNS: multiple supermasters and transfering domain

    - by blauwblaatje
    Hi, I've got a setup with multiple supermasters (BIND) and multiple superslaves (PowerDNS). It all seems to work just fine; pdns is updated when I add or change a domain. But when I want to migrate a domain from one master to another, pdns doesn't like it: it tells me the new server isn't a master for this domain, although I deleted the domain on the old server. Now, I think part of the problem is that pdns doesn't get an update when a domain is deleted, which would also explain a lot of dead domains in my pdns. It looks like the slave is constantly polling a server and getting RCODE=5 back; the master isn't aware of the domain, and the slave thinks the master still serves that domain. Is anyone familiar with this problem?
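    With the generic SQL backend, a hedged manual fix for a migrated zone is to repoint the superslave's domains row at the new supermaster (the address and zone below are placeholders), or delete the row entirely so the new supermaster can re-provision it; stale rows like this are also what leaves dead domains behind:

        UPDATE domains SET master = '192.0.2.10' WHERE name = 'example.com';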

    Read the article

  • How do I install kivy?

    - by aspasia
    I was trying to install Kivy (by following the instructions here). I downloaded and installed all the packages, and the installation process went through without giving me any errors. However, when I later entered the command below:

        sudo easy_install kivy

    it looked like it was going to work, but it ended with an error, displaying the following lines, which I don't understand:

        Detected compiler is unix
        /tmp/easy_install-BtOA_u/Kivy-1.8.0/kivy/graphics/texture.c:8:22: fatal error: pyconfig.h: No such file or directory
         #include "pyconfig.h"
                              ^
        compilation terminated.
        error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

    I saw a similar question asked ("Problem with kivy installation"), but this didn't work for me, though the question suggests installing libgles-mesa-dev-lts-raring, which I did as below:

        sudo apt-get install libgles-mesa-dev-lts-raring

    which gave:

        E: Unable to locate package libgles-mesa-dev-lts-raring

    (Sorry for being so specific and perhaps obvious, but I'm in the early stages of learning my way around Linux.) That user was running Ubuntu 12.04, and most other questions related to this I've seen came from people with a different release from mine, which has led me to believe that that is the reason why the suggestions to those didn't solve my problem. I'm using Ubuntu 13.10.
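    The missing pyconfig.h almost always means the Python development headers are not installed, so before re-running easy_install it is worth trying (package names for Ubuntu 13.10's default Python 2; Kivy also needs Cython to build):

        sudo apt-get install python-dev build-essential cython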

    Read the article

  • send outgoing email via postfix from mail client

    - by Ey Jay
    I have installed Postfix on my Ubuntu server hosted at DigitalOcean. What I want to do: with my SMTP server set up, I want to use it to send mail from my email client. I don't need to receive, I just need to send. I can telnet example.com 25 successfully and I received the email in my inbox, but when I try using it in an email client with

        smtp: example.com:25
        user: smtp1user
        password: smtp1userpassword

    I get an error that says "Server doesn't respond. Try changing the port." I don't know how to proceed.
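    A hedged sketch of the usual approach: ISPs and client networks often block outbound port 25, so enable Postfix's submission service (port 587) with SASL authentication in /etc/postfix/master.cf (the stock file ships these lines commented out), then point the mail client at port 587:

        submission inet n       -       -       -       -       smtpd
          -o syslog_name=postfix/submission
          -o smtpd_tls_security_level=encrypt
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_client_restrictions=permit_sasl_authenticated,reject

    After a postfix reload, SASL itself (for example via Dovecot or Cyrus) still has to be configured for the smtp1user account.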

    Read the article
